a change - point model for a sequence of independent random variables is a model in which there exist unknown change points , , such that , for each , are identically distributed with a distribution that depends on . here , we consider parametric change - point models in which the distribution of is parametric ; however , the form of the distribution can be different for each . change - point models are used in many fields . for example , broemeling and tsurumi ( ) use a multiple change - point model for the u.s. demand for money ; lombard ( ) uses a multiple change - point model to model the effect of sudden changes in wind direction on the flight of a projectile ; reed ( ) uses a multiple change - point model in the analysis of forest fire data . a number of authors have used multiple change - point models in the analysis of dna sequences ; see , for example , braun and müller ( ) , fu and curnow ( ) and halpern ( ) . many further examples are provided in the monographs chen and gupta ( ) and csörgő and horváth ( ) . the goal of this paper is to establish the asymptotic properties of maximum likelihood estimators of the parameters of a multiple change - point model , under easily verifiable conditions . these results are based on the following model . assume that the vectors in the data set are independently drawn from the parametric model where is a probability density function of a continuous distribution with unknown common parameter for all and unknown within - segment parameters for each ; may have the same functional form for some or all of ; may be a vector ; may be a different vector parameter of different dimensions for each . in this model , there are unknown change points , where the number of change points is assumed to be known . the parameter is common to all segments . there are a number of results available on the asymptotic properties of parameter estimators in change - point models . see , for example , hinkley ( ) , hinkley and hinkley ( ) , bhattacharya ( ) , fu and curnow ( ) , jandhyala and fotopoulos ( ) and hawkins ( ) ; the two monographs chen and gupta ( ) and csörgő and horváth ( ) have detailed bibliographies on this topic . in particular , hinkley ( ) considers likelihood - based inference for a single change - point model , obtaining the asymptotic distribution of the maximum likelihood estimator of the change point under the assumption that the other parameters in the model are known . hinkley ( ) and hinkley ( ) argue that this asymptotic distribution is also valid when the parameters are unknown . unfortunately , there are problems in extending the approach used in hinkley ( ) to the setting considered here . the method used in hinkley ( ) is based on considering the relative locations of a candidate change point and the true change point . when there is only a single change point , there are only three possibilities : the candidate change point is either greater than , less than or equal to the true change point . 
however , in models with change points , the relative positions of the candidate change points and the true change points can become quite complicated and the simplicity and elegance of the single change point argument is lost .a second problem arises when extending the argument for the case in which the change points are the only parameters in the model to the case in which there are unknown within - segment parameters .the consistency argument used in the former case is extended to the latter case using a `` consistency assumption '' ( hinkley ( ) , section 4.1 ) ; this condition is discussed in appendix and examples are given which show that this assumption is a strong one that is not generally satisfied in the class of models considered here .there are relatively few results available on the asymptotic properties of maximum likelihood estimators in multiple change - point models .thus , the present paper has done several things . in the general model described above ,in which there is a fixed , but arbitrary , number of change points , we show that the maximum likelihood estimators of the change points are consistent and converge to the true change points at the rate , under relatively weak regularity conditions . as noted above , a simple extension of the approach used in single change - point models is not available ; thus , the second thing achieved by this paper is the introduction of the tools necessary for analyzing the likelihood function in a multiple change - point model . finally , the asymptotic distribution of the maximum likelihood estimators of the parameters of the within - segment distributions is derived for the general case described above , in which the form of the distribution can change from segment to segment and in which , possibly , there are parameters that are common to all segments .the paper is organized as follows . 
the asymptotic theory of maximum likelihood estimators of a multiple change - point modelis described in section [ sec2 ] .section [ sec3 ] contains a numerical example illustrating these results and section [ sec4 ] contains some discussion of future research which builds on the results given in this paper .appendix discusses the `` consistency assumption '' used in hinkley ( ) ; all technical proofs are given in appendix .consider estimation of the multiple change - point model introduced in section [ sec1 ] .for any change point configuration , the log - likelihood function is given by estimators of all change points , all within - segment parameters and the common parameter are given by where , and are the parameter spaces of , , and , respectively .let note that is taken to be a constant vector as goes to infinity .define the expected information matrix is given by = \pmatrix { e[-\ell^0_{\psi\psi}(\psi,\theta);\phi ] & e[-\ell^0_{\psi\theta}(\psi , \theta);\phi]\vspace*{3pt}\cr e[-\ell^0_{\psi\theta}(\psi,\theta);\phi]^t&e[-\ell^0_{\theta\theta } ( \psi,\theta);\phi]},\ ] ] \\ & & \quad=\bigl(e\bigl[-\ell^{(1)}_{\psi\theta_1}(\psi,\theta_1);\phi\bigr],e\bigl[-\ell ^{(2)}_{\psi\theta_2}(\psi,\theta_2);\phi\bigr],\ldots , e\bigl[-\ell^{(k+1)}_{\psi \theta_{k+1}}(\psi,\theta_{k+1});\phi\bigr]\bigr ) , \\ & & e[-\ell^0_{\theta\theta}(\psi,\theta);\phi]\\ & & \quad = \operatorname{diag}\bigl ( e\bigl[-\ell^{(1)}_{\theta_1\theta_1}(\psi,\theta_1);\phi\bigr ] , e\bigl[-\ell^{(2)}_{\theta_2\theta_2}(\psi,\theta_2);\phi\bigr ] , \ldots , e\bigl[-\ell^{(k+1)}_{\theta_{k+1}\theta_{k+1}}(\psi,\theta_{k+1});\phi\bigr]\bigr),\end{aligned}\ ] ] where denotes a diagonal block matrix whose diagonal blocks are in the bracket , other elements are zeros and the average expected information matrix is given by the asymptotic properties of these estimators are based on the following regularity conditions .other than the parts concerning change points , these conditions are typically similar to those required for the consistency and asymptotic normality of maximum likelihood estimators of parameters in models without change points ; see , for example , wald ( ) .particularly , compactness of parameter spaces is a common assumption in the classical likelihood literature .these conditions are different from those required by ferger ( ) and dring ( ) , who consider estimation of change points in a nonparametric setting in which nothing is assumed about the within - segment distributions , using a type of nonparametric m - estimator based on empirical processes .thus , these authors do not require conditions on the within - segment likelihood functions ; on the other hand , their method does not provide estimators of within - segment parameters .[ a20 ] it is assumed that for , on a set of non - zero measure .this assumption guarantees that the distributions in two neighboring segments are different ; clearly , this is required for the change points to be well defined .[ a21 ] it is assumed that : for , and are contained in , where is a compact subset of ; and are contained in where is a compact subset of ; here , are non - negative integers ; is third - order continuously differentiable with respect to ; the expectations of the first and second order derivatives of with respect to exist for in its parameter space .compactness of the parameter space is used to establish the consistency of the maximum likelihood estimators of ; see , for example , bahadur ( ) for further discussion of this condition and its 
necessity in general models .if we assume further conditions on models , the compactness of the parameter space may be avoided . but this appears to be a substantial task for future work .differentiability of the log - likelihood function is used to justify certain taylor series expansions .both parts of assumption [ a21 ] are relatively weak and are essentially the same as conditions used in parametric models without change points ; see , for example , schervish ( ) , section 7.3 .part 3 is very weak and is used in the proof of theorem [ thm3 ] .[ a22]it is assumed that : for any and any integers satisfying , \}\biggr)^2\biggr\}\leq c(t - s)^r , \ ] ] where and is a constant ; for any and any integers satisfying , -v(\psi,\theta_j;\psi^0,\theta^0_j)\}\biggr)^2\biggr\}\\ & & \quad\leq d(t - s)^r , \end{aligned}\ ] ] where is introduced in equation , and is a constant .parts 1 and 2 of assumption [ a22 ] are technical requirements on the behavior of the log - likelihood function between and within segments , respectively .this condition is used to ensure that the information regarding the within- and between - segment parameters grows quickly enough to establish consistency and asymptotic normality of the parameter estimators .these conditions are relatively weak ; it is easy to check that they are satisfied by at least all distributions in the exponential family .consider a probability density function of exponential family form : it is then straightforward that the schwarz inequality gives \}\biggr)^2 \\ & & \quad \leq\biggl[1+\sum_{q=1}^{m}w_{q}(\eta)^2\biggr]\\ & & \qquad{}\times\biggl\ { \biggl[\sum_{i = s+1}^{t}\bigl(\log h(x_i)-e(\log h(x_i))\bigr)\biggr]^2+\sum_{q=1}^{m}\biggl[\sum _ { i = s+1}^{t}\bigl(t_{q}(x_i)-e(t_{q}(x_i))\bigr)\biggr]^2\biggr\}.\end{aligned}\ ] ] therefore , part 1 of assumption [ a22 ] is satisfied with because the function assumed to be continuous can achieve its maximum on the compact parameter space . similarly , part 2 of assumption [ a22 ] is also satisfied with .the main results of this paper are given in the following three theorems .[ thm1 ] under assumption , part 1 of assumption and part 1 of assumption , and as , that is , and , where for and .note that , are not consistent ( hinkley ( ) ) ; it is the estimators of the change - point fractions , that are consistent .the consistency of , and is the same as the corresponding result in classical likelihood theory for independent , identically distributed data .[ thm2 ] under assumptions , we have where that is , for . we now consider the asymptotic distribution of , where . 
[ thm3 ] under assumptions , where is the -dimensional multivariate normal distribution with mean vector zero and covariance matrix .the proofs of theorems [ thm1][thm3 ] are based on the following approach .define a function by (\psi^0,\theta_i^0;x)\,dx\biggr\}\nonumber\\ & & { } + \frac{1}{n}\sum_{j=1}^{k+1}\sum_{i = n_{j-1}+1}^{n_j}\ { \log f_j(\psi,\theta_j;x_i)-e[\log f_j(\psi,\theta_j;x_i)]\}\\ & & { } -\frac{1}{n}\sum_{j=1}^{k+1}\sum_{i = n_{j-1}^0 + 1}^{n_j^0}\ { \log f_j(\psi^0,\theta_j^0;x_i)-e[\log f_j(\psi^0,\theta_j^0;x_i)]\ } , \nonumber\end{aligned}\ ] ] where is the number of observations in the set \cap[n_{i-1}^0 + 1,n_i^0] ] .note that is a weighted sum of the negative kullback leibler distances ; it will be shown that approaches as .also , with equality if and only if almost everywhere ( kullback and leibler ( ) ) .lemma [ lem1 ] gives a bound for .[ lem1 ] under assumption and part 1 of assumption , there exist two positive constants and such that , for any and , where and .lemma [ lem2 ] describes between - segment properties and within - segment properties of this model .[ lem2 ] under part 1 of assumption [ a21 ] , the following two results follow from parts 1 and 2 of assumption [ a22 ] respectively : for any , any and any positive number , there exist a constant , independent of , and a constant , such that \}\biggr|>\varepsilon\biggr ) \nonumber \\[-8pt ] \\[-8pt ] \nonumber & & \quad \leq a_j\frac { ( m_2-m_1)^r}{\varepsilon^2}.\end{aligned}\ ] ] for any and any positive number , there exist a constant , independent of , and a constant , such that \nonumber \\[-8pt ] \\[-8pt ] \nonumber & & { } \hspace*{134pt}-v(\psi,\theta_j;\psi^0,\theta^0_j)\}>\varepsilon\biggr)\leq b_j\frac { ( n^0_j - n^0_{j-1})^r}{\varepsilon^2}.\end{aligned}\ ] ] in practical applications , it is useful to have an estimator of . let & \hat{e}[-\hat{\ell}_{\psi\theta}(\hat{\psi},\hat{\theta});\hat{\phi}]\vspace*{3pt}\cr \hat{e}[-\hat{\ell}_{\psi\theta}(\hat{\psi},\hat{\theta});\hat{\phi } ] ^t&\hat{e}[-\hat{\ell}_{\theta\theta}(\hat{\psi},\hat{\theta});\hat { \phi}]},\\ \hat{e}[-\hat{\ell}_{\psi\psi}(\hat{\psi},\hat{\theta});\hat{\phi } ] & = & \sum_{j=1}^{k+1}\sum_{i=\hat{n}_{j-1}+1}^{\hat{n}_j}\frac { 1}{f_j^2(\hat{\psi},\hat{\theta}_j;x_i)}{f_j}_{\psi}(\hat{\psi},\hat { \theta}_j;x_i){f_j}_{\psi}^t(\hat{\psi},\hat{\theta}_j;x_i),\\ \hat{e}[-\hat{\ell}_{\psi\theta_j}(\hat{\psi},\hat{\theta});\hat{\phi } ] & = & \sum_{i=\hat{n}_{j-1}+1}^{\hat{n}_j}\frac{1}{f_j^2(\hat{\psi},\hat { \theta}_j;x_i)}{f_j}_{\psi}(\hat{\psi},\hat{\theta}_j;x_i){f_j}_{\theta _ j}^t(\hat{\psi},\hat{\theta}_j;x_i),\\\hat{e}[-\hat{\ell}_{\theta_j\theta_j}(\hat{\psi},\hat{\theta});\hat { \phi}]&=&\sum_{i=\hat{n}_{j-1}+1}^{\hat{n}_j}\frac{1}{f_j^2(\hat{\psi } , \hat{\theta}_j;x_i)}{f_j}_{\theta_j}(\hat{\psi},\hat{\theta } _ j;x_i){f_j}_{\theta_j}^t(\hat{\psi},\hat{\theta}_j;x_i)\end{aligned}\ ] ] for .then is a consistent estimator of .consider the problem of analyzing the mineral content of a core sample , which is extensively studied in chen and gupta ( ) , chernoff ( ) and srivastava and worsley ( ) .in particular , we consider the data in chernoff ( ) on the mineral content of minerals in a core sample measured at equally spaced points . 
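The change-point and parameter estimates reported in the core-sample analysis below were obtained by maximizing the log-likelihood jointly over all admissible change-point configurations. As a rough illustration of how such a maximization can be organized, the sketch below computes the exact maximizer by dynamic programming for a deliberately simplified model: univariate observations, segment-specific means profiled out, and a fixed variance standing in for the common parameter. It is only a sketch under these assumptions, not the multivariate computation used in the paper; the function names and the dynamic-programming layout are illustrative.

```python
import numpy as np

def segment_nll(x, sigma2):
    """Negative log-likelihood of x as one Gaussian segment, with the
    segment mean profiled out and a fixed common variance sigma2."""
    m = x.mean()
    return 0.5 * np.sum((x - m) ** 2) / sigma2 + 0.5 * len(x) * np.log(2 * np.pi * sigma2)

def ml_change_points(x, k, sigma2=1.0):
    """Exact maximum likelihood estimate of k change points for the
    simplified univariate Gaussian model, by dynamic programming.
    Returns the 0-based indices at which new segments start."""
    n = len(x)
    cost = np.full((n + 1, n + 1), np.inf)      # cost[s, t] = nll of segment x[s:t]
    for s in range(n):
        for t in range(s + 1, n + 1):
            cost[s, t] = segment_nll(x[s:t], sigma2)
    best = np.full((k + 2, n + 1), np.inf)      # best[j, t]: j segments covering x[:t]
    prev = np.zeros((k + 2, n + 1), dtype=int)
    best[1, 1:] = cost[0, 1:]
    for j in range(2, k + 2):
        for t in range(j, n + 1):
            cand = best[j - 1, j - 1:t] + cost[j - 1:t, t]
            i = int(np.argmin(cand))
            best[j, t], prev[j, t] = cand[i], i + (j - 1)
    cps, t = [], n                              # back-track the optimal boundaries
    for j in range(k + 1, 1, -1):
        t = prev[j, t]
        cps.append(t)
    return sorted(cps), -best[k + 1, n]         # change points, maximized log-likelihood
```

For instance, ml_change_points(x, k=5) would return five estimated change points together with the maximized profiled log-likelihood; replacing segment_nll by a multivariate Gaussian segment cost with a common covariance profiled across all segments would bring the sketch closer to the model actually fitted to the core-sample data below.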
since some of the minerals have a very low assay , we follow chen and gupta ( ) and srivastava and worsley ( ) in analyzing only the variables and with the highest assays .thus , we assume that has a -variate normal distribution with a within - segment mean parameter vector and a variance - covariance matrix that is common to all segments .the analyses of chen and gupta ( ) , chernoff ( ) and srivastava and worsley ( ) suggest that there are change points of the mean vector and , hence , we make that assumption here .the estimates of change points , within - segment parameters of mean vectors and common parameter of variance - covariance matrix were computed using maximum likelihood .the estimated change points are and , which are different from those estimated change points by chen and gupta ( ) , chernoff ( ) and srivastava and worsley ( ) , and are more reasonable .this is because chen and gupta ( ) , chernoff ( ) and srivastava and worsley ( ) use the binary segmentation procedures which detect multiple change points one by one , not simultaneously , whereas the method in this paper simultaneously estimates multiple change points .the estimated six within - segment mean vectors are in the following .they are arranged according to the order of from left to right .for example , the two vectors on the first line are , respectively , the first and second within - segment mean vectors . estimated common variance - covariance matrix is paper establishes the consistency of maximum likelihood estimators of the parameters of a general class of multiple change - point models and gives the asymptotic distribution of the parameters of the within - segment distributions .the required regularity conditions are relatively weak and are generally satisfied by exponential family distributions .some important problems in the analysis of multiple change - point models were not considered here .one is that the asymptotic distribution of the maximum likelihood estimator of the vector of change points was not considered .the reason for this is that the methods used to determine this asymptotic distribution are quite different from the methods used to establish the consistency of the maximum likelihood estimator ; see , for example , hinkley ( ) for a treatment of this problem in a single change - point model .thus , this is essentially a separate research topic .however , the asymptotic properties obtained in this paper are necessary for the establishment of the asymptotic distribution of the maximum likelihood estimator of the vector of change points in this model. this will be a subject of future work .another important problem is to extend the results of this paper to the case in which the number of change points is not known and must be determined from the data .clearly , a likelihood - based approach to this problem will require an understanding of the properties of maximum likelihood estimators in the model in which the number of change points is known .thus , the results of the present paper can be considered as a first step toward the development of a likelihood - based methodology that can be used to determine simultaneously the number and location of the change points .this is also a topic of future research .consider a change - point model with a single change point , , and suppose that there are no common parameters in the model . 
in hinkley ( ), it is shown that , the maximum likelihood estimator of , satisfies under the condition with probability as , which was described as a `` consistency assumption '' .note that the random variables in the sum are drawn from the distribution with density .suppose that converges to as , uniformly in , where is distributed according to the distribution with density .equation ( [ eqa1 ] ) then holds , provided that note that , by properties of the kullback leibler distance and assumption [ a20 ] , for each .thus , condition ( [ eqa1 ] ) fails whenever the distribution corresponding to the density is in the closure of the set of distributions corresponding to densities of the form , in a certain sense .one such case occurs if and have the same parametric form with parameters , respectively , satisfying .for instance , suppose that the random variables in the first segment are normally distributed with mean and standard deviation and the random variables in the second segment are normally distributed with mean and standard deviation .then where is normally distributed with mean and variance .clearly , ( [ eqa1 ] ) does not hold in this case .a similar situation occurs when the distribution with density can be viewed as a limit of the distributions with densities .for instance , suppose that is the density of a weibull distribution with rate parameter and shape parameter , , , and is the density of an exponential distribution with rate parameter . in this appendix, we show that this is a strong assumption that is not generally satisfied by otherwise well - behaved models .for instance , suppose that and have the same functional form and that the difference between the two distributions is due to the fact that .again , ( [ eqa1 ] ) will not hold .thus , the consistency condition used in hinkley ( ) is too strong for the general model considered here .proof of lemma [ lem1 ] we first need to prepare some results which are to be used in this proof . for ,let us define ,\ ] ] where .we then have that for .it is straightforward to show that is a convex function with respect to for any .let . because for , convexity of gives that noting that ,\ ] ] it follows from assumption [ a20 ] that .if we let , then .let .consider a change - point fraction configuration such that .for any , there are two cases : a candidate change - point fraction may be on the left or the right of the true change - point fraction . for any with on the right of , we have that .then if we define , then the case gives that and \\ & \leq&\frac{n_{j , j+1}}{n}g_j(\phi^0)\leq(\lambda_j-\lambda_j^0)\bar { g}(\phi^0).\end{aligned}\ ] ] for any with on the left of , we have that .similarly , we define . 
using the fact that , it similarly gives that .therefore , if , then we obtain that .on the other hand , we have for any , so now , consider the other case of a change - point fraction configuration , where .it is clear that there exists a pair of integers such that , and .let .for any , we have that \\ & \leq&\frac{n_{i , j+1}+n_{ij}}{n}\min(\alpha_{i , j+1},1-\alpha _ { i , j+1})\bar{g}(\phi^0)\\ & \leq&\frac{\delta_{\lambda}^0}{2}\min\biggl(\frac { n_{i , j+1}}{n},\frac{n_{ij}}{n}\biggr)\bar{g}(\phi^0)\\ & \leq&\frac{1}{2}\biggl(\frac{{\delta_{\lambda}^0}}{2}\biggr)^2\bar { g}(\phi^0).\end{aligned}\ ] ] combining the results from the two cases of and , it follows that and \leq-\frac{\delta_{\lambda}^0}{2}\min\biggl[\rho(\phi,\phi^0),-\frac{\delta _ { \lambda}^0}{4}\bar{g}(\phi^0)\\\biggr ] .\ ] ] note that ( [ eqb1 ] ) can be simplified .if we define then we have that .it follows from inequality ( [ eqb1 ] ) that .\ ] ] if , then we have that if , then .letting inequality ( [ eqb1 ] ) gives that .setting , we finally have that which concludes the proof .proof of lemma [ lem2 ] with part 1 of assumption [ a22 ] in mind , equation ( [ eq6 ] ) can be achieved by induction with respect to .the induction method is similar to the one used in mricz , serfling and stout ( ) , so its proof is omitted here . using part 2 of assumption [ a22 ] , equation ( [ eq7 ] )can be proven similarly by the same induction method .proof of theorem [ thm1 ] let then , for any , it follows from lemma [ lem1 ] that therefore , we obtain that \}\biggr|>\frac{c_1}{2}\delta\biggr ) \\ & & \qquad{}+p_r\biggl(\sum_{j=1}^{k+1}\frac{1}{n}\biggl|\sum _ { i = n_{j-1}^0 + 1}^{n_j^0}\{\log f_j(\psi^0,\theta_j^0;x_i)-e[\log f_j(\psi^0,\theta_j^0;x_i)]\biggr|>\frac{c_1}{2}\delta\biggr)\\ & & \quad\leq\sum_{j=1}^{k+1}p_r\biggl(\max_{0\leq n_{j-1}<n_j\leq n,\theta_j\in \theta_j,\psi\in\psi}\frac{1}{n}\biggl|\sum_{i = n_{j-1}+1}^{n_j}\{\log f_j(\psi , \theta_j;x_i)-e[\log f_j(\psi,\theta_j;x_i)]\}\biggr|\\ & & \quad\hspace*{43pt}>\frac{c_1\delta } { 2(k+1)}\biggr)\\ & & \qquad{}+\sum_{j=1}^{k+1}p_r\biggl(\frac{1}{n}\biggl|\sum _ { i = n_{j-1}^0 + 1}^{n_j^0}\{\log f_j(\psi^0,\theta_j^0;x_i)-e[\log f_j(\psi^0,\theta_j^0;x_i)]\}\biggr|>\frac{c_1\delta}{2(k+1)}\biggr ) .\end{aligned}\ ] ] it follows from lemma [ lem2 ] that ^ 2\biggl(\sum_{j=1}^{k+1}a_j\biggr)n^{r-2}\rightarrow0\qquad \mbox { as } n\rightarrow+\infty,\ ] ] noting that . 
for , we similarly obtain that \}\biggr|\\ & & { } \hspace*{51pt}>\frac{c_2\delta } { 2(k+1)}\biggr)\\ & & \qquad{}+\sum_{j=1}^{k+1}p_r\biggl(\frac{1}{n}\biggl|\sum _ { i = n_{j-1}^0 + 1}^{n_j^0}\{\log f_j(\psi^0,\theta_j^0;x_i)-e[\log f_j(\psi^0,\theta_j^0;x_i)]\}\biggr|>\frac{c_2\delta}{2(k+1)}\biggr).\end{aligned}\ ] ] similarly , lemma [ lem2 ] shows that .noting the fact that if and only if and , it follows that and for , which completes the proof .proof of theorem [ thm2 ] let us first define for any .because of the consistency of , we need to consider only those terms whose observations are in , and for all in equation ( [ eqn2.5 ] ) .therefore , we have \\ & & \qquad{}\hspace*{78pt}-\frac{1}{n}\sum_{t\in\tilde{n}_{jj}}[\log f_j(\psi ^0,\theta^0_j;x_t)-e(\log f_j(\psi^0,\theta^0_j;x_t))]\\ & & \qquad{}\hspace*{78pt}+\frac{1}{3(k+1)}j_1\biggr\}>0\biggr)\\ & & \qquad{}+\sum_{j=2}^{k+1}p_r\biggl(\max_{\lambda\in\lambda_{\delta , n},\phi\in\phi}\biggl\{\frac{1}{n}\sum_{t\in\tilde{n}_{j , j-1}}[\log f_j(\psi , \theta_j;x_t)-e(\log f_j(\psi,\theta_j;x_t))]\\ & & \qquad{}\hspace*{92pt}-\frac{1}{n}\sum_{t\in\tilde{n}_{j , j-1}}[\log f_{j-1}(\psi ^0,\theta^0_{j-1};x_t)-e(\log f_{j-1}(\psi^0,\theta^0_{j-1};x_t))]\\ & & \qquad{}\hspace*{92pt}+\frac { 1}{3k}j_1\biggr\}>0\biggr)\\ & & \qquad{}+\sum_{j=1}^{k}p_r\biggl(\max_{\lambda\in\lambda_{\delta , n},\phi \in\phi}\biggl\{\frac{1}{n}\sum_{t\in\tilde{n}_{j , j+1}}[\log f_j(\psi,\theta _ j;x_t)-e(\log f_j(\psi,\theta_j;x_t))]\\ & & \qquad{}\hspace*{91pt}-\frac{1}{n}\sum_{t\in\tilde{n}_{j , j+1}}[\log f_{j+1}(\psi ^0,\theta^0_{j+1};x_t)-e(\log f_{j+1}(\psi^0,\theta^0_{j+1};x_t))]\\ & & \qquad{}\hspace*{92pt}+\frac { 1}{3k}j_1\biggr\}>0\biggr)\\ & & \quad \equiv\sum_{j=1}^{k+1}i_{1j}+\sum_{j=2}^{k+1}i_{2j}+\sum _ { j=1}^{k}i_{3j}.\end{aligned}\ ] ] first , consider the probability formulas in the above equation for any .the consistency of allows us to restrict our attention to the case . for this case, we have that therefore , we obtain that \\ & & \qquad{}\hspace*{16pt}-v(\psi^*,\theta^*_j;\psi ^0,\theta^0_j)\}>\frac{n^0_j - n^0_{j-1}}{6(k+1 ) } |v(\psi^*,\theta _ j^*;\psi^0,\theta^0_j)|\biggr)\\ & \leq & p_r\biggl(\max_{n_{j-1}^0\leq s <t\leq n_j^0,\psi\in\psi , \theta_j\in\theta_j}\sum_{i = s+1}^t\{[\log f_j(\psi,\theta _ j;x_t)-\log f_j(\psi^0,\theta^0_j;x_t)]\\ & & \qquad{}\hspace*{112pt}-v(\psi,\theta_j;\psi^0,\theta^0_j)\}>\frac { e}{6(k+1)}(n^0_j - n^0_{j-1})\biggr),\end{aligned}\ ] ] where , , and are , respectively , the maximizing values of , , and obtained through the maximization .equation ( [ eq7 ] ) of lemma [ lem2 ] can then be applied to show that as .next , consider the probability formula for any . in this case , .we have that +\frac{1}{6k}j_1\biggr\}>0\biggr)\\ & & { } \hspace*{18pt}+p_r\biggl(\max_{\lambda\in\lambda_{\delta , n},\phi\in\phi}\biggl\ { -\frac{1}{n}\sum_{t\in\tilde{n}_{j , j-1}}[\log f_{j-1}(\psi,\theta _ { j-1};x_t)-e(\log f_{j-1}(\psi,\theta_{j-1};x_t))]\\ & & \qquad{}\hspace*{74pt}+\frac{1}{6k}j_1\biggr\}>0\biggr)\\ & \equiv & i_{2j}^{(1)}+i_{2j}^{(2)}.\end{aligned}\ ] ] and can be handled in the same way , so we just show how to handle .only two cases have to be considered . if , then \biggr|\\ & & { } \quad\hspace*{32pt}>\frac{c_1\delta}{6k}\biggr).\end{aligned}\ ] ] equation ( [ eq6 ] ) of lemma [ lem2 ] gives that as .if for the other case , then . 
therefore , we obtain that \\ & & \qquad{}\hspace*{113pt}-\frac{c_1}{6k}\biggr)>0\biggr)\\ & \leq & p_r\biggl(\max_{n_{j-1}\leq s < t\leq n^0_{j-1},\theta _ j\in\theta_j,\psi\in\psi}\biggl|\sum_{i = s+1}^{t}[\log f_j(\psi,\theta _j;x_i)-e(\log f_j(\psi,\theta_j;x_i))]\biggr|\\ & & \quad{}\hspace*{2pt}>\frac { c_1}{6k}(n^0_{j-1}-n_{j-1})\biggr),\end{aligned}\ ] ] which converges to zero as , by equation ( [ eq6 ] ) of lemma [ lem2 ] . can be handled in the same way as .therefore , theorem [ thm2 ] is proved .proof of theorem [ thm3 ] we first have the expansion (\hat{\phi } -\phi^0).\ ] ] the fact that then gives that ^{-1}\frac{\hat{\ell}_{\phi}(\psi^0,\theta^0)}{\sqrt{n}}.\ ] ] now , consider the limit of .we have that + \frac{1}{\sqrt{n}}\ell^0_{\phi}(\psi^0,\theta^0).\nonumber\end{aligned}\ ] ] because of the consistency of , we can assume that for .it is then straightforward to obtain that \\ & & \quad=\frac{1}{\sqrt{n}}\sum_{j=1}^{k+1}\bigl[\hat{\ell}_{\phi } ^{(j)}(\psi^0,\theta_j^0)-\ell_{\phi}^{(j)}(\psi^0,\theta _ j^0)\bigr]\nonumber\\ & & \quad = \frac{1}{\sqrt{n}}\sum_{j=1}^{k+1}\biggl\ { i(\hat{n}_j\geq n^0_j,\hat{n}_{j-1}\geq n^0_{j-1})\\ & & { } \hspace*{56pt}\times\biggl[\sum_{i = n^0_j+1}^{\hat{n}_j}\frac{\partial}{\partial\phi } \log f_j(\psi^0,\theta^0_j;x_i ) -\sum_{i = n^0_{j-1}+1}^{\hat{n}_{j-1}}\frac{\partial}{\partial\phi}\log f_j(\psi^0,\theta^0_j;x_i)\biggr]\\ & & { } \hspace*{58pt}+i(\hat{n}_j\geq n^0_j,\hat{n}_{j-1 } < n^0_{j-1})\\ & & { } \hspace*{67pt}\times\biggl[\sum_{i = n^0_j+1}^{\hat{n}_j}\frac{\partial}{\partial\phi } \log f_j(\psi^0,\theta^0_j;x_i)+\sum_{i=\hat { n}_{j-1}+1}^{n^0_{j-1}}\frac{\partial}{\partial\phi}\log f_j(\psi ^0,\theta^0_j;x_i)\biggr]\\ & & { } \hspace*{58pt}+i(\hat{n}_j < n^0_j,\hat{n}_{j-1}\geq n^0_{j-1})\\ & & { } \hspace*{66pt}\times\biggl[-\sum_{i=\hat{n}_j+1}^{n^0_j}\frac{\partial}{\partial\phi } \log f_j(\psi^0,\theta^0_j;x_i)-\sum_{i = n^0_{j-1}+1}^{\hat { n}_{j-1}}\frac{\partial}{\partial\phi}\log f_j(\psi^0,\theta ^0_j;x_i)\biggr]\\ & & { } \hspace*{58pt}+i(\hat{n}_j < n^0_j,\hat{n}_{j-1 } < n^0_{j-1})\\ & & { } \hspace*{68pt}\times\biggl[-\sum_{i=\hat{n}_j+1}^{n^0_j}\frac{\partial}{\partial\phi } \log f_j(\psi^0,\theta^0_j;x_i)+\sum_{i=\hat { n}_{j-1}+1}^{n^0_{j-1}}\frac{\partial}{\partial\phi}\log f_j(\psi ^0,\theta^0_j;x_i)\biggr]\biggr\}.\end{aligned}\ ] ] it follows from theorem [ thm2 ] that =\frac{1}{\sqrt{n}}\mathrm{o}_p(1),\ ] ] which converges to zero in probability as .since it follows that in a similar way , we easily obtain that therefore , we have that proving the result .h. he thanks professor peter hall for his support of this research .the research of h. he was financially supported by a mascos grant from the australian research council .the research of t.a .severini was supported by the u.s. national science foundation .
models with multiple change points are used in many fields ; however , the theoretical properties of maximum likelihood estimators of such models have received relatively little attention . the goal of this paper is to establish the asymptotic properties of maximum likelihood estimators of the parameters of a multiple change - point model for a general class of models in which the form of the distribution can change from segment to segment and in which , possibly , there are parameters that are common to all segments . consistency of the maximum likelihood estimators of the change points is established and the rate of convergence is determined ; the asymptotic distribution of the maximum likelihood estimators of the parameters of the within - segment distributions is also derived . since the approach used in single change - point models is not easily extended to multiple change - point models , these results require the introduction of those tools for analyzing the likelihood function in a multiple change - point model .
multiple - input multiple - output ( mimo ) systems that employ non - orthogonal space - time block codes ( stbc ) from cyclic division algebras ( cda ) for arbitrary number of transmit antennas , , are quite attractive because they can simultaneously provide both _ full - rate _( i.e. , complex symbols per channel use , which is same as in v - blast ) as well as _ full transmit diversity _ . the golden code is a well known non - orthogonal stbc from cda for 2 transmit antennas .high spectral efficiencies of the order of tens of bps / hz can be achieved using large non - orthogonal stbcs .for example , a stbc from cda has 256 complex symbols in it with 512 real dimensions ; with 16-qam and rate-3/4 turbo code , this system offers a high spectral efficiency of 48 bps / hz . decoding of non - orthogonal stbcs with such large dimensions , however , has been a challenge .sphere decoder and its low - complexity variants are prohibitively complex for decoding such stbcs with hundreds of dimensions . in this paper, we present a probabilistic data association ( pda ) based algorithm for decoding large non - orthogonal stbcs from cda . key attractive features of this algorithm are its low - complexity and near - ml performance in systems with large dimensions ( e.g. , hundreds of dimensions ) .while creating hundreds of dimensions in space alone ( e.g. , v - blast ) requires hundreds of antennas , use of non - orthogonal stbcs from cda can create hundreds of dimensions with just tens of antennas ( space ) and tens of channel uses ( time ) . given that 802.11 smart wifi products with 12 transmit antennas at 2.5 ghz are now commercially available ( which establishes that issues related to placement of many antennas and rf / if chains can be solved in large aperture communication terminals like set - top boxes / laptops ) , large non - orthogonal stbcs ( e.g. , stbc from cda ) in combination with large dimension near - ml decoding using pda can enable communications at increased spectral efficiencies of the order of tens of bps / hz ( note that current standards achieve only bps / hz using only up to 4 transmit antennas ) .pda , originally developed for target tracking , is widely used in digital communications - .particularly , pda algorithm is a reduced complexity alternative to the a posteriori probability ( app ) decoder / detector / equalizer. near - optimal performance has been demonstrated for pda - based multiuser detection in cdma systems - .pda has been used in the detection of v - blast signals with small number of dimensions - . to our knowledge, pda has not been reported for _ decoding non - orthogonal stbcs with hundreds of dimensions _ so far .our results in this paper can be summarized as follows : * we adapt the pda algorithm for decoding non - orthogonal stbcs with large dimensions . with i.i.d fading and perfect csir, the algorithm achieves near - siso awgn uncoded ber and near - capacity coded ber ( within about 5 db of the theoretical capacity ) for stbc from cda , 4-qam , rate-3/4 turbo code , and 18 bps / hz . * relaxing the perfect csir assumption , we report results with a training based iterative pda decoding / channel estimation scheme . the iterative scheme is shown to be effective with large coherence times . 
*relaxing the i.i.d fading assumption by adopting a spatially correlated mimo channel model ( proposed by gesbert et al in ) , we show that the performance loss due to spatial correlation is alleviated by using more receive spatial dimensions for a fixed receiver aperture . * finally, the performance of the pda algorithm is compared with that of the likelihood ascent search ( las ) algorithm we recently presented in - .the pda algorithm is shown to perform better than the las algorithm at low snrs for higher - order qam ( e.g. , 16-qam ) , and in the presence of spatial correlation .consider a stbc mimo system with multiple transmit and receive antennas .an stbc is represented by a matrix , where and denote the number of transmit antennas and number of time slots , respectively , and denotes the number of complex data symbols sent in one stbc matrix .the entry in represents the complex number transmitted from the transmit antenna in the time slot .the rate of an stbc is .let and denote the number of receive and transmit antennas , respectively .let denote the channel gain matrix , where the entry in is the complex channel gain from the transmit antenna to the receive antenna .we assume that the channel gains remain constant over one stbc matrix and vary ( i.i.d ) from one stbc matrix to the other .assuming rich scattering , we model the entries of as i.i.d .the received space - time signal matrix , , can be written as where is the noise matrix at the receiver and its entries are modeled as i.i.d , where is the average energy of the transmitted symbols , and is the average received snr per receive antenna , and the entry in is the received signal at the receive antenna in the time - slot .consider linear dispersion stbcs , where can be written in the form where is the complex data symbol , and is its corresponding weight matrix . the received signal model in ( [ systemmodel ] )can be written in an equivalent v - blast form as where , , , , whose entry is the data symbol , and whose column is , .each element of is an -pam/-qam symbol .let , , , be decomposed into real and imaginary parts as : further , we define , , , and as ^t , \\ \hspace{4 mm } { \bf x}_r = [ { \bf x}_i^t \hspace{2 mm } { \bf x}_q^t ] ^t , \hspace{4 mm } { \bf n}_r = [ { \bf n}_i^t \hspace{2 mm } { \bf n}_q^t ] ^t.\end{aligned}\ ] ] .\hspace{10 mm } ( \mbox{9.a})\ ] ] ] now , ( [ systemmodelvec2 ] ) can be written as henceforth , we work with the real - valued system in ( [ systemmodelreal ] ) . for notational simplicity ,we drop subscripts in ( [ systemmodelreal ] ) and write where , , , and .we assume that the channel coefficients are known at the receiver but not at the transmitter .let denote the -pam signal set from which ( entry of ) takes values , .now , define a -dimensional signal space to be the cartesian product of to .the ml solution is then given by whose complexity is exponential in .we focus on the detection of square ( i.e. , ) , full - rate ( i.e. , ) , circulant ( where the weight matrices s are permutation type ) , non - orthogonal stbcs from cda , whose construction for arbitrary number of transmit antennas is given by the matrix in eqn.(9.a ) given at the bottom of this page . in ( 9.a ) , , , and , are the data symbols from a qam alphabet . when , the code in ( 9.a ) is information lossless ( ill ) , and when and , it is of full - diversity and information lossless ( fd - ill ) .high spectral efficiencies with large can be achieved using this code construction . 
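As a small illustration of the complex-to-real conversion just described, the following sketch stacks a complex linear model y = Hx + n into the real-valued form y_r = H_r x_r + n_r, with x_r collecting the in-phase and quadrature parts. The ordering of the components follows one common convention, which may differ from the exact ordering used in the paper, and all names are illustrative.

```python
import numpy as np

def to_real_system(H, y):
    """Real-valued equivalent of the complex model y = H x + n,
    with x_r = [Re(x); Im(x)] and y_r = [Re(y); Im(y)]."""
    H_r = np.block([[H.real, -H.imag],
                    [H.imag,  H.real]])
    y_r = np.concatenate([y.real, y.imag])
    return H_r, y_r

# toy check on a small random channel with 4-QAM symbols
rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))) / np.sqrt(2)
x = (rng.choice([-1, 1], 2) + 1j * rng.choice([-1, 1], 2)) / np.sqrt(2)
y = H @ x
H_r, y_r = to_real_system(H, y)
x_r = np.concatenate([x.real, x.imag])
assert np.allclose(H_r @ x_r, y_r)   # the noise-free model is preserved exactly
```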
however , since these stbcs are non - orthogonal , ml detection gets increasingly impractical for large .consequently , a key challenge in realizing the benefits of these large stbcs in practice is that of achieving near - ml performance for large at low decoding complexities .the ber performance results we report in sec .[ sec4 ] show that the pda - based decoding algorithm we propose in the following section essentially meets this challenge .in this section , we present the proposed pda - based decoding algorithm for square qam .the applicability of the algorithm to any rectangular qam is straightforward . in the real - valued system model in ( [ systemmodelii ] ) , each entry of belongs to a -pam constellation , where is the size of the original square qam constellation .let denote the constituent bits of the entry of .we can write the value of each entry of as a linear combination of its constituent bits as let , defined as ^t\hspace{-1 mm } , \hspace{-0mm}\ ] ] denote the transmitted bit vector . defining $ ] , we can write as where is the identity matrix . using ( [ b2seq ] ) , we can rewrite ( [ systemmodelii ] ) as where is the effective channel matrix .our goal is to obtain , an estimate of the vector . for this, we iteratively update the statistics of each bit of , as described in the following subsection , for a certain number of iterations , and hard decisions are made on the final statistics to get .the algorithm is iterative in nature , where statistic updates , one for each of the constituent bits , are performed in each iteration .we start the algorithm by initializing the a priori probabilities as , and . in an iteration ,the statistics of the bits are updated sequentially , i.e. , the ordered sequence of updates in an iteration is .the steps involved in each iteration of the algorithm are derived as follows .the likelihood ratio of bit in an iteration , denoted by , is given by denoting the column of by , we can write ( [ revsystemmodelii ] ) as where is the interference plus noise vector . to calculate ,we approximate the distribution of to be gaussian , and hence is gaussian conditioned on . since there are terms in the double summation in ( [ ext_1 ] ) , this gaussian approximation gets increasingly accurate for large ( note that ) .since a gaussian distribution is fully characterized by its mean and covariance , we evaluate the mean and covariance of given and . for notational simplicity ,let us define and .it is clear that .let and , where denotes the expectation operator .now , from ( [ ext_1 ] ) , we can write as similarly , we can write as next , the covariance matrix of given is given by \nonumber \\ & & \hspace{-7 mm } \big [ { \bf n } + \hspace{-1 mm } \sum_{l=0}^{2k-1 } \hspace{-1 mm } \sum_{{\stackrel{m=0}{m\neq q(i - l)+j}}}^{q-1}\hspace{-3 mm } { \bf h}_{ql+m } ( b_l^{(m ) } - 2p_l^{m+}+ 1)\big]^t\bigg\}. \end{aligned}\ ] ] assuming independence among the constituent bits , we can simplify in ( [ covpm1 ] ) as using the above mean and covariance expressions , we can write the distribution of given as similarly , is given by using ( [ pdf_p1 ] ) and ( [ pdf_m1 ] ) , can be written as using and , is computed using ( [ llr_i_j_k ] ) .now , using the value of , the statistics of is updated as follows . from ( [ llr_i_j_k ] ) , and using , we have and as an approximation , dropping the conditioning on , and using the above procedure , we update and for all and sequentially .this completes one iteration of the algorithm ; i.e. 
, each iteration involves the computation of and equations ( [ exp1 ] ) , ( [ exm1 ] ) , ( [ covpm2 ] ) , ( [ ext_fin1 ] ) , ( [ llr_i_j_k ] ) , ( [ appx1 ] ) , and ( [ appx2 ] ) for all .the updated values of and in ( [ appx1 ] ) and ( [ appx2 ] ) for all are fed back to the next iteration .the algorithm terminates after a certain number of such iterations . at the end of the last iteration, hard decision is made on the final statistics to obtain the bit estimate as if , and otherwise . in coded systems , s are fed as soft inputs to the decoder .the most computationally expensive operation in computing is the evaluation of the inverse of the covariance matrix , , of size which requires complexity , which can be reduced as follows .define matrix as at the start of the algorithm , with and initialized to 0.5 for all , * d * becomes .we note that when the statistics of is updated using ( [ appx1 ] ) and ( [ appx2 ] ) , the matrix in ( [ ckdef ] ) also changes .a straightforward inversion of this updated matrix would require complexity .however , we can obtain the from the previously available in complexity as follows .since the statistics of only is updated , the new matrix is just a rank one update of the old matrix .therefore , using the matrix inversion lemma , the new can be obtained from the old as where where and are the new .e ., after the update in ( [ appx1]) and ( [ appx2 ] ) and old ( before the update ) values , respectively . it can be seen that both the numerator and denominator in the 2nd term on the rhs of ( [ ckcorr ] ) can be computed in complexity .therefore , the computation of the new using the old can be done in complexity ._ computation of : _ using ( [ ckdef ] ) and ( [ covpm2 ] ) , we can write in terms of as we can compute from at a reduced complexity using the matrix inversion lemma , which states that substituting , , , and in ( [ cinv_cmp2 ] ) , we get which can be computed in complexity . _computation of and : _ computation of involves the computation of and also . from ( [ exm1 ] ) , it is clear that can be computed from with a computational overhead of only . from ( [ exp1 ] ) , it can be seen that computing would require complexity . however , this complexity can be reduced as follows .define vector as using ( [ exp1 ] ) and ( [ ekdef ] ) , we can write * u * can be computed iteratively at complexity as follows .when the statistics of is updated , we can obtain the new from the old as whose complexity is .hence , the computation of in ( [ ekdef ] ) and in ( [ ekdef2 ] ) needs complexity .the listing of the proposed pda algorithm is summarized in the table - i in the next page . ' '' '' table - i : proposed pda - based algorithm listing ' '' '' \1 . , , .\2 . . : number of iterations \4 . ; is the iteration number \5 . for = 0 to \6 . for = 0 to \7 . \8 . \9 . \10 . \11 . , \12 . \13 . \14 . , \15 . \16 . \17 . \18 .end ; end of for loop starting at line 5 \19 .if ( ) goto line 21 \20 . , goto line 5 \21 . \22 . \23 . 
terminate ' '' ''we need to compute at the start of the algorithm .this requires complexity .so the computation of the initial in line 2 requires .based on the complexity reduction in sec .[ complex ] , the complexity in updating the statistics of one constituent bit ( lines 7 to 17 ) is .so , the complexity for the update of all the constituent bits in an iteration is .since the number of iterations is fixed , the overall complexity of the algorithm is .for , since there are symbols per stbc and bits per symbol , the overall complexity per bit is .in this section , we present the simulated uncoded / coded ber of the pda algorithm in decoding non - orthogonal stbcs from cda ) and ill ( ) stbcs with pda decoding were almost the same . here , we present the performance of ill stbcs . ] .number of iterations in the pda algorithm is set to in all the simulations . _pda versus las performance with 4-qam : _ in fig .[ fig1 ] , we plot the uncoded ber of the pda algorithm as a function of average received snr per rx antenna , , in decoding , , stbcs from cda with and 4-qam . perfect channel state information at the receiver ( csir ) and i.i.d fading are assumed .for the same settings , the performance of the las algorithm in - with mmse initial vector are also plotted for comparison . from fig .[ fig1 ] , it is seen that * the ber performance of pda algorithm improves and approaches siso awgn performance as is increased ; e.g. , performance close to within about 1 db from siso awgn performance is achieved at uncoded ber in decoding stbc from cda having 512 real dimensions , and this illustrates the ability of the pda algorithm to achieve excellent performance at low complexities in large non - orthogonal stbc mimo . * with 4-qam ,pda and las algorithms achieve almost the same performance . _pda versus las performance with 16-qam : _ figure [ fig2 ] presents an uncoded ber comparison between pda and las algorithms for stbc from cda with and 16-qam under perfect csir and i.i.d fading .it can be seen that the pda algorithm performs better at low snrs than the las algorithm .for example , with and stbcs , at low snrs ( e.g. , db for stbc ) , pda algorithm performs better by about 1 db compared to las algorithm at uncoded ber . _ turbo coded ber performance of pda : _ figure [ fig3 ] shows the rate-3/4 turbo coded ber of the pda algorithm under perfect csir and i.i.d fading for ill stbc with and 4-qam , which corresponds to a spectral efficiency of 18 bps / hz . the theoretical minimum snr required to achieve 18 bps / hz spectral efficiency on a mimo channel with perfect csir and i.i.d fading is 4.3 db ( obtained through simulation of the ergodic capacity formula ) . 
from fig .[ fig3 ] , it is seen that the pda algorithm is able to achieve vertical fall in coded ber within about 5 db from the theoretical minimum snr , which is a good nearness to capacity performance .we relax the perfect csir assumption by considering a training based iterative pda decoding / channel estimation scheme .transmission is carried out in frames , where one pilot matrix ( for training purposes ) followed by data stbc matrices are sent in each frame as shown in fig .one frame length , , ( taken to be the channel coherence time ) is channel uses .the proposed scheme works as follows : obtain an mmse estimate of the channel matrix during the pilot phase , use the estimated channel matrix to decode the data stbc matrices using pda algorithm , and iterate between channel estimation and pda decoding for a certain number of times . for stbc from cda , in addition to perfect csir performance , fig .[ fig3 ] also shows the performance with csir estimated using the proposed iterative decoding / channel estimation scheme for and .2 iterations between decoding and channel estimation are used . with ( which corresponds to large coherence times , i.e. , slow fading ) the ber and bps / hz with estimated csir get closer to those with perfect csir ._ effect of spatial mimo correlation : _ in figs .[ fig1 ] to [ fig3 ] , we assumed i.i.d fading . but spatial correlation at transmit / receive antennas and the structure of scattering and propagation environment can affect the rank structure of the mimo channel resulting in degraded performance .we relaxed the i.i.d . fading assumption by considering the correlated mimo channel model in , which takes into account carrier frequency ( ) , spacing between antenna elements ( ) , distance between tx and rx antennas ( ) , and scattering environment . in fig .[ fig5 ] , we plot the ber of the pda algorithm in decoding stbc from cda with perfect csir in i.i.d . fading , and correlated mimo fading model in . it is seen that , compared to i.i.d fading , there is a loss in diversity order in spatial correlation for ; further , use of more rx antennas ( ) alleviates this loss in performance .we can decode _ perfect codes _, of large dimensions also using the proposed pda algorithm .
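To make the per-bit statistic update described above concrete, the following is a minimal sketch of PDA detection for the real-valued antipodal model y = Hb + n with b_i in {-1, +1}, i.e. the bit-level model obtained after the bit expansion (for 4-QAM the symbol- and bit-level models coincide up to scaling). The sketch re-forms and re-inverts the interference-plus-noise covariance from scratch for every bit, so it deliberately omits the matrix-inversion-lemma and running-sum reductions of the algorithm listing and does not reflect the reduced per-bit complexity; the noise variance is assumed known and all names are illustrative.

```python
import numpy as np

def pda_detect(H, y, sigma2, n_iter=10):
    """Probabilistic data association detection of antipodal bits b in
    y = H b + n, n ~ N(0, sigma2 * I).  Returns hard decisions in {-1, +1}."""
    n_dim, n_bits = H.shape
    p = np.full(n_bits, 0.5)                 # P(b_i = +1), uninformative start
    for _ in range(n_iter):
        for i in range(n_bits):
            mu = 2.0 * p - 1.0               # current soft means E[b_j]
            var = 1.0 - mu ** 2              # current variances Var[b_j]
            h_i = H[:, i]
            m_int = H @ mu - h_i * mu[i]     # mean of interference from the other bits
            C = sigma2 * np.eye(n_dim) + (H * var) @ H.T - var[i] * np.outer(h_i, h_i)
            r = y - m_int                    # observation minus the interference mean
            llr = 2.0 * h_i @ np.linalg.solve(C, r)   # Gaussian-approximation LLR of bit i
            p[i] = 1.0 / (1.0 + np.exp(-np.clip(llr, -30.0, 30.0)))
    return np.where(p >= 0.5, 1, -1)
```

In the coded simulations above, the final hard decision would instead be replaced by passing the per-bit statistics as soft inputs to the turbo decoder, as noted in the algorithm description.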
non - orthogonal space - time block codes ( stbc ) from cyclic division algebras ( cda ) having large dimensions are attractive because they can simultaneously achieve both high spectral efficiencies ( same spectral efficiency as in v - blast for a given number of transmit antennas ) _ as well as _ full transmit diversity . decoding of non - orthogonal stbcs with hundreds of dimensions has been a challenge . in this paper , we present a probabilistic data association ( pda ) based algorithm for decoding non - orthogonal stbcs with large dimensions . our simulation results show that the proposed pda - based algorithm achieves near siso awgn uncoded ber as well as near - capacity coded ber ( within about 5 db of the theoretical capacity ) for large non - orthogonal stbcs from cda . we study the effect of spatial correlation on the ber , and show that the performance loss due to spatial correlation can be alleviated by providing more receive spatial dimensions . we report good ber performance when a training - based iterative decoding / channel estimation is used ( instead of assuming perfect channel knowledge ) in channels with large coherence times . a comparison of the performances of the pda algorithm and the likelihood ascent search ( las ) algorithm ( reported in our recent work ) is also presented .
estimating extreme risks in a multivariate framework is highly connected with the estimation of the extremal dependence structure . this structure can be described _ via _ the stable tail dependence function ( s.t.d.f . ) , first introduced by . for any arbitrary dimension , consider a multivariate vector with continuous marginal cumulative distribution functions ( c.d.f . )the s.t.d.f .is defined for each positive reals as \\[-8pt]\nonumber & & \qquad = l(x_1,\ldots , x_d).\end{aligned}\ ] ] assuming that such a limit exists and is nondegenerate is equivalent to the classical assumption of existence of a multivariate domain of attraction for the componentwise maxima ; see , for example , , chapter 7 .the previous limit can be rewritten as = l(x_1,\ldots , x_d),\ ] ] where denotes the multivariate c.d.f . of the vector , and for , .consider a sample of size drawn from and an intermediate sequence , that is to say a sequence tending to infinity as , with .denote by a vector of the positive quadrant and by the order statistics among realizations of the margins .the empirical estimator of is obtained from ( [ eql ] ) , replacing by its empirical version , by , and for by its empirical counterpart ,n} ] , for . *the _ second - order condition _ consists of assuming the existence of a positive function , such that as , and a nonnull function such that for all with positive coordinates , - l({\mathbf x } ) \bigr\ } \nonumber\\[-8pt]\\[-8pt]\nonumber & & \qquad = m({\mathbf x}),\end{aligned}\ ] ] uniformly on any ^d ] , for .this implicitly requires that is not a multiple of the function ; see remark [ rmmn ] .[ rkhomol ] the function defined by ( [ eql ] ) and that appears in ( [ eq2ndorder ] ) and ( [ eq3rdorder ] ) is homogeneous of order 1 .we refer , for instance , to , pages 213 and 236 .most of the estimators constructed in this paper use the homogeneity property .note that pointwise convergence in ( [ eql ] ) entails uniform convergence on the square ^d ] be the space of real valued functions that are right - continuous with left - limits . now introduce the conditions : * the third - order condition is satisfied , so that ( [ eq2ndorder ] ) and ( [ eq3rdorder ] ) hold ; * the coefficients of regular variation and of the functions and defined in ( [ eq2ndorder ] ) and ( [ eq3rdorder ] ) are negative ; * the function defined in ( [ eq2ndorder ] ) is differentiable and defined in ( [ eq3rdorder ] ) is continuous .[ propdev - asympt - l ] assume that the conditions of proposition [ propcv - ps - l ] are fulfilled and that the set of conditions hold .consider the estimator of defined by ( [ eq1storderhat ] ) where is such that and .then as tends to infinity , in ^d) ] given in terms of the measure defined by ( [ eqmu ] ) and of : there exists such that .[ rkasymptbias ] a difference between the previous result and theorem 7.2.2 of consists of the choice of the intermediate sequence that is larger here .indeed , we suppose whereas they choose , which implies . 
our choice requires the more informative second - order condition ( [ eq2ndorder ] ) .a nonnull asymptotic bias appears in our framework .the conditions on , and required in proposition [ propdev - asympt - l ] are not too restrictive : because of the regular variation of and , they are implied by the choice , with .as pointed out in remark [ rkasymptbias ] , a nonnull asymptotic bias appears from proposition [ propdev - asympt - l ] .the bias reduction procedure will consist of subtracting the estimated asymptotic bias obtained in section [ subsecmethoda ] .the key ingredient is the homogeneity of the functions and mentioned in remarks [ rkhomol ] and [ rkpropmn ] .this homogeneity will also provide other constructions to get rid of the asymptotic bias .equation ( [ eqasympt - dev - l ] ) suggests a natural correction of as soon as an estimator of is available . in order to take advantage of the homogeneity of ,let us introduce a positive scale parameter which allows to contract or to dilate the observed points .we denote and from ( [ eqasympt - dev - l ] ) one gets in ^d) ] for every , where is a continuous centered gaussian process defined by with covariance = { \mathbb e}[z_l({\mathbf x } ) z_l({\mathbf y } ) ] ( 1 - b^{-1/2 } + a^{-1/2 } ) ^2 ] .the procedure of bias reduction introduced in the previous section requires the estimation of the second - order parameter .it is actually possible to avoid it , making use of combinations of estimators of .the asymptotic bias of is , as already noted from ( [ eqdev - asympt - la - l ] ) . making use of ( [ eqdeltacv ] ) and homogeneity of ,one gets as tends to infinity , for any intermediate sequence that satisfies .the expression can thus be used as an estimator of the asymptotic bias of .after simplifications , this leads to a new family of asymptotically unbiased estimators of by substracting the estimated bias from , namely which is well defined for any real number such that .[ thmbias - lu ] assume that the conditions of proposition [ propdev - asympt - l ] are fulfilled , and consider the estimator of defined by ( [ defestimc ] ) .let be an intermediate sequence such that converges in distribution .suppose also that is such that , , and .assume moreover that the function never vanishes except on the axes .then , as tends to infinity , in ^d) ] given by \times\break ( a^{-\rho } -1)^{-2 } ( a^{-\rho } - a^{-1/2 } ) ^2 ] leads to where this allows us to identify .the identification of second and third - order terms has previously be derived by .the purpose of this section is to evaluate the performance of the estimators of introduced in section [ secprocedure ] . for simplicity, we will focus on dimension 2 , and simulate samples from the distributions presented in section [ secexamples ] .thanks to the homogeneity property , one can focus on the estimation of for , which coincides with the pickands dependence function ; see , for example , , page 267 .considering first the estimation at leads to the definition of aggregated versions of our estimators .these new estimators will be both compared in terms of -errors for or associated level curves .let us start with the estimation of for the bivariate student distribution with 2 degrees of freedom .this model is a particular case of sections [ subsecresnick - style ] and [ subsecelliptical ] . 
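For concreteness, a rank-based sketch of the empirical estimator of the s.t.d.f. and of the homogeneity-based rescaling used by the bias-correction devices above is the following; the treatment of ties and the exact order statistics involved follow one common convention and may differ in minor details from the definition adopted in the paper, and the names are illustrative.

```python
import numpy as np

def stdf_empirical(X, k, x):
    """Rank-based empirical stable tail dependence function at the point x,
    built from the k largest observations in each margin of the (n, d) sample X."""
    n, d = X.shape
    ranks = X.argsort(axis=0).argsort(axis=0) + 1          # marginal ranks 1..n (no ties assumed)
    extreme = ranks > n - k * np.asarray(x, dtype=float)   # exceeds its marginal threshold
    return extreme.any(axis=1).mean() * n / k

def stdf_rescaled(X, k, x, a):
    """Rescaled version l_hat(a*x)/a, exploiting the order-1 homogeneity of l."""
    return stdf_empirical(X, k, a * np.asarray(x, dtype=float)) / a
```

For example, stdf_empirical(X, k, [1.0, 1.0]) estimates l(1, 1), the quantity examined as a function of k in the simulations that follow, and stdf_rescaled gives the contracted or dilated versions entering the bias-corrected estimators.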
for one sample of size 1000, figure [ sec - simu1-stu2 ] gives , as functions of , the estimation of at point by , and , respectively , defined by ( [ eq1storderhat ] ) , ( [ defestimb ] ) and ( [ defestimc ] ) . for the last two estimators ,the parameters have been tuned as follows : , and estimated using ( [ defrho - hat ] ) with .these values have been empirically selected based on intensive simulation , and will be kept throughout the paper . for the bivariate law based on a sample of size 1000 . ]one can check from figure [ sec - simu1-stu2 ] that the empirical estimator behaves fairly well in terms of bias for small values of . besides, the bias is efficiently corrected by the two estimators and .since the bias almost vanishes along the range of , one can think about reducing the variance through an aggregation in ( via mean or median ) of or .this leads us to consider the two following estimators : where is the sample size and is an appropriate fraction of .their performance will be compared to those of the family .simplified notation will be used instead of . because any s.t.d.f . satisfies , the competitors have been corrected so that they satisfy the same inequalities .if satisfies the condition imposed on in theorems [ thmbiasb ] and [ thmbias - lu ] , then the aggregated estimators and would inherit the asymptotic properties of and .indeed , all the estimators jointly converge , since they are based on a single process .[ remmixture ] in the following simulation study , is arbitrarily fixed to .such a choice is open to criticism since it does not satisfy the theoretical assumptions mentioned in the previous remark .but it is motivated here by the fact that the bias happened to be efficiently corrected , even for very large values of , as already illustrated on figure [ sec - simu1-stu2 ] .note , however , that such a choice would not be systematically the right one . in presence of more complex models such as mixtures , should not exceed the size of the subpopulation with heaviest tail . to illustrate this point ,take , for example , the bivariate c.d.f . , where is the c.d.f . of the bivariate model , and is the uniform c.d.f . on the s.t.d.f . is , and only of the data belong to the targeted domain of attraction , so should not exceed .classical criteria of quality of an estimator of are the absolute bias ( abias ) and the mean square error ( mse ) defined by where is the number of replicates of the experiment and is the estimate from the sample .note that what we call _ abias _ is also referred as _ mae _ ( for mean absolute error ) in the literature .figure [ figabias - mse - st ] plots these criteria in the estimation of for the bivariate model when and . in the bivariate modelwhen as a function of . ]figure [ figabias - mse - st ] exhibits the strong dependence of the behavior of in terms of , as well as the efficiency of the bias correction procedures .the estimator given by ( [ defestimb ] ) outperforms the estimator defined by ( [ defestimc ] ) , no matter the value of .moreover , the abias and mse curves associated to almost reach the minimum of those of . 
finally , the aggregated version answers surprisingly well to the estimation problem of the s.t.d.f .first , its performance is similar to the best reachable from the original estimator .second , it gets rid of the delicate choice of a threshold ( or would at least simplify this choice ; see remark [ remmixture ] ) .these comparisons have also been made for five other models obtained from section [ secexamples ] .the results are very similar to the ones obtained for the bivariate distribution and are therefore not presented .the comparisons are now handled not only at a single point , but for the whole function using an -error defined as follows : where is the size of the subdivision of ] for every and , with figure [ figbox - rho ] illustrates the finite sample behavior of this estimator of for a collection of bivariate models introduced in section [ secexamples ] , for which the true value of is equal to . given by ( [ defrho - hat ] ) using samples of size 1000 drawn from six models : ; ; symmetric logistic with ; archimax model with logistic generator with ; archimax model with mixed generator .red line indicates the true value of . ]these boxplots show that the estimator performs reasonably well in median , no matter the choice of model , but the uncertainty is rather important .fortunately this seems from simulation studies to have only minor influence on the estimation of . recall that from ( [ eqdev - asympt - la - l ] ) the asymptotic bias of is given by . in order to circumvent an estimation of the term ,a renormalization is needed , focusing , for instance , on the estimation of where .thanks to ( [ eqdeltacv ] ) , this ratio can be consistently estimated by as soon as is a well - chosen intermediate sequence .the asymptotic normality can also be derived from analogous arguments to those used in the proof of proposition [ prop3estim - rho ] .details are not presented here for the sake of simplicity .figure [ figl1-norm - m ] summarizes the behavior of the estimator of the curve through boxplots of the -error , defined as in ( [ eqnorm1 ] ) .we observe from this figure that the best estimation is reached for large values of .this feature does not depend on the degree of asymptotic dependence in the symmetric logistic model , nor on the strength of the bias of the original estimator detected on figure [ figl1-norm - l ] .these graphs confirm that the asymptotic bias is remarkably well estimated for large values of .this helps to understand why the bias subtraction is accurate for large or very large choices of , as also commented in section [ subseccorrections ] .-error of -curve .first row : bivariate logistic model with ( left ) and with ( right ) .second row : bivariate logistic model with ( left ) and bivariate archimax with mixed generator ( right ) . ]this paper deals with the estimation of the extremal dependence structure in a multivariate context . 
focusing on the s.t.d.f ., the empirical counterpart is the nonparametric reference .a common feature when modeling extreme events is the delicate choice of the number of observations used in the estimation , and it spoils the good performance of this estimator .the aim of this paper has been to correct the asymptotic bias of the empirical estimator , so that the choice of the threshold becomes less sensitive .two asymptotically unbiased estimators have been proposed and studied , both theoretically and numerically .the estimator defined in section [ subsecmethodb ] proves to outperform the original estimator , whatever the model considered .its aggregated version defined in section [ subseccorrections ] appears as a worthy candidate to estimate the s.t.d.f .proof of proposition [ propcv - ps - l ] denote by the uniform random variables for .introducing allows us to rewrite as the following : ,n } , \ldots , \frac{n}{k}u^{(d)}_{[kx_d],n } \biggr).\ ] ] write ,n } , \ldots , \frac{n}{k}u^{(d)}_{[kx_d],n } \biggr ) \\ & & \quad\qquad { } - \frac{n}{k } \bigl [ 1- f\bigl\{f_1^{-1}\bigl(1-u^{(1)}_{[kx_1],n } \bigr),\ldots , f_d^{-1}\bigl(1-u^{(d)}_{[kx_d],n } \bigr ) \bigr\}\bigr ] \\ & & \quad\qquad{}+ \frac{n}{k } \bigl [ 1- f\bigl\{f_1^{-1 } \bigl(1-u^{(1)}_{[kx_1],n}\bigr),\ldots , f_d^{-1 } \bigl(1-u^{(d)}_{[kx_d],n}\bigr ) \bigr\}\bigr ] \\ & & \quad\qquad { } - l \biggl ( \frac{n}{k } u^{(1)}_{[kx_1],n } , \ldots , \frac{n}{k}u^{(d)}_{[kx_d],n } \biggr ) \\ & & \quad\qquad { } + l \biggl(\frac{n}{k } u^{(1)}_{[kx_1],n } , \ldots , \frac { n}{k}u^{(d)}_{[kx_d],n } \biggr ) - l({\mathbf x}),\end{aligned}\ ] ] and denote [ resp . , and the first line ( resp . , second and third lines ) of the right - hand side .applying [ ( ) , proposition 7.2.3 ] leads to in ^d) ] in ( [ eq2ndorder ] ) yields ,n } , \ldots , \frac{n}{k}u^{(d)}_{[kx_d],n } \biggr)\biggr{\vert}\to0 \qquad\mbox{a.s . }\ ] ] then the result follows from ,n } , \ldots , \frac{n}{k}u^{(d)}_{[kx_d],n } \biggr)\biggr{\vert}\to0 \qquad\mbox{a.s . } , \ ] ] which is obtained combining ( [ eqeksmarg1 ] ) and the continuity of the function .proof of proposition [ propdev - asympt - l ] we use the notation introduced in the proof of proposition [ propcv - ps - l ] .thanks to the skorohod construction , we can start from ( [ eqskorohoda1 ] ) . combined with ( [ eqskorohoda3 ] ) , it is sufficient to prove the convergence note that the third - order condition , the uniformity on ^d ] for every .proof of lemma [ lemdelta ] making use of the homogeneity of the function , write using the skorohod construction , it follows from equations ( [ eqasympt - dev - l ] ) and ( [ eqdev - asympt - la - l ] ) that tends to 0 almost surely , as tends to infinity. proof of theorem [ thmbias - lu ] note that under a skorohod construction , lemma [ lemdelta ] allows us to write the expansions of the terms , and , which implies on the one hand \\[-8pt]\nonumber & & \qquad\quad { } + \frac{1}{\sqrt{k_\rho } \alpha(n / k_\rho ) } \bigl\ { a^{-1 } z_l \bigl(a^2 { \mathbf x}\bigr ) -2 z_l(a { \mathbf x } ) + a z_l({\mathbf x } ) \bigr\ } \\ & & \quad\qquad { } + o \biggl ( \frac{1}{\sqrt{k_\rho } \alpha(n / k_\rho ) } \biggr),\nonumber\end{aligned}\ ] ] and on the other hand , both uniformly for ^d$ ] . 
combining ( [ eqdeltaterm1 ] ) and ( [ eqdeltaterm2 ] ) with equation ( [ eqasympt - dev - l ] ) , one gets since the last expression and equation ( [ eqdeltaterm1 ] ) are , respectively , the numerator and denominator of , one obtains , after simplification , since does not vanish by assumption .the choice of the sequences and allows us to conclude since .proof of proposition [ proplim - quotient ] applying lemma [ lemdelta ] , we have as a consequence , since by assumption .writing and using twice equation ( [ eqsupdelta ] ) leads to the conclusion .proof of proposition [ prop3estim - rho ] define .lemma [ lemdelta ] used twice yields where is defined in proposition [ prop3estim - rho ] . since ,the result follows straightforwardly from ( [ eqdev - asympt - q ] ) and the delta method .we wish to thank armelle guillou for pointing out a deficiency in the original version of the paper , as well as several misprints . we thank the referees for very helpful comments .
The estimation of the extremal dependence structure is spoiled by the impact of the bias, which increases with the number of observations used for the estimation. Although bias correction is well understood in the univariate setting, this paper studies the correction procedure in the multivariate framework. New families of estimators of the stable tail dependence function are obtained; they are asymptotically unbiased versions of the empirical estimator introduced by Huang [Statistics of Bivariate Extremes (1992), Erasmus Univ.]. Since the new estimators behave regularly with respect to the number of observations used in the estimation, aggregated versions can be deduced, so that the choice of the threshold is substantially simplified. An extensive simulation study is provided, as well as an application to real data.
linear mixed models and the model - based estimators including empirical bayes ( eb ) estimator or empirical best linear unbiased predictor ( eblup ) have been studied quite extensively in the literature from both theoretical and applied points of view . of these ,the small area estimation ( sae ) is an important application , and methods for sae have received much attention in recent years due to growing demand for reliable small area estimates . for a good review and account on this topic , see ghosh and rao ( 1994 ) , rao ( 2003 ) , datta and ghosh ( 2012 ) and pfeffermann ( 2014 ) .the linear mixed models used for sae are the fay - herriot model suggested by fay and herriot ( 1979 ) for area - level data and the nested error regression ( ner ) models given in battese , harter and fuller ( 1988 ) for unit - level data .especially , the ner model has been used in application of not only sae but also biological experiments and econometric analysis . besides the noise , a source of variationis added to explain the correlation among observations within clusters , or subjects , and to allow the analysis to ` borrow strength from other clusters .the resulting estimators , such as eb or eblup , for small - cluster means or subject - specific values provide reliable estimates with higher precisions than direct estimates like sample means . in the ner model with small - clusters ,let be individual observations from the -th cluster for , where is a -dimensional known vector of covariates .the normal ner model is written as where and denote the random effect and samping error , respectively , and they are mutually independently distributed as and .the mean of is for regression coefficients , and the variance of is decomposed as = \tau^2 + \si^2 . \label{hm0}\ ] ] which is the same for all the clusters .however , jiang and nguyen ( 2012 ) illustrated that the within - cluster sample variances change dramatically from cluster to cluster for the data given in battese , et al .also , the normality assumptions for random effects and error terms are not always appropriate in practice .thus , we want to address the issue of relieving these assumptions of normal ner models in the two directions : heterogeneity of variances and non - normality of underlying distributions . in real application, we often encounter the situation where the sampling variance is affected by the covariate . in such case ,the variance function is a useful tool for describing its relationship .variance function estimation has been studied in the literature in the framework of heteroscedastic nonparametric regression .for example , see cook and weisberg ( 1983 ) , hall and carroll ( 1989 ) , muller and stadtmuller ( 1987 , 1993 ) and ruppert , wand , holst and hossjer ( 1997 ) .thus , in this paper , we propose use of the technique to introduce the heteroscedastic variances into ner model without assuming normality of underlying distributions .the variance structure we consider is namely , the setup means that the sampling error has heteroscedastic variance .then we suggest the variance function model given by , where the details are explained in section [ sec : model ] .related to this paper , jiang and nguyen ( 2012 ) proposed the heteroscedastic nested error regression model with the setup that variance is proportional to , namely this is equivalent to the assumption that and . 
for setup ( [ hm2 ] ) ,jiang and nguyen ( 2012 ) assumed normality for and and demonstrated the quite interesting result that the maximum likelihood ( ml ) estimators of and are consistent for large , which implies that the resulting empirical bayes estimator estimates the bayes estimator consistently . in setup ( [ hm2 ] ) , however , there is no consistent estimator for the heteroscedastic variance , and the mean squared error ( mse ) of the eb can not be estimated consistently , since it depends on . to fix the inconsistent estimation of , maiti , ren and sinha ( 2014 ) suggested the hierarchical model such that s are random variables and has a gamma distribution .maiti , et al .( 2014 ) applied this setup to the fay - herriot model with statistics for estimating .however , the resulting eb estimator and the mse can not be expressed in closed forms .the same setup of was used recently by kubokawa , sugasawa , ghosh and choudhuri ( 2014 ) who derived explicit expressions of the eb estimator and the mse to second - order . in their simulation study , however , the finite sample properties of estimators of two hyper - parameters in the gamma prior distribution of are not so well . although the hierarchical models used in maiti , et al . ( 2014 ) and kubokawa , et al .( 2014 ) provide consistent estimators for model parameters and predictors , both models assume parametric hierarchical structures based on normal distributions of and . however , the normality assumption is not always appropriate and another heteroscedastic models are useful for such a situation when the normality assumption does not seem to be correct .in contrast to the existing results , the proposed model with variance function does not assume normality for either nor .the advantage of this paper is that the mse of the eb or eblup and its unbiased estimator are derived analytically in closed forms up to second - order without assuming normality for and .nonparametric approach to sae has been studied by jiang , lahiri and wan ( 2002 ) , hall and maiti ( 2006 ) , lohr and rao ( 2009 ) and others .most estimators of the mse have been given by numerical methods such as jackknife and bootstrap methods except for lahiri and rao ( 1995 ) , who provided an analytical second - order unbiased estimator of the mse in the fay - heriot model .hall and maiti ( 2006 ) developed a moment matching bootstrap method for nonparametric estimation of mse in nested error regression models .the suggested method is actually convenient but it requires bootstrap replication and has computational burden . in this paper , without assuming the normality , we derive not only second - order biases and variances of estimators for the model parameters , but also a closed expression for a second - order unbiased estimator of the mse in a closed form .thus our mse estimator does not require any resampling method and is useful in practical use .also our mse estimator can be regarded as a generalization of the robust mse estimator given in lahiri and rao ( 1995 ) .the paper is organized as follows : a setup of the proposed hner model and estimation strategy with asymptotic properties are given in section [ sec : model ] . in section [ sec : mse ] , we obtain the eblup and the second - order approximation of the mse .further , we provide the second - order unbiased estimators of mse by the analytical calculation . in section [ sec :sim ] , we investigate the performance of the proposed procedures through simulation and empirical studies . 
the technical proofs are given in the appendix .suppose that there are small clusters , and let be the pairs of observations from the -th cluster , where is a -dimensional known vector of covariates .we consider the heteroscedastic nested error regression model where is a -dimenstional unknown vector of regression coefficients , and and are mutually independent random variables with mean zero and variances and , which are denoted by it is noted that no specific distributions are assumed for and .it is assumed that the heteroscedastic variance of is given by where is a -dimensional known vector given for each cluster , and is a -dimensional unknown vector .the variance function is a known ( user specified ) function whose range is nonnegative .some examples of the variance function are given below .the model parameters are , and , whereas the total number of the model parameters is .let , and . then the model ( [ model ] ) is expressed in a vector form as where is an vector with all elements equal to one , and the covariance matrix of is for and . it is noted that the inverse of is expressed as where .further , let , , and .then , the matricial form of ( [ model ] ) is written as , where .now we give some examples of the variance function in ( [ vf ] ) .\(a ) in the case that the dispersion of the sampling error is proportional to the mean , it is reasonable to put and for the sub - vector of the covariate . for identifiability of , we restrict .\(b ) consider the case that clusters are decomposed into homogeneous groups with .then , we put which implies that note that for .thus , the models assumes that the clusters are divided into known groups with their variance are equal over the same groups .jiang and nguyen ( 2012 ) used a similar setting and argued that the unbiased estimator of the heteroscedastic variance is consistent when as , where denotes the number of elements in .\(c ) log linear functions of variance were treated in cook and weisberg ( 1983 ) and others .that is , is a linear function , and is written as . similarly to ( a ) , we put . for the above two cases ( a ) and ( b ) , we have , while the case ( c ) corresponds to . in simulation and empirical studies in section[ sec : sim ] , we use the log - linear variance model .as given in subsequent section , we show consistency and asymptotic expression of estimators for as well as and .we here provide estimators of the model parameters , and . when values of and are given , the vector of regression coefficients is estimated by the generalized least squares ( gls ) estimator this is not a feasible form since and are unknown .when estimators and are for and , we get the feasible estimator by replacing and in with their estimators . concerning estimation of , we use the second moment of observations s . from model ( [ model ] ) , it is seen that =\tau^2+\si^2(\z_{ij}'\bga).\ ] ] based on the ordinary least squares ( ols ) estimator , a moment estimator of is given by with substituting estimator into , where . for estimation of ,we consider the within difference in each cluster .let be the sample mean in the -th cluster , namely .it is noted that for , which dose not include the term of .then it is seen that =\left(1 - 2n_i^{-1}\right)\si^2(\z_{ij}'\bga)+n_i^{-2}\sum_{h=1}^{n_i}\si^2(\z_{ih}'\bga),\ ] ] which motivates us to estimate by solving the following estimating equation given by \z_{ij}=\0,\ ] ] which is equivalent to =\0\ ] ] where . 
it is noted that , in case of homoscedastic case , namely , the estimator and reduces to the estimator identical to prasad - rao estimator ( prasad and rao , 1990 ) up to the constant factor .note that the objective function ( [ bga ] ) for estimation of does not depend on and and that the estimator of depends on .these suggest the following algorithm for calculating the estimates of the model parameters : we first obtain the estimate of by solving ( [ bga ] ) , and then we get the estimate from ( [ tau ] ) with .finally we have the gls estimate with substituting and in ( [ bbe ] ) .in this section , we provide large sample properties of the estimators given in the previous subsection when the number of clusters goes to infinity , but s are still bounded . to establish asymptotic results, we assume the following conditions under .* assumption ( a ) * * there exist and such that for .the dimensions and are bounded , namely .the number of clusters with one observaion , namely , is bounded . *the variance function is twice differentiable and its derivatives are denoted by and , respectively .* the following matrices converge to non - singular matrices : for and . *the forth moments of and exist , namely <\infty ] . the conditions 1 and 3 are the standard assumptions in small area estimation .the condition 2 is also non - restrictive , and the simple variance function and obviously satisfies the assumption .the moment condition 4 is necessary for existence of mse of the eblup , and it is satisfied by many continuous distributions , including normal , shifted gamma , laplace and -distribution with degrees of freedom larger than 5 .in what follows , we use the notations for simplicity . to derive asymptotic approximations of the estimators , we define the following statistics in the -th cluster : .\label{u2}\end{aligned}\ ] ] moreover , we define noting that and under assumption ( a ). then we obtain the asymptotically linear expression of the estimators .[ thm : asymp ] let be the estimator of . under assumption ( a ), it follows that with the asymptotically linear expression where from theorem [ thm : asymp ] , it follows that have an asymptotically normal distribution with mean vector and covariance matrix , where is a matrix partitioned as & e[\psi_i^{\bbe}\psi_i^{\bga ' } ] & e[\psi_i^{\bbe}\psi_i^{\tau}]\\ e[\psi_i^{\bga}\psi_i^{\bbe ' } ] & e[\psi_i^{\bga}\psi_i^{\bga ' } ] & e[\psi_i^{\bga}\psi_i^{\tau}]\\ e[\psi_i^{\tau}\psi_i^{\bbe ' } ] & e[\psi_i^{\tau}\psi_i^{\bga ' } ] & e[\psi_i^{\tau}\psi_i^{\tau } ] \end{array}\right).\end{aligned}\ ] ] it is noticed that =0 ] when are normally distributed . in such a case , it follows and , namely and are asymptotically orthogonal .however , since we do not assume that normality for observations , and are not necessarily orthogonal .the asymptotic covariance matrix or can be easily estimated from samples .for example , ] . in the general forms of , the minimizer ( best predictor ) of the msecan not be obtain without a distributional assumption for and .thus we focus on the class of linear and unbiased predictors , and the best linear unbiased predictor ( blup ) of in terms of the mse is given by this can be simplified to where for . in case of homogeneous variances , namely , it is confirmed that the blp reduces to with as given in hall and maiti ( 2006 ) .the blup is not feasible since it depends on unknown parameters , and .plugging the estimators into , we get the empirical best linear unbiased predictor ( eblup ) for . 
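To make the prediction step concrete, the sketch below computes the GLS estimator of beta and the EBLUP of the cluster means for a log-linear variance function, with estimates of tau^2 and gamma assumed to be already available from the moment-based estimators described above. The BLUP formula used is the standard linear mixed-model form, which is consistent with the model here; the argument xbar_i (the covariate value at which the cluster mean is predicted) is an illustrative assumption.

```python
import numpy as np

def sigma2(Z, gamma):
    """Log-linear variance function sigma^2(z'gamma) = exp(z'gamma), case (c) in the text."""
    return np.exp(Z @ gamma)

def gls_beta(y_list, X_list, Z_list, tau2, gamma):
    """Generalized least squares estimator of beta given (tau2, gamma)."""
    XtSiX, XtSiy = 0.0, 0.0
    for y_i, X_i, Z_i in zip(y_list, X_list, Z_list):
        n_i = len(y_i)
        Sigma_i = tau2 * np.ones((n_i, n_i)) + np.diag(sigma2(Z_i, gamma))
        Si = np.linalg.inv(Sigma_i)
        XtSiX = XtSiX + X_i.T @ Si @ X_i
        XtSiy = XtSiy + X_i.T @ Si @ y_i
    return np.linalg.solve(XtSiX, XtSiy)

def eblup_means(y_list, X_list, Z_list, xbar_list, tau2, gamma):
    """EBLUP of the cluster means mu_i = xbar_i' beta + v_i (standard mixed-model BLUP form)."""
    beta = gls_beta(y_list, X_list, Z_list, tau2, gamma)
    mu_hat = []
    for y_i, X_i, Z_i, xbar_i in zip(y_list, X_list, Z_list, xbar_list):
        n_i = len(y_i)
        Sigma_i = tau2 * np.ones((n_i, n_i)) + np.diag(sigma2(Z_i, gamma))
        v_hat = tau2 * np.ones(n_i) @ np.linalg.solve(Sigma_i, y_i - X_i @ beta)
        mu_hat.append(xbar_i @ beta + v_hat)
    return beta, np.array(mu_hat)
```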
in the subsequent section ,we consider the mean squared errors ( mse ) of eblup ( [ eblup ] ) without any distributional assumptions for and .to evaluate uncertainty of eblup given by ( [ eblup ] ) , we evaluate the mse defined as ] .this term vanishes under the normality assumptions for and , but in general , it can not be neglected . as in the case of , we obtain an approximation of ] , the second - order approximation of the mse is given by where , and are given in , and , respectively , and , and .the approximated mse given in theorem [ thm : mse ] depends on unknown parameters .thus , in the subsequent section , we derive the second - order unbiased estimator of the mse by the analytical and the matching bootstrap methods .we first derive the analytical second - order unbiased estimator of the mse . from theorem [ thm :mse ] , is , so that it can be estimated by the plug - in estimator with second - order accuracy , namely =r_{2i}(\bphi)+o(m^{-1}) ] from theorem [ thm : asymp ] , we propose the bias corrected estimator of given by which is second - order unbiased estimator of , namely =r_{1i}(\bphi)+o(m^{-1}).\ ] ] now , we summarize the result for the second - order unbiased estimator of mse in the following theorem .[ thm : mseest ] under assumption ( a ) and =e[\ep_{ij}^3]=0 ] .it is remarked that the proposed estimator of mse does not require any resampling methods such as bootstrap .this means that the analytical estimator can be easily implemented and has less computational burden compared to bootstrap .moreover , we do not assume normality of and in the derivation of the mse estimator as in lahiri and rao ( 1995 ) .thus the proposed mse estimator is expected to have a robustness property , which will be investigated in the simulation studies .we first compare the performances of eblup obtained from the proposed hner with variance functions ( hnervf ) with the conventional ner and the hner with random dispersions ( hnerrd ) proposed in kubokawa , et al .( 2014 ) in terms of simulated mse . to this end , we consider the following data generating process : we take , , , . for the values of and , we consider two patterns : .note that indicates that the true model holds the heteroscedasticity in sampling variances while indicates the true model has homoscedastic variance in which both hner models are overfitted .we generate and from the uniform distribution on and , respectively , which are fixed through the simulation runs .following hall and maiti ( 2006 ) , we consider five patterns of distributions of and , that is , m1 : and are both normally distributed , m2 : and are both scaled -distribution with degrees of freedom , m3 : and are both scaled and located distribution , m4 : are are scaled and located and distribution , respectively , and m5 : are are both logistic distribution .based on simulation runs , we calculate the mse of each area defined as where and are obtained values of the eblup and the true values of in the -th iteration , respectively . 
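The simulated MSE defined above is a plain Monte Carlo average; a small sketch, assuming the EBLUPs and the true mixed effects have been stored for all replicates, is:

```python
import numpy as np

def simulated_mse(eblup_draws, true_means):
    """Simulated MSE per area: average of (muhat_i^{(r)} - mu_i^{(r)})^2 over R replicates.

    eblup_draws, true_means : arrays of shape (R, m), R replicates and m areas.
    """
    eblup_draws = np.asarray(eblup_draws)
    true_means = np.asarray(true_means)
    return np.mean((eblup_draws - true_means) ** 2, axis=0)
```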
for estimation of the variance component in the ner model ,we use the prasad and rao estimator ( prasad and rao , 1990 ) .the resulting simulated mse values for five distribution and two values of are given in figure [ comp1 ] ( in case of ) and figure [ comp2 ] ( in case of ) .from figure 1 , it is observed that the hnervf provides least values of mse in all areas .it is a natural result that the hnerrd provides second best prediction in terms of mse values , but the mse values are not so different from the ner model .thus the model specification is appropriate , the eblup obtained from hnervf performs so well compared to the existing models . on the other hand , in figure [ comp2 ] , the hnervf provides little larger mse values than the hnerrd and ner in normal case ( m1 ) .it is not surprising result since the parameter in the hnervf is 0 in the true model and the estimation error of inflates the mse values .however , in other cases ( m2 ) , the hnervf provides the close mse values to the ner and hnerrd although the true model has homoscedastic variances .thus we may conclude that the hnervf has little disadvantages of over - specification in terms of mse values of the eblup .( heteroscedasticity ) .[ comp1 ] , title="fig:",width=207 ] ( heteroscedasticity ) .[ comp1 ] , title="fig:",width=207 ] ( heteroscedasticity ) .[ comp1 ] , title="fig:",width=207 ] + ( heteroscedasticity ) .[ comp1 ] , title="fig:",width=207 ] ( heteroscedasticity ) .[ comp1 ] , title="fig:",width=207 ] ( homoscedasticity ) .[ comp2 ] , title="fig:",width=207 ] ( homoscedasticity ) .[ comp2 ] , title="fig:",width=207 ] ( homoscedasticity ) .[ comp2 ] , title="fig:",width=207 ] + ( homoscedasticity ) .[ comp2 ] , title="fig:",width=207 ] ( homoscedasticity ) .[ comp2 ] , title="fig:",width=207 ] we next investigate the finite sample performances of the mse estimators given in theorem [ thm : mseest ] .we use the same data generating process given in ( [ dgp ] ) and we take , , and .moreover , we equally divided areas into four groups ( ) , so that each group has five areas and the areas in the same group has the same sample size . following the simulation study in the previous subsection, we again consider the five patterns of distributions for and .the simulated values of the mse are obtained from ( [ sim - mse ] ) based on simulation runs .then , based on simulation runs , we calculate the relative bias ( rb ) and coefficient of variation ( cv ) of mse estimators given by where is the mse estimator in the -th iteration . in table[ mse - sim ] , we report mean and median values of and in each group . for comparison , results for the naive mse estimator , without any bias correction , are reported in table [ mse - sim ] as well .the naive mse estimator is the plug - in estimator of the asymptotic mse ( [ r1 ] ) , namely it is obtained by replacing and in formula ( [ r1 ] ) by and , respectively . 
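The relative bias and coefficient of variation of the MSE estimators can be computed as below. The exact display is partly garbled in the source, so the definitions here are one standard choice consistent with the description (both quantities are taken relative to the simulated MSE of each area).

```python
import numpy as np

def rb_and_cv(mse_hat_draws, mse_sim):
    """Relative bias and coefficient of variation of an MSE estimator (one standard definition).

    mse_hat_draws : (S, m) array of MSE estimates over S simulation runs
    mse_sim       : (m,) simulated (true) MSE per area
    """
    mse_hat_draws = np.asarray(mse_hat_draws)
    rb = np.mean(mse_hat_draws - mse_sim, axis=0) / mse_sim
    cv = np.sqrt(np.mean((mse_hat_draws - mse_sim) ** 2, axis=0)) / mse_sim
    return rb, cv
```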
in table[ mse - sim ] , the relative bias is small , less than 10% in many cases .when the underlying distributions leave from normality , the mse estimator still provides small relative bias although it has higher coefficient of variation .the naive mse estimator is more biased than the analytical mse estimator in all groups and models , so that the bias correction in mse estimator is successful .we now investigate empirical performances of the suggested model , the empirical bayes estimator and the second - order unbiased estimator of mse through analysis of real data .the data used here originates from the posted land price data along the keikyu train line in 2001 .this train line connects the suburbs in the kanagawa prefecture to the tokyo metropolitan area .those who live in the suburbs in the kanagawa prefecture take this line to work or study in tokyo everyday .thus , it is expected that the land price depends on the distance from tokyo .the posted land price data are available for 52 stations on the keikyu train line , and we consider each station as a small area , namely , . for the -th station ,data of land spots are available , where varies around 4 and some areas have only one observation . for , denotes the value of the posted land price ( yen/10,000 ) for the unit meter squares of the -th spot , is the time to take from the nearby station to the tokyo station around 8:30 in the morning , is the value of geographical distance from the spot to the station and denotes the floor - area ratio , or ratio of building volume to lot area of the spot .this data set is treated in kubokawa , et al .( 2014 ) , where they pointed out that the heteroscedasticity seem to be appropriate from boxplots of some areas and bartlet test for testing homoscedastic variance .figure [ plp ] is the plot of the pairs , where is ols residuals given by .it indicates that the residuals are more variable for small than for large , namely the variances seem functions of .thus we apply the following hner model with a variance function given by where and . for the variance function , we use motivated from figure [ plp ] . as a submodel of ( [ plp - model ] ), we also consider the homoscedastic variance model with .then the estimated values of parameters in these two models are given in the following : the estimated values of and , coefficients of and , in both models are negative values which leads to the natural result that the and have negative influence on .the sign of is negative .this corresponds to the variability illustrated in figure [ plp ] .the obtained values of eblup given in ( [ eblup ] ) are given in table [ plp - res ] for selected 15 areas .to see the difference of predicted values in terms of the degree of shrinkage , we compute for each two model and the results are given in figure [ plp2 ] .it is observed that in ner model decreases as the area sample size gets large .this is because the sample mean provides better estimates of the true mean as gets larger , so that the sample mean does not need to be shrunk . on the other hand , in hnervfis influenced by the estimated heteroscedastic variance as well as .thus the plot in figure [ plp2 ] shows that the shrinkage degrees in hnervf has more variability than that in ner . in table [ plp - res ] and figure [ plp ], we also provide the estimates of squared root of mse ( smse ) given in theorem [ thm : mseest ] .it is revealed from table [ plp - res ] that the estimates of the smse in ner get smaller as gets larger . 
on the other hands ,the smse in hnervf do not have a similar property , because the smse in hnervf is affected by not only but also the heteroscedastic variance as indicated in the mse formula given in theorem [ thm : mse ] . from figure [ plp ] , we observe that the estimated smses of hnervf are smaller than that of ner in many areas .especially , in area 47 , 49 , 50 and 51 , the smse values of hnervf are dramatically small compared to ner . in some other areas , the smess of hnervf is larger than that of ner , but the differences are not so large .these observations and the residual plot in figure [ plp ] motivate us to utilize the hnervf in case of heteroscedastic variance explained by some covariates .( left ) and estimated mse in hnervf and ner ( right ) . , title="fig:",width=309 ] ( left ) and estimated mse in hnervf and ner ( right ) . , title="fig:",width=309 ] against area sample size in hnervf and ner , title="fig:",width=309 ] against area sample size in hnervf and ner , title="fig:",width=309 ] .the estimated results of plp data for selected 15 areas [ cols="^,^,^,^,^,^,^,^,^,^ " , ]in the context of small - area estimation , homogeneous nested error regression models have been studied so far in the literature . however , some real data sets show heteroscedasticity in variances as pointed out in jiang and nguyen ( 2012 ) and kubokawa , et al . ( 2014 ) . in such a case ,the residuals often indicate that the heteroscedasticity can be explained by some covariates , which motivated us to propose and investigate the heteroscedastic nested error regression model with variance functions ( hnervf ) .we have proposed the estimating method for the model parameters and the asymptotic properties of these estimators have been established without any distributional assumptions for error terms . for measuring uncertainty of the empirical bayes estimator , the mean squared errors ( mse ) have been approximated up to second - order , and their second - order unbiased estimators have been provided in the closed form . for estimation of mse in hnervf without distributional assumptions , we can utilize the moment matching bootstrap method proposed in hall and maiti ( 2006 ) . to derive the bootstrap estimator, we use the following representation of mse : + 2e\left[(\muh_i-\mut_i)(\mut_i-\mu_i)\right],\ ] ] noting that the second and third terms are .then we can establish the second - order unbiased mse estimator via three - point distribution or -distribution for approximation of distribution of error terms .however , the bootstrap method has computational burden , so that we did not treat in this paper .* acknowledgments . *+ the first author was supported in part by grant - in - aid for scientific research ( 15j10076 ) from japan society for the promotion of science ( jsps ) .the second author was supported in part by grant - in - aid for scientific research ( 23243039 and 26330036 ) from japan society for the promotion of science .proof of theorem 1 .* since are mutually independent , the consistency of and follows from the standard argument of m - estimators , so that is also consistent . 
in what follows , we derive the asymptotic expressions of the estimators .for the asymptotic expansion of defined as the minimizer of ( [ bga ] ) .remember that the estimator is given as the solution of the estimating equation =\0\ ] ] using taylor expansions , we have where .\ ] ] from the central limit theorem , it follows that so that the second terms in the expansion formula is .then we get under assumption ( a ) , we have from the independence of and the fact , we can use the central limit theorem to show that the leading term in the expansion of is .thus , finally we consider the asymptotic expansion of . from the expression in ( [ bbe ] ), it follows that since for , we have where under assumption ( a ) , we have for , whereby . since and as shown above , we get which completes the proof .proof of corollary [ cor : cond ] .* let .note that does not depend on and that are mutually independent .then , =\frac{1}{m^2}\sum_{j=1,j\neq i}^{m}e\left[\psi_j^{\theta_k}\psi_j^{\theta_l}\right]+\frac{1}{m^2}\psi_i^{\theta_k}\psi_i^{\theta_l}\\ & = \bom_{kl}+\frac{1}{m^2}\left\{\psi_i^{\theta_k}\psi_i^{\theta_l}-e\left[\psi_i^{\theta_k}\psi_i^{\theta_l}\right]\right\}=\bom_{kl}+o_p(m^{-1}),\end{aligned}\ ] ] where is the -element of and we used the fact that =e[\psi_j^{\theta_k}]=0 ] and ] and =0 $ ] . using the expression ( [ mu.deriv ] ) of with the above moment results , we obtain =0,\ ] ] which leads to .
The article considers a nested error regression model with heteroscedastic variance functions for analyzing clustered data, where normality of the underlying distributions is not assumed. Classical methods for normal nested error regression models with homogeneous variances are extended in two directions: heterogeneous variance functions for the error terms, and non-normal distributions for the random effects and error terms. Consistent estimators of the model parameters are suggested, and second-order approximations of their biases and variances are derived. The mean squared errors of the empirical best linear unbiased predictors are expressed explicitly to second order. Second-order unbiased estimators of the mean squared errors are provided analytically in closed form. The proposed model and the resulting procedures are numerically investigated through simulation and empirical studies.
we consider the iterative solution of large - scale discrete ill - posed problems where the norm is the 2-norm of a vector or matrix , and the matrix is extremely ill conditioned with its singular values decaying gradually to zero without a noticeable gap .this kind of problem arises in many science and engineering areas , such as signal processing and image restoration , typically when discretizing fredholm integral equations of the first - kind .in particular , the right - hand side is affected by noise , caused by measurement or discretization errors , i.e. , where represents the gaussian white noise vector and denotes the noise - free right - hand side , and it is supposed that . because of the presence of noise in and the ill - conditioning of , the naive solution of is meaningless and far from the true solution , where the superscript denotes the moore - penrose generalized inverse of a matrix .therefore , it is necessary to use regularization to determine a best possible approximation to .the solution of can be analyzed by the svd of : where and are orthogonal matrices , and the entries of the diagonal matrix are the singular values of , which are assumed to be simple throughout the paper and labelled in decreasing order . with , we obtain throughout the paper , we assume that satisfies the discrete picard condition : on average , the coefficients decay faster than the singular values . to be definitive , for simplicity we assume that these coefficients satisfy a widely used model in the literature , e.g. , ( * ? ?* and 153 ) and : let be the transition point such that which can be written as , the solution of the modified problem that replaces by its best rank approximation in ( [ eq1 ] ) , where , and .remarkably , is the minimum - norm least squares solution of the perturbed problem that replaces in by its best rank approximation , and the best possible tsvd solution of by the tsvd method is .a number of approaches have been proposed for determining , such as discrepancy principle , discrete l - curve and generalized cross validation ; see , e.g. , for comparisons of the classical and new ones . in our numerical experiments, we use the l - curve criterion in the tsvd method and hybrid lsqr .the tsvd method has been widely studied ; see , e.g. , . for a small and moderate , the tsvd method has been used as a general - purpose reliable and efficient numerical method for solving . as a result, we will take the tsvd solution as a standard reference when assessing the regularizing effects of iterative solvers and accuracy of iterates under consideration in this paper . as well known , it is generally not feasible to compute svd when is large . in this case ,one typically projects onto a sequence of low dimensional krylov subspaces and gets a sequence of iterative solutions .the conjugate gradient ( cg ) method has been used when is symmetric definite .as a cg - type method applied to the semidefinite linear system or the normal equations system , the cgls algorithm has been studied ; see and the references therein .the lsqr algorithm , which is mathematically equivalent to cgls , has attracted great attention , and is known to have regularizing effects and exhibits semi - convergence ( see , , and also ) : the iterates tend to be better and better approximations to the exact solution and their norms increase slowly and the residual norms decrease . 
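Since the TSVD solution is taken as the standard reference regularized solution throughout, a minimal sketch of it is recorded here: it simply truncates the SVD expansion of the naive solution after k terms, with k playing the role of the regularization parameter (chosen, e.g., by the L-curve criterion in the experiments).

```python
import numpy as np

def tsvd_solution(A, b, k):
    """Truncated SVD regularized solution: x_k = sum_{i<=k} (u_i^T b / sigma_i) v_i."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = (U[:, :k].T @ b) / s[:k]
    return Vt[:k].T @ coeffs
```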
in later stages , however , the noise starts to deteriorate the iterates , so that they will start to diverge from and instead converge to the naive solution , while their norms increase considerably and the residual norms stabilize .such phenomenon is due to the fact that a projected problem inherits the ill - conditioning of .that is , as the iterations proceed , the noise progressively enters the solution subspace , so that a small singular value of the projected problem appears and the regularized solution is deteriorated .as far as an iterative solver for solving is concerned , a central problem is whether or not a pure iterative solver has already obtained a best possible regularized solution at semi - convergence , namely whether or not the regularized solution at semi - convergence is at least as accurate as .as it appears , for krylov subspace based iterative solvers , their regularizing effects critically rely on how well the underlying -dimensional krylov subspace captures the -dimensional dominant right singular subspace of . the richer information the krylov subspace contains on the -dimensional dominant right singular subspace , the less possible a small ritz value of the resulting projected problem appears and thus the better regularizing effects the solver has .to precisely describe the regularizing effects of an iterative solver , we introduce the term of _ full _ or _ partial _ regularization : if the iterative solver itself computes a best possible regularized solution at semi - convergence , it is said to have the full regularization ; in this case , no additional regularization is needed . here , as defined in the abstract , a best possible regularized solution means that it is at least as accurate as the best regularized solution obtained by the truncated singular value decomposition ( tsvd ) method .otherwise , it is said to have the partial regularization ; in this case , in order to compute a best possible regularized solution , its hybrid variant , e.g. , a hybrid lsqr , is needed that combines the solver with additional regularization , which aims to remove the effects of small ritz values , and expand the krylov subspace until it captures all the dominant svd components needed and the method obtains a best possible regularized solution .the study of the regularizing effects of lsqr and cgls has been receiving intensive attention for years ; see and the references therein .however , there has yet been no definitive result or assertion on their full or partial regularization .to proceed , we need the following definition of the degree of ill - posedness , which follows hofmann s book and has been commonly used in the literature , e.g. , : if there exists a positive real number such that the singular values satisfy , then the problem is termed as mildly or moderately ill - posed if or ; if with considerably , , then the problem is termed severely ill - posed .it is clear that the singular values of a severely ill - posed problem decay exponentially at the same rate , while those of a moderately or mildly ill - posed problem decay more and more slowly at the decreasing rate approaching one with increasing , which , for the same , is smaller for the moderately ill - posed problem than it for the mildly ill - posed problem .other minimum - residual methods have also gained attention for solving . 
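For synthetic experiments it is convenient to generate singular values that follow these decay models explicitly. The sketch below uses sigma_j = rho^{-j} with rho > 1 for severe ill-posedness and sigma_j = j^{-alpha} otherwise; the thresholds alpha > 1 (moderate) and 1/2 < alpha <= 1 (mild) follow one common convention, and the default parameter values are purely illustrative.

```python
import numpy as np

def model_singular_values(n, kind="moderate", rho=2.0, alpha=2.0):
    """Singular value decay models for the three degrees of ill-posedness (illustrative)."""
    j = np.arange(1, n + 1, dtype=float)
    if kind == "severe":
        return rho ** (-j)      # exponential decay at the fixed rate 1/rho
    return j ** (-alpha)        # moderate: alpha > 1; mild: 1/2 < alpha <= 1
```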
for problems with symmetric , minres and its preferred variant mr - iiare alternatives and have been shown to have regularizing effects .when is nonsymmetric and multiplication with is difficult or impractical to compute , gmres and its preferred variant rrgmres are candidates .the hybrid approach based on the arnoldi process was first introduced in , and has been studied in .recently , gazzola _et al_. have studied more methods based on the lanczos bidiagonalization , the arnoldi process and the nonsymmetric lanczos process for the severely ill - posed problem .they have described a general framework of the hybrid methods and present krylov - tikhonov methods with different parameter choice strategies employed . in this paper , we focus on lsqr .we derive bounds for the 2-norm distance between the underlying -dimensional krylov subspace and the -dimensional right singular space .there has been no rigorous and quantitative result on the distance before .the results indicate that the -dimensional krylov subspace captures the -dimensional dominant right singular space better for severely and moderately ill - posed problems than for mildly ill - posed problems . as a result, lsqr has better regularizing effects for the first two kinds of problems than for the third kind . by the bounds and the analysis on them , we draw a definitive conclusion that lsqr generally has only the partial regularization for mildly ill - posed problems , so that a hybrid lsqr with additional explicit regularization is needed to compute a best possible regularized solution .we also use the bounds to derive an estimate for the accuracy of the rank approximation , generated by lanczos bidiagonalization , to , which is closely related to the regularization of lsqr .our results help to further understand the regularization of lsqr , though they appear less sharp .in addition , we derive a bound on the diagonal entries of the bidiagonal matrices generated by the lanczos bidigonalization process , showing how fast they decay .numerical experiments confirm our theory that lsqr has only the partial regularization for mildly ill - posed problems and a hybrid lsqr is needed to compute best possible regularized solutions .strikingly , the experiments demonstrate that lsqr has the full regularization for severely and moderately ill - posed problems .our theory gives a partial support for the observed general phenomena . throughout the paper ,all the computation is assumed in exact arithmetic .since cgls is mathematically equivalent to lsqr , all the assertions on lsqr apply to cgls .this paper is organized as follows . in section [ sectionmain ], we describe the lsqr algorithm , and then present our theoretical results on lsqr with a detailed analysis . in section [ sectionexp ] ,we report numerical experiments to justify the partial regularization of lsqr for mildly ill - posed problems .we also report some definitive and general phenomena observed .finally , we conclude the paper in section [ sectioncon ] . 
throughout the paper, we denote by the -dimensional krylov subspace generated by the matrix and the vector , by the frobenius norm of a matrix , and by the identity matrix with order clear from the context .lsqr for solving is based on the lanczos bidiagonalization process , which starts with and , at step ( iteration ) , computes two orthonormal bases and of the krylov subspaces and , respectively .define the matrices and .then the -step lanczos bidiagonalization can be written in the matrix form where denotes the -th canonical basis vector of and the quantities and denote the diagonal and subdiagonal elements of the lower bidiagonal matrix , respectively . at iteration , lsqr computes the solution with note that .we get as stated in the introduction , lsqr exhibits semi - convergence at some iteration : the iterates become better approximations to until some iteration , and the noise will dominate the after that iteration .the iteration number plays the role of the regularization parameter .however , semi - convergence does not necessarily mean that lsqr finds a best possible regularized solution as may become ill - conditioned before but does not yet contain all the needed dominant svd components of . in this case , in order to get a best possible regularized solution, one has to use a hybrid lsqr method , as described in the introduction .the significance of is that the lsqr iterates can be interpreted as the minimum - norm least squares solutions of the perturbed problems that replace in by its rank approximations , whose nonzero singular values are just those of .if the singular values of approximate the large singular values of in natural order for , then lsqr must have the full regularization , and the regularized solution is best possible and is as comparably accurate as the best possible regularized solution by the tsvd method .hansen s analysis shows that the lsqr iterates have the filtered svd expansions : where , and are the singular values of . in our context , if we have for some , the factors , are not small , meaning that is already deteriorated and becomes a poorer regularized solution , namely , lsqr surely does not have full regularization . as a matter of fact , in terms of the best possible solution , it is easily justified that the full regularization of lsqr is equivalent to requiring that the singular values of approximate the largest singular values of in natural order for , so it is impossible to have for .the regularizing effects of lsqr critically depend on what mainly contains and provides .note that the eigenpairs of are the squares of singular values and right singular vectors of , and the tridiagonal matrix is the projected matrix of onto the subspace , which is obtained by applying the symmetric lanczos tridiagonalization process to starting with .we have a general claim deduced from and exploited widely in : the more information the subspace contains on the dominant right singular vectors , the more possible and accurate the ritz values approximate the largest singular values of ; on the other hand , the less information it contains on the other right singular vectors , the less accurate a small ritz value is if it appears . for our problem , since the small singular values of are clustered and close to zero , it is expected that a small ritz value will show up as grows large , and it starts to appear more late when contains less information on the other right singular vectors . 
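All of the quantities involved in this claim are computable for small test problems. The sketch below runs k steps of Lanczos bidiagonalization with full reorthogonalization, forms the k-th LSQR iterate from the projected least squares problem, and measures how well the Krylov subspace spanned by the columns of Q_k captures the k-dimensional dominant right singular subspace through the sine of the largest canonical angle; it is a schematic implementation for experimentation, not the code used for the experiments reported below.

```python
import numpy as np

def lanczos_bidiag(A, b, k):
    """k steps of Lanczos (Golub-Kahan) bidiagonalization of A started with b,
    with full reorthogonalization; returns P_{k+1}, B_k, Q_k and beta_1 = ||b||."""
    m, n = A.shape
    P = np.zeros((m, k + 1))
    Q = np.zeros((n, k))
    B = np.zeros((k + 1, k))
    beta1 = np.linalg.norm(b)
    P[:, 0] = b / beta1
    for j in range(k):
        q = A.T @ P[:, j] - (B[j, j - 1] * Q[:, j - 1] if j > 0 else 0.0)
        q -= Q[:, :j] @ (Q[:, :j].T @ q)            # reorthogonalize against previous q's
        alpha = np.linalg.norm(q)
        Q[:, j] = q / alpha
        p = A @ Q[:, j] - alpha * P[:, j]
        p -= P[:, :j + 1] @ (P[:, :j + 1].T @ p)    # reorthogonalize against previous p's
        beta = np.linalg.norm(p)
        P[:, j + 1] = p / beta
        B[j, j], B[j + 1, j] = alpha, beta
    return P, B, Q, beta1

def lsqr_iterate(A, b, k):
    """k-th LSQR iterate x_k = Q_k y_k with y_k = argmin ||beta_1 e_1 - B_k y||."""
    P, B, Q, beta1 = lanczos_bidiag(A, b, k)
    rhs = np.zeros(k + 1)
    rhs[0] = beta1
    y = np.linalg.lstsq(B, rhs, rcond=None)[0]
    return Q @ y

def subspace_distance(A, b, k):
    """Sine of the largest canonical angle between span(Q_k) and the dominant
    k-dimensional right singular subspace of A."""
    _, _, Q, _ = lanczos_bidiag(A, b, k)
    Vk = np.linalg.svd(A, full_matrices=False)[2][:k].T   # first k right singular vectors
    return np.linalg.norm(Vk - Q @ (Q.T @ Vk), 2)
```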
in this sense , we say that lsqr has better regularizing effects since contains more dominant svd components . using the definition of canonical angles between the two subspaces and of the same dimension , we have the following theorem , which shows how well the subspace , on which lsqr and cgls work , captures the -dimensional dominant right singular space . [ thm2 ]let the svd of be , and assume that its singular values are distinct and satisfy with .let be the subspace spanned by the columns of , and .then with the matrix to be defined by and _ proof_. let consist of the first columns of defined in .we see is spanned by the columns of the matrix with partition the matrices and as follows : where .since is a vandermonde matrix with distinct for , it is nonsingular .thus , by the svd of , we have with define . then and the columns of form an orthonormal basis of .write . by definition, we obtain which proves and indicates that is monotonically increasing with respect to .we next estimate .we have so we need to estimate .it is easily justified that the -th column of consists of the coefficients of the lagrange polynomial that interpolates the elements of the -th canonical basis vector at the abscissas .consequently , the -th column of is from which we obtain since is monotonic for , it is bounded by .furthermore , let .then for and we have by absorbing those higher order terms into .note that in the above numerator we have and it is then easily seen that their product is on the other hand , by definition , the denominator in is exactly one for , and it is strictly bigger than one for .therefore , for any , we have . from this andit follows that therefore , for and considerably , from we have * remark 2.1 * we point out that should not be sharp . as we have seen from the proof , the factor seems intrinsic and unavoidable , but the factor in is an overestimate andcan certainly be reduced .is an overestimate since for not near to is considerably smaller than , but we replace all them by their maximum .in fact , our derivation clearly illustrates that the smaller is , the smaller than .recall the discrete picard condition .then we observe that almost remains constant for . 
for , note that all the almost remain the same .thus , we have , meaning that does not capture as well as it does for .* remark 2.2 * the theorem can be extended to moderately ill - posed problems with the singular values considerably and not big since , in a similar manner to the proof of theorem [ thm2 ] , we can obtain by the first order taylor expansion which , unlike for severely ill - posed problems , depends on and increases slowly with for considerably .however , for mildly ill - posed problems , from above we have considerably for .* remark 2.3 * a combination of and and the above analysis indicate that captures better for severely ill - posed problems than for moderately ill - posed problems .there are two reasons for this .the first is that the factors are basically fixed constants for severely ill - posed problems as increases , and they are smaller than the counterparts for moderately ill - posed problems unless the degree of its ill - posedness is far bigger than one and small .the second is that the factor is smaller for severely ill - posed problems than the factor for moderately ill - posed problems for the same .* remark 2.4 * the situation is fundamentally different for mildly ill - posed problems : firstly , we always have substantially for and any , which is considerably bigger than for moderately ill - posed problems for the same .secondly , defined by is closer to one than that for moderately ill - posed problems for .thirdly , for the same noise level and , we see from the discrete picard condition and the definition of that is bigger for a mildly ill - posed problem than that for a moderately ill - posed problem . all of them show that captures _ considerably better _ for severely and moderately ill - posed problems than for mildly ill - posed problems for . in other words , our results illustrate that contains more information on the other right singular vectors for mildly ill - posed problems , compared with severely and moderately ill - posed problems .the bigger , the more it contains .therefore , captures more effectively for severely and moderately ill - posed problems than mildly ill - posed problems .that is , contains more information on the other right singular vectors for mildly ill - posed problems , making the appearance of a small ritz value more possible before and lsqr has better regularizing effects for the first two kinds of problems than for the third kind .note that lsqr , at most , has the full regularization , i.e. , there is no ritz value smaller than for , for severely and moderately ill - posed problems .our analysis indicates that lsqr generally has only the partial regularization for mildly ill - posed problem and a hybrid lsqr should be used . *remark 2.5 * relation and indicate that captures better for severely ill - posed problems than for moderately ill - posed problems .there are two reasons for this .first , the all the are basically a fixed constant for severely ill - posed problems , which is smaller than those ratios for moderately ill - posed problems unless is rather big and small .second , the quantities for severely ill - posed problems are smaller than the corresponding for moderately ill - posed problems .let us investigate more and get insight into the regularization of lsqr .define which measures the quality of the rank approximation to .based on , we can derive the following estimate for .[ thm3 ] assume that is severely or moderately ill posed .then _ proof_. 
let be the best rank approximation to with respect to the 2-norm , where , and .since is of rank , the lower bound in is trivial by noting that .we now prove the upper bound . from ,we obtain it is easily known that with having orthonormal columns .then by the definition of we obtain numerically , it has been extensively observed in the literature that the decay as fast as and , more precisely , for severely ill - posed problems ; see , e.g. , .they mean that the are very good rank approximations to .recall that the tsvd method generates the best regularized solution . as a result , if , the lsqr iterate is reasonably close to the tsvd solution for is reasonably small .this means that lsqr has the full regularization and does not need any additional regularization to improve . as our experiments will indicate in detail , these observed phenomena are of generality for both severely and moderately ill - posed problems and thus should have strong theoretical supports .compared to the observations , our appears to be a considerrable overestimate .we next present some results on appearing in . if , the lanczos bidiagonalization process terminates , and we have found exact singular triples of . in our context , since has only simple singular values and has components in all the left singular vectors , early termination is impossible in exact arithmetic , but small is possible .we aim to investigate how fast decays .we first give a refinement of a result in .[ thm1 ] let be the svd of , where and are orthogonal , and , and define and .then _ proof_. from and , we obtain so holds . from, we get note that .then we get we remark that it is an inequality other than the equality in a result of similar to . in combination with the previous results and remarks , this theorem shows that once becomes small for not big , the singular values of may approximate the large singular values of , and it is more possible that no small one appears for severely ill - posed problems and moderately ill - posed problems . as our final result , we establish an intimate and interesting relationship between and , showing how fast decays .[ thm4 ] it holds that _ proof_. with the notations as in theorem [ thm1 ] , we have .so , by , we have note that .therefore , from we obtain the theorem indicates that decays at least as fast as , which , in turn , means that may decrease in the same rate as , as observed in for severely ill - posed problems .in this section , we report numerical experiments to illustrate the the regularizing effects of lsqr .we will demonstrate that lsqr has the full regularization for severely and moderately ill - posed problems , stronger phenomena than our theory proves , but it only has the partial regularization for mildly ill - posed problems , in accordance with our theory , for which a hybrid lsqr is needed to compute best possible regularized solutions .we choose several ill - posed examples from hansen s regularization toolbox .all the problems arise from the discretization of the first kind fredholm integral equation for each problem we use the codes of to generate a matrix , true solution and noise - free right - hand . in order to simulate the noisy data, we generate the gaussian noise vector whose entries are normally distributed with mean zero . defining the noise level , we use , respectively , in the test examples . 
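A fully synthetic variant of such a setup, with the SVD of the matrix prescribed, is sometimes convenient for checking semi-convergence directly. The sketch below builds a severely ill-posed matrix with sigma_j = rho^{-j}, imposes the discrete Picard condition on average, adds white noise at a prescribed relative noise level, and records the relative error of the LSQR iterates; the decay rate, noise level, and Picard exponent are illustrative choices and not the test problems used in this paper.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
n = 200
j = np.arange(1, n + 1, dtype=float)
s = 1.2 ** (-j)                                   # severely ill-posed: sigma_j = rho^{-j}
U = np.linalg.qr(rng.standard_normal((n, n)))[0]
V = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = U @ np.diag(s) @ V.T

x_true = V @ (s ** 0.5 * rng.standard_normal(n))  # Picard condition holds on average
b_exact = A @ x_true
e = rng.standard_normal(n)
b = b_exact + 1e-3 * np.linalg.norm(b_exact) * e / np.linalg.norm(e)   # noise level 1e-3

rel_err = []
for k in range(1, 31):
    xk = lsqr(A, b, atol=0, btol=0, conlim=0, iter_lim=k)[0]
    rel_err.append(np.linalg.norm(xk - x_true) / np.linalg.norm(x_true))
k_star = int(np.argmin(rel_err)) + 1              # semi-convergence: error falls, then rises
```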
to simulate exact arithmetic , full reorthogonalization is used during the lanczos bidiagonalization process . we remark that , as far as ill - posed problems are concerned , our primary goal consists in justifying the regularizing effects of iterative solvers , which are _ unaffected by the sizes _ of ill - posed problems and depend only on the degree of ill - posedness . therefore , for this purpose , as is extensively done in the literature ( see , e.g. , and the references therein ) , it is enough to test problems that are not very large . indeed , for large , say , 10,000 and more , we have observed exactly the same behavior as for the smaller sizes , e.g. , those used in this paper . a reason for using problems that are not large is that such a choice makes it practical to fully justify the regularizing effects of lsqr by comparing it with the tsvd method , which is only suitable for small and/or medium sized problems for reasons of computational efficiency . all the computations are carried out in matlab 7.8 with the machine precision under the microsoft windows 7 64-bit system .

we consider the following two severely ill - posed problems .

* example 1 * this problem shaw arises from one - dimensional image restoration , and can be obtained by discretizing the first kind fredholm integral equation with as both the integration and domain intervals . the kernel , the solution and the right - hand side are given by . these two problems are severely ill - posed , with singular values satisfying for shaw and for wing , respectively .

in figure [ fig : res ] , we display the curves of the sequences and with , respectively . they illustrate that the quantities decrease as fast as and both of them level off at the level of for no more than 20 , and after that these quantities are purely round - off and are no longer reliable . moreover , the curves of the quantities always lie below those of , which agrees with theorem [ thm4 ] . we can see that the decaying curves with different noise levels are almost the same . furthermore , we observe that for severely ill - posed problems , indicating that the are very good rank approximations to with the approximate accuracy and that does not become ill - conditioned before . as a result , the regularized solutions become better approximations to until iteration , and they deteriorate after that iteration . at iteration , captures only the dominant svd components of and suppresses the other svd components , so that it is a best possible regularized solution . as a result , the pure lsqr has the full regularization for severely ill - posed problems . we will give a more direct justification of these assertions in section 3.3 . in figure [ fig2 ] , we plot the relative errors with different noise levels for these two problems .
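to make the quantities discussed here concrete , the following python sketch performs lanczos ( golub - kahan ) bidiagonalization with full reorthogonalization and measures how well the resulting rank approximation matches the error of the best possible rank approximation , given by the next singular value ; it is only an illustration of the construction , not the code used for the experiments , and it assumes the process does not break down .

    import numpy as np

    def lanczos_bidiag(A, b, k):
        # lower bidiagonalization A V_k = U_{k+1} B_k started from b,
        # with full reorthogonalization to mimic exact arithmetic.
        m, n = A.shape
        U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
        U[:, 0] = b / np.linalg.norm(b)
        for i in range(k):
            v = A.T @ U[:, i] - (B[i, i - 1] * V[:, i - 1] if i > 0 else 0.0)
            v -= V[:, :i] @ (V[:, :i].T @ v)            # reorthogonalize against V
            B[i, i] = np.linalg.norm(v); V[:, i] = v / B[i, i]
            u = A @ V[:, i] - B[i, i] * U[:, i]
            u -= U[:, :i + 1] @ (U[:, :i + 1].T @ u)    # reorthogonalize against U
            B[i + 1, i] = np.linalg.norm(u); U[:, i + 1] = u / B[i + 1, i]
        return U, B, V

    def rank_k_quality(A, b, k):
        # compare the 2-norm error of the bidiagonalization-based rank-k
        # approximation with sigma_{k+1}, the error of the best rank-k one.
        U, B, V = lanczos_bidiag(A, b, k)
        gamma_k = np.linalg.norm(A - U @ B @ V.T, 2)
        return gamma_k, np.linalg.svd(A, compute_uv=False)[k]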
obviously , lsqr exhibits the semi - convergence phenomenon . moreover , for a smaller noise level , we get better regularized solutions at the cost of more iterations , as expected .

figure [ fig : res ] : plots of the decaying behavior of the sequences and for the problem shaw ( panels ( a)-(b ) , two noise levels ) and for the problem wing ( panels ( c)-(d ) , two noise levels ) .

figure [ fig2 ] : relative errors with respect to for the problems shaw ( left ) and wing ( right ) .

we now consider the following two moderately ill - posed problems .

* example 3 * this problem heat arises from the inverse heat equation , and can be obtained by discretizing a volterra integral equation of the first kind , a class of equations that is moderately ill - posed , with as both the integration and domain intervals . the kernel , the solution and the right - hand side are given by .

figure [ fig3 ] : plots of the decaying behavior of the sequences and for the problem heat ( panels ( a)-(b ) , two noise levels ) and for the problem phillips ( panels ( c)-(d ) , two noise levels ) .

from figure [ fig3 ] , we see that decreases as fast as , and decays as fast as . however , slightly different from severely ill - posed problems , we can observe that the may not be so close to the , as reflected by the thick rope formed by three lines . by comparing the behavior of for severely and moderately ill - posed problems , we come to the conclusion that the -step lanczos bidiagonalization may generate a more accurate rank approximation for severely ill - posed problems than for moderately ill - posed problems , namely , the rank approximation may be more accurate for severely ill - posed problems than for moderately ill - posed problems . nonetheless , we have seen that , for the test moderately ill - posed problems , all the are still excellent approximations to the , so that lsqr still has the full regularization . in figure [ fig4 ] , we depict the relative errors of , and observe analogous phenomena to those for severely ill - posed problems . a distinction is that now lsqr needs more iterations for moderately ill - posed problems with the same noise level .
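the semi - convergence behavior seen in these figures can be reproduced with any lsqr implementation ; the following sketch uses scipy 's lsqr and assumes the true solution is available for error measurement , simply recording the relative error after each prescribed number of iterations and locating the iteration of minimum error .

    import numpy as np
    from scipy.sparse.linalg import lsqr

    def semi_convergence_curve(A, b, x_true, kmax):
        # relative error of the lsqr iterate after exactly k iterations,
        # for k = 1, ..., kmax; the stopping tolerances are switched off so
        # that the iteration count alone acts as the regularization parameter.
        errors = []
        for k in range(1, kmax + 1):
            x_k = lsqr(A, b, atol=0.0, btol=0.0, conlim=0.0, iter_lim=k)[0]
            errors.append(np.linalg.norm(x_k - x_true) / np.linalg.norm(x_true))
        # the semi-convergence point is the iteration with the smallest error.
        return np.array(errors), int(np.argmin(errors)) + 1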
figure [ fig4 ] : relative errors with respect to for the problems heat ( left ) and phillips ( right ) .

for the previous four severely and moderately ill - posed problems , we now compare the regularizing effects of the pure lsqr and the hybrid lsqr with the additional tsvd regularization used within the projected problems . we show that lsqr has the full regularization and no additional regularization is needed , which is based on the observation that at semi - convergence the regularized solution by lsqr is as accurate as that obtained by the hybrid lsqr for each problem . in the sequel , we only report the results for the noise level . results for other are analogous and thus omitted .

figures [ fig5 ] ( a)-(b ) and [ fig6 ] ( a)-(b ) indicate that the relative errors of approximate solutions obtained by the two methods reach the same minimum level , and the hybrid lsqr simply stabilizes the regularized solutions with the minimum error . this means that the pure lsqr itself has already found a best possible regularized solution at semi - convergence and no additional regularization is needed , so it has the full regularization . our task is to determine such , which is the iteration where starts to increase dramatically while its residual norm remains almost unchanged . the l - curve criterion fits nicely into this task . in these examples , we also choose for the pure lsqr . figure [ fig5 ] ( c ) and figures [ fig6 ] ( c)-(d ) show that the regularized solutions are generally very good approximations to the true solutions . however , we should point out that for the problem wing with a discontinuous solution , the large relative error indicates that the regularized solution is a poor approximation to the true solution , as depicted in figure [ fig5 ] ( d ) . this phenomenon is due to the fact that the regularization of lsqr and its hybrid variants is unsuitable for ill - posed problems with discontinuous solutions . for such kinds of problems , a more suitable regularization is total variation regularization , which takes the form with some matrix and the 1-norm .
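the hybrid lsqr used for comparison regularizes the small projected problem ; a minimal sketch of this idea in python , assuming the bidiagonalization factors have already been computed ( e.g. by the routine sketched earlier ) and that the truncation parameter is supplied by some parameter - choice rule such as the l - curve or the discrepancy principle , is as follows .

    import numpy as np

    def hybrid_lsqr_tsvd(U, B, V, b, trunc):
        # regularize the projected problem  min_y || B_k y - beta_1 e_1 ||
        # by a tsvd of the small matrix B_k, keeping `trunc` singular values,
        # and map the result back to the full space as x = V_k y.
        k1, k = B.shape                      # B_k is (k+1) x k
        rhs = np.zeros(k1)
        rhs[0] = np.linalg.norm(b)           # beta_1 e_1 for the starting vector b/||b||
        P, s, Qt = np.linalg.svd(B, full_matrices=False)
        coef = (P.T @ rhs)[:trunc] / s[:trunc]
        return V @ (Qt[:trunc].T @ coef)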
figure [ fig5 ] : ( a)-(b ) relative errors with respect to the pure lsqr and lsqr with additional tsvd regularization for ; ( c)-(d ) the regularized solutions for the pure lsqr , for the problems shaw ( left ) and wing ( right ) .

figure [ fig6 ] : ( a)-(b ) relative errors obtained by the pure lsqr and lsqr with the additional tsvd regularization for ; ( c)-(d ) the regularized solutions for the pure lsqr , for the problems heat ( left ) and phillips ( right ) .

in what follows , we compare the regularizing effects of the pure lsqr and the hybrid lsqr for mildly ill - posed problems , showing that lsqr has only the partial regularization and a hybrid lsqr should be used for this kind of problem to improve the regularized solution obtained by lsqr at semi - convergence .

* example 5 * the problem deriv2 is mildly ill - posed , and is obtained by discretizing the first kind fredholm integral equation with as both the integration and domain intervals . the kernel is green s function for the second derivative : and the solution and the right - hand side are given by .

figure [ fig7 ] ( a ) shows that the relative errors of approximate solutions by the hybrid lsqr reach a considerably smaller minimum level than those by the pure lsqr , a clear indication that lsqr has only the partial regularization . as we have seen , the hybrid lsqr expands the krylov subspace until it contains enough dominant svd components and , meanwhile , the additional regularization effectively dampens the svd components corresponding to small singular values . for instance , the semi - convergence of the pure lsqr occurs at iteration , but this is not enough . as the hybrid lsqr shows , we need a larger six dimensional krylov subspace to construct a best possible regularized solution . we also choose for the pure lsqr and the hybrid lsqr . figure [ fig7 ] ( b ) indicates that the regularized solution obtained by the hybrid lsqr is a considerably better approximation to than that by the pure lsqr , especially in the non - smooth middle part of .
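throughout these comparisons , the benchmark for a best possible regularized solution is the one delivered by the tsvd method ; for completeness , a sketch of how such a reference solution can be obtained in a simulation , where the true solution is known , is given below .

    import numpy as np

    def best_tsvd_solution(A, b, x_true):
        # tsvd solutions for every truncation parameter; keep the one closest
        # to the true solution (only possible in simulations, where x_true is
        # known; in practice the truncation index comes from a parameter-choice rule).
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        c = U.T @ b
        x = np.zeros(A.shape[1])
        best_err, best_x = np.inf, None
        for i in range(len(s)):
            x = x + (c[i] / s[i]) * Vt[i]     # add the next svd component
            err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
            if err < best_err:
                best_err, best_x = err, x.copy()
        return best_x, best_err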
figure [ fig7 ] : ( a ) relative errors and ( b ) the regularized solutions with respect to lsqr and lsqr with the additional tsvd regularization for the problem deriv2 and .

for large - scale discrete ill - posed problems , lsqr and cgls are commonly used methods . these methods have regularizing effects and exhibit semi - convergence . however , if a small ritz value appears before the methods capture all the needed dominant svd components , the methods have only the partial regularization and must be equipped with additional regularization so that best possible regularized solutions can be found . otherwise , lsqr has the full regularization and can compute best possible regularized solutions without any additional regularization . we have proved that the underlying -dimensional krylov subspace captures the -dimensional dominant right singular space better for severely and moderately ill - posed problems than for mildly ill - posed problems . this makes lsqr have better regularization for the first two kinds of problems than for the third kind . furthermore , we have shown that lsqr generally has only the partial regularization for mildly ill - posed problems . numerical experiments have demonstrated that lsqr has the full regularization for severely and moderately ill - posed problems , stronger than our theory predicts , and it has only the partial regularization for mildly ill - posed problems , compatible with our assertion . together with the observations , it appears that the excellent performance of lsqr on severely and moderately ill - posed problems holds generally . as for future work , it is more appealing to derive an accurate estimate for other than , as it plays a crucial role in analyzing the accuracy of the rank approximation , generated by lanczos bidiagonalization , to . accurate bounds for are the core of completely understanding the regularizing effects of lsqr , but our bound for is conservative and is expected to be improved substantially . since cgls is mathematically equivalent to lsqr , our results apply to cgls as well . our current work has helped to better understand the regularization of lsqr and cgls , but for a complete understanding of the intrinsic regularizing effects of lsqr and cgls we still have a long way to go , and more research is needed .

we thank the three referees very much for their valuable suggestions and comments , which helped us improve the presentation of the paper .

_ ill - conditioning of the truncated singular value decomposition , tikhonov regularization and their applications to numerical partial differential equations _ , numer . linear algebra appl . , 18 ( 2011 ) .
lsqr , a lanczos bidiagonalization based krylov subspace iterative method , and its mathematically equivalent cgls applied to normal equations system , are commonly used for large - scale discrete ill - posed problems . it is well known that lsqr and cgls have regularizing effects , where the number of iterations plays the role of the regularization parameter . however , it has long been unknown whether the regularizing effects are good enough to find best possible regularized solutions . here a best possible regularized solution means that it is at least as accurate as the best regularized solution obtained by the truncated singular value decomposition ( tsvd ) method . in this paper , we establish bounds for the distance between the -dimensional krylov subspace and the -dimensional dominant right singular space . they show that the krylov subspace captures the dominant right singular space better for severely and moderately ill - posed problems than for mildly ill - posed problems . our general conclusions are that lsqr has better regularizing effects for the first two kinds of problems than for the third kind , and a hybrid lsqr with additional regularization is generally needed for mildly ill - posed problems . exploiting the established bounds , we derive an estimate for the accuracy of the rank approximation generated by lanczos bidiagonalization . numerical experiments illustrate that the regularizing effects of lsqr are good enough to compute best possible regularized solutions for severely and moderately ill - posed problems , stronger than our theory predicts , but they are not for mildly ill - posed problems and additional regularization is needed . ill - posed problem , regularization , lanczos bidiagonalization , lsqr , cgls , hybrid 65f22 , 65j20 , 15a18
.the standard genetic code . assignment of the 64 possible codons to amino acids or stop signals , with polar requirement of the amino acids indicated in brackets . [ cols= " < , < , < , < " , ] [ tab : tab7 ] in table [ tab : tab7 ] we have listed the possible ways to fill a single box that are compatible with the considered trna wobble rules .let enumerate the possible trna patterns as listed in the rightmost column of table [ tab : tab7 ] .we write , , for the number of amino acids , stop codons and unassigned codons present in pattern . + + * problem .* we now consider the problem of filling 16 boxes ( 64 codons in total ) using 20 different amino acids , stop codons and unassigned codons .it is useful to solve a slightly more general problem : the number of ways to fill boxes using * amino acids , * each of the first amino acids at least once , * exactly stop codons , and * exactly unassigned codons .the original problem is obtained by setting and . + + * recurrence . * we denote the number of such fillings by and compute their values by the recurrence with basis * rationale . *the reasoning behind is the following .we fill box number first , and worry about the remaining boxes later .we iterate over the possible trna patterns with variable .to realise pattern we need amino acids , stop codons and unassigned codons .there is only one way to choose stop codons and unassigned codons , but we can obtain the amino acids from two sources . we can take some from the still - to - use amino acids that we have to use at least once , and we must take the others from the free amino acids that can be used as desired .we consider all possible ways to realise the choice : we first iterate over the number of amino acids that we take from the still - to - use pool with variable . selecting out of still - to - use amino acids can be done in ways .similarly , taking the remaining amino acids from free amino acids can be done in ways .all these chosen amino acids are different , and so there are ways to instantiate the pattern using them .now we still have to fill the remaining boxes , using the remaining still - to - use amino acids at least once , while using exactly stop codons and leaving codons unassigned .+ + * implementation .* the value can be efficiently evaluated by dynamic programming .this is achieved by storing all intermediate values of that are computed in memory , and recalling them when they are needed instead of reevaluating .this way , can be evaluated in time and space .note that a single call to computes for many , , and . + + * sampling . *the above dynamic programming implementation has the advantage that it allows uniform sampling over the space of all codes .we first sample a number uniformly between and .then we use the recurrence in reverse to determine which code this number corresponds to .this is done as follows .say the number sampled was .we then incrementally evaluate the sum of .once the partial sum up to surpasses , we know that pattern was used in code number . similarly we decode which amino acids are used and in which order they are placed . by explicitly keeping track of the set of still - to - use amino acids we can retrieve the entire code recursively .we gratefully acknowledge steven de rooij for taking part in the effort to determine the global minimum .furthermore we gratefully acknowledge the large improvements of the manuscript by comments from paul g. 
higgs and two anonymous referees .

novozhilov as , wolf yi , koonin ev , evolution of the genetic code : partial optimization of a random code for robustness to translation error in a rugged fitness landscape , _ biology direct _ 2 , 24 ( 2007 ) .

takai k , classification of the possible pairs between the first anticodon and the third codon positions based on a simple model assuming two geometries with which the pairing effectively potentiates the decoding complex , _ journal of theoretical biology _ 242 , 564 - 580 ( 2006 ) .

massey se , searching of code space for an error - minimized genetic code via codon capture leads to failure , or requires at least 20 improving codon reassignments via the ambiguous intermediate mechanism , 70 , 106 - 115 ( 2010 ) .
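as a concrete illustration of the recurrence , its dynamic programming implementation and the uniform sampling described above , the following python sketch transcribes the counting scheme ; the pattern table and the factorial factor used to instantiate a pattern with the chosen amino acids are placeholders rather than the actual values listed in the paper .

    from functools import lru_cache
    from math import comb, factorial

    # hypothetical pattern table: for each admissible way of filling one box,
    # the number of distinct amino acids, stop codons and unassigned codons
    # it uses.  the real table is the one given in the paper.
    PATTERNS = [(1, 0, 0), (2, 0, 0), (1, 1, 0), (1, 0, 1)]

    N_AA = 20   # total number of amino acids available

    @lru_cache(maxsize=None)
    def fillings(boxes, still_to_use, stops, unassigned):
        # number of ways to fill `boxes` boxes so that all `still_to_use`
        # not-yet-used amino acids appear at least once, exactly `stops` stop
        # codons are used and exactly `unassigned` codons remain unassigned.
        # direct transcription of the recurrence in the text; the factorial
        # factor below is an assumption about how pattern slots are counted.
        if boxes == 0:
            return 1 if still_to_use == 0 and stops == 0 and unassigned == 0 else 0
        total = 0
        for a_p, s_p, u_p in PATTERNS:
            if s_p > stops or u_p > unassigned:
                continue
            free = N_AA - still_to_use
            for i in range(0, min(a_p, still_to_use) + 1):
                total += (comb(still_to_use, i) * comb(free, a_p - i)
                          * factorial(a_p)
                          * fillings(boxes - 1, still_to_use - i,
                                     stops - s_p, unassigned - u_p))
        return total

    # the quantity of interest corresponds to something like
    # fillings(16, 20, num_stop_codons, num_unassigned_codons); uniform
    # sampling then decodes a random integer below this count by running
    # the same recurrence in reverse, as described in the text.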
the genetic code has been shown to be very error robust compared to randomly selected codes , but to be significantly less error robust than a certain code found by a heuristic algorithm . we formulate this optimisation problem as a quadratic assignment problem and thus verify that the code found by the heuristic is the global optimum . we also argue that it is strongly misleading to compare the genetic code only with codes sampled from the fixed block model , because the real code space is orders of magnitude larger . we thus enlarge the space from which random codes can be sampled from approximately codes to approximately codes . we do this by leaving the fixed block model , and using the wobble rules to formulate the characteristics acceptable for a genetic code . by relaxing more constraints three larger spaces are also constructed . using a modified error function , the genetic code is found to be more error robust compared to a background of randomly generated codes with increasing space size . we point out that these results do not necessarily imply that the code was optimized during evolution for error minimization , but that other mechanisms could explain this error robustness . genetic code , error robustness , origin of life .
two main kinds of representation for belief change have been suggested in the literature . in the agm theory ,belief change is considered as a change of _ belief sets _ that are taken to be deductively closed theories . according to the so - called base approach ,however , a belief set should be seen as generated by some ( finite ) _ base _ ( see , e.g. , ) .consequently , revisions of belief sets are determined on this approach by revisions of their underlying bases .this drastically reduces the set of alternatives and hence makes this approach constructive and computationally feasible .the agm theory and base approach constitute two seemingly incomparable representations for belief change , each with advantages of its own .still , the framework of epistemic states suggested in allows to formulate both as different species of a single general representation .the reformulation makes it possible , in particular , to reveal common constitutive principles behind them .these common principles embody , however , also common shortcomings that lead to a loss of information in iterated changes .this suggests epistemic states in their full generality as a justified representational alternative .epistemic states could be primarily seen as a generalization of the agm models , namely , they form an abstract logical representation for belief change . though the base - oriented models can be translated into epistemic states , the translation loses some important advantages of such models , namely their inherent finiteness and constructivity .hence it would be desirable to have a constructive representation for epistemic states that would preserve these features of base models while avoiding their problems .fortunately , such an alternative representation has already been suggested in , and it amounts to using sets , or _ flocks _ , of bases . as we will show ,a certain modification of the original flock models will provide us with a constructive representation for an important class of epistemic states and belief change processes in them .the theory described below , is different from that suggested in , since the latter has turned out to be flawed in its treatment of expansions .the shortcoming has created the need for shifting the level of representation from pure epistemic states used in the above papers to more general persistent epistemic states ( see below ) .a common feature of the agm and base representations is a preference structure on certain subsets of the belief set . in the case of the agm paradigm, it is a preference structure on the maximal subtheories of the belief set , while for the base representation it is a preference structure on subtheories that are generated by the subsets of the base .the notion of an epistemic state , defined below , reflects and generalizes this common structure of the two representations . 
an _ epistemic state _ is a triple , where is a set of _ admissible belief states _ , is a preference relation on , while is a labelling function assigning a deductively closed theory ( called _ an admissible belief set _ ) to every admissible belief state from . any epistemic state determines a unique set of propositions that are _ believed _ in it , namely , the set of propositions that hold in all maximally preferred admissible belief states . thus , even if an epistemic state contains conflicting preferred belief states , we can still believe in propositions that hold in all of them . still , the belief set associated with an epistemic state does not always constitute an admissible belief set by itself . the latter will hold , however , in an ideal situation when an epistemic state contains a unique most preferred admissible belief state . such epistemic states will be called _ determinate _ . both agm and base models correspond to determinate epistemic states . nevertheless , non - determinate epistemic states have turned out to be essential for an adequate representation of both nonmonotonic reasoning and belief change . in particular , the necessity of accepting non - determination arises most clearly in the analysis of contractions ( see below ) . in what follows , we will concentrate on a special kind of persistent epistemic states that will provide a representation for non - prioritized belief change .

an epistemic state will be called _ persistent _ if it satisfies persistence : : if , then . for persistent epistemic states , the informational content of admissible belief states is always preserved ( persist ) in transitions to more preferred states . this property reflects the idea that the informational content of an admissible state is an essential ( though presumably not the only ) factor in determining the place of this state in the preference structure . persistent epistemic states are formally similar to a number of _ information models _ suggested in the logical literature , such as kripke s semantics for intuitionistic logic , veltman s data semantics , etc . ( see for an overview ) . all such models consist of a partially ordered set of informational states . the relevant partial order represents possible ways of information growth , so it is assumed to satisfy the persistence requirement by its very meaning . as will be shown , persistent epistemic states constitute the smallest natural class of epistemic states that is closed under belief change operations that do not involve prioritization . moreover , it is precisely this class of epistemic states that admits a constructive representation in terms of flocks of bases .

an epistemic state will be called _ pure _ if it satisfies pure monotonicity : : if and only if . pure epistemic states are a special kind of persistent states for which the preference relation is determined solely by the informational content of admissible belief states . accordingly , a pure epistemic state can be defined simply as a set of deductively closed theories , with the intended understanding that the relation of set inclusion among such theories plays the role of a preference relation .
in other words , for pure epistemic states, preference is given to maximal theories .pure epistemic states have been used in as a basis for a foundational approach to belief change .as we mentioned , however , the approach has turned out to be flawed , since it does not provide an adequate representation of belief expansions .the present study stems from a more general approach to representing belief change suggested in .assume that a belief set is generated by some base with respect to a certain consequence relation ( that is , ) .this structure is representable as an epistemic state of the form , where admissible belief states are the subsets of , assigns each such subset its deductive closure , while is a preference relation on . in the simplest ( non - prioritized ) case ,this preference relations is definable via set inclusion : iff . in the latter casebase - generated epistemic states are equivalent to pure epistemic states consisting of theories of the form , where ranges over the subsets of .notice that any such epistemic state will be determinate , since it contains a most preferred theory , namely .unfortunately , we will see later that base - generated epistemic states ( and bases themselves ) are arguably inadequate for representing belief contractions . a generalization of bases that overcomes this shortcoming has been suggested in and consists in using sets ( or ` flocks ' ) of bases . by a _ flock_ we will mean an arbitrary set of sets of propositions , for .such a flock can be considered as a collection of bases , and the following construction of the epistemic state generated by a flock can be seen as a natural generalization of base - generated epistemic states .any flock generates an epistemic state defined as follows : * is a set of all nonempty sets such that , for some ; * , for each ; * holds iff .as can be seen , flocks constitute a generalization of bases .namely , any base can be identified with a singular flock .as in our study , flocks were used in as a framework for belief change operations .our subsequent results will be different , however .the main difference can be described as follows .let us say that two flocks are _ identical _ if they generate the same epistemic state .now , let be a flock and a set of propositions such that , for some .then it is easy to see that flocks and produce the same epistemic state , and consequently they are identical in the above sense .this shows that a flock is determined , in effect , by its inclusion - maximal elements . according to , however , the above two flocks are distinct , and hence the validity of propositions with respect to a flockis determined , in effect , by _minimal _ sets belonging to the flock .this makes the resulting theory less plausible and more complex than it could be .the above feature , though plausible by itself , gives rise , however , to high sensitivity of flocks with respect to the syntactic form of the propositions occurring in them .let us consider the flock , where is logically equivalent to . 
replacing with , we obtain a different flock , which is identical to .note that is believed in the epistemic state generated by the latter flock , though only is believed in the epistemic state generated by the source flock .the above example shows that flocks do not admit replacements of logically equivalent propositions , at least in cases when such a replacement leads to identification of propositions with other propositions appearing elsewhere in the flock .it should be kept in mind , however , that the epistemic state generated by a flock is a syntax - independent object , though purely syntactic differences in flocks may lead to significant differences in epistemic states generated by them .flock - generated epistemic states are always persistent ; this follows immediately from the fact that holds in such a state if and only if , and hence .moreover , it has been shown in that any finitary persistent epistemic state is representable by some flock .this means that flocks constitute an adequate syntactic formalism for representing persistent epistemic states .unfortunately , this also shows that flocks are not representable by pure epistemic states , as has been suggested in .since belief sets are uniquely determined by epistemic states , operations on epistemic states will also determine corresponding operations on belief sets .two kinds of operations immediately suggest themselves as most fundamental , namely removal and addition of information to epistemic states .admissible belief states of an epistemic state constitute all potential alternatives that are considered as ` serious possibilities ' by the agent . in accordance with this, the contraction of a proposition from an epistemic state is defined as an operation that removes all admissible belief states from that support .we will denote the resulting epistemic state by .the contraction operation has quite regular properties , most important of which being _ commutativity _ : a sequence of contractions can be performed in any order yielding the same resulting epistemic state .let us compare the above contraction with base contraction . 
according to , the first step in performing a base contraction of consists in finding preferred ( selected ) subsets of the base that do not imply . so far , this fits well with our construction , since the latter subsets exactly correspond to preferred admissible theories of the contracted epistemic state . then our definition says , in effect , that the contracted belief set should be equal to the intersection of these preferred theories . unfortunately , such a solution is unacceptable for the base paradigm , since we need to obtain a unique contracted base ; only the latter will determine the resulting belief set . accordingly , defines first the contracted base as the intersection of all preferred subsets of the base ( ` partial meet base contraction ' ) , and then the contracted belief set is defined as the set of all propositions that are implied by the new base . the problems arising in this approach are best illustrated by the following example ( adapted from ) .

two equally good and reliable friends of a student say to her , respectively , that niamey is a nigerian town , and that niamey has a university . our student should subsequently retract her acquired belief that niamey is a university town in nigeria . let and denote , respectively , the propositions `` niamey is a town in nigeria '' and `` there is a university in niamey '' . then the above situation can be described as a contraction of from the ( belief set generated by the ) base . as has been noted in , this small example constitutes a major stumbling block for the base approach to belief change . actually , we will see that none of the current approaches handles this example satisfactorily .

to begin with , it seems reasonable to expect that the contracted belief set in the above situation should contain , since each of the acceptable alternatives supports this belief . this result is also naturally sanctioned by the agm theory . using the base contraction , however , we should retreat first to the two sub - bases and that do not imply , and then form their intersection , which happens to be empty ! in other words , we have lost all the information contained in the initial base , so all subsequent changes should start from scratch . next , it seems also reasonable to require that if subsequent evidence rules out , for example , we should believe that . in other words , contracting first and then from the initial belief state should make us believe in . this time the agm theory can not produce this result . the reason is that the first contraction gives the logical closure of as the contracted belief set , and hence the subsequent contraction of will not have any effect on the corresponding belief state . notice that this information loss is not ` seen ' in one - step changes ; it is revealed , however , in subsequent changes .

as we see it , the source of the above problem is that traditional approaches to belief change force us to choose in situations where we have no grounds for choice . our suggested solution here amounts to retaining all the preferred alternatives as parts of the new epistemic state , instead of transforming them into a single ` combined ' solution . this means , in particular , that we should allow our epistemic states to be non - determinate . this will not prevent us from determining each time a unique current set of beliefs ; but we should remember more than that .
returning to the example , our contraction operation results in a new belief set , as well as a new epistemic state consisting of two theories .this latter epistemic state , however , is not base - generated , though it will be generated by a flock .flocks emerge as a nearest counterpart of bases that will already be closed with respect to our contraction operation .actually , the latter will correspond to the operation of deletion on flocks suggested in . a _ contraction of a flock _ with respect to ( notation ) is a flock consisting of all maximal subsets of each that do not imply .the following result confirms that the above operation on flocks corresponds to contraction of associated epistemic states . if is an epistemic state generated by a flock , then , for any , . despite the similarity of the above definition of contraction with that given in , the resulting contraction operationwill behave differently in our framework .the following example illustrates this .contraction of the flock with respect to is a flock , which is identical to according to our definition of identity .consequently , will be believed in the resulting epistemic state .this behavior seems also to agree with our intuitions , since eliminating as an option will leave us with as a single solution . in the representation of , however , is reducible to . consequently , nothing will be believed in the resulting epistemic state .furthermore , this also makes the corresponding operation of deletion _ non - commutative _ : whereas deletion of and then from the base results in , deletion of first and then will give a different flock , namely .the operation of expansion consists in adding information to epistemic states . in the agm theory , this is achieved through a straightforward addition of the new proposition to the belief set , while in the base approach the new proposition is added directly to the base .the framework of epistemic states drastically changes , however , the form and content of expansion operations .this stems already from the fact that adding a proposition to an epistemic state is no longer reducible to adding it to the belief set ; it should determine also the place of the newly added proposition in the structure of the expanded epistemic state .this will establish the degree of firmness with which we should believe the new proposition , as well as its dependence and justification relations with other beliefs . as a way of modelling this additional information, we suggest to treat the latter as a special case of merging the source epistemic state with another epistemic state that will represent the added information .accordingly , we will describe first some merge operation on epistemic states .then an expansion will be defined roughly as a merge of with a rudimentary epistemic state that is generated by the base .merge is a procedure of combining a number of epistemic states into a single epistemic state , in which we seek to combine information that is supported by the source epistemic states .it turns out that this notion of merging can be captured using a well - known algebraic construction of _ product_. 
roughly , a merge of two epistemic states and is an epistemic state in which the admissible states are all _ pairs _ of admissible states from and , the labelling function assigns each such pair the deductive closure of the union of their corresponding labels , while the resulting preference relation agrees with the ` component ' preferences ( see for a formal description ) .since the primary subject of this study is non - prioritized change , we will consider below only one kind of merge operations , namely a pure merge that treats the source epistemic states as having an equal ` weight ' . a _ pure merge _ of epistemic states and is an epistemic state such that * , for any ; * iff and .a pure merge is a merge operation that treats the two component epistemic states as two equally reliable sources .it is easy to see , in particular , that it is a commutative operation .being applied to base - generated epistemic states , pure merge corresponds to a straightforward union of two bases : if and are epistemic states generated , respectively , by bases and , then is equivalent to an epistemic state generated by . to begin with , the following result shows that pure merge preserves persistence of epistemic states .a pure merge of two persistent epistemic states is also a persistent epistemic state .since finitary persistent epistemic states are representable by flocks , a pure merge gives rise to a certain operation on flocks .this operation can be described as follows : let us consider two flocks and that have no propositions in common .then a _ merge _ of and will be a flock thus , the merge of two disjoint flocks is obtained by a pairwise combination of bases belonging to each flock .note , however , that the assumption of disjointness turns out to be essential for establishing the correspondence between merge of flocks and pure merge of associated epistemic states .still , this requirement is a purely syntactic constraint that can be easily met by replacing some of the propositions with logically equivalent , though syntactically different propositions .a suitable example will be given later when we will consider expansions of flocks that are based on the above notion of merge .the following result shows that merge of flocks corresponds exactly to a pure merge of associated epistemic states .if and are epistemic states generated , respectively , by disjoint flocks and , then is isomorphic to . for any proposition , we will denote by the epistemic state generated by a singular base .this pure epistemic state consists of just two theories , namely and .accordingly , it gives a most ` pure ' expression of the belief in .now the idea behind the definition below is that an expansion of an epistemic state with respect to amounts to merging it with .a _ pure expansion _ of an epistemic state with respect to a proposition ( notation ) is a pure merge of and the epistemic state that is generated by a base . in a pure expansionthe new proposition is added as an independent piece of information , that is , as a proposition that is not related to others with respect to priority .being applied to base - generated epistemic states , pure expansion corresponds to a straightforward addition of a new proposition to the base .if is generated by a base , then is isomorphic to an epistemic state generated by .since pure merge is a commutative operation , pure expansions will also be commutative . 
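the flock operations introduced here and in the surrounding discussion are straightforward to prototype . the python sketch below represents propositions as predicates over truth assignments on a fixed set of atoms and implements flock contraction , pure merge of two syntactically disjoint flocks and pure expansion ( adding the new proposition to every base ) , using the niamey example as a toy demonstration ; it is only an illustration of the definitions , with brute - force propositional entailment standing in for a genuine prover , and it silently identifies a flock with its inclusion - maximal bases .

    from itertools import product, combinations

    ATOMS = ('n', 'u')   # n: "niamey is a town in nigeria", u: "there is a university in niamey"

    def entails(base, phi):
        # classical propositional entailment by brute force over all truth
        # assignments; propositions are predicates on an assignment dict.
        for values in product((True, False), repeat=len(ATOMS)):
            w = dict(zip(ATOMS, values))
            if all(p(w) for p in base) and not phi(w):
                return False
        return True

    def maximal_nonimplying_subsets(base, phi):
        # all inclusion-maximal subsets of `base` that do not entail `phi`.
        base, found = list(base), []
        for size in range(len(base), -1, -1):
            for subset in combinations(base, size):
                s = frozenset(subset)
                if not entails(s, phi) and not any(s < t for t in found):
                    found.append(s)
        return found

    def contract(flock, phi):
        # flock contraction: maximal subsets of each base not implying phi,
        # reduced to the inclusion-maximal bases of the result.
        cand = {s for base in flock for s in maximal_nonimplying_subsets(base, phi)}
        return {s for s in cand if not any(s < t for t in cand)}

    def merge(flock1, flock2):
        # pure merge of two (syntactically disjoint) flocks: pairwise unions.
        return {b1 | b2 for b1 in flock1 for b2 in flock2}

    def expand(flock, p):
        # pure expansion: the new proposition is added to every base.
        return {base | {p} for base in flock}

    # niamey example: contracting "n and u" from the base {n, u} yields the
    # flock {{n}, {u}} rather than a single empty base.
    n = lambda w: w['n']
    u = lambda w: w['u']
    contracted = contract({frozenset({n, u})}, lambda w: w['n'] and w['u'])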
as any other kind of change in epistemic states , expansions generate corresponding changes in belief sets of epistemic states .it turns out that expansions generate in this sense precisely agm belief expansion functions : if is a belief set of , then the belief set of coincides with .thus , belief expansions generated by expansions of epistemic states behave just as agm expansions : the underlying epistemic state plays no role in determining the resulting expanded belief set , since the latter can be obtained by a direct addition of new propositions to the source belief set .it should be kept in mind , however , that identical expansions of belief sets can be produced by expansions of different epistemic states , and even by different expansions of the same epistemic state .these differences will be revealed in subsequent contractions and revisions of the expanded belief set . as we have seen earlier, pure merge generates a corresponding merge operation on flocks .consequently , pure expansion corresponds in this sense to a certain expansion operation on flocks .an _ expansion of a flock _ with respect to a proposition that does not appear in is a flock .thus , an expansion of a flock is obtained simply by adding the new proposition to each base from the flock .our earlier results immediately imply that this operation exactly corresponds to a pure expansion of the associated epistemic state with : if is an epistemic state generated by the flock , and does not appear in , then is isomorphic to .the above operation is quite similar in spirit to the operation of insertion into flocks used in , though the latter was intended to preserve consistency of the component bases , so they defined , in effect , the corresponding revision operation based on contraction and expansion in our sense .note , however , that our flock expansion is defined only when the added proposition does not appear in the flock .the need for the restriction is illustrated by the following example .let us return to the flock , where and denote , respectively , propositions niamey is a town in nigeria " , and there is a university in niamey " .recall that this flock is obtainable by contracting from the base .in other words , it reflects an informational situation in which we have reasons to believe in each of these propositions , but can not believe in both .now let us expand the epistemic state with .this expansion can be modeled by expanding with some proposition that is logically equivalent to .in other words , the epistemic state generated by the flock will be equivalent to the expansion of with .this flock sanctions belief in in full accordance with our intuitions .actually , it can be shown that the latter flock is reducible to a flock in the sense that the latter flock will produce an equivalent epistemic state .however , can not be replaced with in these flocks : the flock is already reducible to a single base in which both and are believed , contrary to our intuitions about the relevant situation : receiving a new support for believing that there is a university in niamey should not force us to believe also that niamey is a town in nigeria .this also shows most vividly that a straightforward addition of to each base in the flock does not produce intuitively satisfactory results .an additional interesting aspect of the above representation is that , though we fully believe in in the flock , the option has not been forgotten ; if we will contract now from the latter flock , we will obtain the flock which supports 
belief in . a little reflection shows that this is exactly what would be reasonable to believe in this situation .

the purpose of this study was to give a formal representation for iterative non - prioritized change . as has been shown , such a representation can be achieved in the framework of persistent epistemic states , with flocks providing the corresponding constructive representation . moreover , these representations overcome shortcomings of both the agm and base - oriented models that incur loss of information in iterative changes . contraction and expansion are the two basic operations on epistemic states that allow us to define the majority of derived belief changes . thus , a _ revision _ of an epistemic state is definable via the levi identity ( on the level of epistemic states ) , namely as a combination of contraction and expansion . as can be shown , the resulting operation will be sufficiently expressive to capture any relational belief revision function in the sense of agm .

to conclude the paper , we want to mention an interesting problem concerning the expressivity of our belief change operations vis - a - vis flocks . namely , it has been shown in that there are flocks that are not constructible from simple primitive flocks using the contraction and expansion operations ( the simplest example being the flock ) . this apparently suggests that our stock of belief change operations is not complete and needs to be extended with other operations that would provide functional completeness with respect to the constructibility of flocks . the resulting theory would then give a truly complete constructive representation of non - prioritized belief change .
we address a general representation problem for belief change , and describe two interrelated representations for iterative non - prioritized change : a logical representation in terms of persistent epistemic states , and a constructive representation in terms of flocks of bases .
voids form a prominent aspect of the distribution of galaxies and matter on megaparsec scales . they are enormous regions with sizes in the range of mpc that are practically devoid of any galaxy and usually roundish in shape . forming an essential ingredient of the _ cosmic web _ , they are surrounded by elongated filaments , sheetlike walls and dense compact clusters . together they define the salient weblike pattern of galaxies and matter which pervades the observable universe . voids have been known as a feature of galaxy surveys since the first surveys were compiled . following the discovery by of the most dramatic specimen , the bootes void , a hint of their central position within a weblike arrangement came with the first cfa redshift slice . this view has recently been expanded dramatically as maps of the spatial distribution of hundreds of thousands of galaxies in the 2dfgrs and sdss have become available .

voids are a manifestation of the cosmic structure formation process as it reaches a non - linear stage of evolution . structure forms by gravitational instability from a primordial gaussian field of small amplitude density perturbations , where voids emerge out of the depressions . they mark the transition scale at which perturbations have decoupled from the hubble flow and organized themselves into recognizable structural features . early theoretical models of void formation were followed and generalized by the first numerical simulations of void centered universes . in recent years the huge increase in computational resources has enabled n - body simulations to resolve in detail the intricate substructure of voids within the context of hierarchical cosmological structure formation scenarios . they confirm the theoretical expectation of voids having a rich substructure as a result of their hierarchical buildup . theoretically this evolution has been successfully embedded in the extended press - schechter description , and it has been shown how it can be described by a two - barrier excursion set formalism . the two barriers refer to the two processes dictating the evolution of voids : their merging into ever larger voids as well as the collapse and disappearance of small ones embedded in overdense regions .

besides representing a key constituent of the cosmic matter distribution , voids are interesting and important for a variety of reasons . first , they are a prominent feature of the megaparsec universe . a proper and full understanding of the formation and dynamics of the cosmic web is not possible without understanding the structure and evolution of voids . secondly , they are a probe of cosmological parameters . the outflow from the voids depends on the matter density parameter , the hubble parameter and possibly on the cosmological constant . these parameters also dictate their redshift space distortions , while their intrinsic structure and shape are sensitive to various aspects of the power spectrum of density fluctuations . a third point of interest concerns the galaxies in voids . voids provide a unique and still largely pristine environment for studying the evolution of galaxies . the recent interest in environmental influences on galaxy formation has stimulated substantial activity in this direction .
despite the considerable interest in voids , a fairly basic yet highly significant issue remains : identifying voids and tracing their outline within the complex spatial geometry of the cosmic web . there is not an unequivocal definition of what a void is , and as a result there is considerable disagreement on the precise outline of such a region . because of the vague and diverse definitions , and the diverse interests in voids , there is a plethora of void identification procedures . the `` sphere - based '' voidfinder algorithm of has been at the basis of most voidfinding methods . however , this successful approach will not be able to analyze complex spatial configurations in which voids may have arbitrary shapes and contain a range and variety of substructures . a somewhat related and tessellation based voidfinding technique that is still under development is zobov . it is the voidfinder equivalent of the voboz halofinder method .

here we introduce and test a new and objective voidfinding formalism that has been specifically designed to dissect the multiscale character of the void network and the weblike features marking its boundaries . our _ watershed void finder _ ( wvf ) is based on the watershed algorithm . it stems from the field of mathematical morphology and image analysis . the wvf is defined with respect to the dtfe density field of a discrete point distribution . this assures an optimal sensitivity to the morphology of spatial structures and yields an unbiased probe of substructure in the mass distribution . because the wvf void finder does not impose a priori constraints on the size , morphology and shape of voids , it provides a basis for analyzing the intricacies of an evolving void hierarchy . indeed , this has been a major incentive towards its development .

this study is the first in a series . here we will define and describe the watershed void finder and investigate its performance with respect to a test model of spatial weblike distributions , voronoi kinematic models . having assured the success of wvf in tracing and measuring the spatial characteristics of these models , the follow - up study will address the application of wvf to a number of gif n - body simulations of structure formation . amongst others , wvf will be directed towards characterizing the hierarchical structure of the megaparsec void population . for a comparison of the wvf with other void finder methods we refer to the extensive study of . in the following sections we will first describe how the fundamental concepts of mathematical morphology have been translated into a tool for the analysis of cosmological density fields inferred from a discrete n - body simulation or galaxy redshift survey point distribution ( sect . 2 & 3 ) . to test our method we have applied it to a set of heuristic and flexible models of a cellular spatial distribution of points , voronoi clustering models . these are described in section 4 . in section 5 we present the quantitative analysis of our test results and a comparison with the known intrinsic properties of the test models .
in section 6 we evaluate our findings and discuss the prospects for the analysis of cosmological n - body simulations .

the new void finding algorithm which we introduce here is based on the _ watershed transform _ of and . a more extensive and technical description of the basic concepts of mathematical morphology and the basic watershed algorithm in terms of homotopy transformations on lattices is provided in appendix [ app : mathmorph ] and [ app : wshedimpl ] . the watershed transform is used for segmenting images into distinct regions and objects . the watershed transform ( wst ) is a concept defined within the context of mathematical morphology , and was first introduced by . the basic idea behind the wst finds its origin in geophysics . the wst delineates the boundaries of the separate domains , i.e. the _ basins _ , into which yields of , for example , rainfall will collect . the word _ watershed _ refers to the analogy of a landscape being flooded by a rising level of water . suppose we have a surface in the shape of a landscape ( first image of fig . [ fig : proc ] ) . the surface is pierced at the location of each of the minima . as the water - level rises , a growing fraction of the landscape will be flooded by the water in the expanding basins . ultimately the basins will meet at the ridges corresponding to saddle - points in the density field . this intermediate step is plotted in the second image of fig . [ fig : proc ] . the ridges define the boundaries of the basins , enforced by means of a sufficiently high dam . the final result ( see the last image in fig . [ fig : proc ] ) of the completely immersed landscape is a division of the landscape into individual cells , separated by the _ ridge dams _ . in the remainder of this study we will use the word _ `` segment '' _ to describe the watershed s cells .

the watershed algorithm holds several advantages with respect to other voidfinders : * within an ideal smooth density field ( i.e. without noise ) it will identify voids in a parameter free way . no predefined values have to be introduced . in less ideal , and realistic , circumstances a few parameters have to be set for filtering out discreteness noise . their values are guided by the properties of the data . * the watershed works directly on the topology of the field and does not rely on a predefined geometry / shape . by implication the identified voids may have any shape . * the watershed naturally places the _ divide lines _ on the crests of a field . the void boundary will be detected even when it is distorted . * the transform naturally produces closed contours .
as long as the minima are well chosen, the watershed transform will not be sensitive to local protrusions between two adjacent voids. obviously we can only extract structural information to the extent that the point distribution reflects the underlying structure. undersampling and shotnoise always conspire to obfuscate the results, but we believe the present methodology provides an excellent way of handling this. the watershed void finder (wvf) is an implementation of the watershed transform within a cosmological context. the watershed method is perfectly suited to study the holes and boundaries in the distribution of galaxies, and holds the specific promise of being able to recognize the void hierarchy that has been the incentive for our study. the analogy of the wst with the cosmological context is straightforward: _voids_ are to be identified with the _basins_, while the _filaments_ and _walls_ of the cosmic web are the ridges separating the voids from each other. an outline of the steps of the watershed procedure within its cosmological context is as follows:

* *dtfe*: given a point distribution (n-body, redshift survey), the delaunay tessellation field estimator (dtfe) is used to define a continuous density field throughout the sample volume. this guarantees a density field which retains the morphological character of the underlying point distribution, i.e. the hierarchical nature, the web-like morphology dominated by filaments and walls, and the presence of voids.
* *grid sampling*: for practical processing purposes the dtfe field is sampled on a grid. the optimal grid size has to assure the resolution of all morphological structures while minimizing the number of needed gridcells. this criterion suggests a grid with gridcells whose size is of the order of the interparticle separation.
* *rank-ordered filtering*: the dtfe density field is adaptively smoothed by means of _natural neighbour max/min and median_ filtering. this involves the computation of the median, minimum or maximum of the densities within the _contiguous voronoi cell_, the region defined by a point and its _natural neighbours_.
* *contour levels*: the image is transformed into a discrete set of density levels. the levels are defined by a uniform partitioning of the cumulative density distribution.
* *pixel noise*: with an opening and closing (operations defined in appendix [app:mathmorph]) of 2 pixel radius we further reduce pixel-by-pixel fluctuations.
* *field minima*: the minima in the smoothed density field are identified as the pixels (grid cells) which are exclusively surrounded by neighbouring grid cells with a higher density value.
* *flooding*: the _flooding procedure_ starts at the location of the minima. at successively increasing flood levels the surrounding region with a density lower than the corresponding density threshold is added to the _basin_ of a particular minimum. the flooding is illustrated in fig. [fig:proc].
* *segmentation*: once a pixel is reached by two distinct basins it is identified as belonging to their segmentation boundary. by continuing this procedure up to the maximum density level the whole region is segmented into distinct _void patches_.
* *hierarchy correction*: a correction is necessary to deal with effects related to the intrinsic hierarchical nature of the void distribution. the correction involves the removal of segmentation boundaries whose density is lower than some density threshold. the natural threshold value would be the typical void underdensity (see sect. [sec:threshold]). alternatively, dependent on the application, one may choose to take a user-defined value.

a direct impression of the watershed voidfinding method is most readily obtained via the illustration of a representative example. in fig. [fig:wvf] the watershed procedure has been applied to the cosmological gif2 simulation. the n-body particle distribution (lefthand frame of fig. [fig:wvf]) is translated into a density field using the dtfe method. the application of the dtfe method is described in section [sec:dtfe], the details of the dtfe procedure are specified in appendix [app:dtfe]. the dtfe density field is sampled and interpolated on a grid, the result of which is shown in the top righthand frame of fig. [fig:wvf]. the gray-scales are fixed by uniformly sampling the cumulative density distribution, ensuring that all grayscale values have the same amount of volume. the dtfe density field is smoothed by means of the adaptive _natural neighbour median_ filtering described in sect. [sec:natnghb]. this procedure determines the filtered density values at the location of the particles. subsequently, these are interpolated onto a grid. this field is translated into a grayscale image following the same procedure as that for the raw dtfe image (bottom lefthand panel). the minima in the smoothed density field are identified and marked as the flooding centres for the watershed transform. the resulting wvf segmentation is shown in the bottom righthand frame of fig. [fig:wvf]. the correspondence between the cosmic web, its voids and the watershed segmentation is striking. there is an almost perfect one-to-one correspondence between the segmentation and the void regions in the underlying density field. the wvf method does not depend on any predefined shape. as a result, the recovered voids do follow their natural shape. a qualitative assessment of the whole simulation cube reveals that voids are very elongated and have a preferential orientation within the cosmic web, perhaps dictated by the megaparsec tidal force field. clearly, the watershed void finder is able to extract substructure at any level present in the density distribution. while this is an advantage with respect to tracing the presence of substructure within voids, it turns into a disadvantage when seeking to trace the outline of large scale voids or when dealing with noise in the dataset. while the noise-induced artificial segments are suppressed by means of the full machinery of markers (sect. [sec:marker]), void patch merging (sect. [sec:merging]) and natural neighbour rank filtering (sect. [sec:natnghb]), it is the latter two which may deal with the intrinsic void hierarchy. the follow-up study will involve a detailed quantitative analysis of the volumes and shapes of the voids in the gif2 mass distribution for a sequence of timesteps.
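the grid-based stages of the step outline above, from the field minima through the flooding to the segmentation, can be illustrated with a short sketch. the fragment below is not the wvf implementation itself (the actual procedure is detailed in appendix [app:wshedimpl]); it is a minimal illustration in python, and the use of scipy.ndimage and skimage.segmentation.watershed is purely our assumption for the example.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def wvf_segment(density, watershed_line=True):
    """segment a gridded (smoothed) density field into void patches.

    density : 2-d or 3-d array of density values on a regular grid.
    returns (labels, n_minima): an integer label per void patch
    (0 on the dividing ridges when watershed_line=True) and the
    number of field minima used as flooding centres.
    """
    # field minima: grid cells not exceeded by any of their neighbours
    local_min = ndi.minimum_filter(density, size=3)
    minima = (density == local_min)

    # give every minimum (plateau) its own marker label
    markers, n_minima = ndi.label(minima)

    # flooding: basins grow from the markers with rising density level;
    # pixels where two basins meet form the segment boundaries
    labels = watershed(density, markers=markers,
                       watershed_line=watershed_line)
    return labels, n_minima
```

applied to a smoothed dtfe density field sampled on a grid, this returns one integer label per void patch, with the dividing ridges marked by a zero label when watershed_line is switched on; the hierarchy correction would subsequently remove boundaries below the chosen density threshold.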
in order to appreciate the various steps of the watershed void finder outlined in the previous section we need to describe a few of the essential steps in more detail. to process a point sample into a spatial density field we use dtfe. to detect voids of a particular scale it is necessary to remove statistically insignificant voids generated by the shotnoise of the discrete point sample, as well as physically significant subvoids. in order to retain only the statistically significant voids we introduce and apply natural neighbour rank-order filtering. hierarchy merging is used for the removal of subvoids which one would wish to exclude from a specific void study. the input samples for our analysis are mostly samples of galaxy positions obtained by galaxy redshift surveys, or the positions of a large number of particles produced by n-body simulations of cosmic structure formation. in order to define a proper continuous field from a discrete distribution of points (computer particles or galaxies) we translate the spatial point sample into a continuous density field by means of the delaunay tessellation field estimator (dtfe). the dtfe technique recovers fully volume-covering and volume-weighted continuous fields from a discrete set of sampled field values. the method forms an elaboration of an earlier velocity interpolation scheme. it is based upon the use of the voronoi and delaunay tessellations of a given spatial point distribution as a natural, fully self-adaptive filter, in which the delaunay tessellations are used as multidimensional interpolation intervals. a typical example of a dtfe processed field is the one shown in the top row of fig. [fig:wvf]: the particles of a gif n-body simulation are translated into the continuous density field in the righthand frame. the primary ingredient of the dtfe method is the delaunay tessellation of the particle distribution. the delaunay tessellation of a point set is the uniquely defined and volume-covering tessellation of mutually disjunct delaunay tetrahedra (triangles in 2d). each is defined by the set of four points whose circumscribing sphere does not contain any of the other points in the generating set. the delaunay tessellation and the voronoi tessellation of the point set are each other's _dual_: the voronoi tessellation is the division of space into mutually disjunct polyhedra, each polyhedron consisting of the part of space closer to the defining point than to any of the other points. dtfe exploits three properties of voronoi and delaunay tessellations. the tessellations are very sensitive to the local point density. dtfe uses this to define a local estimate of the density on the basis of the inverse of the volume of the tessellation cells. equally important is their sensitivity to the local geometry of the point distribution. this allows them to trace anisotropic features such as encountered in the cosmic web. finally, dtfe exploits the adaptive and minimum triangulation properties of delaunay tessellations in using them as adaptive spatial interpolation intervals for irregular point distributions. in this way it is the first-order version of the _natural neighbour method_.
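the density estimate underlying dtfe, the inverse of the volume of each point's contiguous voronoi cell, can be sketched directly from a delaunay tessellation. the fragment below is a simplified illustration rather than the dtfe code used in this study: it assumes scipy.spatial.Delaunay as a stand-in for the tessellation construction, adopts the standard (1+d) mass-weighted normalisation, and ignores the boundary treatment (buffer zones) discussed in appendix [app:dtfe].

```python
import math
import numpy as np
from scipy.spatial import Delaunay

def dtfe_density(points, weights=None):
    """dtfe-style density estimate at each sample point.

    the estimate is proportional to the inverse volume of the point's
    contiguous voronoi cell, i.e. the union of all delaunay simplices
    that share the point as a vertex.  convex-hull points get truncated
    cells; proper buffer zones are omitted in this sketch.
    """
    points = np.asarray(points, dtype=float)
    n, dim = points.shape
    weights = np.ones(n) if weights is None else np.asarray(weights, float)

    tri = Delaunay(points)

    # volume of every delaunay simplex from the determinant of its edge matrix
    verts = points[tri.simplices]                 # (nsimplex, dim+1, dim)
    edges = verts[:, 1:, :] - verts[:, :1, :]     # (nsimplex, dim, dim)
    simplex_vol = np.abs(np.linalg.det(edges)) / math.factorial(dim)

    # contiguous voronoi cell volume: sum of simplex volumes around each point
    cell_vol = np.zeros(n)
    np.add.at(cell_vol, tri.simplices.ravel(),
              np.repeat(simplex_vol, dim + 1))

    # standard dtfe normalisation: (1 + dim) * weight / cell volume
    return (dim + 1) * weights / cell_vol
```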
within the cosmological context a major and crucial characteristic of a processed dtfe density field is that it is capable of delineating three fundamental characteristics of the spatial structure of the megaparsec cosmic matter distribution. it outlines the full hierarchy of substructures present in the sampling point distribution, relating to the standard view of structure in the universe having arisen through the gradual hierarchical buildup of matter concentrations. dtfe also reproduces any anisotropic patterns in the density distribution without diluting their intrinsic geometrical properties. this is particularly important when analyzing the prominent filamentary and planar features marking the cosmic web. a third important aspect of dtfe is that it outlines the presence and shape of voidlike regions. because of the interpolation definition of the dtfe field reconstruction, voids are rendered as regions of slowly varying and moderately low density values. a more detailed outline of the dtfe reconstruction procedure can be found in appendix [app:dtfe]. dtfe involves the estimate of a continuous field throughout the complete sample volume. to process the dtfe field through the wvf machinery we sample the field on a grid. it is important to choose a grid which is optimally suited for the void finding purpose of the wvf method. on the one hand, the grid values should represent all physically significant structural features (voids) in the sample volume. on the other hand, the grid needs to be as coarse as possible in order to suppress the detection of spurious and insignificant features. the latter is also beneficial from a viewpoint of computational efficiency. this is achieved by adopting a gridsize of the order of the mean interparticle distance. the dtfe grid sampling is accomplished through monte carlo sampling within each grid cell. within each gridcell the dtfe density value is measured at 10 randomly distributed sample points. the grid value is taken to be their average. a major and novel ingredient of our wvf method intended to eliminate shot noise in the dtfe density field reconstructions is that of a natural non-linear filtering extension: the _natural neighbour rank-ordered filtering_. we invoke two kinds of non-linear adaptive smoothing techniques, _median filtering_ and _max/min filtering_, the latter originating in mathematical morphology (mm). both filters are rank order filters, and both have well known behaviour. they have a few important properties relevant for our purposes. median filtering is very effective in removing shot noise while preserving the locations of edges. the max/min filters are designed to remove morphological features arising from shot noise (see appendix [app:mathmorph]). the filters are defined over neighbourhoods. these are often named connectivities or, alternatively, structure elements. image analysis usually deals with regular two-dimensional image grids. the most common choices for such grids are straightforward 4-connectivities or 8-connectivities (see fig. [fig:connect]). when a more arbitrary shape is used one usually refers to it as a structure element. in the situation of our interest we deal with irregularly spaced data, rendering it impossible to use any of the above neighbourhoods. it is the delaunay triangulation which defines a natural neighbourhood for these situations. for any point it consists of its _natural neighbours_, i.e.
all points to which it is connected via an edge of the delaunay triangulation (see fig. [fig:natnbrs]). this may be extended to any higher order natural neighbourhood: e.g. a second order neighbourhood would include the natural neighbours of the (first order) natural neighbours. the advantages of following this approach are the same as those for the dtfe procedure: the natural neighbour filtering, shortly named nn-_median filtering_ or nn-_min/max filtering_, forms a natural extension to our dtfe based formalism. it shares in the major advantage of being an entirely natural and self-adaptive procedure. the smoothing kernel is compact in regions of high point concentrations, while it is extended in regions of low density. implementing the min/max and median natural neighbour filters within the dtfe method is straightforward. the procedure starts with the dtfe density value at each of the (original) sample points. these may be the particles in an n-body simulation or the galaxies in a redshift survey. for each point in the sample the next step consists of the determination of the median, maximum or minimum value over the set of density values made up by that of the point itself and those of its natural neighbours. the new "filtered" density values are assigned to the points as the first-order filter value. this process is continued for a number of iterative steps, each step yielding a higher order filtering step. the number of iterative steps of the natural neighbour smoothing depends on the size of the structure to be resolved and the sampling density within its realm. testing has shown that a reasonable order of magnitude estimate is the mean number of sample points along the diameter of the structure. as an illustration of this criterion one may want to consult the low noise and high noise voronoi models in fig. [fig:segorg]. while the void cells of the low noise models contain on average 6 points per cell diameter, the void cells of the high noise model contain around 16. fifth-order filtering sufficed for the low noise model, 20th order for the high noise model (fig. [fig:vor11] and fig. [fig:vor15]). in the final step, following the specified order of the filtering process, the filtered density values determined at the particle positions are interpolated onto a regular grid for practical processing purposes (see sec. [sec:dtfegrid]).
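to make the iterative nn-median filtering described above concrete, the following fragment sketches one possible implementation. it is an illustration rather than the code used in this study; the natural neighbours are taken from scipy.spatial.Delaunay (our assumption for the example), and the min/max variants are obtained by simply replacing the median by a minimum or maximum.

```python
import numpy as np
from scipy.spatial import Delaunay

def nn_median_filter(points, values, n_iter=5):
    """iterative natural-neighbour median filtering of point-wise values.

    at every iteration each point receives the median of the values of
    itself and its natural neighbours (the points it shares a delaunay
    edge with); more iterations smooth over larger structures.
    """
    tri = Delaunay(points)
    indptr, indices = tri.vertex_neighbor_vertices

    filtered = np.asarray(values, dtype=float).copy()
    for _ in range(n_iter):
        updated = np.empty_like(filtered)
        for i in range(len(filtered)):
            nbrs = indices[indptr[i]:indptr[i + 1]]
            updated[i] = np.median(np.append(filtered[nbrs], filtered[i]))
        filtered = updated
    return filtered
```

in this sketch five iterations correspond to the fifth-order filtering used for the low noise model; the filtered values would subsequently be interpolated onto the processing grid.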
[table [tab:void]: wvf void detection statistics for the differently processed versions of the voronoi clustering models.]

a direct application of the watershed transform results in a starkly oversegmented tessellation (fig. [fig:vor11] and fig. [fig:vor15]). amongst the overabundance of mostly artificial, noise-related segments we may also discern real significant watersheds. their boundary ridges (divide lines) are defined by filaments, walls and clusters surrounding the voids. many of these genuine voids are divided into small patches. they are the result of oversegmentation induced by the noisy poisson point distribution within the cells. the local minima within this background noise act as individual watershed flood centres marking corresponding, superfluous, watershed segments. while for a general cosmological distribution it may be challenging to separate genuine physical subvoids from artificial noise-generated ones, the voronoi kinematic models have the unique advantage of having no intrinsic substructure. any detected substructure has to be artificial, rendering it straightforward to assess the action of the various steps intent on removing the noise contributions. the first step in the removal of insignificant minima consists of the application of the iterative natural neighbour median filtering process. this procedure, described in sect. [sec:natnghb], removes some of the shot noise in the low density regions. at the same time it is edge preserving. the result of five nn-median filtering iterations on the low noise version of the voronoi kinematic clustering model is shown in fig. [fig:vor11]. with the exception of a few artificial edges the resulting watershed segmentation almost perfectly matches the intrinsic voronoi tessellation. figure [fig:vor15] shows the result for the high noise version of the same voronoi kinematic clustering model. in this case pure nn-median filtering is not sufficient. a much more acceptable result is achieved following the application of the watershed hierarchy segment merging operation and the removal of ridges with a density contrast lower than the 0.8 contrast threshold. for both the low-noise and high-noise realizations we find that the intrinsic and prominent edges of the voronoi pattern remain in place. nonetheless, a few shot noise induced artificial divisions survive the filtering and noise removal operations. they mark prominent coherent but fully artificial features in the noise. given their rare occurrence we accept these oversegmentations as inescapable yet insignificant contaminations. the watershed segmentation retrieved by the watershed voidfinder is compared with the intrinsic (geometric) voronoi tessellation. the first test assesses the number of false and correct wvf detections. a second test concerns the volume distribution of the voronoi cells and the corresponding watershed void segments. for our performance study we have three basic models: the intrinsic (geometric) voronoi tessellation, and the low noise and high noise voronoi clustering models (table [tab:vorkinmpar]). the voronoi clustering models are processed by wvf. in order to assess the various steps in the wvf procedure the models are subjected to different versions of the wvf. the second column of table [tab:void] lists the differently wvf processed datasets. these are:

1. _original_: the pure dtfe density field, without any smoothing or boundary removal, subjected to the watershed transform.
2. _minmax_: only the nn-_min/max_ filtering is applied to the dtfe density field before watershed segmentation.
3. _med_: a number of iterations of median natural-neighbour filtering is applied to the dtfe density field.
in all situations this includes max/min filtering afterwards.
4. _hierarch_: following the watershed transform, on the pure non-filtered dtfe density, a density threshold is applied. the applied hierarchy threshold level is such that all segment boundaries with a density lower than the threshold are removed as physically insignificant.
5. _medhr_: a mixed process involving an iterated median filtered dtfe density field, followed by the watershed transform, after which the segment boundaries below the hierarchy threshold are removed.

note that the physically natural threshold (the typical void underdensity) is not really applicable to the heuristic voronoi models. on the basis of the model specifications the threshold level has been set correspondingly. each of the resulting segmentations is subjected to a range of detection assessments. these are listed in the 3rd to 7th column of table [tab:void]. the columns of the table contain respectively the number of wvf void detections, the number of false splits, the number of false mergers, the number of correctly identified voids, and the correctness measure. while the top block contains information on the intrinsic (geometric) voronoi tessellation, the subsequent two blocks contain the detection evaluations for the _low noise_ and _high noise_ models. the false detections are split into two cases. the first case we name _false splits_: a break-up of a genuine cell into two or more watershed voids. the second class is that of the _false mergers_: the spurious merging of two voronoi cells into one watershed void. the splits, mergers and correct voids are computed by comparing the overlap between the volume of the voronoi cell and that of the retrieved watershed void. a split is identified if the overlap percentage with respect to the voronoi volume is lower than a threshold of 85 percent of the overlapping volume. along the same line, a merger concerns an overlap deficiency with respect to the watershed void volume. when both measures agree for at least 85 percent a void is considered to be correct. the correctness of a given segmentation is the percentage of correctly identified voids with respect to the 180 intrinsic voronoi cells.
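the split/merger/correct bookkeeping described above can be made concrete with a short sketch. this is one reasonable reading of the stated 85 percent overlap criteria, not necessarily the exact bookkeeping used for table [tab:void]; it assumes that both the intrinsic voronoi cells and the wvf segments are available as integer label grids of identical shape.

```python
import numpy as np

def classify_voids(true_labels, wvf_labels, threshold=0.85):
    """classify every intrinsic (voronoi) cell as correct, split or merged.

    for each true cell the wvf segment with the largest overlap is found;
    the cell counts as correct when the overlap exceeds `threshold` of both
    the cell volume and the segment volume, as a split when the cell volume
    is under-covered, and as a merger otherwise.
    """
    seg_vol = {s: int(np.sum(wvf_labels == s)) for s in np.unique(wvf_labels)}

    n_correct = n_split = n_merger = 0
    for c in np.unique(true_labels):
        in_cell = (true_labels == c)
        cell_vol = in_cell.sum()

        segs, counts = np.unique(wvf_labels[in_cell], return_counts=True)
        best = np.argmax(counts)
        f_cell = counts[best] / cell_vol            # fraction of the cell covered
        f_seg = counts[best] / seg_vol[segs[best]]  # fraction of the segment covered

        if f_cell >= threshold and f_seg >= threshold:
            n_correct += 1
        elif f_cell < threshold:
            n_split += 1
        else:
            n_merger += 1
    return n_correct, n_split, n_merger
```

the correctness measure of table [tab:void] then follows as the number of correct voids divided by the number of intrinsic cells (180 for the models used here).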
judging by the number of voids in the low noise model, it is clear that smoothing or some other selection criterion remains necessary to reduce the number of minima from 850 to a number close to the intrinsic value of 180. the second row shows the results for the case when just the max/min filter is applied. this step already reduces the number of insignificant minima by 60 percent. it is an indication of the local character of the shot noise component. the next three rows list the results for various iterations of the median filtering. with just 2 iterations almost 90 percent of the voids are retrieved. most of the splits are removed at 2 iterations. this result does not improve with more median filtering; even up to 20 iterations this just increases the number of mergers as more walls are smoothed away. the number of splits also increases as minima begin to merge. in general the same conclusion can be drawn for the high noise model. rank-ordered nn-median and nn-min/max filters manage to reduce the number of insignificant minima by 80 percent (cf. the number of voids in the second and third row). these models attain a correctness of approximately fifty percent. mere rank-ordered filtering is evidently insufficient. we also ran a _hierarch_ model which did not include median filtering; instead only insignificant boundaries were removed. it achieved a recovery of fifty percent. combining both methods (_med5hr_ and _med20hr_) recovers 80 to 90 percent of the voids. the success rate may be understood from the complementarity of both methods: while the median filtering recovers the coherent structures, the thresholding removes those coherent walls that are far underdense. the translation to a cosmological density field is straightforward. the rank-ordered filtering ensures that insignificant minima are removed and that the watershed will pick up only coherent boundaries. thresholding is able to order these walls by significance and to remove the very underdense and insignificant walls. in fig. [fig:voiddis] we compare the distribution of the void volumes. the histogram shows the distribution of equivalent radii for the segment cells, the solid line histogram shows the (geometric) volume distribution for the intrinsic voronoi tessellations. on top of this we show the distributions for the various (parameterized) watershed segmentation models listed in table [tab:void]. not surprisingly the best segmentations have nearly equivalent volume distributions. for the lownoise models this is _med2_ (lefthand), for the highnoise models _med20hr_ (righthand). this conclusion is in line with the detection rates listed in table [tab:void]. the visual comparison of the intrinsic geometric voronoi tessellations and the two best segmentations, _med2_ for the lownoise model and _med20hr_ for the highnoise version, confirms that the visual impression of these watershed renderings and of the original voronoi model is very much alike. we have also assessed the cell-by-cell correspondence between the watershed segmentations and the voronoi model. identifying each watershed segment with its original voronoi cell, we have plotted the volume of all watershed cells against the corresponding voronoi cell volumes. the scatter plots in fig. [fig:voidvol] form a convincing confirmation of the almost perfect one-to-one relation between the volumes derived by the wvf procedure and the original volumes. the only deviations concern a few outliers. these are the _hierarchy merger_ segments for which the watershed volumes are too large, resulting in a displacement to the right. while the volumes occupied by the watershed segments in fig. [fig:best] do overlap almost perfectly with those of the original voronoi cells, their surfaces have a more noisy and erratic appearance. this is mostly a consequence of the shot noise in the (dtfe) density field, induced by the noise in the underlying point process. the crests in the density field are highly sensitive to any noise. in addition, to assess the impact of the noise on the surfaces of the watershed segments we compared the watershed segment surface areas with the voronoi cell surface areas. the results are shown in fig. [fig:voidsurf]. we tested the lownoise _med2_ and the highnoise _med20hr_.
in both cases we find a linear relationship between the watershed surface and the genuine voronoi surface area. both cases involve considerably more scatter than that for the volumes of the cells. in addition to an increased level of scatter we also find a small, albeit significant, offset from the one-to-one relation. the slope of the lownoise model is only slightly less than unity; the highnoise model slope deviates considerably more. these offsets reflect the systematically larger surface areas of the watershed segments, a manifestation of their irregular surfaces. it is evident that the level of irregularity is more substantial for the highnoise than for the lownoise reconstructions (cf. fig. [fig:best]). the scatter plots also reveal several cells with huge deviations in surface area. contrary to expectation there is no systematic trend for smaller cells to show larger deviations. some of the small deviating cells can be recognized in fig. [fig:best] as highly irregular patches. the large deviant cells correspond to watershed segments which, as a result of noisy boundaries, got wrongly merged. while the irregularity of the surface areas forms a good illustration of the noise characteristics of the watershed patches, for the purpose of void identification it does not pose a serious problem. smoother contours may always be obtained by applying the flooding process to a properly smoothed field. some suggestions for how this may be achieved follow in the discussion. the wvf void finder technique is based on the watershed transform known from the field of image processing. voids are identified with the basins of the cosmological mass distribution, the filaments and walls of the cosmic web with the ridges separating the voids from each other. stemming from the field of mathematical morphology, watershed cells are considered as patches locally minimizing the "topographic distance". the wvf operates on a continuous density field. because the cosmological matter distribution is nearly always sampled by a discrete distribution of galaxies or n-body particles, the first step of the wvf is to translate this into a density field by means of the delaunay tessellation field estimator (dtfe). because the wvf involves an intrinsically morphological and topological operation, the capability of dtfe to retain the shape and morphology of the cosmic web is essential. it guarantees that within this cosmological application the intrinsic property of the watershed transform to act independently of the scale, shape and structure of a segment is retained. as a result, voids of any scale, shape and structure may be detected by wvf. in addition to the regular watershed transform the wvf needs to invoke various operations to suppress (discreteness) sampling noise. in addition, we extend the watershed formalism such that the wvf is capable of analyzing the hierarchy of voids in a matter distribution, i.e. identifying how and which small scale voids are embedded within a void on larger scales. markers indicating significant void centres, false segment removal by hierarchy merging and natural neighbour filtering all effect an efficient noise removal.
hierarchy merging manages to eliminate boundaries between subvoids. natural neighbour median filtering, for various orders, is an essential new ingredient for highlighting the hierarchical embedding of the void population. it allows a natural selection of voids, unbiased with respect to the scale and shape of these structures. the voids that persist over a range of scales are expected to relate to the voids that presently dominate the cosmic matter distribution. in other words, wvf preserves the void hierarchy. the present work includes a meticulous qualitative and quantitative assessment of the watershed transform on the basis of a set of voronoi kinematic models. these heuristic models of spatial weblike or cellular galaxy or particle distributions facilitate the comparison between the void detection of the wvf and the characteristics of the cells in the original and intrinsically known voronoi tessellation. it is found that wvf is not only successful in reproducing the qualitative cellular appearance of the voronoi models but also in reproducing quantitative aspects, such as an almost perfect one-to-one match of cell size with wvf segment volume and the corresponding void size distribution. we foresee various possible improvements of the wvf. these concern in particular the identification of significant edges. one possibility is an extension in which not only the "topographic costs" but also the lengths of the contours are minimized. the length minimization will result in smoother boundaries. additional improvements may be found in better filtering procedures in order to facilitate studies of hierarchically structured patterns. we expect considerable improvements from anisotropic diffusion techniques and are currently implementing these in the wvf code. given the results of our study, we are confident in applying wvf to more elaborate and realistic simulations of cosmic structure formation and to large galaxy redshift surveys. the analysis of a set of gif cosmological simulations will be presented in an upcoming paper.

we wish to thank miguel aragn-calvo for permission to use fig. [fig:vorkinmschm]. we are grateful to the referee, mark neyrinck, for the incisive, detailed and useful comments and recommendations for improvements. we particularly thank the participants of the knaw colloquium "cosmic voids" in amsterdam, dec. 2006, for the many useful discussions and encouraging remarks.

k. , et al . ( the sdss collaboration ) , 2003 , aj , 126 , 2081 j. , mhnen p.,1998 , apj , 497 , 534 s. , mller v.,2002 , mnras , 332 , 205 a.j . , hoyle f. , fernando t , vogeley m.s . , 2003 , mnras , 340 , 160 f. , van de weygaert r. , 1996 , mnras , 279 , 693 e. , 1985 , apjs , 58 , 1 s. , lantuejoul c. , 1979 , in proceedings international workshop on image processing , ccett / irisa , rennes , france s. , meyer f. , 1993 , mathematical morphology in image processing , ed .m. dekker , new york , ch .12 , 433 m.j ., marimont d.h . , 1998 , ieee image processing , 7 , 421 j.r . , cole s. , efstathiou g. , kaiser n. , 1991 , apj , 379 , 440 j.r ., kofman l. , pogosyan d. , 1996 , nature , 380 , 603 g.r.,da costa l.n . ,goldwirth d.s ., piran t. , 1992 , apj , 388 , 234 j. , sambridge m. , 1995 , nature , 376 , 655 l. , padilla n.d . , valotto c. , lambas d.g , 2006 , mnras , 373 , 1440 g. , rood h.j . , 1975 , nature , 257 , 294 j.m . , sheth r.k ., diaferio a. , gao l. , yoshida n. , 2005 , mnras , 360 , 216 j.m . , pearce f. , brunino r. , foster c.
, platen e. , basilakos s. , fairall a. , feldman h. , gottlber s. , hahn o. , hoyle f. , mller v. , nelson l. , neyrinck m. , plionis m. , porciani c. , shandarin s. , vogeley m. , van de weygaert r. , 1999 , mnras , subm . m. , et al . , ( the 2dfgrs team ) , 2003 , vizier online data catalog , 7226 a. , rees m.j . , 1994 , apj , 433 , l1 v. , geller m. , huchra j.p . , 1986 , apj , 302 , l1 b. n. , 1934 , bull . acad . sci .clase sci ., 7 , 793 j. , da costa l.n . , goldwirth d.s . , lecar m. , piran t. , 1993 , apj . , 410 , 458 j.,joeveer m. , saar e , 1980 , mnras , 193 , 353 h. , piran t. , 1997 , apj , 491 , 421 h.h . , triay r. , 2006 , gr - qc/0607090 s.r . , piran t. , 2006 , mnras , 366 , 467 y. , piran t. , 2001 , apj , 548 , 1 d.m ., vogeley m.s . , 2004 , apj , 605 , 1 s. , okas e.l ., klypin a. , hoffman y. , 2003 , mnras , 344 , 715 l.a . ,thompson s.a . , 1978 ,apj , 222 , 748 n.a ., geller m.j . , 1999 , aas , 195 , 1384 o. , porciani c. , carollo m. , dekel a. , 2007 , mnras , 375 , 489 h.j.a.m .. , 1994 , morphological image operators , academic press m. , yepes g. , gottlber s. , springel w.,2006 , mnras , 371 , 401 y. , shaham j. , 1982 , apj , 262 , l23 y. , silk j. , wyse r.f.g . , 1992 , apj , 388 , l13 f. , vogeley m. , 2002 , apj , 566 , 641 , v. , 1984 , mnras , 206 , 1 g. , fairall a.p . , 1991 , mnras , 248 , 313 g. , colberg j.m ., diaferio a. , white s.d.m . , 1999 ,mnras , 303 , 188 r.p . ,oemler a. , schechter p.l ., shectman s.a . , 1981 ,apj , 248 , 57 r.p . , oemler a. , schechter p.l . ,shectman s.a . , 1987 , apj , 314 , 493 r. , 1998 , in mathematical morphology and its applications to image and signal processing , eds . h.j.a.m .heijmans , j.b.t.m .roerdink , p. 35 - 42, kluwer j. , park d. , 2006 , apj , 652 , 1 b. , weinberg d.h . , 1994 ,mnras , 267 , 605 h. , wasserman i. , 1990 , apj , 348,1 g. , 1975 , random sets and integral geometry , wiley h. , white s.d.m . , 2002 , mnras , 337 , 1193 f. , 1994 , signal processing , 38 , 113 f. , beucher s. , 1990 , j. visual comm .image rep ., 1 , 21 m. c. , gnedin n. y. , hamilton a. j. s. , 2005 , mnras , 356 , 1222 m. , 2007 , in cosmic voids , proc .knaw colloquium , knaw d. , colombi s. , dore o. , 2006 , mnras , 366 , 1201 t.n . , worring m. , van den boomgaard r. , 2003 , ieee transactions on pattern analysis and machine intelligence , vol . 25 , no . 3 , 330 a. , boots b. , sugihara k. , chiu s.n . , 2000 , 2nd ed ., wiley n.d . , ceccarelli l. , lambas d.g . ,mnras , 363 , 977 s.g ., betancort - rijo j.e ., prada f. , klypin a. , gottlber s. , mnras , 369 , 335 s.g ., prada f. , holtzman j. , klypin a. , betancort - rijo j.e . , mnras , 372 , 1710 p.j.e . , 2001 ,apj , 557 , 495 e. , van de weygaert r. , jones b.j.t ., 2007 , mnras , in prep .m. , basilakos s. , 2002 , mnras , 330 , 399 w.h . , schechter p. , 1974 ,apj , 187 , 425 e. , geller m.j ., 1991,apj , 373 , 14 j. , meijster a. , 2000 , fundamenta informaticae , vol .41 , 187 r.r ., vogeley m.s . , hoyle f. , brinkmann j. , 2005 , apj , 624 , 571 b.s . , melott a.l . , 1996 ,apj , 470 , 160 w. , 2007 , the delaunay tessellation field estimator , ph.d .thesis , university of groningen w. , van de weygaert r. , 2000 , a&a , 363 , l29 w. , van de weygaert r. , 2007 , a&a , to be subm .ryden b.s . ,melott a.l . , 2001 , apj , 546 , 609 j. , 1983 , image analysis and mathematical morphology , academic press s. , feldman h.a . , heitmann k. , habib s. , 2006 , mnras , 376 , 1629 r.k . , 1998 , mnras , 300 , 1057 r.k . , van de weygaert r. 
, 2004 , mnras , 350 , 517 n. , 1998 , the natural element method in solid mechanics . ph.d .thesis , theoretical and applied mechanics , northwestern university , evanston , il , usa a. , van gorkom j.h . , greg m.d ., strauss m.a . , 1996 ,aj , 111 , 2150 r. , 1991 , voids and the geometry of large scale structure , ph.d .thesis , leiden university r. , 1994 , a&a , 283 , 361 r. , 2002 , in modern theoretical and observational cosmology , proc .2nd hellenic cosmology meeting , ed .m. plionis , s. cotsakis , assl 276 , p. 119 - 272 , kluwer r. , 2007 , a&a , to be subm .r. , icke v. , 1989 , a&a , 213 , 1 r. , van kampen e. , 1993 , mnras , 263 , 481 r. , sheth r. , platen e. , 2004 , iau colloq .195 : outskirts of galaxy clusters : intense life in the suburbs , 58 l. , soille p. , 1991, ieee transactions on pattern analysis and machine intelligence , vol .13 , no . 6 , pp .583 g. f. , 1908 , j. reine angew .134 , 198 d.f . , 1992 ,contouring : a guide to the analysis and display of spatial data , pergamon press

appendix [app:mathmorph] and [app:wshedimpl] provide some formal concepts and notations necessary for appreciating the watershed transform. the wvf formalism introduced in this study is largely based upon concepts and techniques stemming from the field of image analysis. although they are used within the context of the morphological analysis of spatial patterns in cosmological density fields, the presentation is in terms of the original image analysis nomenclature. in this we remind the cosmology reader to translate "image" into "density field", "basin" into "void interior", etc. mathematical morphology (mm) is the field of image analysis which aims at characterizing an image on the basis of its geometrical structure. it provides techniques for extracting image components which are useful for representation and description, and was originally developed by g. matheron and j. serra. for more details we refer to the standard literature. it involves a set-theoretic method of image analysis and incorporates concepts from algebra (set theory, lattice theory) as well as from geometry (translation, distance, convexity). applications of mathematical morphology may be found in a large variety of scientific disciplines, including material science, medical imaging and pattern recognition. a cosmological density field may be mapped onto an image; an image is a function on an n-dimensional lattice space (usually 2-d or 3-d). although in principle images may be continuous, in practice they usually attain a finite number of discrete values. two important classes are:

* _binary image_: an image with only 2 intensity values (fig. [fig:mm_oper], top lefthand). we follow the convention to identify the binary image with the set of points for which the image value equals 1.
* _grayscale image_: an image with a discrete number of values (fig. [fig:mm_oper], bottom lefthand).

mathematical morphology was originally developed for binary images, and was later extended to grayscale images. the two basic operators of mathematical morphology are the _erosion_ and _dilation_ of a binary image.
in order to define these operators we need to invoke the _translation_ and _reflection_ of a set. the dilation or erosion of a binary image $X$ by a structuring element $B$ identifies whether the translated set has an overlap with, or is contained in, a certain part of $X$. in other words, dilation consists of the minkowski addition ($\oplus$) of the binary image with a structuring element $B$, while erosion is the minkowski subtraction ($\ominus$) with $B$. a structuring element may be any object; an example is the circle which functioned as a structuring element in fig. [fig:mm_oper]. erosion and dilation have a number of properties:

* translation invariance
* global scaling invariance
* addition
* complementarity: erosion of a set is dilation of its complement (and vice versa)
* adjunction relationship

the complementarity and adjunction relationship are two aspects of the existing duality between erosion and dilation. in general erosion and dilation induce a loss of information. erosion followed by dilation, or vice versa, will only result in a restoration of the original image if the sets involved are convex. in fact, various combinations of the erosion and dilation operators result in new operators. the most straightforward combination of dilation and erosion is that of the consecutive application of an erosion and a dilation. this introduces two new operators, the _opening_ and _closing_ operators. on a binary image they have the effects shown in fig. [fig:mm_oper] (top centre, top righthand). opening amends caps, removes small islands and opens isthmuses. closing, on the other hand, closes channels, fills small lakes and (partly) the gulfs. an additional combination of erosion and dilation is the subtraction of the first from the latter, called the _morphological gradient_. formally, an opening is an erosion followed by a dilation, a closing a dilation followed by an erosion,

\[\begin{array}{lcl} \textrm{opening:} & \psi_{B}(X) \,\equiv\, & (X \ominus B)\oplus B \\ \textrm{closing:} & \phi_{B}(X) \,\equiv\, & (X \oplus B)\ominus B \end{array}\]

characteristics of the opening and closing operators are:

* increasing
* idempotent: applying the operator twice yields the same output
* opening is anti-extensive: $\psi_{B}(X) \subseteq X$
* closing is extensive: $X \subseteq \phi_{B}(X)$

note that the extensivity and/or anti-extensivity of operators define the prime conditions for a _morphological operator_. the morphological operators which we discussed above can be generalized to grayscale images. a grayscale image is composed of a stack of subsets (sections), which together make up the support of the full image. the erosion and dilation of a grayscale image involve their application to each individual subset. extension of the binary image definitions (eqn. [eqn:bintrref]) implies a corresponding definition for a grayscale image. the effect of erosion on a grayscale image is the shrinking of the bright regions. bright spots smaller than the structuring element disappear completely while valleys (dark) expand. dilation has the opposite effect: dark regions shrink while bright regions expand. it illustrates the duality between erosion and dilation. for our purposes, this formal definition translates into the following practical implementation for 2-d grayscale images.
given a grayscale image ${\mathcal f}$ with grid elements $a[r,s]$, the dilation and erosion by a structuring element $b$ read

\[\begin{array}{lcll} {\mathcal f}\oplus b & \,=\,&{\max_{(i , j ) \in b}}\ \{a[r-i , s-j]\,+\,{\hat b}[i , j]\} & \quad\forall\, [ r , s ] \in a\,,\\ { \mathcal f}\ominus b & \,=\,&{\min_{(i , j ) \in b}}\ \{a[r+i , s+j]\,+\,b[i , j]\} & \quad\forall\, [ r , s ] \in a\,. \end{array}\]

as in the case of binary images, new operators may be defined through combinations of erosions and dilations. the closing and opening operators are defined in exactly the same way as for binary images. their effect is shown in the lower row of fig. [fig:mm_oper]. the morphological gradient is a dilation minus erosion operation. the gradient operator is often used in object detection because an object is usually associated with a change in grayscale with respect to the background. a variety of additional operators involving openings and closings may be defined. interesting ones are the _granulometries_, a sequence of erosions with increasing scale, and _distance transforms_. the segmentation of images is defined on the basis of a distance criterion, referring to the concept of _distance_ between subsections of an image. for appreciating the concept of watershed segmentation we consider two distance concepts, the _geodesic_ and the _topographic_ distance. geodesic distances are used in the case of binary images, while topographic distances form the basis for the segmentation of grayscale images. let x and y be two points of a set in an n-dimensional lattice space. the geodesic distance is the length of the shortest (geometric) path within the set connecting x and y (see lefthand frame, fig. [fig:skiz]). accordingly, the distance between two subsets of the set may be defined as follows: considering the set of all paths between any of the elements of one subset and those of the other, the distance between the two subsets is defined to be the minimum length of any of these paths. based upon the concept of geodesic distance one may formulate a distance function of a set: for each point of the set, the distance to its complement is computed. the distance function is the resulting map of distance values. regions whose distance is at least a given radius can be identified by erosion of the set with a disk of that radius. each of these regions forms a section (see sec. [app:grayscale]), in which the structuring element is the disk of the given radius. the distance map may be regarded as a stack of these sections. for illustration, the distance transform of the binary image of fig. [fig:mm_oper] is depicted in the central frame of fig. [fig:skiz]. the topographic distance between two points is defined with respect to the image map. taking the limit of a continuous map $f$, the topographic distance from x to y is determined by the path which attains the minimum pathlength through the "image landscape",

\[ T_{f}(x,y)\,=\,\min_{\gamma\in\Gamma}\,\int_{\gamma}\,\|\nabla f(\gamma(s))\|\,{\rm d}s\,, \]

in which the integral denotes the _image pathlength_ along a path $\gamma$, and $\Gamma$ is the set of all possible paths between x and y. this concept of distance is related to the geodesics of the surface: the path of steepest descent, specifying the track a droplet of water would follow as it flows down a mountain surface.
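the grayscale erosion, dilation, opening, closing and gradient operators defined earlier in this appendix are available in standard image processing libraries. the fragment below merely illustrates their behaviour and is not part of the wvf code; the use of scipy.ndimage and of a flat disk-shaped structuring element is our assumption for the example, chosen to match the 2-pixel opening/closing used in the wvf pixel noise step.

```python
import numpy as np
from scipy import ndimage as ndi

def morphology_demo(field, radius=2):
    """grayscale erosion, dilation, opening, closing and gradient.

    `field` is a 2-d grayscale image (e.g. a gridded density field);
    the structuring element is a flat disk of the given pixel radius.
    """
    # disk-shaped structuring element (boolean footprint)
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disk = (x**2 + y**2) <= radius**2

    eroded   = ndi.grey_erosion(field, footprint=disk)   # bright regions shrink
    dilated  = ndi.grey_dilation(field, footprint=disk)  # bright regions grow
    opened   = ndi.grey_opening(field, footprint=disk)   # erosion then dilation
    closed   = ndi.grey_closing(field, footprint=disk)   # dilation then erosion
    gradient = ndi.morphological_gradient(field, footprint=disk)  # dilation - erosion
    return eroded, dilated, opened, closed, gradient
```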
based on this specific definition of distance, we may segment a binary image through the identification of the zones of influence of well-defined subsets. in general the binary image contains a number of connected subsets (the black regions of fig. [fig:mm_oper]) and a set x (the white region) which contains all points that do not belong to any of the subsets. the geodesic zone of influence of a subset is the set of all points that are strictly closer to that subset than to any other subset. the _zone of influence_ of the full collection of subsets is the union of all individual influence zones. the boundary set consists of those points which do belong to the image yet are not contained in any of the zones of influence. these boundary points define the geodesic _skeleton_. in fig. [fig:skiz] (righthand frame) the skeleton is outlined by white lines. the skeleton is superimposed on its defining distance function landscape, its values indicated by a red colour gradient scheme (the corresponding landscape profile is depicted in the central frame). we should point out that here we follow the definition of mathematical morphology, although the name _skeleton_ of the large scale cosmic matter distribution has been used for different albeit related concepts. it is interesting to note that if we restrict the subsets to single points the skeleton naturally evolves into a _(first-order) voronoi tessellation_. it is the definition of the concept of skeleton within the specific context of grayscale images which brings us to the definition of the _watershed transform_ (see next section). grayscale images consist of a finite number of discrete levels. this results in a slightly more complicated situation for their segmentation. in the case of a binary image, an image is segmented on the basis of a _geodesic distance_. for the segmentation of grayscale images the distance concept needs to be generalized to that of _topographic distance_. *definition:* each _watershed basin_ is the collection of points which are closer in topographic distance to the defining minimum than to any other minimum. the literature is replete with algorithms for the construction of the _watershed transform_ of an image into its constituting _watershed segments_. they may be divided into two classes. one class simulates the _watershed basin_ immersion process. the second aims at detecting the watershed skeleton on the basis of the topographic distance. _watershed by immersion_ was introduced and defined in the image analysis literature. the first step of the procedure concerns the identification of the minima. formally, a minimum is a plateau at some altitude from which it is impossible to reach a point of lower height. starting with the lowest grayscale level and recursively proceeding to the highest level, the algorithm allocates the zone of influence of each minimum by gradually filling up the surrounding catchment basin. at a particular grayscale level, with altitude $h$, the algorithm has hypothetically inundated the landscape region with an altitude lower than or equal to $h$. the total inundated area is the complement of the section at that level. having arrived at level $h$ and proceeding to level $h+1$, the algorithm has three possibilities:

1. encounter a new minimum (at level $h+1$),
2. add new points to existing catchment basins (condition: points connected to only one existing basin),
3. encounter new points that belong to several basins.

situation (1) signals the event that a new minimum becomes active in the image. the second option concerns the extension of an existing basin by an additional collection of points. these are points identified at level $h+1$ which find themselves within the realm of a single basin existent at level $h$. they belong to the zone of influence of that basin and find themselves embedded in its extended counterpart at level $h+1$.
in situation (3) more than one basin may be connected to the newly flooded points. the correct subdivision is determined by computing the influence zones of all connected basins. defining the union of all catchment basins at level $h$, the union of catchment basins at level $h+1$ follows from computing these influence zones within the newly flooded region; the watershed procedure may thus be viewed as iteratively computing the zone of influence at each new grey scale level. following the rationale above, the final "immersion" definition of the watershed of the image is reached on completion of the procedure: the union of the points attached to every minimum equals the union of the catchment basins, and the remaining skeleton constitutes the watershed segmentation. the alternative strategy for determining the watershed transform is that of following the strict definition of segmentation by minimum topographic distance. the most notable schemes of this kind seek to find all points (pixels) whose topographic distance to a particular marker, i.e. a significant minimum in the density field, is the shortest amongst the distances to all other markers in the image. the formalism bears some resemblance to dijkstra's graph theoretical problem of tracing the shortest path forest in a point distribution. based on this similarity an image is seen as a connected (di)graph in which the pixels of the image function as the nodes of the graph. each point is reachable from each other point via the graph's edges. the latter usually define a network on the basis of 4- or 8-connectivities. the shortest path between two points (nodes) is found by traversing the graph and keeping track of the walking cost. critical for the procedure is the assignment of a proper measure of cost to each path. by definition it should be a non-negative increasing function and be related to the definition of topographic distance (eq. [eq:topdist]). this suggests the use of the maximum slope linking two pixels. this leads to a _cost function_ for the link between two neighbouring pixels; the total cost for a path connecting any two points via a sequence of intermediate points is then simply the sum of the costs of its links. the topographic distance is the infimum of this total cost over all paths connecting the two points. given this definition of the topographic distance within a grayscale image we can pursue the segmentation process described in section [app:segment], ultimately yielding the watershed segmentation. the watershed transform algorithm we follow implicitly incorporates the concept of markers. these markers are the minima used as sources of the watershed flooding procedure. as such they form a select subgroup amongst all minima of an image. the code for the watershed procedure involves the following steps:

* *initialization*: all pixels of the cube are initialized and tagged to indicate they have not yet been processed. each grayscale level is allocated a queue and all pixels are attached to the queue corresponding to their level.
* *minima*: each minimum plateau is tagged by a unique "minimum tag". the pixels corresponding to a minimum are inserted into the corresponding queue.
* *flooding*: all pixels in the grayscale level queues are processed, starting at the lowest grayscale level. unless a pixel is surrounded by a complex of unprocessed neighbours, it is assigned to the queue of the corresponding minimum. pixels which also border another minimum obtain a boundary tag.
* *final stage*: for any grayscale level the flooding stops when the queue has emptied. the procedure continues with processing the pixels in the queue for the next grayscale level. the process is finished once all level queues have been emptied.

voronoi clustering models are a class of heuristic models for cellular distributions of matter. they use the voronoi tessellation as the skeleton of the cosmic matter distribution, identifying the structural frame around which matter will gradually assemble during the emergence of cosmic structure. the interiors of the voronoi cells correspond to voids, the voronoi planes to sheets of galaxies. the edges delineating the rim of each wall are identified with the filaments in the galaxy distribution. the most outstanding structural elements are the vertices, corresponding to the very dense compact nodes within the cosmic web, the rich clusters of galaxies. we distinguish two different yet complementary approaches. one is the fully heuristic approach of the voronoi element models. they are particularly apt for studying systematic properties of spatial galaxy distributions confined to one or more structural elements of nontrivial geometric spatial patterns. the second, supplementary, approach is that of the voronoi kinematic models, which attempt to "simulate" foamlike galaxy distributions on the basis of simplified models of the evolution of the megaparsec scale distribution. the voronoi kinematic model is based upon the notion that voids play a key organizational role in the development of structure and make the universe resemble a soapsud of expanding bubbles. it forms an idealized and asymptotic description of the outcome of the cosmic structure formation process within gravitational instability scenarios, with voids forming around a dip in the primordial density field. for plausible structure formation scenarios, most notably the concordance cosmology, this evolution will proceed hierarchically. a detailed assessment of the resulting void hierarchy demonstrated that this leads to a self-similarly evolving peaked void size distribution. by implication, most voids have comparable sizes and excess expansion rates. the geometrically interesting implication is that in the asymptotic limit the "peaked" void distribution degenerates into one with only one characteristic void size. it yields a cosmic matter distribution consisting of _equally sized_ and expanding spherical voids, a geometrical configuration which is precisely that of a voronoi tessellation. this is translated into a scheme for the displacement of initially randomly distributed galaxies within the voronoi skeleton (see sect. [app:vorclustform] for a detailed specification).
within a void, the mean distance between galaxies increases uniformly in the course of time. when a galaxy tries to enter an adjacent cell, the velocity component perpendicular to the cell wall disappears. thereafter, the galaxy continues to move within the wall, until it tries to enter the next cell; it then loses its velocity component towards that cell, so that the galaxy continues along a filament. finally, it comes to rest in a node, as soon as it tries to enter a fourth neighbouring void.

the initial conditions for the voronoi galaxy distribution are:

* a distribution of nuclei, the _expansion centres_, within the simulation volume.
* a set of generated model galaxies whose initial locations are randomly distributed throughout the sample volume.
* for each model galaxy, the voronoi cell in which it is located, i.e. the closest nucleus.

all different voronoi models are based upon the displacement of a sample of "model galaxies". the initial spatial distribution of these galaxies within the sample volume is purely random, their initial locations defined by a homogeneous poisson process. a set of nuclei within the volume corresponds to the cell centres, or _expansion centres_, driving the evolving matter distribution. following the specification of the initial positions of all galaxies, the second stage of the procedure consists of the calculation of the complete voronoi track for each galaxy (sec. [sec:vortrack]). once the voronoi track has been determined, for any cosmic epoch one may determine the displacement that each galaxy has traversed along its path in the voronoi tessellation (sec. [sec:vortrack]). the first step of the formalism is the determination for each galaxy of the voronoi cell in which it is initially located, i.e. finding the nucleus which is closest to the galaxy's initial position. in the second step the galaxy is moved from its initial position along the radial path emanating from its expansion centre, i.e. along the direction defined by the corresponding unit vector. dependent on how far the galaxy is moved away from its initial location, set by the _radius of expansion_ to be specified later, the galaxy's path (see fig. [fig:vorkinmschm]) may be codified in terms of four different components:

1. the unit vector of the path within the voronoi cell,
2. the unit vector of the path within the voronoi wall,
3. the unit vector of the path along the voronoi edge,
4. the vertex.

the identity of the neighbouring nuclei, and therefore the identity of the cell, the wall, the edge and the vertex, depends on the initial location of the galaxy, the position of its closest nucleus and the definition of the galaxy's path within the voronoi skeleton. the cosmic matter distribution at a particular cosmic epoch is obtained by calculating the individual displacement factors for each model galaxy. these are to be derived from the global "void" expansion factor. this factor parameterizes the cosmic epoch and specifies the (virtual) radial path of the galaxy from its expansion centre. at first, while still within the cell's interior, the galaxy proceeds along this radial path; as a result, within a void the mean distance between galaxies increases uniformly in the course of time. once the galaxy tries to enter an adjacent cell and reaches a voronoi wall,
when , the galaxy's motion will be constrained to the radial path's component within the wall. the galaxy moves along the wall until the displacement supersedes the extent of the path within the wall and it tries to enter a third cell, i.e. when . subsequently, it moves along until it comes to rest at the node , as soon as it tries to enter a fourth neighbouring void, when . a finite thickness is assigned to all voronoi structural elements. the walls, filaments and vertices are assumed to have a gaussian radial density distribution specified by the widths of the walls, of the filaments and of the vertices. voronoi wall galaxies are displaced according to the specified gaussian density profile in the direction perpendicular to their wall. a similar procedure is followed for the voronoi filament galaxies and the voronoi vertex galaxies. as a result the vertices stand out as three-dimensional gaussian peaks. for a detailed specification of the dtfe density field procedure we refer to . in summary, the dtfe procedure for density field reconstruction from a discrete set of points consists of the following steps: 1. * point sample * + given that the point sample is supposed to represent an unbiased reflection of the underlying density field, it needs to be a general poisson process of the ( supposed ) underlying density field. 2. * boundary conditions * + the boundary conditions will determine the delaunay and voronoi cells that overlap the boundary of the sample volume. depending on the sample at hand, a variety of options exists: 1. _ empty boundary conditions : _ + outside the sample volume there are no points. 2. _ periodic boundary conditions : _ + the point sample is supposed to be repeated periodically in boundary boxes, defining a toroidal topology for the sample volume. 3. _ buffered boundary conditions : _ + the sample volume box is surrounded by a bufferzone filled with a synthetic point sample. 3. * delaunay tessellation * + construction of the delaunay tessellation from the point sample. while we also still use the voronoi-delaunay code of , at present there are a number of efficient library routines available. particularly noteworthy is the ` cgal ` initiative, a large library of computational geometry routines. 4. * field values point sample * + the estimate of the density at each sample point is the normalized inverse of the volume of its contiguous voronoi cell. the _ contiguous voronoi cell _ of a point is the union of all delaunay tetrahedra of which that point forms one of the four vertices. we recognize two applicable situations: + in the first situation the point sample is an unbiased sample of the underlying density field. a typical example is that of n-body simulation particles. for -dimensional space the density estimate is , with the weight of sample point ; usually we assume the same `` mass '' for each point. + in the second situation the sampling density follows a specified selection process. the non-uniform sampling process is quantified by an a priori known selection function . this situation is typical for galaxy surveys, where may encapsulate differences in sampling density as a function of sky position, as well as the radial redshift selection function for magnitude- or flux-limited surveys.
for -dimensional space the density estimate is . 5. * field gradient * + calculation of the field gradient estimate in each -dimensional delaunay simplex ( : tetrahedron; : triangle ) by solving the set of linear equations for the field values at the positions of the tetrahedron vertices. + evidently, linear interpolation for a field is only meaningful when the field does not fluctuate strongly. 6. * interpolation *. + the final basic step of the dtfe procedure is the field interpolation. the processing and postprocessing steps involve numerous interpolation calculations, for each of the involved locations. given a location , the delaunay tetrahedron in which it is embedded is determined. on the basis of the field gradient the field value is computed by ( linear ) interpolation. in principle, higher-order interpolation procedures are also possible. two relevant procedures are: + for nn-interpolation see and . implementation of natural neighbour interpolations is presently in progress. 7. * processing *. + though basically of the same character, for practical purposes we make a distinction between straightforward processing steps concerning the production of images and simple smoothing filtering operations, and more complex postprocessing. the latter are treated in the next item. basic to the processing steps is the determination of field values following the interpolation procedure(s) outlined above. straightforward `` first line '' field operations are _ image reconstruction _ and _ smoothing / filtering _. 1. * image reconstruction *. + for a set of image points, usually grid points, determine the image value , formally the average field value within the corresponding gridcell. in practice a few different strategies may be followed. + the choice of strategy is mainly dictated by accuracy requirements. for wvf we use the monte carlo approach in which the grid density value is the average of the dtfe field values at a number of randomly sampled points within the grid cell. 2. * smoothing * and * filtering * : + a range of filtering operations is conceivable. two of relevance to wvf are a smoothing of the field ( see sec. [ sec : natnghb ] ) and the convolution of the field with a filter function, usually user-specified. 8. * post-processing *. + the real potential of dtfe fields may be found in sophisticated applications, tuned towards uncovering characteristics of the reconstructed fields. an important aspect of this involves the analysis of structures in the density field. the wvf formalism developed in this study is an obvious example.
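as an illustration of steps 4-6, here is a minimal sketch of a dtfe-like density estimate and its linear interpolation, built on scipy's delaunay routines rather than on the codes referred to above; the ( 1 + d ) normalisation and the treatment of points near the sample boundary are assumptions and would require the buffered boundary conditions of step 2 in practice.

```python
import numpy as np
from math import factorial
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

def dtfe_density(points, weights=None):
    """density estimate at each sample point: proportional to the inverse
    volume of its contiguous voronoi cell, i.e. the union of the delaunay
    simplices of which the point is a vertex."""
    points = np.asarray(points, dtype=float)
    n, dim = points.shape
    weights = np.ones(n) if weights is None else np.asarray(weights, float)
    tri = Delaunay(points)
    contiguous = np.zeros(n)
    for simplex in tri.simplices:
        verts = points[simplex]
        # simplex volume = |det(edge vectors)| / dim!
        vol = abs(np.linalg.det(verts[1:] - verts[0])) / factorial(dim)
        contiguous[simplex] += vol
    # assumed dtfe normalisation; hull points are unreliable without a buffer zone
    return (dim + 1) * weights / contiguous, tri

def dtfe_field(points, where, weights=None):
    """linearly interpolate the point densities onto arbitrary locations,
    using the gradient implied by each delaunay simplex."""
    density, tri = dtfe_density(points, weights)
    return LinearNDInterpolator(tri, density, fill_value=0.0)(where)

# toy usage: density field of a random 2-d point set, sampled on a 32x32 grid
pts = np.random.default_rng(1).random((500, 2))
grid = np.stack(np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32)), axis=-1)
rho = dtfe_field(pts, grid.reshape(-1, 2)).reshape(32, 32)
```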
on megaparsec scales the universe is permeated by an intricate filigree of clusters, filaments, sheets and voids, the cosmic web. for the understanding of its dynamical and hierarchical history it is crucial to identify objectively its complex morphological components. one of the most characteristic aspects is that of the dominant underdense voids, the product of a hierarchical process driven by the collapse of minor voids in addition to the merging of large ones. in this study we present an objective void finder technique which involves a minimum of assumptions about the scale, structure and shape of voids. our void finding method, the watershed void finder ( wvf ), is based upon the watershed transform, a well-known technique for the segmentation of images. importantly, the technique has the potential to trace the existing manifestations of a void hierarchy. the basic watershed transform is augmented by a variety of correction procedures to remove spurious structure resulting from sampling noise. this study contains a detailed description of the wvf. we demonstrate how it is able to trace and identify, relatively parameter free, voids and their surrounding ( filamentary and planar ) boundaries. we test the technique on a set of kinematic voronoi models, heuristic spatial models for a cellular distribution of matter. comparison of the wvf segmentations of low noise and high noise voronoi models with the quantitatively known spatial characteristics of the intrinsic voronoi tessellation shows that the size and shape of the voids are successfully retrieved. wvf even manages to reproduce the full void size distribution function. [ firstpage ] cosmology : theory large - scale structure of universe methods : data analysis
patrolling is continuously traveling through an environment in order to supervise or guard it. although mostly used to refer to humans guarding an area, the term patrolling is also used to describe surveying through a digital, virtual environment. consider, for example, the task of repeatedly reading web pages from the world-wide-web in order to keep an updated database representing the links between pages, possibly for the purpose of later retrieval of pages in an accurate and prompt manner. these problems exhibit similarities, in the sense that they can be represented as traveling through the vertices of a graph. but there are also differences: a physical area is usually fixed in size, whereas the virtual area is, in general, prone to constant change. the number of human guards is, generally, fixed for the particular area being patrolled, while the number of software agents or `` bots '' performing a large scale patrolling task may be subject to change as well. partitioning a graph into similar sized components is an important and difficult task in many areas of science and engineering. to name a few examples, we can mention the partitioning of a netlist of an electronic vlsi design, the need for clustering in data mining, and the design of systems that balance the load on computer resources in a networked environment. the general graph partition problem is loosely defined as dividing a graph into disjoint, connected components, such that the components are _ similar _ to each other in some sense. practical considerations impose additional constraints. for example, an important problem, known as the _ graph k-cut _, requires a partition where the sum of the _ weights _ of vertices belonging to each component is more or less equal, and additionally, the number and/or the sum of weights of edges that _ connect disjoint components _ is minimized. the _ k-cut _ problem can model the distribution of tasks between computers on a network, while minimizing communication requirements between them. in this work we define a patrolling strategy that fairly divides the work of patrolling the environment among several a(ge)nts by partitioning it into regions of more or less the same size. we have no constraints on edges connecting different components, but we impose strict restrictions on our patrolling agents in the search for a heuristic multi-agent graph partitioning algorithm that may continuously run in the background of a host application. we are interested in programming the same behavior for each individual ant-like agent, which should be very simple in terms of resources, hardware or communications. furthermore the agents have no ids, hence are part of a team of units that are anonymous and indistinguishable from each other. our a(ge)nts should have very little knowledge about the system or environment they operate in, have no awareness of the size or shape of the graph, no internal memory to accumulate information, nor a sense of the number and locations of other agents active in the system. these limitations mean that such a multi-agent process has inherent scalability; the environment might be large, complex, and subject to changes, in terms of vertices, edges and even the number of agents, and our simple agents should still be able to patrol it, while also evolving towards, and ultimately finding, balanced partitions, if such partitions exist.
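to make the _ graph k-cut _ objective mentioned above concrete - it is not the criterion our agents optimize, since we place no constraint on cut edges - here is a small illustrative sketch ( names and data layout are ours ) that scores a labelled partition by its component sizes and its total cut weight.

```python
from collections import defaultdict

def partition_quality(edges, assignment, weights=None):
    """component sizes and total cut weight of a labelled partition.
    edges: iterable of (u, v) pairs; assignment: dict vertex -> component id;
    weights: optional dict (u, v) -> edge weight (defaults to 1 per edge)."""
    sizes = defaultdict(int)
    for vertex, component in assignment.items():
        sizes[component] += 1
    cut = 0.0
    for (u, v) in edges:
        if assignment[u] != assignment[v]:
            cut += 1.0 if weights is None else weights[(u, v)]
    return dict(sizes), cut

# a 2x2 grid split into two components of two vertices each: cut weight 2
edges = [(0, 1), (2, 3), (0, 2), (1, 3)]
print(partition_quality(edges, {0: "a", 1: "a", 2: "b", 3: "b"}))
```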
to simplify the discussion, we will think of a graph to be partitioned as a planar area, and the task at hand will be to partition the area into regions of more or less the same size. the area is modeled as a grid, where each vertex is a unit area, thus a balanced partition should have components of roughly the same number of vertices. in our scheme, agents are each given the task to _ patrol _ and define a region of their own, and have the ability to expand their region via conquests. like ants, our agents leave pheromone marks on their paths. the marks decay with time and are subsequently used as cues by all the agents to make decisions about their patrolling route and about the possibility to expand their region. by assumption, each agent operates _ locally _, thus it can sense levels of pheromones or leave pheromone marks on the vertex it is located at, on its edges and on adjacent vertices. while patrolling its region, an agent visits a vertex and reads the intensity of the pheromone marks that remain from previous visits. it then uses the reading, and the known rate of pheromone decay, to calculate the vertex's idle-time, the time that has passed since the previous visit. using the decaying pheromone mark we can choose a patrolling rule according to which the agents visit the vertices of their region in repetitive cycles, each vertex being marked with a pheromone once on each cycle. the patrolling process hence ensures that the idle-time measured by agents on visits to their region's vertices is the same, effectively encoding their region's _ cover time _, the time it takes for an a(ge)nt to complete a full patrolling cycle, and therefore it can also be used to estimate the region's size: the shorter the cover time, the smaller the region. we assume that each agent detects pheromones without being able to distinguish between them, except for recognizing its own pheromone. when an a(ge)nt hits a border edge - an edge that connects its region with one that is patrolled by another agent - it can use the neighbor's idle-time ( encoded in its pheromone marks ) to calculate the size of the neighboring region, and thereby decide whether to try to conquer the vertex `` on the other side of the border ''. this causes an effect that mimics pressure equalization between gas-filled balloons: at two vertices on opposing ends of a border edge, the agent that hits the border more frequently is the agent with a shorter cover time ( patrolling the smaller region ), hence it may attempt a conquest. we define that in a balanced partition, any pair of neighboring regions has a size difference of at most one vertex. this means that for a graph and agents, our partitioning heuristics ensures a worst case difference of vertices between the largest and smallest of the regions, once a balanced partition is reached. for example, in a graph of 1 million vertices ( e.g. database entries, each representing a web page ) and 10 agents ( network bots patrolling the pages ), this difference is truly negligible. additionally, the length of the patrolling path is predetermined, and is proportional to the size of the region being patrolled; therefore, when a balanced partition is reached, the algorithm guarantees that the idleness of any of the vertices of the graph is bounded by a number of steps equal to , about twice the size of the largest possible region ( note that is the number of vertices in the graph ).
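a minimal numerical sketch of this idle-time bookkeeping follows; the exponential decay law, the decay constant and the simplified form of the border comparison are illustrative assumptions ( the implementation described later replaces decaying levels by equivalent time stamps ).

```python
import math

DECAY_RATE = 0.1        # assumed exponential decay constant
INITIAL_LEVEL = 1.0     # marking level is fixed; agents cannot control it

def decayed(level, steps):
    """pheromone level after `steps` time steps of exponential decay."""
    return level * math.exp(-DECAY_RATE * steps)

def idle_time(level):
    """time since the mark was left, recovered from its decayed level."""
    return math.log(INITIAL_LEVEL / level) / DECAY_RATE

# an agent revisits a vertex it marked 14 steps ago and reads the level
print(round(idle_time(decayed(INITIAL_LEVEL, 14))))    # -> 14

# at a border edge, the agent compares its own cover time with the idle time
# read on the neighbouring vertex: a much larger idle time on the other side
# means the neighbouring region is larger, cueing a conquest attempt
our_cover_time, their_idle_time = 14, 31
may_attempt_conquest = their_idle_time > our_cover_time   # simplified cue
```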
in figure [ fig : evolution intro ] one can see a series of snapshots depicting 8 patrolling agents working to partition a 50x50 grid. the first snapshot shows an early phase of the joint patrolling algorithm, where agents have already captured some of the vertices around their initial random locations; in the second, the area is almost covered and most of the vertices of the graph are being patrolled; the third exhibits a phase when all the area is covered but the regions are not balanced; and finally, the last snapshot shows a balanced partition that the system evolved into. ( figure caption: evolution of a 50x50 grid graph partitioning by 8 a(ge)nts. ) this figure exhibits typical stages in the evolution of such a system, for which balanced partitions exist, and the environment graph remains stationary for a time long enough for agents to find one of them. often, the agents will relatively quickly find a partition that covers the graph, and is _ close _ to being balanced. then, on stationary graphs, they may spend a rather long time to reach a perfectly balanced partition. in a time-varying environment the system will continuously adapt to the changing conditions. the concept of partitioning a graph with a(ge)nts patrolling a region and exerting pressure on neighboring regions was first presented by elor and bruckstein in . they proposed a patrol algorithm named bdfs - balloon dfs - and this work is follow-up research on that problem. according to bdfs, an agent patrolling a smaller region conquers vertices from a neighboring larger region. to achieve the goal of patrolling an area, bdfs uses a variation of multi-level depth-first-search ( mldfs ), an algorithm presented by wagner, lindenbaum and bruckstein in . the task of mldfs, too, was to distribute the work of covering `` tiles on a floor '' among several identical agents. the floor-plan mapping of the tiles is unknown and may even be changing, an allegory for moving furniture around while agents are busy cleaning the floor. mldfs implements a generalization of dfs: agents leave decaying pheromone marks on their paths as they advance in the graph, and then use them either to move to the vertex least recently visited or to backtrack. when none of the choices are possible, either when the graph covering ends, or following changes in the graph or loss of tracks due to noise in the pheromone marks, agents _ reset _, thus starting a new search. the time of reset, named `` the time where new history begins '', is stored in the agent's memory, as a * search-level * variable. after a _ reset _, the cycle repeats, hence an agent traces pheromone marks left in an earlier cycle. the mere existence of a pheromone mark is, however, not sufficient for agents to choose a path not yet taken during the current search cycle. to select the next step, agents use the value stored in the * search-level * variable as a threshold: any pheromone that was marked on a vertex or edge prior to this time must have been the result of marking in an earlier cycle. in mldfs, pheromones of all agents are the same, and agents are allowed to step on each other's paths.
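the following is a loose schematic of the mldfs step rule as just described, not the published algorithm itself: decay is again emulated by time stamps, the explicit backtracking stack merely stands in for whatever backtracking mechanism the original uses, and the * search-level * threshold separates marks of the current cycle from older ones.

```python
def mldfs_step(agent, neighbours, marks, now):
    """one schematic step: move to the least recently visited neighbour not
    yet visited in the current cycle, else backtrack, else reset.
    agent: dict with 'pos', 'search_level' and 'stack'; marks: dict mapping
    vertex -> time of its most recent pheromone mark."""
    here = agent["pos"]
    marks[here] = now                      # refresh the mark on the current vertex
    # neighbours whose mark predates this cycle (or that were never marked)
    unvisited = [v for v in neighbours[here]
                 if marks.get(v, -1) < agent["search_level"]]
    if unvisited:
        agent["stack"].append(here)
        agent["pos"] = min(unvisited, key=lambda v: marks.get(v, -1))
    elif agent["stack"]:
        agent["pos"] = agent["stack"].pop()    # backtrack
    else:
        agent["search_level"] = now            # reset: "a new history begins"
    return agent
```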
for the task of partitioning a graph, in bdfs each agent has its own pheromone and it performs mldfs cycles on its `` own '' region of the graph, leaving its particular pheromone marks. as long as the region is stationary, bdfs agents exactly repeat their previous route. if the region changes, either expands or shrinks, it will cause bdfs to look for a new and possibly substantially different route before settling into the next search cycle. this occurs due to a subtlety in the way that depth-first-search defines a spanning tree, a special type of tree called a _ palm tree _, where each edge in the spanning tree connects a vertex with one of its ancestors, see e.g. tarjan . the spanning tree defined during a bdfs search cycle does not consider all edges emanating from _ all _ of the region's vertices, simply because some of the edges connect to vertices on neighboring regions. when bdfs conquers a vertex, it is possible that this vertex has more than one edge connecting to the region. all these edges will now be considered during the next search cycle, a process that may dictate a different palm tree. we call this event _ respanning _. in the algorithm we define here, named _ ant patrolling and partitioning, or antpap _, we use a different generalization of dfs that avoids respanning. furthermore we reduce the requirements on the agents' capabilities. for example, our agents have no memory, and also cannot control the levels of pheromones they leave; the pheromone level at the time of a marking is always the same. we further add the possibility for agents to _ lose _ a vertex if a conquest fails, and we provide a proof of convergence to balanced partitions, while experimentally observing much faster evolution towards such partitions. the subject of multi-agent patrolling has been extensively studied. lauri and charpillet also use an `` ants paradigm '', with a method based on ant colony optimization ( aco ), introduced by dorigo, maniezzo, and colorni in . aco provides multi-agent solutions for various problems, for example the traveling salesman problem ( tsp ) in complete and weighted graphs by a so-called _ ant-cycle _ algorithm. ants move to the next vertex according to a probability that is biased by two parameters: the closest neighbor vertex ( corresponding to the lowest edge weight ) and the level of pheromone on the edge. during their search, ants record their path to avoid visiting the same vertex twice. since at each step all ants traverse one edge to a neighboring vertex, all ants complete their travel at the same time. thereafter each ant leaves pheromone marks on the entire path it took. due to the probability bias, shorter edges have a higher probability to be traversed, thus it is probable that multiple ants traversed them, hence they tend to accumulate stronger pheromone levels. the cycles repeat, and with each cycle the biasing gets stronger towards the shortest path. the process ends after an a priori given number of cycles is completed or when the ants all agree on the shortest path. for the patrolling problem, lauri et al.
used this method to find multiple paths , one for each agent , _ before _ the agents begin their joint work .their algorithm employs multiple colonies of ants where each agent is assigned one ant on each colony .ants in a colony cooperate ( exchange information regarding their choices ) to divide the exploration into disjoint paths , leading the agents to eventually cooperate in the patrolling task .unlike for the tsp , the environment graph is not required to be complete , and ants are allowed to visit a vertex more than once when searching for a patrolling route .chevaleyre , yann , sempe , and ramalho compared cyclic patrolling strategies , in which agents tend to follow each other , to partitioning strategies , in which agents patrol each its own region . by applying several algorithms on several graphs examples, they found that the choice of strategy should be based on the shape of the graph .the partitioning based strategy gets better results on `` graphs having long corridors connecting sub - graphs '' , i.e. if there are high weight edges that are slow to traverse , it is better not traverse them at all by allocating them to connect disjoint partitions .there is substantial research on heuristics for partitioning of a graph , and some of it even related to multi - agent scenarios .inspired by ants , comellas and sapena presented yet another _ ants _ algorithm to find a _k - cut _ solution to a graph .the system is initiated by randomly coloring all the graph vertices in a more or less even number of colors and positioning the agents randomly on the graph .then a _ local cost _ value is calculated for each of the vertices , storing the percentage of neighbors that have the same color as its own .agents will then iteratively move to a neighboring vertex that has the lowest cost ( i.e. with the most neighbors of a _ different _ color than its own ) , and then switch colors with a random vertex on the graph , where the color of is the one most suitable for , i.e. similar to most of s neighbors . is selected from from a random list of vertices colored with the same color as , by choosing from this list the one with lowest cost ( most neighbors colored _ differently _ than ) .then the cost value is refreshed for both and . on each iterationthe number of cuts , defined as the number of edges connecting vertices of different colors , is calculated over all the edges of the graph and the lowest value is stored .the choices of agent moves are stochastic , i.e. agents have a probability to select the next vertex to move to by using the cost value , otherwise it selects another neighbor vertex at random .this allows the system to escape from local minima . unlike our algorithm , agents of comellas and sapena _ _ s ants _ _ aim to find a _ k - cut _ , and while doing so do not leave pheromones to be used as cues on vertices and edges they visit as our agents do . also , their _ _agents are assumed to have the ability to look at vertices that are anywhere in the graph and change their values , thus their sensing is not local as in our algorithm . 
in _ants _ , each iteration relies on a global calculation that involves access to values on _ all _ edges of the graph , in order to measure the quality of the partition so far determined , as well as storing the result .inspired by bee foraging , mccaffrey simulates a bee colony in order to find a _k - cut _ graph partition .each of the agents , in this case called _ bees _ , is assumed to know in advance the size and shape of the graph , as well as the number of components desired .the agent must have an internal memory to store an array of vectors listing the vertices of all sub - graphs of a proposed solution , as well as the number of cuts this partition has , as a measure of its quality . in a hive ,some 10% of the bees are considered _ scouts , _ all other agents being in one of two states , _ active _ or _ inactive ._ emulated scouts select a random partition of the graph . if the selection is better than what the scout previously found , it stores it in its memory and communicates it to other bees in the hive that are in an _ inactive _ state .some of those store the scout s solution in their own memory , change their state to _ active _ and begin to search for a better partition around this solution .if an active bee finds an improved solution it communicates it to the bees that are left in the hive . after looking at neighboring solutions for a long enough time , the _ active _ bee returns to the hive and becomes _ inactive _ again .the algorithm , therefore , is constantly searching for improvements in the quality of the partition that the bees collectively determine . the partitioning and patrolling multi - agent algorithms that we have surveyed above ,all assume that agents posses substantial internal memory .some algorithms assume that the agents are able to sense and even change values of vertices and/or edges in graph locations that are distant from their position in the graph , and sometimes they can even sense and/or store a representation of the whole graph in their memory .patrolling algorithms may be partition based , and then the task is divided into two stages . in the first stagethe graph is partitioned into disjoint components , and at the second stage each of the agents patrols one of those components . in our case , partitioning the graph , and thereby balancing the workload among our agents , is a requirement .our algorithm does not have stages , the agents simply perform pheromone directed local steps thereby carrying out a _ patrolling _ algorithm , and while doing so also implicitly cooperate in partitioning the graph .our agents have no internal memory at all .their decisions are based on pheromone readings from vertices and edges alone , and they can only sense or leave pheromone marks around their graph location .one may view our solution for patrolling and partitioning the graph environment as using the graph as a shared distributed memory for our oblivious agents .the task analyzed here is the partition of an area or environment into regions of similar size by a set of agents with severe restrictions on their capabilities .the inspiration for the algorithm are gas filled balloons ; consider a set of elastic balloons located inside a box , and being inflated at a constant and equal rate , until the balloons occupy the entire volume of the box . 
while inflating , it may be that one balloon disturbs the expansion of another balloon .this may cause a momentarily difference of the pressure in the balloons , until the pressure difference is large enough to displace the disturbing balloon and provide space for the expansion of the other . since the amount of gas is equal for all balloons, they will each occupy the same part of the volume , effectively partitioning it into equal parts .our agents mimic this behavior by patrolling a region of the area `` of their own '' , while continuously aiming to expand it .the area is modeled by a graph and the region is a connected component of the graph .when expanding regions touch , the agent on the smaller region may conquer vertices of the larger region .we assume that initially a given number of agents are randomly placed in the environment , they start the process of expanding and this process goes on forever .eventually the expansion is `` contained '' due to the interaction between the regions of the agent , hence the process will lead to an equalization of the sizes of the regions patrolled by the agents . in the discrete world of our agents a partition to regions of exactly the same size may not exist , therefore we define a balanced partition as such that any two neighboring regions may have a size difference of at most one vertex . * * for simplicity , a(ge)nts operate in time slots , in a strongly asynchronous mode , i.e. within a time interval every agent operates at some random time , so that they do not interfere with each other . during a given time slot, each agent may move over an edge to another vertex , and may leave pheromone marks on a vertex and/or edges .the marks , if made , are assumed to erase or coexist with the pheromone that remained there from the previous visit .agents have no control over the amount of pheromone they leave , its initialization level being always the same .thereafter , the pheromone level decays in time .each agent has its own pheromone , thus pheromones are like colors identifying the disjoint components and hence the partitioning of the graph .the agents themselves can only tell if a pheromone is their own or not .agents are oblivious , i.e. have no internal memory . on each time slot ,an agent reads remaining pheromone levels previously marked on the vertex it is located and its surroundings , and bases its decisions upon these readings .the readings and decisions are transient , in the sense that they are forgotten when the time slot advances .decaying pheromone marks on vertices and edges linger , serving both as distributed memory as well as means of communication . in our model, agents leave pheromones in two patterns : one pheromone pattern is marked when agents advance in their patrolling route , and the second pattern is used when agents decide to remain on the same vertex .pheromones are decaying in time , thus once marked on a vertex or edge , their level on the vertex or edge decreases with each time step .a straightforward way for implementing such behavior in a computer program , is to use the equivalent `` time markings '' , i.e. stamping the _ time _ at which a pheromone is marked on the vertex or edge .we therefore denote by the time of pheromone marking on vertex , hence means that an agent left a pheromone on vertex at time . as time advances , the age of the pheromone on vertex ,i.e. 
the time interval since it was marked, which can be calculated as , where is the current time, advances as well. this is equivalent to measuring the level of the temporally decaying pheromone on vertex , and using its value along with the known rate of decay to calculate its `` age ''. similarly, is a time marking, equivalent to the decaying pheromone level on the edge , where and are not necessarily the same. the use of time markings requires the computer program implementation to know the current time in order to be able to calculate the age of pheromones. however, the knowledge of current time is strictly limited to its use in the emulation of temporally decaying pheromones by equivalent time markings, thus it does not depart from our paradigm of obliviousness and local decisions based on decaying pheromone markings only. when an agent decides to leave a pheromone mark on a vertex, it may avoid erasing the pheromone that remains from the previous ( most recent ) visit. we denote the previous time marking as , thus when an agent marks a pheromone on vertex , the computer program implementation moves the value stored in to and afterwards sets the new time mark to . hence, the value , encodes the _ idle time _ of the vertex. * * agents patrol their region in a dfs-like route, in the sense that they advance into each vertex once and backtrack through the same edge once during a complete traversal of their region. when an agent completes traversing the region it resets ( i.e. it stays at the same location for one time step and refreshes its pheromone mark ), and subsequently starts the search again. the cycles repeat the same route as long as the region is unchanged. when the region does change, either expanding or shrinking, our agents persist in advancing and backtracking into a vertex through the same edge that was used to conquer that vertex. this is implemented by marking _ `` pair trails '' _, i.e. leaving pheromones over edges as well as vertices, when conquering and ( subsequently ) advancing into a vertex. a pair trail is a directed mark from a vertex to an adjacent vertex , of the form , and is one of the two pheromone patterns that agents leave. this behavior results in a patrolling process that _ follows the pair trails _, where agents advance through the earliest marked pair trail, refreshing the marks while doing so. when all pair trails to advance through are exhausted, the agent backtracks through the same pair trail it entered by. an example of a route and the spanning tree it defines are depicted in figure [ fig : patrolling route and tree ]. the departure from the classic dfs is that edges that are not marked as pair trails are ignored. the pair trails mark a _ spanning tree _ ( which is not necessarily a palm tree ) of the region, where its root is the vertex where the search cycle begins, and each pair trail marks the path advancing up the tree. when an agent backtracks to the root, it has no untraveled pair trail to advance through, and it restarts the search cycle, remaining for one time slot at the root. it then uses the second marking pattern, which is simply leaving a pheromone on the vertex, denoted as , where is the root. since agents advance and backtrack once from each vertex in their region ( except the root ) and then restart a patrolling cycle in the root, the number of steps in one patrolling cycle, called the _ cover time _, is . denotes the region of agent , the set of vertices that are part of patrolling cycle .
patrolling cycles repeat the exact same route as long as the region remains unchanged, hence the _ idle time _ of any vertex of the region is also the region's cover time. thus the pheromone markings on the vertex can be used to calculate the size of the region. * * to expand their region, agents may conquer vertices adjacent to ( vertices of ) their region. for agent to attempt to launch a conquest from a vertex to a target vertex , the following conditions must apply: 1. is not part of s region; let us then assume that it belongs to the region of another agent . 2. is subject to a _ double visit _ by , i.e. visits leaving pheromone marks _ twice _, while was not visited even once by during the same period of time. since the time difference between the two visits by is the _ cover time _ and the cover time is proportional to the size of the region, it means that s region is smaller than s, . an agent may check for this condition by evaluating if . 3. if the double visit condition is met, thus s region is larger, allow a conquest attempt if it is not larger by _ exactly one _ vertex, since a difference of one vertex is considered balanced. 4. if the double visit condition is met and s region is larger by _ exactly one _ vertex, allow a conquest attempt if vertex is stagnated - its pheromones are older than their purported cover time. an agent checks this by comparing the idle time to the cover time. depending on the above conditions, an agent may stochastically _ attempt _ the conquest of vertex , with a predefined probability . this mechanism works even if is not part of any of the other agents' regions, . in such a case the pheromone marks on will never be refreshed and the conquest conditions hold. * * when a region expands or shrinks as a result of conquests, it becomes _ inconsistent _, in the sense that the size of the region changed, but at least some of the pheromone marks on its vertices encoding the _ cover time _ do not reflect that immediately. to regain consistency on a vertex , the pheromone marks on it must be _ refreshed _, hence an agent must leave there a fresh pheromone, and that may occur only when the agent advances into vertex through a pair trail. therefore, there is a delay in the propagation of the change, thus there will be a temporary inconsistency between the actual size of the region and the cover time encoded on the region's vertices. that inconsistency is certainly not desirable since it might result in a miscalculation of conquest conditions. consider an agent with a larger region than two of its neighbors and . both neighbors will be attempting to conquer vertices from . since each agent's awareness is local, and have no means of knowing that is shrinking due to the work of the other as well, and as and repeatedly conquer vertices from , the combined conquests may accumulate to `` eat up '' too much out of s region, up to a point where the imbalance is reversed, and the areas of both and are now larger than s. nonetheless, the inconsistency is temporary. it is convenient to analyze this issue by considering the spanning tree of pair trails. when an agent conquers a vertex and expands its region, it results in adding a leaf to the spanning tree, and losing a vertex to another agent results in the _ pruning _ of the tree, the splitting of the tree into two or more branches, while the losing agent remains on one of them.
in either case, the _ follow the pair trails _ strategy ensures that the new route remains well defined. it is therefore sufficient for an agent to patrol its region twice to ensure that the region is consistent, as described in the following lemma: [ lem : a - region - is - consistent ] a region is consistent if it has remained unchanged for a period of time which is twice its cover time. on the first cycle, the agent leaves a fresh pheromone, , on each vertex, while the previous most recent visit, , may reflect an inconsistent state. the second cycle repeats the exact same route as the first, since the region remains unchanged, and now both the most recent visit as well as the one preceding it, indicated by pheromone levels and , are updated, thus reflects the cover time on all vertices of the route and the region becomes consistent. * * when balloons are inflated in a box, to the observer it looks like a smooth evolution where the balloons steadily grow and occupy more of the volume until the box is filled. but unlike gas inside a balloon, which exerts pressure in all directions concurrently, our discrete agents work in steps, where at each step they attend one vertex of their region, while the other vertices may be subject to conquests by other agents. since regions are defined by patrolling routes, an agent , by conquest of a single vertex from , may prune s region in a way that leaves to patrol a much smaller region, effectively rendering it smaller than s. now the `` balance tilts '', as the region that was larger prior to the conquest becomes the smaller. pruning may cut a spanning tree into two or more sections, but in many cases the sections of the tree may still be _ connected _ by edges that are _ not _ marked by a pair trail. in such a case, has an opportunity to mark a pair trail over such an edge and regain access to a branch still marked with its own pheromones. yet, sometimes the pruning divides the region into two unconnected components. we call these _ balloon explosions _, and when these occur it is more difficult for the agent that lost part of its region to regain its loss. therefore, when an agent launches a conquest attempt it is not always clear if its success will advance or set back the evolution towards convergence. it is then natural to add the following _ vertex loss _ rule: should an agent fail the conquest attempt, there is a predefined probability for _ losing _ the vertex from which the attempt was launched. _ losing _ the vertex may indeed be a better evolution step than succeeding in that conquest, resembling actions of withdrawal from local minima used in _ simulated annealing _. in fact, this property becomes instrumental in our convergence proof for the antpap algorithm. in order to prevent an agent from `` cutting the branch it is sitting on '', we limit vertex loss events to steps of the patrolling process in which agents backtrack, and, symmetrically, restrict conquests to steps in which the agents advance. we next list the algorithm describing the work of each agent on the graph environment. the listing is the entry point of an agent at time step ; upon entry, the agent is located on vertex .
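before the step summary that follows, here is a minimal python sketch of the decision taken in one such time slot, combining the conquest, pair-trail, vertex-loss and reset rules of the preceding subsections; the field names, the probability constants and the exact forms of the double-visit, balance and stagnation tests are illustrative assumptions, not the authors' definitions.

```python
import random

P_CONQUER = 0.5   # illustrative conquest probability (a tunable constant)
P_LOSE = 0.5      # illustrative vertex-loss probability on a backtrack step

def one_slot(now, here, readings, cover_time, backtracking, lose_flag=False):
    """choose the action of one a(ge)nt for a single time slot, using only
    local readings around the current vertex `here`.

    readings maps each neighbour u to a dict with keys:
      'own'         - True if u carries this agent's pheromone
      'last','prev' - two most recent time stamps on u (None if absent)
      'trail_out'   - time stamp of the pair trail here -> u (None if absent)
      'trail_in'    - time stamp of the pair trail u -> here (None if absent)
    """
    if not backtracking:
        for u, r in readings.items():
            if r['last'] is None:
                continue
            if r['own']:
                # an own-marked neighbour with no outgoing trail and a stale
                # mark is a previously lost branch: rejoin it
                if r['trail_out'] is None and now - r['last'] >= cover_time:
                    return ('rejoin', u)
                continue
            idle = now - r['last']
            their_cover = r['last'] - r['prev'] if r['prev'] is not None else None
            double_visit = idle >= cover_time                  # assumed form of the test
            nearly_balanced = their_cover is not None and their_cover == cover_time + 2
            stagnated = their_cover is not None and idle > their_cover
            if double_visit and (not nearly_balanced or stagnated):
                if random.random() < P_CONQUER:
                    return ('conquer', u)
                # a failed attempt would raise the lose flag, acted upon on a
                # later backtracking step from this vertex
    # advance along the oldest outgoing pair trail (the full algorithm also
    # compares the trail stamp with the mark on `here` to skip trails that
    # were already refreshed in the current cycle)
    pending = [(r['trail_out'], u) for u, r in readings.items()
               if r['own'] and r['trail_out'] is not None]
    if pending:
        _, u = min(pending, key=lambda t: t[0])
        return ('advance', u)
    # nothing left to advance into: backtrack through the trail we entered by
    entered = [u for u, r in readings.items() if r['trail_in'] is not None]
    if entered:
        if lose_flag and random.random() < P_LOSE:
            return ('backtrack_and_lose', entered[0])
        return ('backtrack', entered[0])
    # at the root with the cycle exhausted: stay put and refresh the mark
    return ('reset', here)
```

the returned action would then be applied by a simulation harness that updates the time stamps, the pair trails and the agent's position accordingly.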
in summary, during a single time slot an agent takes the first applicable of the following actions: * conquest: for each neighbor of the current vertex that is marked by another agent's pheromone and meets the _ double visit _ condition - provided the size difference allows it, or the vertex is stagnated ( pruned or empty ) - and provided the agent has not just backtracked to its current vertex, attempt a conquest if chance allows it ( a random draw against the conquest probability ); on success, move to the conquered vertex and exit this step. * rejoin: if a neighboring vertex carries the agent's own ( self ) mark but the _ double visit _ condition is met, rejoin it: move to it and exit this step. * advance: otherwise, select the oldest outgoing pair-trail; if the pair-trail points into a vertex whose mark is older, keep the previous time mark, refresh the pair-trail pointing to it, move to it and exit this step. * backtrack: if the pair-trail points back into the vertex through which the agent entered, backtrack through it; if the _ lose _ flag is set, remove the time marks of the current vertex and of its pair-trails before moving, and exit this step. * reset: otherwise, refresh the vertex and exit this step. during a patrolling cycle, an agent attempts conquests over all the border edges of all vertices of its region. hence its region may expand with additional vertices bordering its route. this causes the spanning tree defined by its dfs-route to have an ever growing number of branches as the patrolling cycles continue, resulting in a tree shape resembling a `` snow flake ''. additionally, agents have a strong tendency to form `` rounded '' regions, if the environment and the other agents' regions allow it. this happens because vertices that are candidates for conquest and are adjacent to more than one of the region's vertices have a higher probability to be conquered and incorporated into the patrolled region. these two properties, the snow-flake-like spanning tree and the rounded build-ups, often assist in achieving a smooth evolution towards convergence. thick regions make the possibility of major `` balloon explosions '' unlikely. the thinner branches at the ends of the snow-flake-like region cause the pruning of the region by another agent to merely `` shave '' off small fragments from the region, and are less likely to cut out a large portion of its vertices.
moreover, if a greater portion was cut out by pruning, it is highly likely that the lost portion is connected by edges that were not pair trails, making it easier for the losing agent to regain its vertices. figure [ fig : chart region size vs time ] depicts a chart showing a typically observed evolution of 12 regions in a 50x50 square grid. the chart describes how the sizes of the regions change with time. the images of figure [ fig : plausible rounded and thick ] are two snapshots taken during the same evolution, the second snapshot being of the balanced partition that the agents reached. the chart shows that some regions grew faster than others; then, at some point, the regions grew enough so that the graph was covered ( or close to being covered, due to continuous pruning ), and the smaller regions began to grow at the expense of the larger regions until they all reached a very similar size, albeit not yet balanced. for practical purposes, arriving at this state of the graph might suffice, especially if the graph is constantly changing and a partition that is balanced is not well defined. this process occurs quickly; then there is a much longer phase towards convergence. the `` turbulence '' seen at about is due to some mild `` balloon explosions '' followed by recovery and convergence. the last _ plateau _ towards convergence is short in this particular example, but in some other simulated examples it appeared much longer. this is especially noticeable in cases in which the evolution results in the system having two large regions that are adjacent and have very similar sizes but are not yet balanced, e.g. a size difference of 2 vertices. such scenarios may increase the time required for a double visit, which is a condition for conquest. figure [ fig : convergence time as func of graph size ] is a chart depicting the time to convergence for a system with 5 agents as a function of the graph size, overlaying the results of multiple simulation runs. it shows that the spread in the time to convergence grows with size, but it is also clear that the majority of the results, depicted as dense occurrences, are not highly spread in value, indicative of runs of `` typical evolution scenarios '' on a square grid, which was the environment tested in these simulations. the dotted line is an interpolation of the average convergence time among the results achieved for each graph size. the experimental evidence discussed above showcases an evolution of the system towards a balanced partition ( when the topology of the environment/graph remains stationary ), an evolution which is smooth, without `` dramatic '' incidents, driven by the antpap algorithm mimicking pressure equalization. however, antpap is a heuristic process, and the experimentally observed smooth convergence is by no means guaranteed. in the evolution towards balanced partitions there are various events that may substantially alter the size difference between regions, and lead the system to longer and chaotic excursions. chance dictates the way regions expand and, for example, a region may build up with less thickness in some areas, allowing other agents to cut across it, causing `` major balloon explosions ''. furthermore, even a quite well-rounded region may be subject to an `` unfair '' probabilistic attack, driven to cut through its width and eventually succeeding in removing a large portion of its area.
to make things even worse, the portion of the split region that ceased to be patrolled , becomes a readily available prey to neighboring agents .therefore , although the system is relentlessly progressing towards a balanced partition due to the rules of `` pressure equalization '' , such `` balloon explosions '' are singular events that may significantly derail the smooth evolution towards convergence , slowing the process considerably .one may wonder if there are conditions were these events occur repeatedly , making the convergence into a balanced partition an elusive target , that may even never be reached .clearly , there are systems where a balanced partition can not be reached , simply because one does not exist .an evident example is a graph having the shape of a star .consider a graph of 7 vertices , one at the center and three branches of two vertices each .a system with 7 vertices and 2 agents should be partitioned into two connected components , one of 3 vertices and the other of 4 .but such partition does not exist , thus repeated balloon explosions will forever occur .interestingly , a balanced partition does exist for the same graph with 3 agents .therefore , our first step towards proof is to precisely define the systems of interest , which are based on environment - graphs for which a balanced partitions exist for _ any _ number of agents ( up to a bound ) .the most general set of such graphs is an interesting question in itself . for our purposes, we shall limit our analysis to systems of the following type : 1 .the environment is a graph that has a hamiltonian path , a path that passes through every vertex in a graph exactly once , with any number of agents patrolling it . indeed , for any graph that has a hamiltonian path , we can find multiple possibilities for a balanced partition for any number of agents . to name one , the partition where each region includes vertices that are all adjacent one to another along the hamiltonian path , and the regions are chained one after another along the path .some of the regions can be of size and the others of size .note that for our purposes , the path does not need to be closed , so the existence of a _hamiltonian cycle _ is not required .2 . the environment is a _ k - connected _ graph , a graph that stays connected when any fewer than of its vertices are removed , with agents patrolling it . in , gyori shows that a _ k - connected _ graph can always be partitioned into components , including _k _ different and arbitrarily selected vertices. 
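for the hamiltonian-path case in item 1 above, the existence claim is constructive, as the following small sketch ( ours, purely illustrative ) shows: chopping the path into contiguous blocks of size floor(n/k) or ceil(n/k) yields regions whose sizes differ by at most one vertex, which is exactly the balance criterion used here.

```python
def path_partition(n, k):
    """split a hamiltonian path of n vertices into k contiguous regions whose
    sizes are either floor(n/k) or ceil(n/k), i.e. differ by at most one."""
    base, extra = divmod(n, k)
    regions, start = [], 0
    for i in range(k):
        size = base + (1 if i < extra else 0)
        regions.append(list(range(start, start + size)))
        start += size
    return regions

# seven vertices and three agents: region sizes 3, 2 and 2
print([len(r) for r in path_partition(7, 3)])    # [3, 2, 2]
```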
we shall analyze the evolution of the system as a stochastic process , and base our proof of convergence on the theory of markov chains .the remainder of this section is organized as follows : we define a `` system configuration '' by considering a simple evolution example and show that there always exists a mapping from configurations to well defined `` states '' .we then look at more complex configurations and realize that although the set of configurations is not bounded , it can be divided into a finite set of equivalence classes , each class representing a state .hence , we conclude that the number of states in the markov chain is finite , and the evolution of the configurations maps into corresponding transitions between the states of the chain .next we use the concept of consistency of a region , as presented in lemma [ lem : a - region - is - consistent ] , to conclude that if a balanced partition is attained , it may persist indefinitely .this means that balanced partitions map to recurrent states in the markov chain .we use this result to analyze the structure of the stochastic matrix that describes the chain .then we turn to prove that it is only the balanced partitions that are mapped to recurrent states .we first abstract the complexity of the problem by classifying all possible graph partitions into mutually exclusive classes : uncovered , covered but unbalanced , balanced but unstable , balanced and stable. then we proceed to analyze the changes that may cause the system to shift from a configuration in one class to a configuration in another .finally we show that when the graph has a hamiltonian path or is _ k - connected _ , despite the possibility that the system may repeatedly transition between these classes , it can not do so indefinitely and will inevitably have to sink into a recurrent state that belongs to a set of states which are all assigned to the same balanced partition , forming a so - called `` recurrent class '' .the vertices and edges of the graph with their respective pheromone markings , and the agent locations , will be called a configuration of the system , and denoted by .recall that , as discussed in section [ agents model ] ( sub - section `` agents '' ) , more straightforward time markings , ( in which the current time , , is marked ) , can be used to emulate their equivalent pheromone markings with temporal decay .the diagram of figure [ fig : 4 first steps ] depicts an example of transitions between configurations of a system of a 4-vertex graph on which two agents , the _ green _ and the _ cyan _ , are active__. _ _ is the initial configuration at _ _ .agents , shown as dots , are placed at some random initial vertices that are colored according to the agent patrolling them , _ green _ at the top - left vertex and the _ cyan _ at the bottom - right vertex .the pheromone markings on vertices are shown as ordered pairs of time markings , where is the most recent time that a vertex was marked with pheromone , and is the previous time that the vertex was marked ( so , generally , ) .when the two agents _ wake up _ at time slot the readings of pheromone marks around are all zero , therefore the _double visit _condition , is not met , and conquests are prohibited .hence the only possible action for the agents is _ reset _ ,i.e. leaving a fresh pheromone mark ( the current time ) , and thus transitioning to the new configuration . 
at ,the double visit condition is again not met , and the system transitions to ( recall that according to the antpap algorithm , at the time that a pheromone is marked on a vertex , the previous mark is moved from to ) . at , conquest conditions are met for both agents , and the system may now transition to any one of the configurations , according to whether one or more conquests succeed , or to , with a probability of , if all 3 conquest attempts fail .it is important to notice that is equivalent to ( and in fact the configurations are identical in terms of temporally decaying pheromone markings since ) . inboth we have a pheromone that has been freshly marked , so , and a which has been marked on the immediately preceding time slot , hence .we can , therefore , map each configuration of figure [ fig : 4 first steps ] to distinct `` states '' , states , as depicted in figure [ fig:4 first states ] , and group and into an _ equivalence class _ of system configurations , were configurations in such equivalence class map to the same `` state '' ( in this case the equivalence class that includes and maps to state 3 ) .calculating transition probabilities between `` states '' is straightforward , but sometimes subtleties arise , for example : , ( both agents attempt conquests , but fail ) , and ( due to the strong asynchronous assumption , the _ cyan _ agent has a probability of to move first within the time slot , and if so , its attempted conquest has a probability to succeed .the result is multiplied by a probability that the _ green _ agent fails in its conquest .if the _ green _ moves first , there is a probability that it would fail both attempts , and then a probability that the _ cyan _ succeeds ) .an edge which is part of a pair - trail pattern marking , so that , will be called a pair - trail edge . for a time interval in which no conquests or losses of a vertex in a region occur ,the region is considered `` stable '' .[ lem : consistent and zero = 00003d m states]a system comprising a graph with time - invariant topology , and agents , in a configuration , satisfying 1 .the regions marked by the agents are consistent 2 .pheromone marks exist only on vertices and pair - trail edges inside the k - regions , and no pheromone markings exist elsewhere on the graph , will transition through a finite sequence of m of states , where m is the least common multiple of the cover times of the k - regions ( i.e. , prior arriving to a configuration equivalent to ( i.e. , as long as the k regions are stable ( no conquests or losses occur ) . a consistent region is a region for which , where is the cover time of region ( see lemma [ lem : a - region - is - consistent ] ) . therefore , as long as the region is stable, all vertices and all pair - trail edges of , cyclically return to the exact same decaying pheromone levels , i.e. 
exactly the same temporal differences ( where is the current time ) every steps .we can thereby consider the regions in the partition , each repeating its pheromone level markings independently , as cyclic processes each with its own cycle time .hence , all the processes complete an integer number of cycles every steps , where is the least common multiple of the cycle times , , which ensures that all vertices of all the regions in the configuration _ _ that was reached have exactly the same temporal differences as in _ , _ and therefore __ in the above lemma , we required to have no pheromones at all on edges that are not pair - trail edges .but , if there were markings on such edges , the patrolling agents would simply ignore them , according to the antpap algorithm .therefore such markings have no influence on the possible future evolutions of the system .we shall formally define states of the system by grouping together configurations that have `` the same future evolutions '' , i.e. same possible future configuration transition sequences with the same probabilities .for example , as seen above , configurations that differ only by levels of pheromones on non pair - trail edges form such equivalence classes , hence each class defines a distinct state . in systemstheory , this is the classical _ nerode _ equivalence way of defining states .accordingly , two configurations that do not have the exact same patrolling routes ( either not having the same regions , or the agents have developed different patrolling paths within the regions ) can not have the same future evolutions , since , even without any conquests or loses , the future sequences of configurations that the systems go through are different due to the different patrolling steps .therefore these two configurations can not belong to the same equivalence class thus represent distinct states .next we turn to discuss pheromone markings that may exist on vertices and edges that are part of any current patrolling route , hence outside of all the regions .such scenarios may occur as result of a successful conquest by an agent that disconnects the region of a neighbor and hence prunes the spanning tree of that agent , splitting it into two or more disjoint branches .clearly , the latter agent remains on one of these branches , while the others cease to be part of its patrolling route and remain `` isolated '' .a segment of a spanning tree ( i.e. a set of vertices marked with pheromones and connected by pair trails ) that is part of a patrolling route , thus not included in any of the regions , forms what we shall call an isolated branch . for completeness , a single such vertex that is not connected by a pair - trail is also considered an isolated branch . 
in our pheromone marking model we have not limited the pheromone decay , thus , on an isolated branch , pheromones may decay indefinitely .this means that there is no bound to the set of configurations , and raises the question of whether there exists a bound to the set of equivalence classes to which they can belong , hence a bound on the number of states of the system .we shall , therefore , consider configurations that include isolated branches , and analyze the effect of pheromone decay in these branches on the evolution of the system , or more precisely , how such decay influences `` future '' system states .we have already seen that two configurations that do not have the exact same patrolling routes must represent different states , thus we shall verify that this distinction , by itself , does not produce an unbounded number of states . the number of permutations of possible stable regions is finite ( in a finite graph ) , and for each such permutation , the number of permutations of possible routes for the agents must be finite too ( since each of the regions have a finite number of edges ) . we are therefore left to show that starting at any arbitrary configuration with agents patrolling regions that also include isolated branches , all future evolutions in an arbitrarily large interval in which all regions remain stable , can be grouped into a finite number of equivalence classes .let us consider a setup of regions and an isolated branch , where the regions are stable in an arbitrarily large interval , and further assume that a vertex of the isolated branch is adjacent to a vertex in one of the regions ( see figure [ fig : isolated branch prunning ] ) . since the regions are stable , every patrolling cycle the value is refreshed , thus its time - marking increases with each cycle . on the other hand ,the time marking of the vertex in the branch remains unchanged .when the agent is on vertex the following scenarios may arise : 1 .the vertex on the branch might be marked with the same agent s pheromone , and hence moving into the adjacent vertex consists of the action of _ rejoining _ a vertex previously lost . according to antpap, agents check for a double visit condition , i.e. , prior to this action .when traversing onto the vertex , e.g. at time , the agent marks there a fresh pheromone .this may result in splitting the isolated branch into two or more disjoint branches .the agent will then follow the pair - trails emanating from vertex at which it is presently located , oblivious to the fact that pheromone marks on pair - trails and vertices of the branch are old .thereafter the agent traverses the section of the previously - isolated branch it is located on , thus refreshing its marks , until all the section is visited ( e.g. , in the example of figure [ fig : isolated branch prunning ] , the agent will visit all vertices on the lower section ) , then it returns to the vertex from which the conquest was launched .the other disjoint branches ( in the example of figure [ fig : isolated branch prunning ] , the upper section ) remain isolated .if the branch is marked with another agent s pheromone , and conquest conditions are met ( e.g. double visit ( ) and the regions size difference is not exactly one vertex ( or equivalently the difference in cover time is not 2 , i.e. , ) , the agent may attempt a conquest on the vertex and thereafter on all the vertices of the branch , one by one .3 . 
if the branch is marked with another agent's pheromone but the double visit condition is not yet met ( i.e. ) , it may remain so only for at most two cycles of patrolling . note that the time - mark on the region vertex grows with each of its agent's visits , while the mark on the branch vertex remains unchanged , and as a result a double visit condition will necessarily arise . furthermore , meeting the double visit condition also ensures that all the additional conquest conditions are met at the same time , since either the cover time encoded in the vertex of the isolated branch indicates a region size conducive to conquests , or the agent recognizes that the neighboring region is stagnant ( not having been patrolled for too long , i.e. , ) . therefore , any further decay of the pheromone mark on the isolated branch will not influence the future behavior of the system . a double visit is , therefore , a sufficient condition for an agent to conquer or rejoin a vertex on an adjacent isolated branch , hence we conclude the following : [ lem : n+m states ] a system comprising a graph with time - invariant topology , and agents , in a configuration that includes exactly one isolated branch , will transition through a finite sequence of at most states , where and , where is the cover time of region , as long as all the regions are stable ( no conquests or losses occur ) . the completion of two patrolling cycles of a region by its patrolling agent ensures that the double visit condition is met at any vertex of the region adjacent to a vertex of the isolated branch ( see discussion above ) . therefore , after an interval of ( i.e. once the agent on the largest region has completed two patrolling cycles ) , it is certain that the double visit condition is globally met ( i.e. for any vertex on any of the regions adjacent to any vertex in the isolated branch ) , and it will remain met on every time step that follows ( as long as the regions are stable ) . hence , any two configurations on which the double visit condition is globally met , and which have the same levels of pheromones on the vertices and pair - trail edges inside the regions ( but may differ in the levels of pheromones on the isolated branch ) , are equivalent . since we also know , based on lemma [ lem : consistent and zero = 00003d m states ] , that every time steps all pheromones on vertices and pair - trails included in the regions cyclically return to the exact same decaying pheromone levels ( i.e. exactly the same temporal differences ) , we conclude that a system in such a configuration will transition through at most distinct states to a configuration on which the double visit condition is globally met , and will then cyclically transition through states , reaching an equivalent configuration at each cycle .
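the bound in lemma [ lem : n+m states ] can be phrased as a small helper . the sketch below assumes , following the argument above , that the transient part is at most two patrolling cycles of the largest region ( after which the double visit condition is globally met ) and that the cyclic part has the lcm - based length of the previous lemma ; this particular decomposition of the bound is our reading of the ( partly elided ) lemma statement , not a quotation of it .

```python
from math import gcd
from functools import reduce

def state_bound_one_isolated_branch(cover_times):
    """Upper bound on distinct states visited while all regions stay stable.

    Assumed split, mirroring the argument in the text: a transient prefix of
    at most 2 * max(cover_times) steps until the double visit condition is
    globally met, followed by the cyclic part of length lcm(cover_times).
    """
    transient = 2 * max(cover_times)
    cyclic = reduce(lambda a, b: a * b // gcd(a, b), cover_times, 1)
    return transient + cyclic

print(state_bound_one_isolated_branch([4, 6, 10]))  # -> 20 + 60 = 80
```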
_our next analysis is of the effect of multiple isolated branches on future evolutions of a system with stable regions .consider two scenarios , both starting with the same configuration that has one isolated branch .an arbitrary time later , a conquest creates another isolated branch , the second branch being the same in both , only the _ time _ of its creation is different .hence , there can be an arbitrarily large time difference between the creation of the second isolated branch in the two scenarios .contemplating the case of an arbitrary number of isolated branches created each at an arbitrary time , the complexity of such presented scenarios may substantially grow .nevertheless , in term of system states the above complexity does not matter .once the decay of pheromones on an isolated branch is such that the double visit condition is globally met , the conquest or rejoin threshold is triggered , and afterwards no amount of further decay affects the future evolutions of the system .this insensitivity holds regardless of the presence of other isolated branches , simply because the double visit is a condition , limited to the time difference encoded in pheromones a on a vertex in a region and an adjacent vertex on the branch .thus , any two configurations that differ only by level of pheromones on isolated branches for which the double visit condition is globally met , are equivalent .particularly , there must exist a configuration such that the level of pheromones on isolated branches is at its `` highest level '' , i.e. the time marking on each vertex of each isolated branch is the highest that allows the double visit condition to be globally met .a branch with such `` highest level '' will have one vertex , where with a time - mark value where is the current time ( i.e. a pheromone was left there steps before the current time ) , and all other vertices and edges with ( lower ) values that agree with the ordered directions of pair - trails .hence we conclude , again , that any configuration that includes multiple isolated branches will transition at most distinct states as long as all the regions remain stable . for a system with a graph of stationary topology , and agents , the set of states is finite .based on the above analysis , we conclude : 1 .any configuration is equivalent to a configuration identical to except for having no pheromone marking on edges that are not pair trails .any configuration that includes isolated branches is equivalent to a configuration identical to except by the levels of pheromones on vertices and pair - trail edges in those isolated branches that globally meet the double visit condition on both configurations .specifically , in , vertices and pair - trail edges on each such isolated branch , will be of a `` highest level '' , i.e. will have one vertex , where with a time - mark value where is the current time ( i.e. a pheromone was left there steps before the current time ) , and , and all other vertices and edges with values that agree to the directions of pair - trails. therefore any configuration is grouped in an equivalence class with a correspondingly `` representative '' configuration . to find out how many such classes exist , we observe that a includes the following elements : regions with patrolling routes , isolated branches that do not globally meet the double visit condition and isolated branches of a highest level of time - markings that globally meet the double visit condition . 
however , we have that : 1 .the number of possible choices of regions is finite ( in a finite graph ) .2 . for any arbitrary set of regions ,the number of possible routes in the regions is finite .3 . for any arbitrary set of regions ( with a particular selection of routes )the number of vertices included in these is finite , thus the number of possible isolated branches is finite ( and their possible assignments to whether they meet the double visit condition or not is also finite ) .therefore the set of possible representative configurations is finite , each defines an equivalence class corresponding to a distinct state of the markov chain , hence the set of system states is finite . concluding the above analysis we see that in spite of the infinite number of configurations possible for a system , the number of system states , though quite large , is finite .let us denote the finite set of states of a system by _ s. _( gallager ) , a markov chain is an integer - time process , for which the sample values for each random variable lie in a countable set s and depend on the past only through the most recent random variable .clearly , any state of the system at time , formally represented by , is dependent only on the previous state , since our agents have no memory , and their decisions are based solely on readings from vertices and edges of the configuration , which are completely described by .we can , therefore , analyze the evolution of the system based on the theory of markov chains .our aim is to prove that the markov chain is not irreducible ( i.e. given enough time , the probability to reach some of its states tends to zero ) , and that all its recurrent states represent balanced partitions . to proceed with our analysis, we notice that the size of set grows very fast with the size of the graph .calculations show that even the simple example of figure [ fig:4 first states ] develops to a surprisingly large chain . in order to be able to describe the evolution of the system in a simple manner, we also define a _ partition _ of the environment .the coloring of each vertex of a configuration by its patrolling agent along with the set of unvisited vertices form a partition of the graph .partitions are unconcerned about the levels of pheromones on the vertices and indifferent to agent locations , thus only exhibit the regions of .many different configurations ( and hence states too ) correspond to the same partition , therefore we can use the concept of a _ partition _ as an abstraction referring to all those configurations .figure [ fig : float initial - partition ] is an example of a partition of the environment graph that the system we discussed above arrived to . from our previous discussionwe know that it represents a set of states of the underlying markov chain .one characteristic of that set of states is that it contains a cyclic path .this reflects the fact that agents may cyclically repeat their patrolling route for some period of time during which conquests or losses do not occur , and the partition remains stationary .in fact , having a cyclic path in the underlying markov chain is characteristic of any reachable partition .eventually conquests or losses are stochastically enabled leading to a different partition , and , as a result , to a different set of underlying states . 
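since the next state depends only on the current one , the evolution can be sampled exactly like any finite markov chain . the generic simulator below illustrates this memoryless sampling over a small hypothetical transition matrix ; it is not tied to any particular patrolling configuration .

```python
import random

def simulate_markov_chain(P, start, steps, seed=0):
    """Sample a trajectory of a finite Markov chain.

    P[i][j] is the probability of moving from state i to state j; the next
    state is drawn from the current row only, reflecting memorylessness.
    """
    rng = random.Random(seed)
    state, path = start, [start]
    for _ in range(steps):
        state = rng.choices(range(len(P[state])), weights=P[state])[0]
        path.append(state)
    return path

# Hypothetical 3-state chain in which state 2 is absorbing.
P = [[0.6, 0.3, 0.1],
     [0.2, 0.6, 0.2],
     [0.0, 0.0, 1.0]]
print(simulate_markov_chain(P, start=0, steps=10))
```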
in figure [ fig : transition of partitions ] , the system may remain in partition a for a while , as the underlying chain cycles through the relevant states , but eventually it will probabilistically transition to one of the partitions b , c , d , e . note that the transition from a to d means that both agents conquered one vertex each during the same time - slot . a _ recurrent class _ in a markov chain is a set of states which are all accessible from each other ( possibly passing through other states ) , and from which no state outside the set is accessible ( gallager ) . the following lemma shows that the set of states corresponding to any balanced partition includes recurrent classes : [ lem : c4 ] if a system remains in a balanced partition for a period of time equal to twice the cover time of its largest region , it will remain so indefinitely . we know from lemma [ lem : a - region - is - consistent ] that if a region remains unchanged for a period of time which is twice its cover time , it becomes consistent , so the pheromone levels in all of its vertices correctly indicate its cover time , . therefore , if the system remains in a balanced partition for a period twice the largest cover time ( the cover time of its largest region ) , it is guaranteed that all the regions are consistent . hence , we conclude that no conquest attempts are subsequently possible , since the system is balanced and conquest conditions can not be satisfied across any border edge . the conclusion of lemma [ lem : c4 ] is that a balanced partition with all its regions consistent must correspond to a _ recurrent _ ( and _ periodic _ ) _ class _ in the markov chain . the random process continuously repeats a series of states based on the individual agents' patrolling cycles . since agents may reach different patrolling routes for the same region , a balanced partition may correspond to multiple recurrent classes . additionally , we can conclude the following : [ cor : c3 - 1 ] a system may enter a state in which the partition is balanced , and then move into a state in which the partition is not balanced . clearly , while the conditions for consistency are not satisfied for one or more regions in the partition , a conquest or loss may still happen , hence the partition may become unbalanced . since recurrent classes exist , the stochastic transition matrix of the markov chain of the patrolling system must have the canonical block form \[ \mathbf{r } = \begin{bmatrix } p & 0 \\ t & q \end{bmatrix } , \] where $ p $ collects the recurrent classes , $ q $ describes transitions among the transient states , and $ t $ the transitions from transient states into the recurrent classes . the rank of a recurrent block is the least common multiple of the cover times of the _ k _ regions , and is therefore a function of the sizes of the regions in the partition . the contribution of a particular balanced partition would be a matrix comprising a set of such blocks on its diagonal , \[ p = \begin{bmatrix } p_{1 } & & \\ & \ddots & \\ & & p_{l } \end{bmatrix } , \] where $ l $ here is the finite number of possible route combinations in the regions that form the partition . therefore , our goal is to show that the structure of $ \mathbf{r } $ is indeed of this block form , where the diagonal blocks of $ p $ represent balanced and stable partitions , classified as recurrent classes .
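the canonical block form above can be illustrated numerically . the sketch below assembles a small hypothetical chain in that form ( a two - state periodic recurrent class standing in for a balanced partition , plus three transient states standing in for unbalanced ones ) and shows that the probability mass remaining in the transient states shrinks geometrically , which is exactly the property used in the convergence argument that follows . all numbers are invented for illustration .

```python
import numpy as np

# Hypothetical canonical form: states 0-1 form a periodic recurrent class,
# states 2-4 are transient.
P_rec = np.array([[0.0, 1.0],
                  [1.0, 0.0]])          # recurrent block (period 2)
R_blk = np.array([[0.1, 0.1],
                  [0.2, 0.0],
                  [0.0, 0.2]])          # transient -> recurrent
Q_blk = np.array([[0.5, 0.2, 0.1],
                  [0.1, 0.6, 0.1],
                  [0.2, 0.1, 0.5]])     # transient -> transient
T = np.block([[P_rec, np.zeros((2, 3))],
              [R_blk, Q_blk]])

dist = np.array([0.0, 0.0, 1.0, 0.0, 0.0])   # start in a transient state
for n in (5, 10, 20, 40):
    mass = np.linalg.matrix_power(T.T, n) @ dist
    print(n, mass[2:].sum())                 # transient mass decays geometrically
```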
a markov chain described by a stochastic matrix of this form ,will eventually enter a recurrent state , regardless of the initial state , and the probability that this takes more than steps approaches zero geometrically with ( see , for example , gallager ) .we conclude that a system with a graph _ _ ( of _ _ stationary _ _ topology ) and agents implementing antpap converges with probability 1 and a finite expected time to a balanced and stable partition .we presented and thoroughly analyzed the antpap algorithm for continuously patrolling a graph - environment with simple finite state automaton agents ( or bots ) using `` pheromone traces '' .the simulations presented so far were on an environment in the shape of a square .on such an environment , we know that many balanced partitions do indeed exist .practical scenarios are seldom so simplistic . in many important cases ,the environment graph is , in fact , uncharted and much more complex in its structure .still , agents implementing antpap will find a balanced partition with probability one ( almost surely ) , if such a partition exists , and will certainly divide their work fairly even when such partitions do not exist .the shape of the environment considerably affects the time to convergence .the number of balanced partitions that the environment graph has is , naturally , one of the major factors .so is their diversity , i.e. how different the balanced partitions are from each other .if the balanced partitions are similar to one another , the dependency of the time to convergence on the initial locations of the agents tends to be higher than if the solutions are further apart .consider the system of figure [ fig : t with odd ] .the initial positions of 7 agents are shown in the first snapshot at .next , at , the lower section becomes almost covered . at upper section is almost covered , and the two agents there clearly have larger regions than the agents in the lower section . at , the cyan agent is trapped in the upper section , and we witness a competition between the agents from the lower section to grow their regions into the prolonged section , that the cyan agent abandoned . at ,the competition ends after the yellow agent traversed into the upper section .now we have 3 agents on each of the upper and lower sections , and one on the prolonged section .soon after , the system reaches a balanced partition . clearly , there are many balanced partitions for this system , but all of them have one agent on the prolonged section , and 3 agents on each of the upper and lower sections. initial conditions with 3 agents on the upper and lower sections each will ensure faster convergence to a balanced partition . 
following this experiment and discussion , it is interesting to consider a system with the same environment graph and an number of agents .since only one balanced partition exists , it is reasonable to predict that the required time for convergence might be substantial .figure [ fig : t with 2 ] shows snapshots of an evolution of this system .both the violet and yellow agents are initially located in the lower section .after a while , the violet agent expands its region so that part of it extends into the upper section .a while later , the violet region covers almost all of the upper section as well as the prolonged section .then , the yellow agent begins to expand into the prolonged section , eventually causing a `` balloon explosion '' of violet s region .soon enough , the violet agent responds , and causes a balloon explosion of the yellow s region . due to the shape of the graph, this cycle may repeat over and over again .it will stop only when the single possible balanced partition is reached , and subsequently the regions `` lock - in '' , and the system remains stable . for that to happen, an agent must conquer the appropriate half of the vertices of the prolonged section . we knowthat this will eventually happen with probability 1 , however the time it will take can be very very long .the number of agents is also an important factor of convergence time .generally , more agents hasten the convergence .figure [ fig : plus w 5 agents ] shows an evolution of a system with 5 agents on a different environment .we shall call this environment graph the `` cross '' .figure [ fig : plus w 100 agents ] shows a system with the same `` cross '' graph and 100 agents .here , the `` pressure '' that an agent `` feels '' from other `` balloons '' quickly accumulates around its region , and the convergence is swift .figure [ fig : plus agents vs. convergence ] depicts results of multiple simulation runs , of systems with the `` cross '' graph of figure [ fig : plus w 5 agents ] , exhibiting convergence time as a function of the number of agents . in some systems, particular numbers of agents may cause a substantially larger time to convergence .in figure [ fig : rooms balanced ] we present a balanced partition in a graph environment that we call `` 6 rooms '' . systems with a `` 6 rooms '' graph and 6 agents sometimes require a substantially longer convergence time , as shown in figure [ fig : rooms agents vs. convergence ] . ignoring the outliers at 6 agents , figure [ fig : rooms agents vs. convergence no outliers ] shows that the chart exhibiting convergence time as a function of the number of agents is similar in shape to the one we have seen for the `` cross '' graph , in figure [ fig : plus agents vs. convergence ] . in the simulations described above , we tested the evolution of the multi - agent patrolling process until convergence to a stable and balanced partition .however remarkably , the system evolves rather quickly to close - to - balanced partitions due to the `` balloon '' forces implicitly driving the agents behavior . 
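the `` close to balanced '' notion used here is quantified below ( more than 99% of the graph covered , and the spread between the largest and smallest regions under 5% of the ideally balanced size ) ; a small helper expressing that check , with the thresholds exposed as parameters , might look as follows .

```python
def close_to_balanced(region_sizes, n_vertices, coverage_tol=0.99, spread_tol=0.05):
    """Check the 'close to balanced' criterion used in the simulations.

    True when more than coverage_tol of the graph is covered and the
    difference between the largest and smallest regions is below spread_tol
    of the ideally balanced size (graph size / number of agents).
    """
    covered = sum(region_sizes) / n_vertices
    ideal = n_vertices / len(region_sizes)
    spread = (max(region_sizes) - min(region_sizes)) / ideal
    return covered > coverage_tol and spread < spread_tol

# Example: 10 agents on a 1000-vertex graph with 995 vertices covered.
print(close_to_balanced([101, 99, 100, 98, 100, 99, 100, 98, 100, 100], 1000))  # True
```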
therefore , for practical purposes we see that the antpap algorithm balances the work of the agents much earlier than its convergence time , and the partitioning becomes reasonably good rather quickly .this property is crucial in case of time varying topologies .hence , antpap is a versatile and adaptive process .considering again the `` 6 rooms '' example with 10 agents , we see in figure [ fig : rooms region size evoultion ] a temporal evolution of antpap until a stable and balanced partition is achieved . as is clear on the chart displaying the time evolution of the sizes of the 10 regions , the system reached convergence at approximately steps .however it is also clear that after approximately steps , the difference between the largest and smallest regions in the partition of the environment graph is already insignificant . in the chart ,a system is defined as `` close to balanced '' when more than 99% of the graph is covered , and the difference between the largest and smallest regions is less than 5% the ideally balanced size ( i.e. the graph size divided by the number of agents ) .figure [ fig : rooms balanced with contours ] exhibits a partition reached when the system was `` close to balanced '' .both snapshots show the same partition ( at two different times ) .the snapshot at the right also shows the borders between regions that `` reached balance '' ( i.e. their size difference is at most 1 vertex ) depicted in purple .there is only one border which is not yet balanced , between the magenta region and the dark yellow region , located in the right `` corridor '' .these regions are close in their sizes , and as a result the double visit condition does not occur very often . despite the partition not being balanced yet ,the division of work between agents is already fair , hence for practical purposes , a `` close to balanced '' condition is good enough .we note in summary that antpap does not produce _ k - cut _partitions , and generally assumes that there are no constraints on the grouping of vertices .some important real - world problems impose such constraints , for example , the allocation of users in a social network to hosting servers , according to their interconnections .other real - world problems , however , do not impose such constrains , for example , the division of work patrolling the world - wide - web for content analysis and classification . in view of the good properties discussed above, we envision that antpap could become a building block for distributed algorithms aiming to fairly divide between agents the labor of patrolling an environment , using very simple agents constrained to local interactions based on tiny `` pheromone '' marks left in the environment .10 y. elor , a. m. bruckstein , multi - a(ge)nt graph patrolling and partitioning .proceedings of the 2009 ieee / wic / acm international joint conference on web intelligence and intelligent agent technology - volume 02 .ieee computer society , 2009 .r. g. downey , v. estivill - castro , m. r. fellows , e. prieto , f. a. rosamond , `` cutting up is hard to do : the parameterized complexity of k -cut and related problems '' , electronic notes in theoretical computer science , 78():209222,2003 y. chevaleyre , f. sempe , g. ramalho , a theoretical analysis of multi - agent patrolling strategies .proceedings of the third international joint conference on autonomous agents and multiagent systems - volume 3 .ieee computer society , pp .1524 - 1525 , 2004 .
a team of identical and oblivious ant - like agents a(ge)nts leaving pheromone traces , are programmed to jointly patrol an area modeled as a graph . they perform this task using simple local interactions , while also achieving the important byproduct of partitioning the graph into roughly equal - sized disjoint sub - graphs . each a(ge)nt begins to operate at an arbitrary initial location , and throughout its work does not acquire any information on either the shape or size of the graph , or the number or whereabouts of other a(ge)nts . graph partitioning occurs spontaneously , as each of the a(ge)nts patrols and expands its own pheromone - marked sub - graph , or region . this graph partitioning algorithm is inspired by molecules hitting the borders of air filled elastic balloons : an a(ge)nt that hits a border edge from the interior of its region more frequently than an external a(ge)nt hits the same edge from an adjacent vertex in the neighboring region , may conquer that adjacent vertex , expanding its region at the expense of the neighbor . since the rule of patrolling a region ensures that each vertex is visited with a frequency inversely proportional to the size of the region , in terms of vertex count , a smaller region will effectively exert higher `` pressure '' at its borders , and conquer adjacent vertices from a larger region , thereby increasing the smaller region and shrinking the larger . the algorithm , therefore , tends to equalize the sizes of the regions patrolled , resembling a set of perfectly elastic physical balloons , confined to a closed volume and filled with an equal amount of air . the pheromone based local interactions of agents eventually cause the system to evolve into a partition that is close to balanced rather quickly , and if the graph and the number of a(ge)nts remain unchanged , it is guaranteed that the system settles into a stable and balanced partition .
organic solar cells ( osc ) manufactured from organic blends represent a promising low - cost , rapidly deployable strategy for harnessing solar energy . in contrast to traditional silicon based solar cells , organic ( or polymer ) solar cells are low weight , printable on flexible substrate , and most importantly , can be produced at room temperature at very low cost .solvent - based thin - film deposition technologies ( e.g. , spin coating , drop casting , doctor blading , roll - to - roll manufacturing ) are the most common organic photovoltaic manufacturing techniques .these techniques , especially doctor blading and roll manufacturing , are very attractive , due to their ease of scale - up for large commercial production .all solution - processing techniques usually involve preparing dilute solutions of electron - donor and electron - accepting materials in a volatile solvent . after some form of coating onto a substrate , the solvent evaporates .an initially homogeneous mixture separates into electron - accepting rich regions and electron - donor rich regions as the solvent evaporates ( see figure [ fig : evapse : scheme ] ) . depending on the specifics of the polymer blend and processing conditions ( spin coating time , solvent type , nature of substrates ) ,different morphologies are typically formed in the active layer .the active layer of osc is a blend of two types of materials : electron donating and electron accepting material .the active layer is sandwiched between electrodes ( see figure [ fig : evapse : scheme ] right ) .the morphological distribution of the electron - donor and electron - acceptor subregions in the device strongly determines the power conversion efficiency ( pce ) of oscs .in fact , every stage of the photovoltaic process is affected by this morphology .consequently , there is immense interest to understand morphology evolution during fabrication .an important technological goal ( that understanding morphology evolution will help achieve ) is the ability to design fabrication processes to obtain tailored morphologies for high efficiency osc devices .current state - of - the - art approaches towards tailoring the manufacturing process are limited to combinatorial trial - and - error - based experimental investigations .furthermore , the inability to experimentally visualize morphology evolution hinders the ability to quantify the effect of various process and system variables ( such as evaporation rate , blend ratio , solvent type ) on morphology evolution .experimental techniques provide only limited data for analysis : ( a ) mainly limited to observations of the lateral organization of the top layer , and ( b ) mainly limited to the final morphology .in addition , visualizing 3d morphology remains challenging .these challenges serve as a compelling rationale for developing a computational framework that can model morphology evolution , thus significantly augmenting experimental analysis . computational approaches to this problemexist , but are mostly limited to one scale : atomistic scale or macro scale . 
from a macro scale perspectivethe problem of thin film formation is well - studied .the series of work summarized in link the macroscale film thickening during evaporation process with angular velocity of the coater , concentration , evaporation rate , and solution viscosity ._ however , morphology evolution is not analyzed in these studies ._ on the other end of the spectrum , morphology evolution in a typical osc system was recently studied using molecular dynamics simulations .the authors were able to predict phase separation between two typical components in a cubical domain of size and only for , without incorporating the macroscale effects of evaporation or substrate .to the authors best knowledge , there exists no meso - scale approach that links morphology evolution at the nano scale with macro - scale phenomena like evaporation , and substrate patterning .the development of such a model will provide significant advantages , it will in particular : * serve as a powerful tool to analyze morphology evolution over time in three dimensions .this can be used as a `` stereological microscope '' to visualize morphology evolution from early stages until the formation of the stable morphology .it is worth mentioning that three dimensional experimental reconstruction of the final morphology is possible using electron tomography but this approach to polymeric systems is exceedingly rare because of the required proper contrast between components . *allow for independent control over various process and system variables , thus making it easy to isolate factors affecting the process ( such as substrate patterning , solvent annealing , or blend ratio ) .* ability to perform high throughput analysis .such an ability allows automated exploration of the phase space of various manufacturing and system variables ( such as different blend ratios , evaporation rates , solvent choices , effect of substrates ) to understand their effect on morphology and performance .this opens up the possibility of data - driven knowledge discovery to understand the effects of different competing phenomena and , subsequently , tailoring the manufacturing process . in this work, we develop a computational framework to model morphology evolution during fabrication of organic solar cells .we formulate a model that takes into account all the important processes occurring during solvent - based fabrication of oscs .the model is based on a phase field approach to describe the behavior of multicomponent system with various driving forces .we develop an efficient numerical framework that enables three dimensional , long time - scale analysis of the fabrication .we illustrate the framework by investigating the effect of independent control over various external conditions : solvent evaporation , blend ratio , and interaction parameters .we further quantify the interplay between the solvent evaporation and diffusion within the film .to the best knowledge of the authors , this is the first comprehensive effort to construct a virtual framework to study 3d morphology evolution during solvent - based fabrication of organic solar cells .all solution - processing techniques usually involve preparing dilute solutions of electron - donor and electron - accepting materials in a volatile solvent . after some form of coating onto a substrate , the solvent evaporates .an initially homogeneous mixture separates into electron - accepting rich regions and electron - donor rich regions as the solvent evaporates . 
depending on the specifics of the blend and processing conditions ( spin coating time , annealing time , solvent type , nature of substrates ) ,different morphologies are typically formed .the two materials usually used in fabricating osc are a conjugated polymer and a fullerene derivative , the conjugated polymer is the electron donor material , while the fullerene derivative is the electron - acceptor .we will use the terms electron - accepting material ( electron - donating material ) , acceptor ( donor ) and fullerene ( polymer ) interchangeably in this article .there is a rich and complex collection of interacting phenomena that direct the morphology evolution during solvent - based fabrication ._ phase - separation _ is a key phenomena triggered by the _ evaporation of the volatile solvent_. the atmosphere on the free surface determines the evaporation rate of the solvent .the resulting morphology can form multiple phases , making the system a _ multi - component and multi - phase _ system .in addition , the morphology on the substrate s surface may differ from that in the ` bulk ' state . both _free surface and substrate _ influence the organization of the morphology and directly affect characteristics of the devices .in particular , chemical and physical patterning of the substrate have been shown to affect morphology evolution . in order to enable predictive modeling ,each of the following phenomena must be included in the computational model .the rate at which solvent is removed from the top layer depends on various factors ( e.g. solvent volatility , spinning velocity during spin - coating ) . during evaporation ,the initially dilute solution becomes enriched in the two solutes due to depletion of solvent .this enrichment results in increased interaction between the solutes and triggers morphology evolution .the evolution is critically determined by the evaporation profile and diffusion of solutes ( and solvent ) within the film . .the binodal curve is related to the equilibrium phase boundary between the single phase and the phase separated region . during typical fabrication ,initial dilute solution , , is pushed into two - phase region ( delimited by the spinodal line ) as solvent evaporates ( ) .solution separated into polymer - rich ( ) and fullerene - rich phase ( ) .the equilibrium compositions of two phases lie on the binodal curve .line connecting corresponding equilibrium compositions are called tie - lines ( right ) . , title="fig:",scaledwidth=40.0% ] .the binodal curve is related to the equilibrium phase boundary between the single phase and the phase separated region . during typical fabrication ,initial dilute solution , , is pushed into two - phase region ( delimited by the spinodal line ) as solvent evaporates ( ) .solution separated into polymer - rich ( ) and fullerene - rich phase ( ) .the equilibrium compositions of two phases lie on the binodal curve .line connecting corresponding equilibrium compositions are called tie - lines ( right ) . , title="fig:",scaledwidth=40.0% ]as the solvent is removed , the blend is pushed into the spinodal range ( immiscible conditions ) and induces phase separation ( see figure [ fig : ps : scheme ] left ) .phase separation ( or spinodal decomposition ) is a mechanism by which solution separates to create phases of different properties . 
under these conditions ,the solution is unstable and even small fluctuations lead to fast phase separation .the composition of phases changes and reaching equilibrium concentrations .usually , the solution separates into two phases .one phase is rich in donor , the other phase is rich in acceptor material .the exact composition of phases is determined by thermodynamic conditions .the creation of the two phases is followed by slow coarsening .the kinetics of phase separation and coarsening is affected by the kinetics of solvent removal . in a confined thin film geometry , the evolution of the morphology close to the walls / interfaces can be significantly different from that of the bulk .in particular , chemical interactions between the substrate and the solute as well as surface patterning can significantly affect morphology evolution .recent experimental studies of osc explore this possibility . by changing substrate properties and by ( chemically or physically ) patterning the substrate , it was possible to direct vertical segregation , percolation and control phase separation . in certain systems ,crystallization is an additional mechanism of morphology evolution . while incorporating crystallization is relatively straightforward in the current framework, we primarily focus on modeling evaporation and substrate induced spinodal decomposition in this paper .this is in line with experimental results which seem to suggest that phase separation is a key mechanism in morphology evolution for polymer - polymer blends .phase field methods have been used to model morphology evolution in heterogeneous materials typically consisting of grains or domains characterized by different structure , orientation or chemical composition .these methods are highly versatile ( due to a diffuse - interface formulation ) and easily represent the evolution of complex morphologies with no assumption made about shape or distribution of domains .various thermodynamic driving forces for morphology evolution , e.g. bulk energy of the system , interfacial energy , substrate energy can be easily introduced .additionally , the effect of different transport processes , such as mass diffusion , convection , or shear rates , can be directly introduced and exploited using this technique .the advantages and flexibility offered by the phase field method provide an ideal framework to model morphology evolution during fabrication of osc . in particular , the mechanisms that direct morphology evolution ( spinodal decomposition and crystallization )can be naturally modeled using this method .they have been well studied for alloy system ( dendritic growth and spinodal decomposition in alloy system ) . 
additionally , various driving forces for phase transformation , such as effect of substrate and evaporation ,can be introduced in a very straightforward way without substantial model reformulation .finally , phase field methods can be scaled to predict morphology evolution for device - scale problems , which is currently impossible using any other framework , like molecular dynamics .in this section , we formulate the phase field model to simulate morphology evolution in a ternary system with solvent evaporation and substrate interaction included into the model .[ ch : mo ] formulating a phase field model usually consists of three stages : ( 1 ) identifying order parameters ; ( 2 ) postulating free energy functional that depends on the order parameters ; ( 3 ) constructing governing equations that describe the evolution of the system towards a minimum energy state .we consider a ternary system consisting of polymer , fullerene , and solvent .we denote the volume fractions of polymer , fullerene and solvent as , and , respectively .we set the volume fractions as the conserved order variables ( since ) .note that during evaporation , while volume is not conserved , the sum of the volume fractions by definition is conserved . in the second step ,we construct the energy functional for this system .energy , , consists of two bulk terms : homogeneous energy , and interfacial energy between phases .the total energy is given as : dv+ f_s(\phi_p,\phi_f,\textbf{x } ) \label{eq : energy}\ ] ] the energy of the homogeneous system , , also called configurational energy or free energy of mixing , is the quantity which governs phase separation .homogeneous energy depends only on local volume fractions and is at least double - welled .wells correspond to the equilibrium concentrations of separated phases .in contrast , the interfacial energy depends on the composition gradient and is scaled by an interfacial parameter ( see second term in eq . [ eq : energy ] ) . in the current work ,we assume .we construct homogeneous energy , , using the flory - huggins formulation , which is suitable for polymer solutions . , to the flory - huggins energy function .this modification allows to improve the efficiency of computation , while not changing the rate of morphology evolution or the shape of forming structure .the parameter is set to a small value . ]according to this theory , the energy of the system is given by : \label{eqfloryhuggins}\ ] ] where : is the gas constant ; is the temperature ; is the volume fraction of component ; is the degree of polymerization of component . here, is the molar volume ; is a reference molar volume ( e.g. solvent ) ; and is the flory - huggins binary interaction parameter between component and .the interaction parameters and degree of polymerization define the shape of the binodal and spinodal lines ( figure [ fig : ps : scheme ] ) .one of the advantages of the phase - field method is the ability to introduce additional driving forces to the model in a straightforward way .for instance , the effect of the substrate is a surface energy term , that is simply added to the total energy of the system ( eq .[ eq : energy ] ) .a generalized form of the substrate energy is given in eq .[ eq : fs ] .patterning is introduced through the space dependent functions and in eq .[ eq : fs ] .these functions determine the geometry of patterning . 
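as a concrete reference for the homogeneous term , the sketch below evaluates the standard ternary flory - huggins free energy of mixing for the polymer / fullerene / solvent blend ; the functional form is the textbook one and is assumed to match eq . [ eqfloryhuggins ] , while all parameter values are placeholders rather than the calibrated values used later .

```python
import numpy as np

def flory_huggins(phi_p, phi_f, N_p=10.0, N_f=5.0, N_s=1.0,
                  chi_pf=1.0, chi_ps=0.3, chi_fs=0.3):
    """Ternary Flory-Huggins free energy of mixing, scaled by RT / v_ref.

    phi_p, phi_f: polymer and fullerene volume fractions (solvent fills the rest).
    N_i: degrees of polymerization; chi_ij: binary interaction parameters.
    Placeholder values, not the calibrated parameters of the paper.
    """
    phi_s = 1.0 - phi_p - phi_f
    entropy = (phi_p / N_p * np.log(phi_p)
               + phi_f / N_f * np.log(phi_f)
               + phi_s / N_s * np.log(phi_s))
    enthalpy = (chi_pf * phi_p * phi_f
                + chi_ps * phi_p * phi_s
                + chi_fs * phi_f * phi_s)
    return entropy + enthalpy

# Example: a dilute blend right after coating (mostly solvent).
print(flory_huggins(0.05, 0.05))
```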
at a point on the substrate that is chemically tuned to component , the value of , otherwise .the chemical specificity of patterning is reflected in the parameters and .parameter is a chemical potential favoring component at the substrate , and expresses the way interactions between the components near the substrate are modified by the presence of a pattern at the substrate . ds \label{eq : fs}\ ] ] once the energy of the system is specified , the governing equations of the evolution can be formulated .this is usually done by defining the chemical potentials of the system .the chemical potential quantifies how much the energy changes when the configuration changes .the chemical potential for polymer and fullerene are and , respectively .next , using fick s first law for the flux ( ) and the continuity equation ( ) we get the governing equation for each component .we consider only two of the three variables as independent ( since ) .the resulting cahn - hilliard - cook equations are given by : + \xi_p \label{eq : goveq1}\ ] ] + \xi_f \label{eq : goveq2}\ ] ] where is the mobility of component .the diffusivity , , is a linear combination of self - diffusivities of all components and their volume fractions : .the energy of ideal solution , , is used to link mobility with diffusivity and to comply with classic fick s law .the ideal solution is one with zero interaction parameters .the advection term ( second term in lhs of both equations ) accounts for the change in height of the film and is scaled according to the height . scaling term is zero at the bottom surface ( ) and one at the top surface ( ) , where is the total current height of the film .substrate effects are included in the third rhs term of both equations .this term enters the equation only for the bottom surface ( zero height ) where the heaviside function , .the last term of rhs ( in both equations ) is the langevin force term , and .this term mimics the conserved noise due to fluctuations in composition .we set the noise to be a gaussian space - time white noise with the following constrains : and , where is the kronecker delta .variance is determined by the fluctuation - dissipation theorem ( fdt ) .cahn - hilliard equation with noise considered is called the cahn - hilliard - cook equation .this formulation allows for a natural extension to include crystallization . to do this ,an additional phase variable must be considered , the energy functional expanded accordingly and the governing equation formulated . from a mathematical perspective , an evaporation process is classified as a moving boundary problem , since the height of the thin film changes over time ( ) .following , we explicitly trace the surface evolution .we assume that solvent is the only component that evaporates from the top surface .solvent lost from the top layer results in height decrease. the rate at which the height decreases , , is proportional to the flux of the solvent out of system , : where is the normal component of molar flux ( of solvent into the air ) .this equation constitutes the mass balance for moving film - air interface : with two assumptions : only solvent evaporates ( ) and solvent content in the air is very low ( ) .the superscript and denote air and film , respectively . 
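to make the structure of eqs . [ eq : goveq1 ] and [ eq : goveq2 ] concrete , the following sketch advances a single composition field by explicit cahn - hilliard - cook steps in 1d with periodic boundaries , ignoring evaporation and the substrate term . it is purely illustrative : the paper's solver is an implicit finite element scheme , the local free energy here is a simple double well rather than flory - huggins , and the noise amplitude is a free parameter rather than the fdt - calibrated value .

```python
import numpy as np

def chc_step(phi, dx=1.0, dt=0.01, M=1.0, eps2=1.0, noise=0.0,
             rng=np.random.default_rng(0)):
    """One explicit 1D update of dphi/dt = d/dx( M d(mu)/dx ) + xi,
    with mu = f'(phi) - eps^2 * d2(phi)/dx2 and f(phi) = (phi^2 - 1)^2 / 4.
    The noise xi is the divergence of a random flux, so it conserves mass;
    its amplitude is illustrative, not the fluctuation-dissipation value."""
    lap = lambda u: (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    mu = phi**3 - phi - eps2 * lap(phi)          # local slope + gradient penalty
    j = noise * rng.standard_normal(phi.size)    # random flux
    xi = (np.roll(j, -1) - j) / dx               # conserved (divergence-form) noise
    return phi + dt * (M * lap(mu) + xi)

phi = np.zeros(128)
for _ in range(3000):
    phi = chc_step(phi, noise=0.05)
print(phi.mean(), phi.min(), phi.max())   # mean stays ~0 while phases approach +/-1
```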
the molar flux of solvent can be further linked with evaporation rate of the solvent , , and the average content of the solvent at the top layer , , using equation : in this work , we assume that the solvent evaporates uniformly from the top surface and film height decreases homogenously .this assumption simplifies analysis and quantification of the competing effects of evaporation and substrate shown in the results section .we investigate the effect of inhomogeneous evaporation on the film surface evolution in a forthcoming publication .the evaporation rate , , depends on various parameters including the solvent partial pressure , solvent vapor pressure , temperature , air flow and is specific to the fabrication process . for spin - coating ,the evaporation rate depends on the angular velocity , while for solvent annealing or drop - casting , the evaporation rate can be assumed constant ( ) .we apply boundary conditions at the top surface to satisfy the balance of two other components within the film .removal of the solvent from the top layer , results in enrichment of the volume fraction of polymer and fullerene .therefore , to account for this enrichment we apply neumann boundary conditions at top surface for two other components : .although the phase field method is a well used technique for simulating the morphological evolution of a wide variety of materials and processes , employing it to predict morphology evolution during fabrication of active layer for organic solar cells requires resolving several challenges .we detail these challenges below .morphology evolution during fabrication of organic solar cells is an _ intrinsically multiscale process both in time and space _ which makes this process difficult to solve accurately and efficiently using reasonable computational resources .the governing equations ( eqns .[ eq : goveq1 ] , [ eq : goveq2 ] ) are forth - order nonlinear partial differential equations , which are difficult to solve numerically .the complexity of these equations is related to two competitive subprocesses : _ phase separation and coarsening_. these processes occur at two widely different temporal and spatial scales , all of which must be resolved properly .initially , very fast phase separation is the dominant process and followed by slow coarsening .phase separation is a fast process and results in thin layer creation .in contrast , coarsening is slow process consisting of rare events and involving merging of bulky regions .another numerical challenge is related to the _ evaporation process_. in case of the solvent - based thin - film deposition , the volume fraction of solvent changes from 99% to almost zero .this poses a severe problem for numerical techniques that have to reliably model the huge change in domain size in 3d .a key objective of the formulation is the ability to predict morphology evolution at _ the device scale_. the inherent complexity of the process makes this objective demanding . this is because we are interested in resolving nano scale morphological evolution while investigating device - scale domains . 
from a computational perspective , this involves solving differential equations with a very large number of degrees of freedom ( ) , which can not be solved using current serial processing machines . this requires developing modules heavily based on parallel processing , including applying domain decomposition strategies . the phase field method is a generic technique , and can be applied to almost any type of system undergoing morphology evolution . thus , in order to provide quantitative predictions , it is necessary to determine a material - specific set of parameters , both thermodynamic and kinetic . these parameters very often show compositional , directional or temperature dependence , which poses additional difficulties . in this regard , significant work has been done to determine several parameters for materials and systems utilized for photovoltaic applications using both molecular dynamics and experimental techniques . the strong form is formulated as follows : find $\phi_p , \phi_f , \mu_p , \mu_f : \omega \times [ 0,t ] \to \mathbb{r}$ ( where $\mu_p$ and $\mu_f$ are auxiliary variables ) such that :

\begin{aligned}
\frac{\partial \phi_p}{\partial t } + u\frac{h}{h^{curr}}\frac{\partial \phi_p}{\partial h } = \nabla \cdot \left(m_p\nabla \mu_p\right ) + \xi_p & \ in & \;\omega \times [ 0,t ] , \label{eq : sfchpa}\\
\frac{\partial \phi_f}{\partial t } + u\frac{h}{h^{curr}}\frac{\partial \phi_f}{\partial h } = \nabla \cdot \left(m_f\nabla \mu_f\right ) + \xi_f & \ in & \;\omega \times [ 0,t ] , \label{eq : sfchfa}\\
\mu_p = \displaystyle \frac{\partial f}{\partial \phi_p } - \epsilon^2 \nabla^2 \phi_p + \left(1-h(h)\right)\frac{\partial f_s}{\partial \phi_p } & \ in & \;\omega \times [ 0,t ] , \label{eq : sfchpb}\\
\mu_f = \displaystyle \frac{\partial f}{\partial \phi_f } - \epsilon^2 \nabla^2 \phi_f + \left(1-h(h)\right)\frac{\partial f_s}{\partial \phi_f } & \ in & \;\omega \times [ 0,t ] , \label{eq : sfchf}\\
\nabla ( m_p\mu_p ) \cdot \textbf{n } = h_{\mu_p } & \ on & \;\gamma_t \times [ 0,t ] , \label{eq : sfchpe}\\
\nabla ( m_f\mu_f ) \cdot \textbf{n } = h_{\mu_f } & \ on & \;\gamma_t \times [ 0,t ] , \label{eq : sfchfe}\\
\phi_p(\textbf{x},0)=\phi_p^0(\textbf{x } ) & \ in & \;\omega , \\
\phi_f(\textbf{x},0)=\phi_f^0(\textbf{x } ) & \ in & \;\omega .
\end{aligned}

we apply neumann boundary conditions on the top surface , $\gamma_t$ , to account for the polymer and fullerene fraction increase at the top layer due to solvent evaporation : the flux of polymer equals , while the flux of fullerene equals . on the other boundaries , we apply zero - flux conditions for both variables : volume fraction , , and chemical potential , . to account for the domain change due to solvent removal from the top layer , we introduce a coordinate transformation

\begin{aligned}
\vartheta = \frac{h}{h^{curr } } , \label{eq : eta }
\end{aligned}

where $\vartheta$ is the height coordinate scaled according to the current total height of the film , $h^{curr}$ . this coordinate transformation permits recasting the model equations into a system of equations with fixed boundaries , which is more convenient for numerical solution . _ in this way , there is no need for remeshing . _
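a short calculation makes the recasting explicit . using the scaled height coordinate $\vartheta = h / h^{curr}$ introduced above , the height derivative and the advection term transform as

\[
\frac{\partial \phi}{\partial h } = \frac{1}{h^{curr}}\,\frac{\partial \phi}{\partial \vartheta } ,
\qquad
u\,\frac{h}{h^{curr}}\,\frac{\partial \phi}{\partial h }
   = u\,\frac{\vartheta}{h^{curr}}\,\frac{\partial \phi}{\partial \vartheta } ,
\]

so the transport equations keep their form on the fixed , unit - height domain ; this is exactly the structure of the transformed equations presented next .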
_since we assume homogeneous evaporation , film has an uniform height , at any point in time , .recasting the model equation using the coordinate transformation [ eq : eta ] involves defining the gradient operator in the new coordinate system : where the first component is the direction along the height of the film .strong forms of the recast equations are given by : , \label{eq : t : sfchpa}\\ \frac{\partial \phi_f}{\partial t}+u\frac{\vartheta}{h^{curr}}\frac{\partial \phi_f}{\partial \vartheta}&=&\tilde{\nabla}\cdot\left(m_f\tilde{\nabla}\mu_f\right)+\xi_f\qquad in\;\omega \times [ 0,t ] , \label{eq : t : sfchfa}\\ \mu_p&=&\displaystyle \frac{\partial f}{\partial \phi_p } - \epsilon^2 \tilde{\nabla}^2 \phi_p + \left(1-h(h)\right)\frac{\partial f_s}{\partial \phi_p } \qquad in\;\omega \times [ 0,t ] , \label{eq : t : sfchpb}\\ \mu_f&=&\displaystyle \frac{\partial f}{\partial \phi_f } - \epsilon^2 \tilde{\nabla}^2 \phi_f + \left(1-h(h)\right)\frac{\partial f_s}{\partial \phi_f } \qquad in\;\omega \times [ 0,t ] , \label{eq : t : sfchf}\\ \tilde{\nabla } ( m_p\mu_p ) \cdot \textbf{n}&= & h_{\mu_p } \qquad on\;\gamma_t \times [ 0,t ] ,\label{eq : t : sfchpe}\\ \tilde{\nabla } ( m_f\mu_f ) \cdot \textbf{n}&= & h_{\mu_f } \qquad on\;\gamma_t \times [ 0,t ] , \label{eq : t : sfchfe } \end{aligned}\ ] ] the advection term ( second lhs term of eqns [ eq : t : sfchpa ] and [ eq : t : sfchfa ] ) is non zero only along the height direction .velocity corresponding to this term , , is proportional to the molar flux of the solvent out of system : ( eq . [ eq : u ] ) .we use the finite element method to solve the governing equations ( eqns [ eq : t : sfchpa ] - [ eq : t : sfchfe ] ) .we solve equations in the split form , to avoid constraints related to the continuity of the basis functions when using the finite element method with a primal variational formulation .more precisely , the standard -continuous finite element formulation is not suitable for forth - order operators , and consequently basis functions which are piecewise smooth and -continuous should be utilized .however , there are only a limited number of finite elements that fulfill the above continuity condition , especially in two and three dimensions . 
the weak form of the split cahn - hilliard equation is given by : where are weighting functions, is the inner product on , is the energy inner product on , and define natural boundary conditions .second term in lhs of eqns [ eq : wfchap ] and [ eq : wfchaf ] accounts for domain size change due to evaporation .fourth term in lhs of eqns [ eq : wfchap ] and [ eq : wfchaf ] accounts for the conserved noise , where is the vector of the stochastic flux terms .fourth term in lhs of eqns [ eq : wfchbp ] and [ eq : wfchbf ] accounts for substrate effect and are included only for surface elements belonging to bottom boundary .we note that substrate term is not a typical boundary condition but an additional term resulting from the additional energy in the system .we use the galerkin approximation to solve the two split cahn - hilliard equations .we define and to be the finite dimensional approximation of polymer and fullerene volume fraction fields , to be the finite dimensional approximation of polymer and fullerene chemical potential fields and to be the finite dimensional weighting function .the approximate solutions are computed on the following function spaces : with being the space of the standard polynomial finite element shape functions on element , where is the polynomial order .we additionally introduce the supg stabilization term for the advection terms in eqs [ eq : wfchap ] and [ eq : wfchaf ] . we discretize the conserved noise by generating gaussian- distributed random numbers for each component of flux that satisfy and .we generate -dimensional vector ( depending on 2d or 3d problem ) for each component ( polymer and fullerene ) in each node at each time step , .parameters and ( where is size of the time step ) account for fluctuation - dissipation theorem requirement . for more detailssee .we use linear basis functions for variables , , , , and . we also tested quadratic and cubic basis functions but have not noticed any significant improvements . this choice is based on an extensive analysis of the cahn - hilliard equation in . in our approach , we take advantage of implicit time schemes due to their unconditional stability .explicit methods are often intractable due to severe restrictions on size of time step ( ) arising from the stiffness of the equation .this makes them computationally prohibitive even for simple problems .consequently , implicit methods arise as a natural alternative . since they are unconditionally stable they allow for much larger time step .however , because of nonlinear nature of the cahn - hilliard equation , implicit schemes require nonlinear solvers .the popular implicit time schemes in this context are euler backward and crank - nicholson methods .because of the multiple temporal scales that occur during phase separation , we tested adaptive various time stepping strategies .we noticed significant improvement in terms of number of times steps required to reach steady state as well as in total run times . 
in these time stepping strategies , step size is adjusted on the basis of the error between solutions of different order .however , whenever noise is considered , the error computed using such strategies is highly affected by the noise and can not be used in standard time stepping strategies .therefore , we use a euler backward scheme with a heuristic strategy to adjust the time step as used in ( see [ sec : app ] ) .to solve the nonlinear system of two split cahn - hilliard equations , the formulation is linearized consistently and a newton raphson scheme is used . to solve large problems with several millions of degrees of freedom , we use a domain - decomposition based mesh - partitioner to divide the mesh anddistribute it across computational nodes . in our frameworkwe use the parmetis partitioner . additionally , to enable parallel solution of the algebraic systems, we use the petsc solver library .all results reported are obtained using the generalized minimal residual method .in this section , we showcase the formulation by investigating the effect of evaporation rate , blend ratio , degree of polymerization as well as effect of choice of solvent on morphology evolution . finally , we investigate the possibility of additional control of morphology through substrate patterning .we investigate a representative osc system .such a system consists of polymer , fullerene and solvent . in the default configuration , we consider degree of polymerization of solvent , fullerene and solvent .degree of polymerization is computed on the basis of molar volumes of the components .we assume , , .interaction parameter between polymer and fullerene reflects the low solubility of two components and is set as .interaction parameters between solvent and fullerene or polymer are assumed to be much lower : . in subsection [ subsec : n ] , we investigate the effect of varying degree of polymerization to and . in subsection[ subsec : solvent ] , we investigate the effect of solvent and various interaction parameters .all these values are representative for osc and correspond well with experimentally determined values .we take the solvent self - diffusion coefficient as , which are typical for solvents ( chlorobenzene , chloroform and xylene ) used in osc .self - diffusion of polymer and fullerene is much lower .we assume that .we present one , two and three dimensional results . for each simulationwe generate one mesh in the reference coordinate system .mesh density is adjusted to the estimated width of the interface between polymer and fullerene .we use the interface width as a metric to determine mesh density to accurately capture the dynamics of the interface evolution .our detailed analysis presented in showed that at least four elements per interface are required to capture the physics of phase separation and coarsening accurately . in , we showed that the analytical estimation of interface width is fairly accurate for two and three dimensional cases , even though it was derived for the one dimensional case in .the interface width is defined as a distance required to span by the concentration profile across the accessible range of the composition : . for more detailssee .the profile of concentration , and subsequently width of the interface , depends on the interfacial parameter , interaction parameters , degree of polymerization . in ternary system range of the compositionchanges with time at the intermediate stages .consequently , the interface width also changes with evaporation . 
to estimate interface width, we consider the case when the interface width is the smallest .this corresponds to the fully evaporated , binary fullerene - polymer system . for the system modeled in this paper , the interface width between polymer and fullerene varies from ( for ) to ( for ) .the interfacial parameter is also estimated for a binary system consisting of polymer and fullerene . following the analysis in , one can link with the interfacial energy , , between polymer - rich and fullerene - rich phases .we assume , which is comparable with interfacial energies for organic compounds . for more details regarding detailed analysis and computations ,we refer the reader to . computed value of is ( for ) , ( ) and ( ) .the initial solvent fraction is chosen to guarantee that the ternary solution is homogeneous and there is no phase separation at time . in 3d simulations , we start with a solvent fraction . in 2d simulations , we start with a solvent fraction . in 1d simulations , we start with a solvent fraction . the higher initial volume fraction of solvent in 2d simulations is required to cover the wide range of degree of polymerization and interaction parameters investigated .simulation is stopped when the fraction of solvent within the film is . at this time , the diffusion coefficient reduces significantly , and the morphology is frozen . in twodimensional simulation , we generate meshes that consist of linear elements .the computational domain is rectangular with height , and width .height changes from to , when all the solvent evaporates . in three dimensional simulations, we generated meshes that consist of linear elements to discretize a computational domain that is a rectangular prism of length , breadth and height .height changes from to , as the solvent evaporates .in one dimensional simulations , we generate meshes that consist of linear elements to discretize a height .height changes from to .the large number of degree of freedom ( million for 3d simulation ) require using parallel solvers and domain decomposition .the average run time for a three dimensional case is around 50h solved using 256cpus .the average run time for two dimensional case is around 1.5h using 8cpus .detailed scalability analysis of the solver has been reported elsewhere .evaporation of the solvent from the top surface is one of the key phenomena that induces phase separation .the way morphology evolves is an interplay between two competing processes : ( i ) evaporation of the solvent from the top surface , and ( ii ) the diffusion of the solvent within the film .this interplay can be expressed as a mass biot number .biot number is defined in eq .[ eq : bi ] and expresses the ratio between external mass transport by evaporation of the solvent from the top surface and internal mass transfer to the top surface by diffusion . where is the characteristic length - height of the film , is the diffusion coefficient of the thin film and is the mass transfer coefficient - evaporation rate .we compute the biot number for initial height and solvent diffusion coefficient . in such case ,the solution consists mostly of solvent , and the diffusion is dominated by it . to better understand the interplay between evaporation and diffusion , we first perform experiments in 1d .we consider the default case of system variables ( , , , ) . in figure[ fig : res : bi ] , we plot change of height for three biot numbers as a function of time . 
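As a small aside, the mass Biot number defined in eq. [eq:bi] above is straightforward to evaluate; the helper below sketches it, with placeholder values that are not the paper's parameters.

```python
# Mass Biot number as defined in eq. [eq:bi] above: Bi = k * L / D, the ratio of
# external mass transfer (evaporation from the top surface, coefficient k) to
# internal diffusion (coefficient D) over the film height L.  Example values are
# placeholders, chosen only to illustrate the balanced case Bi = 1.
def biot_number(evaporation_rate_k, film_height_L, diffusion_coeff_D):
    return evaporation_rate_k * film_height_L / diffusion_coeff_D

# e.g. a film 1e-6 m high, D = 1e-9 m^2/s, k = 1e-3 m/s (illustrative only)
print(biot_number(1e-3, 1e-6, 1e-9))   # -> 1.0, evaporation and diffusion balanced
```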
in the same figurewe also show the 1d volume fraction profiles of one component , polymer , as function of height .initially , polymer is distributed uniformly along the height and its volume fraction is 0.2 .similarly , the initial volume fraction of fullerene is 0.2 , since the blend ratio between polymer and fullerene 1:1 . for the symmetric case , the fullerene profiles mirror images of polymer profiles ( and hence are not plotted ) .we investigate the cases with three biot number : equal to one , much larger than one ( ) , and much lower than one ( ) . for this specific system ,biot number correspond to initial height and evaporation rate .when biot number is much larger than one , evaporation is the dominant process .this is reflected in shorter total time of evaporation compared to other cases ( see figure [ fig : res : bi ] top ) .consequently , the solvent removed from the top layer can not be balanced by mass diffusion within the film .thus , a boundary layer lean in solvent and rich in other components is created close to the top ( figure [ fig : res : bi ] ( right ) ) .enrichment of polymer and fullerene results in the initiation of phase separation . in this way ,boundary layer becomes the region of the thin film when solution is unstable even to small fluctuations , which leads to phase separation .phase separation subsequently propagates into the depth of the film .this is clearly seen in figure [ fig : res : bi ] ( right ) .close to the top layer , a blocking layer rich in polymer is created . because of low diffusion coefficient of polymer , the top layer blocks solvent evaporation from the top .when biot number is much smaller than one , diffusion is the dominant process . correspondingly , the total time of solvent removal is much longer .the system has more time to balance solvent lost from the top layer .diffusion is not suppressed by fast solvent removal ( as for high biot numbers ) and consequently no top boundary layer rich in polymer is created .there is no significant gradient of the solvent in the composition within the film .consequently , phase separation in initiated homogeneously along the height .biot number provides important insight into interplay between evaporation and diffusion .it can be used to link two types of morphology evolution : homogeneous across the film or initiated close to the top surface . for symmetric systems ( ) andlow biot number , solvent content within the film is homogeneous .for such a scenario , an assumption of homogeneous decrease of solvent volume fraction _ within the entire volume is valid_. such assumption was made in a recent work .however , if one of the non - solvent components has larger molar volume , and consequently large which is the case in osc this assumption is invalid and evaporation must be included in the model explicitly .+ + the evaporation rate affects not only the region where phase separation is initiated but also affects the average size of the separated phases . in figure[ fig : res : bi:3d ] , we plot morphology evolution for three different evaporation rates characterized by biot number , and .the magnitude of the evaporation rate affects the total time of the process ( as shown in figure [ fig : res : bi ] ) . 
for lower evaporation rate , and biot number ,total process time is longer .once phases separates , domains rich in each component have additional time to coarsen and create larger domain ( see figure [ fig : res : bi:3d ] left ) .for higher evaporation rate , and biot number , total time is shorter .once phases separates they have very short time to coarsen ( see figure [ fig : res : bi:3d ] right ) .this is due to the fact that solvent is removed from the system rapidly , and consequently the diffusion coefficient of the film decreases significantly leaving the morphology frozen . , and .consecutive rows correspond to the morphology at height : initial , , , and the final height.,title="fig:",scaledwidth=15.0% ] + the blend ratio between polymer and fullerene is a key system variable during fabrication of osc , that is additionally relatively easy to manipulate .the blend ratio affects the type of morphology that develops during spinodal decomposition .two basic classes of morphologies typically develop percolated morphology and morphology with islands .intuitively , the former is more suitable for osc than the latter .this is because the latter structure has more islands not connected to relevant electrodes and can not provide useful pathways for charges to reach the boundaries .therefore , for osc application , it is necessary to clearly identify process and system variables that lead to the percolated type of structure .we investigate two blend ratios 1:1 and 1:0.8 . in figure[ fig : res:3d : br ] , we plot snapshots of the morphology evolution with time . percolated morphology develops for blend ratio 1:1 .however , a small change in blend ratio of the analyzed system results in significantly different morphology type .this is clearly seen for blend ratio 1:0.8 , where multiple stripes form spontaneously and break up as the solvent evaporates .such multiple layer formation was also observed in experiments and is reported .we note that although a layered morphology has no application in oscs , such morphology is desired in organic transistors . , , and the final height.,title="fig:",scaledwidth=15.0% ] + the degree of polymerization , , is another tunable variable that allows for controlling morphology evolution . in practice , for osc fabrication , only polymer degree of polymerization can be controlled and it has been shown experimentally that efficiency is affected by its choice . in figure[ fig : res:2d : n ] , we compare the morphology evolution for three different degree of polymerization : , and .all simulations have been performed for the same blend ratio 1:1 and biot number .as we increase the degree of polymerization , the morphology changes significantly . for the symmetric case : , a percolatedmorphology evolves ; while for asymmetric cases multiple layers are created .notice also that with increasing degree of polymerization , phase separation is initiated earlier .moreover , when polymer degree of polymerization is larger than fullerene degree of polymerization , morphology type changes from percolated into multiple layered morphology . 
, and .consecutive rows correspond to the morphology at height : initial , , , , , , and the final height.,scaledwidth=18.0% ] in solvent - based fabrication , the solvent creates the environment for morphology evolution .both components must be soluble in the common solvent to create an initially homogeneous solution .choice of the solvent has significant effect on morphology evolution and provides additional system variable for morphology control .this is manifested as differences in relative values of interaction parameters between all three components . in figure[ fig : res:2d : solvent ] , we show morphology evolution for three combinations of the interaction parameters ( , , ) : ( , , ) , ( , , ) and ( , , ) . in the first case ,solvent is chosen such that interactions between solvent and two components are the same and much lower than interaction between polymer and fullerene .this means that polymer and fullerene have similar solubilities in solvent . in the second case ,fullerene is less soluble in the solvent compared to polymer .interaction parameter between fullerene and solvent is two times higher than between polymer and solvent . in the third case ,we assume that polymer is less soluble in the solvent . in all three case, we assume the same evaporation rate ( ) , blend ratio ( 1:1 ) , and degree of polymerization ( , and ) . changing solvent results in dramatically different morphology evolution . multiple layer formation that we observe for higher polymerization is broken in the third case , due to the different solvent used . , , , , , and the final height.,scaledwidth=18.0% ] the height of a typical osc active layer is around .for such thin geometries , morphology close to the substrate can be significantly different than that of the bulk .we consider the effect of chemical patterning of substrate to selectively affect each component .we consider two cases . in the first case ,the substrate is patterned to attract the polymer preferentially . in the second case ,substrate is patterned with two chemistries : one preferentially attracting polymer and one preferentially attracting fullerene . in both cases , patterns of wavelength the substrate . in each case , we assume the chemical potential of patterned material , and . in most cases , is small compared to .figure [ fig : res:2d : substrate ] shows morphology evolution on three different patterned substrates : neutral , preferentially attracting polymer , and preferentially attracting polymer and fullerene .we run these tests for polymer of high degree of polymerization ( last column in figure [ fig : res:2d : n ] right ) , which becomes the reference simulation and is repeated in figure [ fig : res:2d : substrate ] ( left ) . _these results clearly demonstrate that substrate patterning provides additional degree of control over morphology . _multiple layers observed without patterning can be broken close to the substrate .the breakage depends on the frequency of the patterning and combination of the material types ( figure [ fig : res:2d : substrate ] middle and right ) . 
patterning with alternating chemical preference allows for better control of domain size .the size along the substrate of the polymer - rich induced by such patterning is maintained during evaporation .this is not the case when patterning is purely of one preference .the domains created close to the substrate grow in size along the substrate .however , when substrate is patterned with one chemical preference , domains created close to the substrate penetrate deeper into film , compared to the other case ., , , , and final.,scaledwidth=18.0% ] in previous subsections , we showed that by independently changing system and process variables we obtain various types of morphologies . in figure [ fig : res:2d : sum ] , we summarize these types of morphologies .it is important to notice that significantly different morphologies can develop for various system and process variables . in general, there are three main types of morphologies : percolated morphology ( b ) ; morphology with multiple layers ( c and d ) and morphology with islands ( e ) .substrate patterning gives an additional means to initiate and direct morphology close to the substrate .for example , adding substrate patterning leads to breaking the bottom layer , as shown in figure [ fig : res:2d : substrate ] ._ it is interesting to note that controlling different variables may lead to the same type of morphology ._ for example , increasing degree of polymerization from to leads to multiple layer creation ( figure [ fig : res:2d : n ] ) .similar effect is observed by changed solvent ( figure [ fig : res:2d : solvent ] ) .this emphasize the importance of further systematic studies of solvent - based fabrication .morphology is a key element affecting the performance of organic solar cells .the morphology evolution during solvent - based fabrication of organic solar cells is a complex , multi physics process that is affected by a variety of material and process parameters .a virtual framework that can model 3-d morphology evolution during fabrication of osc can relate these fabrication conditions with morphology. this will significantly augment organic photovoltaic research which has been predominantly based on experimental trial and error investigations .such a framework will also allow high throughput analysis of the large phase space of processing parameters , thus yielding considerable insight into the process - structure - property relationships governing organic solar cell behavior that is currently in its infancy . in this work ,we develop a phase field - based framework to study 3d nanomorphology evolution in the active layer of osc during solvent - based fabrication process .in particular , we formulate physical and mathematical model that takes into account all important processes that occur during solvent - based fabrication of oscs .we select phase field method to model the behavior multicomponent system with various driving forces for morphology evolution .we outline an efficient numerical implementation of the framework , to enable three dimensional analysis of the process .we showcase our framework by investigating the effect of various process and system variables , that lead to following observations : 1 .mass biot number expresses the interplay between solvent evaporation from the top surface and diffusion within the thin film . 
For high Biot number, evaporation is the dominant process, which results in the creation of a top boundary layer enriched in the two main components. Consequently, phase separation initiates close to the top and propagates into the film. For low Biot number, in turn, diffusion is the dominant process: solvent removed from the top layer is replenished by diffusion from within the film, resulting in a homogeneous profile. Thanks to this uniform distribution, phase separation initiates and evolves homogeneously within the film. 2. The morphology evolution is affected not only by kinetics, through evaporation and diffusion, but also by thermodynamics. In particular, the interaction parameters between components and the degree of polymerization have a large effect on morphology evolution. 3. The accessibility of possible configurations provided by the free energy landscape is controlled by system variables such as the blend ratio. A small change of blend ratio leads to a large variation in morphology evolution. 4. Finally, surface-induced phase separation provides another opportunity to locally affect the morphology, by creating additional sinks in the energy landscape. We are currently investigating extensions of the framework along three directions: nonhomogeneous evaporation, fluid shear effects, and crystallization. This research was supported in part by the National Science Foundation through TeraGrid resources provided by TACC under grant numbers TG-CTS110007 and TG-CTS100080. BG & OW were supported in part by NSF PHY-0941576 and NSF CCF-0917202. We use the backward Euler scheme with a heuristic strategy to adjust the size of the time step. This strategy is based on the number of Newton iterations required to solve the nonlinear problem for a given time step. If the number of iterations is lower than 20, the size of the time step is increased by 25% in the new time step; otherwise it is reduced to 25% of the previous time step. When a solution cannot be found within 50 iterations, the step is rejected and the time step is decreased by half. In this way, the time step is decreased when rare coarsening events occur and increased when the morphology evolves slowly. In figure [fig:res:ts], we show example profiles of the time step size for the various dimensionalities considered. The size of the time step is adjusted by a few orders of magnitude. The efficient time stepping strategy allows simulations that cover several time scales. In the figure, we show the time scale spanning two to three orders of magnitude as the height decreases by up to 80%.
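A compact sketch of the heuristic controller described in this appendix is given below; the rule (grow by 25% when Newton needs fewer than 20 iterations, otherwise shrink to 25%, reject and halve after 50 iterations) follows the text, while the function names and return convention are assumptions.

```python
# Sketch of the heuristic time-step controller described in the appendix above.
# The nonlinear solve itself is represented by the caller; only the step-size
# rule is encoded here.
def adapt_time_step(dt, newton_iterations, converged,
                    fast=20, max_iter=50, grow=1.25, shrink=0.25):
    """Return (accept_step, new_dt) following the heuristic rule."""
    if not converged or newton_iterations >= max_iter:
        return False, 0.5 * dt          # reject the step and halve dt
    if newton_iterations < fast:
        return True, grow * dt          # accept and grow by 25%
    return True, shrink * dt            # accept but shrink to 25% of dt

# usage sketch:
# accept, dt = adapt_time_step(dt, iters, ok)
# if not accept: re-solve the same time level with the smaller dt
```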
Solvent-based thin-film deposition constitutes a popular class of fabrication strategies for manufacturing organic electronic devices like organic solar cells. All such solvent-based techniques usually involve preparing dilute blends of electron-donor and electron-acceptor materials dissolved in a volatile solvent. After some form of coating onto a substrate to form a thin film, the solvent evaporates. An initially homogeneous mixture separates into electron-acceptor-rich and electron-donor-rich regions as the solvent evaporates. Depending on the specifics of the blend, processing conditions, and substrate characteristics, different morphologies are typically formed. Experimental evidence consistently confirms that the resultant morphology critically affects device performance. A computational framework that can predict morphology evolution can significantly augment experimental analysis. Such a framework will also allow high-throughput analysis of the large phase space of processing parameters, thus yielding considerable insight into the process-structure-property relationships governing organic solar cell behavior. In this paper, we formulate a computational framework to predict the evolution of morphology during solvent-based fabrication of organic thin films. This is accomplished by developing a phase-field-based model of _evaporation-induced and substrate-induced phase separation in ternary systems_. This formulation allows most of the important physical phenomena affecting morphology evolution during fabrication to be naturally incorporated. We discuss the various numerical and computational challenges associated with a three-dimensional, finite-element-based, massively parallel implementation of this framework. This formulation allows, for the first time, three-dimensional nanomorphology evolution to be modeled over large time spans on device-scale domains. We illustrate this framework by investigating and quantifying the effect of various process and system variables on morphology evolution. We explore ways to control the morphology evolution by investigating different evaporation rates, blend ratios and interaction parameters between components. Keywords: phase separation, evaporation, Cahn-Hilliard equation, substrate patterning, organic solar cells, morphology evolution.
this paper is concerned with stability in the numerical solution of initial - boundary value problems for time - dependent multidimensional diffusion equations containing mixed spatial - derivative terms , here is any given integer and denotes any given real , symmetric , positive semidefinite matrix .the spatial domain is taken as and .multidimensional diffusion equations with mixed derivative terms play an important role , notably , in the field of financial option pricing theory . herethese terms represent correlations between underlying stochastic processes , such as for asset prices , volatilities or interest rates , see e.g. . in this paperwe consider for given ] was recently started in in t hout & mishra . for the mcs scheme and ,the useful result was proved here that the lower bound is sufficient for unconditional stability ( whenever ) .the present paper substantially extends the work of reviewed above .section [ main ] contains the two main results of this paper .the first main result is theorem [ theorem1 ] , which provides for each of the do , cs , mcs , hv schemes in and spatial dimensions a _ sufficient _ condition on the parameter for unconditional stability under ( [ gamma ] ) for arbitrary given ] a _ necessary _ condition on for unconditional stability under ( [ gamma ] ) . for each scheme ,the obtained necessary and sufficient conditions coincide if or .section [ numexp ] presents numerical illustrations to the two main theorems .the final section [ concl ] gives conclusions and issues for future research .in the following we always make the minor assumption that the matrix with ( ) is positive semidefinite .thus , in particular , for all .we note that this assumption on is weaker than the corresponding assumption that was made in .this section gives two lemmas that shall be used in the proofs of the main results below .[ lemma2 ] let , be given real numbers with .consider the polynomial then if and only if the critical points of are given by the equations a straightforward analysis , using , shows that there is precisely one critical point in the domain , and it is given by . inserting this into and rewriting yields hence if and only if the first inequality in ( [ pol ] ) holds .consider next the polynomial on the boundary of the pertinent domain .it is clear that if .next , on the boundary part , , there holds thus ( whenever , ) if and only if the second inequality in ( [ pol ] ) holds . by symmetry ,the result for the other two boundary parts is the same , which completes the proof .the subsequent analysis relies upon four key properties of the scaled eigenvalues ( [ eig ] ) .[ lemma1 ] let be given by ( [ eig ] ) . let ] and assume ( [ gamma ] ) holds . let ( [ ode ] ) , ( [ splitting ] ) be obtained by central second - order fd discretization and splitting as described in section [ intro ] .then for the following parameter values the do , cs , mcs , hv schemes are unconditionally stable when applied to ( [ ode ] ) , ( [ splitting ] ) : + + do scheme : cs scheme : mcs scheme : hv scheme : the proofs for the four schemes are similar . in view of this, we shall consider here the hv scheme and leave the proofs for the do , cs and mcs schemes to the reader .. 
] using the properties ( [ zproperties]a)([zproperties]c ) of the scaled eigenvalues , it is readily seen that for the stability requirement ( [ stab ] ) is equivalent to [ condhv ] & & 2p^2+(2p-1)(z_0+z)+(z_0+z)^2 0 , + & & 2p-1+(z_0+z ) 0 .condition ( [ condhv]a ) is always fulfilled since the discriminant .subsequently , using ( [ defyj ] ) and ( [ z0plusz ] ) it is easily seen that if there exists such that for all ( ) , then condition ( [ condhv]b ) is fulfilled whenever .+ + the inequality ( [ kappa ] ) reads and this can be rewritten as hence , the inequality is satisfied if which is equivalent to selecting the rightmost value for , yields that ( [ condhv]b ) holds for whenever the inequality ( [ kappa ] ) reads which , by the identity is equivalent to let , , . then , , and the above inequality can be written as where with hence , if and , then ( [ kappa ] ) is satisfied for .if , i.e. , then obviously .next consider .lemma [ lemma2 ] yields that whenever set . then and it follows that ( [ beta ] ) is equivalent to , which means .hence , if then ( [ kappa ] ) holds for . selecting the upper bound for , yields that ( [ condhv]b ) is fulfilled for whenever upon setting in theorem [ theorem1 ] the resulting sufficient conditions on for the cs , mcs , hv schemes agree with those in ( * ? ?2.2 , 2.5 , 2.8 ) .the above theorem thus forms a proper generalization of results in .for the do scheme , the obtained sufficient condition generalizes and improves the corresponding result for this scheme from . in particular ,if and , then yields , whereas theorem [ theorem1 ] yields . in view of theorem [ theorem1 ] , a smaller parameter value can be chosen while retaining unconditional stability if it is known that ( [ gamma ] ) holds with certain given .this is useful , in particular since a smaller value often yields a smaller , i.e. , more favorable , error constant .the mcs scheme with the lower bound for given by theorem [ theorem1 ] has been successfully used recently in the actual application to the three - dimensional heston white pde from financial mathematics , see haentjens & in t hout .the following theorem provides , for each adi scheme , a necessary condition on for unconditional stability . as in the foregoing proof, we consider here the hv scheme and leave the ( analogous ) proofs for the other three adi schemes to the reader .consider the matrix with and whenever .clearly , is symmetric positive semidefinite and ( [ gamma ] ) holds .let , so that .first , choose the angles in ( [ eig ] ) equal to zero for .then the eigenvalues are given by , and .in view of ( [ condhv]b ) , we have whenever , .this immediately implies . next , choose all angles in ( [ eig ] ) the same , i.e. , . then the eigenvalues are given by z_0&=&-rk(k-1 ) , + z_j & = & -2r(1- ) , j=1,2, ,k , where note that . by ( [ condhv]b ) , it must hold that + 2\theta r ( 1-\cos\phi ) } { 2[\ , 1 + 2\theta r ( 1-\cos\phi)\,]^k-1}\ ] ] whenever , . setting , this yields /2+\alpha}{2(1+\alpha)^k-1}\ ] ] for all , .taking the supremum over , gives \quad { \rm for~all}~~\alpha > 0,\ ] ] where by elementary arguments ( cf . ) it follows that the function has an absolute maximum which is equal to given in the theorem . upon setting in theorem [ theorem2 ] , the necessary conditions on for the cs , mcs, hv schemes reduce to those given in ( * ? ? 
?2.3 , 2.6 , 2.9 ) .further , for the do scheme and there is agreement with the necessary condition from .it is readily verified that , for each adi scheme , the sufficient conditions of theorem [ theorem1 ] and the necessary conditions of theorem [ theorem2 ] are identical whenever or and .hence , in two and three spatial dimensions , these conditions are sharp .the interesting question arises whether the necessary conditions of theorem [ theorem2 ] are also sufficient in spatial dimensions . in it was proved that this is true for the hv scheme and .it can be seen , however , that the proof from does not admit a straightforward extension to values .further , in the case of the do , cs , mcs schemes a proof is not clear at present .accordingly , we leave this question for future research .in this section we illustrate the main results of this paper , theorems [ theorem1 ] and [ theorem2 ] .we present experiments where all four adi schemes ( [ do])([hv ] ) are applied in the numerical solution of multidimensional diffusion equations ( [ pde ] ) possessing mixed derivative terms .the pde is semidiscretized by central second - order finite differences as described in subsection [ fd ] , with , and the semidiscrete matrix is splitted as described in subsection [ adi ] .the boundary condition is taken to be periodic ( in this case ) .figure [ fig1 ] shows for the semidiscrete solution values and displayed on the grid in , so that they represent the exact solution at and .0.3 cm [ cols="^,^ " , ] exactly the same observations can be made as in the experiment ( above ) for the two - dimensional case .the large temporal errors for each adi scheme when applied with the smaller value , as seen in the right column of figure [ 3derrors ] , correspond to instability of the scheme .when applied with their lower bound values , given by theorem [ theorem1 ] , the error behavior for all adi schemes is in agreement with unconditional stability of the schemes .moreover , in this case a stiff order of convergence is observed that is equal to one for the do scheme and equal to two for the cs , mcs and hv schemes .in this paper we analyzed stability in the von neumann sense of four well - known adi schemes - the do , cs , mcs and hv schemes - in the application to multidimensional diffusion equations with mixed derivative terms .such equations are important , notably , to the field of financial mathematics .necessary and sufficient conditions have been derived on the parameter for unconditional stability of each adi scheme by taking into account the actual size of the mixed derivative terms , measured by the quantity ] .also , it is of much interest to extend the results obtained in this paper to equations with advection terms .this leads to general complex , instead of real , eigenvalues , which forms a nontrivial feature for the analysis , cf .e.g. .k. j. in t hout & c. mishra , _ a stability result for the modified craig sneyd scheme applied to 2d and 3d pure diffusion equations _, in : numerical analysis and applied mathematics , eds. t. e. simos et .al . , aip conf . proc .* 1281 * ( 2010 ) 20292032 .k. j. in t hout & b. d. welfert , _ unconditional stability of second - order adi schemes applied to multi - dimensional diffusion equations with mixed derivative terms _ , appl .numer . math .* 59 * ( 2009 ) 677692 .
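As a complement to the experiments above, here is a self-contained numerical sketch of one time step of the Douglas (DO) scheme, in the form commonly given in the ADI literature, applied to a two-dimensional periodic test problem in which the mixed derivative is kept in the explicitly treated part. The grid size, diffusion coefficients and value of theta below are illustrative assumptions, and this is not the authors' code.

```python
# Numerical sketch of one Douglas (DO) ADI step for the 2D diffusion equation
# u_t = d11 u_xx + 2 d12 u_xy + d22 u_yy on a periodic grid, with the mixed
# derivative part A0 treated explicitly and the unidirectional parts A1, A2
# treated implicitly with parameter theta.
import numpy as np

m = 16                                   # small periodic grid, m x m unknowns
h = 1.0 / m
e = np.eye(m)
D2 = (np.roll(e, 1, 1) - 2 * e + np.roll(e, -1, 1)) / h**2      # periodic u_xx
D1 = (np.roll(e, 1, 1) - np.roll(e, -1, 1)) / (2 * h)           # periodic central u_x

d11, d22, gamma = 1.0, 1.0, 0.5          # d12 = gamma*sqrt(d11*d22), |gamma| <= 1
d12 = gamma * np.sqrt(d11 * d22)
I2 = np.eye(m * m)
A1 = d11 * np.kron(D2, e)                # second derivative in x
A2 = d22 * np.kron(e, D2)                # second derivative in y
A0 = 2 * d12 * np.kron(D1, D1)           # mixed derivative, kept explicit
A = A0 + A1 + A2

def douglas_step(y, dt, theta):
    y0 = y + dt * (A @ y)                                        # explicit predictor
    y1 = np.linalg.solve(I2 - theta * dt * A1, y0 - theta * dt * (A1 @ y))
    y2 = np.linalg.solve(I2 - theta * dt * A2, y1 - theta * dt * (A2 @ y))
    return y2

rng = np.random.default_rng(0)
y = rng.standard_normal(m * m)
for _ in range(50):
    y = douglas_step(y, dt=10.0, theta=1.0)   # deliberately large dt; with theta
                                              # this large the iterates are expected
                                              # to remain bounded (cf. the theorems)
print("max |y| after 50 large steps:", np.abs(y).max())
```

Lowering theta toward zero, or making gamma closer to one, is a simple way to probe the stability bounds discussed in the theorems above.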
In this paper the unconditional stability of four well-known ADI schemes is analyzed in the application to time-dependent multidimensional diffusion equations with mixed derivative terms. Necessary and sufficient conditions on the parameter of each scheme are obtained that take into account the actual size of the mixed derivative coefficients. Our results generalize results obtained previously by Craig & Sneyd (1988) and in 't Hout & Welfert (2009). Numerical experiments are presented illustrating our main theorems.
calibrating measurement instruments is a important problem that engineers frequently need to address .there exist several statistical methods that address this problem that are based on a simple linear regression approach . in tradition simple linear regressionthe goal is to relate a known value of x to a uncertain value of y using a linear relationship .in contrast , the statistical calibration problem seeks to utilize a simple linear regression model to relate a known value of y to an uncertain value of x. this is why statistical calibration is sometimes called _inverse regression _ due to its relationship to simple linear regression ( osborne 1991 ; ott and longnecker 2009 ) .recall in linear regression the model is given as follows : where * y * is a response vector , * x * is a matrix of independent variables with total model parameters , is a vector of unknown fixed parameters and is a vector of uncorrelated error terms with zero mean ( myers 1990 ; draper and smith 1998 ; montgomery _ et al .it is assumed that the value of the predictor variable * x * = * x * are nonrandom and observed with negligible error , while the error terms are random variables with mean zero and constant variance ( myers 1990 ) . typically , in regression , of interestis the estimation of the parameter vector ; , and possibly the prediction of a future value corresponding to a new value .the prediction problem is relatively straightforward , due to the fact that a future value can be made directly by substituting into ( [ eq : linear_matrix_error ] ) with =0 ] and is known as the error .+ the model may have different defining parameters at different times .one approach is to model and by using random walk type evolutions for the defining parameters , such as : where and are independent zero - mean error terms with finite variances . at any time the calibration problem is given by : bayesian dynamic linear models ( dlms ) approach of west _ et al . _ ( 1985 ) ; west and harrison ( 1997 ) can be employed to achieve this goal .recall the dlm framework is : \label{eq : observe}\\ \mbox{system equation } : \hspace{1 cm } & { \boldsymbol \theta_{t } } = { \bf g}_{t}{\boldsymbol \theta_{t-1 } } + { \boldsymbol \omega_{t } } , & { \boldsymbol \omega}_{t } \sim n_{d}[{\bf 0 , w}]\label{eq : system}\\ \mbox{initial information } : \hspace{1 cm } & ( { \boldsymbol \theta_{0 } } | d_{0 } ) \sim n_{d}[{\bf m_{0 } , c_{0}}],\end{aligned}\ ] ] for some prior mean and variance with the vector of error terms , and independent across time and at any time .+ to update the model through time west and harrison ( 1997 ) give the following method : a. posterior distribution at : for some mean and variance , + ] , where + and . c. one - step forecast : ] , with + and + where + and .the dlm framework is used to establish the evolving relationship between the fixed design matrix and by estimating , which is a matrix of time - varying regression coefficients and . for our calibration situation is a matrix of responses and is a known ( ) system matrix .the error and are independent normally distributed random matrices with zero mean and constant variance - covariance matrices * e * and * w*. for simplification is set equal to , is set equal to and is ^{-1} ] .for the two reference case , the fixed design matrix is \ ] ] and for the five reference case the design matrix is .\ ] ] the vector of regression parameters , , are randomly drawn from a multivariate normal distribution with mean vector ^{'} ] for , where . 
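A minimal sketch of the DLM forward-filtering recursions summarized earlier in this section (prior, one-step forecast, posterior update) is given below, in the standard West and Harrison notation; the concrete design matrix, variances and observations in the usage lines are placeholders, not the study's actual values.

```python
# One step of the DLM filtering recursions (posterior -> prior -> one-step
# forecast -> posterior update), written in the usual West & Harrison notation.
import numpy as np

def dlm_filter_step(y_t, F_t, G_t, V_t, W_t, m_prev, C_prev):
    """One filtering step: returns the posterior mean/covariance (m_t, C_t)."""
    # prior at time t
    a_t = G_t @ m_prev
    R_t = G_t @ C_prev @ G_t.T + W_t
    # one-step forecast
    f_t = F_t.T @ a_t
    Q_t = F_t.T @ R_t @ F_t + V_t
    # posterior update
    A_t = R_t @ F_t @ np.linalg.inv(Q_t)          # adaptive (gain) matrix
    m_t = a_t + A_t @ (y_t - f_t)
    C_t = R_t - A_t @ Q_t @ A_t.T
    return m_t, C_t

# usage sketch for a 2-point calibration design (intercept + slope state);
# reference loads, variances and observations below are assumed values only.
m, C = np.zeros(2), 10.0 * np.eye(2)
F = np.array([[1.0, 1.0], [20.0, 100.0]])      # columns: [1, x_ref] for each reference
G, W, V = np.eye(2), 0.01 * np.eye(2), 0.1 * np.eye(2)
y = np.array([5.0, 21.0])                       # made-up observations
m, C = dlm_filter_step(y, F, G, V, W, m, C)
```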
for each ,the random multivariate error vector is \ ] ] where the errors are mutually independent .the relationship of the values for and will be explained later .+ the dynamic and static calibration methods are evaluated for three distinct system fluctuations , , on the regression slope calculated in the first stage of calibration . the value is added to , therefore making equation ( [ eq : sim_model ] ) for the calibration references .the three scenarios for the fluctuations are as follows : 1 . a constant zero ( ) for all , representing a stable system ; 2 . a stable system with abrupt shifts ( in system , with ; and 3 . a constant sinusoidal fluctuation ( ) for all . figure [ fig : gain_fluct ] ) explains the relationship of across time .+ ; ( b ) with ; ( c ) ,title="fig:",width=192 ] ; ( b ) with ; ( c ) ,title="fig:",width=192 ] + ; ( b ) with ; ( c ) ,title="fig:",width=192 ] the magnitude and relationship of the variance pair influence the dlm and hence to study this influence we set the variances to reflect various _ signal - to - noise ratios_. the true values for and used in the simulation study are and , respectively . petris _( 2009 ) define the signal - to - noise ratio as follows : the signal - to - noise ratios in the simulation study were examined in two sets .first , is set equal to 10 , 100 , and 1000 .next , the ratio was set to equal 2 , 20 , and 200 .the variety of values allow us to examine the methods under different levels of noise .each simulation is repeated 100 times for both the 2- and 5-point calibration models , thus providing us with 36 possible models for examination from the settings of .+ after the data was fit with each of the methods we considered the following measures for assessing the performance of the dynamic methods compared to the familiar static approaches : ( 1 ) average mean square error ; ( 2 ) average coverage probability ; and ( 3 ) average interval width . for each of the simulated data sets , the mean squared error ( )is calculated as the are averaged across the 100 simulated data sets thus deriving an average mean squared error ( ) as the coverage probability based on the coverage interval is estimated for all of the calibration methods .the coverage interval for the dynamic and static bayesian approaches is the credible interval and the confidence interval is used for the frequentist methods .note , that for credible intervals is the 0.025 posterior quantile for , and is the 0.975 posterior quantile for , where is the true value of the calibration target from the second stage of experimentation , then is a credible interval . 
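Returning to the data-generating scenarios described at the start of this subsection, the sketch below generates the three slope-fluctuation patterns (constant zero, stepped shifts, sinusoidal); since the shift times, amplitudes and frequency are not reproduced here, the values used are purely illustrative.

```python
# Sketch of the three slope-fluctuation scenarios added to the time-varying slope
# when generating the calibration data: constant zero, abrupt (stepped) shifts,
# and a sinusoidal fluctuation.  All magnitudes and times are placeholders.
import numpy as np

def gain_fluctuation(T, kind, amplitude=0.5, period=50, shift_at=(100, 200)):
    t = np.arange(T)
    if kind == "constant":
        return np.zeros(T)
    if kind == "stepped":
        g = np.zeros(T)
        for i, s in enumerate(shift_at, start=1):
            g[t >= s] = i * amplitude            # abrupt shift at each change time
        return g
    if kind == "sinusoidal":
        return amplitude * np.sin(2 * np.pi * t / period)
    raise ValueError(kind)

T = 300
for kind in ("constant", "stepped", "sinusoidal"):
    g = gain_fluctuation(T, kind)
    print(kind, "mean fluctuation:", round(float(g.mean()), 3))
```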
The coverage probability ( ) is calculated as such, where the indicator equals 0 if the interval fails to cover the true value and 1 if it does. The average coverage probability ( ) is calculated by averaging across the number of replications in the simulation study. Another quantity of interest is the average interval width ( ) for the methods, where the average interval width across the simulated time series is calculated as follows: with the average interval width across the simulation study given as , where is the average interval width for the simulation replicate. The performance of the dynamic calibration approaches will be assessed using the average coverage probability ( ), the average interval width ( ) and the average mean squared error ( ). We consider the performance of the methods under two conditions: interpolation and extrapolation. The interpolation case is of interest to understand how the methods perform when the calibrated time series is within the range of the reference values, [20, 100]. An extrapolation case is also conducted to examine the methods when the target falls outside of the range of the calibration references. While extrapolation is not preferable in the regression setting, it is often done in practice in microwave radiometry. All simulations were carried out on the compile server running R (R Development Core Team, 2013) at Virginia Commonwealth University. The compile server has a Linux OS with 16 CPU cores and 32 GB RAM. Each iteration in the study took approximately 15 minutes, with a total of 25.63 hours. In the following tables, the simulation results for the dynamic and static calibration methods are provided. The results of the simulation studies provide insight into the properties of the calibration approaches.
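Before turning to the tables, the following sketch shows how the three summary measures described above (AvMSE, AvCP, AvIW) can be computed from per-replicate estimates and interval endpoints; the array shapes, names and toy numbers in the usage lines are assumptions for illustration only.

```python
# Summary measures used in the tables that follow: average MSE, average coverage
# probability, and average interval width, each computed per replicate over time
# and then averaged across replicates.
import numpy as np

def summarize(x_true, x_hat, lower, upper):
    """x_true, x_hat, lower, upper: arrays of shape (n_reps, T)."""
    mse_per_rep = ((x_hat - x_true) ** 2).mean(axis=1)           # MSE_i over time
    av_mse = mse_per_rep.mean()                                   # AvMSE
    covered = ((lower <= x_true) & (x_true <= upper)).mean(axis=1)
    av_cp = covered.mean()                                        # AvCP
    width_per_rep = (upper - lower).mean(axis=1)                  # IW_i over time
    av_iw = width_per_rep.mean()                                  # AvIW
    return av_mse, av_cp, av_iw

# toy usage with made-up numbers:
rng = np.random.default_rng(2)
truth = np.cumsum(rng.standard_normal((100, 300)) * 0.1, axis=1) + 60.0
est = truth + rng.standard_normal(truth.shape) * 0.2
lo, hi = est - 0.5, est + 0.5
print(summarize(truth, est, lo, hi))
```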
the values for the dynamic methods are reasonably lower for .when , notice the dynamic models and have smaller average mean square errors smaller than the static method .the average coverage probability is comparable for all of the methods and number of references .the dynamic methods consistently have shorter interval widths .the widths of the 95% credible intervals for and is not affected by the increases in .+ [ comparison of calibration approaches when interpolating to estimate without gain fluctuations ] comparison of calibration approaches when interpolating to estimate without gain fluctuations based on 100 data sets .avmse is the average mean squared error , avcp is the average coverage probability , and aviw is the average 95% interval width .the signal - to - noise ratio is denoted as .ccrcr|rcr|rcr + & & & & & & & & & & + ref . & model & avmse & avcp & aviw & avmse & avcp & aviw & avmse & avcp & aviw + 2 & & 0.0008 & 0.995 & 2.519 & 0.0035 & 0.983 & 2.523 & 0.0307 & 0.939 & 2.517 + & & 0.0012 & 1.000 & 3.782 & 0.0038 & 1.000 & 3.782 & 0.0308 & 1.000 & 3.782 + & & 0.0001 & 1.000 & 1.224 & 0.0012 & 1.000 & 3.868 & 0.0123 & 1.000 & 12.229 + & & 0.0001 & 1.000 & 1.223 & 0.0016 & 1.000 & 3.863 & 0.0335 & 1.000 & 12.168 + & & 0.0002 & 0.997 & 1.182 & 0.0022 & 1.000 & 3.866 & 0.0386 & 1.000 & 12.177 + & & 0.0014 & 1.000 & 1.458 & 0.0139 & 1.000 & 4.606 & 0.1391 & 1.000 & 14.565 + 5 & & 0.0008 & 0.995 & 2.496 & 0.0035 & 0.983 & 2.509 & 0.0307 & 0.941 & 2.514 + & & 0.0013 & 1.000 & 3.983 & 0.0039 & 1.000 & 3.983 & 0.0307 & 1.000 & 3.983 + & & 0.0001 & 1.000 & 1.223 & 0.0012 & 1.000 & 3.865 & 0.0123 & 1.000 & 12.220 + & & 0.0001 & 1.000 & 1.222 & 0.0022 & 1.000 & 3.860 & 0.0813 & 1.000 & 12.113 + & & 0.0002 & 1.000 & 1.223 & 0.0023 & 1.000 & 3.861 & 0.0792 & 1.000 & 12.116 + & & 0.0014 & 1.000 & 1.457 & 0.0139 & 1.000 & 4.604 & 0.1069 & 1.000 & 10.748 + [ tab : constant_inter1 ] [ comparison of calibration approaches when interpolating to estimate without gain fluctuations ] comparison of calibration approaches when interpolating to estimate without gain fluctuations based on 100 data sets .avmse is the average mean squared error , avcp is the average coverage probability , and aviw is the average 95% interval width .the signal - to - noise ratio is denoted as .ccrcr|rcr|rcr + & & & & & & & & & & + ref . 
& model & avmse & avcp & aviw & avmse & avcp & aviw & avmse & avcp & aviw + 2& & 0.0012 & 0.992 & 2.519 & 0.0041 & 0.981 & 2.520 & 0.0323 & 0.939 & 2.528 + & & 0.0015 & 1.000 & 3.782 & 0.0044 & 1.000 & 3.782 & 0.0325 & 1.000 & 3.782 + & & 0.0001 & 1.000 & 1.230 & 0.0010 & 1.000 & 3.871 & 0.0114 & 1.000 & 12.231 + & & 0.0001 & 1.000 & 1.229 & 0.0012 & 1.000 & 3.866 & 0.0314 & 1.000 & 12.170 + & & 0.0001 & 1.000 & 1.230 & 0.0019 & 1.000 & 3.869 & 0.0371 & 1.000 & 12.179 + & & 0.0190 & 1.000 & 1.155 & 0.0243 & 1.000 & 3.767 & 0.1381 & 1.000 & 14.567 + 5& & 0.0011 & 0.992 & 2.508 & 0.0041 & 0.981 & 2.510 & 0.032 & 0.939 & 2.514 + & & 0.0017 & 1.000 & 3.983 & 0.0045 & 1.000 & 3.983 & 0.032 & 1.000 & 3.983 + & & 0.0001 & 1.000 & 1.228 & 0.0010 & 1.000 & 3.868 & 0.011 & 1.000 & 12.222 + & & 0.0001 & 1.000 & 1.227 & 0.0019 & 1.000 & 3.863 & 0.081 & 1.000 & 12.114 + & & 0.0001 & 1.000 & 1.227 & 0.0021 & 1.000 & 3.864 & 0.082 & 1.000 & 12.118 + & & 0.0013 & 1.000 & 1.462 & 0.0137 & 1.000 & 4.607 & 0.138 & 1.000 & 14.560 + [ tab : constant_inter2 ] [ comparison of calibration approaches when interpolating to estimate with stepped gain fluctuations ] comparison of calibration approaches when interpolating to estimate with stepped gain fluctuations based on 100 data sets .avmse is the average mean squared error , avcp is the average coverage probability , and aviw is the average 95% interval width .the signal - to - noise ratio is denoted as .ccrcr|rcr|rcr + & & & & & & & & & & + ref .& model & avmse & avcp & aviw & avmse & avcp & aviw & avmse & avcp & aviw + 2& & 0.0191 & 0.961 & 2.509 & 0.0198 & 0.953 & 2.506 & 0.0406 & 0.926 & 2.543 + & & 0.0196 & 1.000 & 3.782 & 0.0201 & 1.000 & 3.782 & 0.0408 & 1.000 & 3.783 + & & 0.0001 & 1.000 & 9.094 & 0.0004 & 1.000 & 9.813 & 0.0094 & 1.000 & 15.209 + & & 0.0046 & 1.000 & 9.065 & 0.0073 & 1.000 & 9.779 & 0.0528 & 1.000 & 15.098 + & & 0.0859 & 1.000 & 9.072 & 0.0838 & 1.000 & 9.786 & 0.1866 & 1.000 & 15.109 + & & 0.1399 & 1.000 & 10.830 & 0.0823 & 1.000 & 11.687 & 0.1836 & 1.000 & 18.115 + 5& & 0.0191 & 0.961 & 2.510 & 0.0197 & 0.954 & 2.511 & 0.0405 & 0.924 & 2.516 + & & 0.0196 & 1.000 & 3.983 & 0.0201 & 1.000 & 3.983 & 0.0405 & 1.000 & 3.983 + & & 0.0001 & 1.000 & 9.087 & 0.0004 & 1.000 & 9.806 & 0.0094 & 1.000 & 15.199 + & & 0.0184 & 1.000 & 9.041 & 0.0267 & 1.000 & 9.749 & 0.1620 & 1.000 & 14.995 + & & 0.0199 & 1.000 & 9.044 & 0.0267 & 1.000 & 9.752 & 0.1559 & 1.000 & 14.999 + & & 0.0706 & 1.000 & 10.826 & 0.0618 & 1.000 & 8.625 & 0.1742 & 1.000 & 15.091 + [ tab : stepped_inter1 ] [ comparison of calibration approaches when interpolating to estimate with stepped gain fluctuations ] comparison of calibration approaches when interpolating to estimate with stepped gain fluctuations based on 100 data sets .avmse is the average mean squared error , avcp is the average coverage probability , and aviw is the average 95% interval width .the signal - to - noise ratio is denoted as .ccrcr|rcr|rcr + & & & & & & & & & & + ref . 
& model & avmse & avcp & aviw & avmse & avcp & aviw & avmse & avcp & aviw + 2& & 0.0209 & 0.957 & 2.520 & 0.0219 & 0.950 & 2.522 & 0.0436 & 0.921 & 2.526 + & & 0.0214 & 1.000 & 3.782 & 0.0222 & 1.000 & 3.782 & 0.0438 & 1.000 & 3.782 + & & 0.0001 & 1.000 & 9.103 & 0.0003 & 1.000 & 9.822 & 0.0086 & 1.000 & 15.216 + & & 0.0047 & 1.000 & 9.075 & 0.0073 & 1.000 & 9.788 & 0.0511 & 1.000 & 15.105 + & & 0.0084 & 1.000 & 9.081 & 0.0115 & 1.000 & 9.795 & 0.0601 & 1.000 & 15.116 + & & 0.0709 & 1.000 & 10.842 & 0.0826 & 1.000 & 11.698 & 0.2054 & 1.000 & 18.122 + 5& & 0.0209 & 0.957 & 2.509 & 0.0218 & 0.949 & 2.511 & 0.0436 & 0.920 & 2.516 + & & 0.0214 & 1.000 & 3.983 & 0.0221 & 1.000 & 3.983 & 0.0435 & 1.000 & 3.983 + & & 0.0001 & 1.000 & 9.096 & 0.0003 & 1.000 & 9.815 & 0.0086 & 1.000 & 15.205 + & & 0.0185 & 1.000 & 9.050 & 0.0267 & 1.000 & 9.758 & 0.1616 & 1.000 & 15.002 + & & 0.0199 & 1.000 & 9.053 & 0.0281 & 1.000 & 9.761 & 0.1641 & 1.000 & 15.006 + & & 0.0708 & 1.000 & 10.836 & 0.0825 & 1.000 & 11.693 & 0.2052 & 1.000 & 18.114 + [ tab : stepped_inter2 ] [ comparison of calibration approaches when interpolating to estimate with sinusoidal gain fluctuations ] comparison of calibration approaches when interpolating to estimate with sinusoidal gain fluctuations based on 100 data sets .avmse is the average mean squared error , avcp is the average coverage probability , and aviw is the average 95% interval width .the signal - to - noise ratio is denoted as .ccrcr|rcr|rcr + & & & & & & & & & & + ref . & model & avmse & avcp & aviw & avmse & avcp & aviw & avmse & avcp & aviw + 2 & & 4.4088 & 0.628 & 2.657 & 4.4794 & 0.629 & 2.648 & 4.7214 & 0.638 & 2.681 + & & 4.4002 & 0.829 & 3.783 & 4.4708 & 0.825 & 3.783 & 4.7123 & 0.810 & 3.783 + & & 0.0001 & 1.000 & 21.980 & 0.0012 & 1.000 & 22.307 & 0.0123 & 1.000 & 25.206 + & & 0.1541 & 1.000 & 21.665 & 0.1670 & 1.000 & 21.978 & 0.2943 & 1.000 & 24.738 + & & 0.1689 & 0.975 & 20.933 & 0.1868 & 1.000 & 21.994 & 0.3174 & 1.000 & 24.757 + & & 0.4127 & 1.000 & 26.178 & 0.4258 & 1.000 & 26.567 & 0.5531 & 1.000 & 30.020 + 5& & 4.4087 & 0.628 & 2.646 & 4.4793 & 0.630 & 2.648 & 4.7214 & 0.635 & 2.653 + & & 4.3906 & 0.845 & 3.984 & 4.4609 & 0.839 & 3.984 & 4.7023 & 0.824 & 3.984 + & & 0.0001 & 1.000 & 21.964 & 0.0012 & 1.000 & 22.291 & 0.0123 & 1.000 & 25.188 + & & 0.5810 & 1.000 & 21.371 & 0.6218 & 1.000 & 21.671 & 1.0152 & 1.000 & 24.306 + & & 0.5956 & 1.000 & 21.377 & 0.5909 & 1.000 & 21.678 & 0.9628 & 1.000 & 24.314 + & & 0.4123 & 1.000 & 26.166 & 0.3087 & 1.000 & 18.973 & 0.4658 & 1.000 & 25.009 + [ tab : sine_inter1 ] [ comparison of calibration approaches when interpolating to estimate with sinusoidal gain fluctuations ] comparison of calibration approaches when interpolating to estimate with sinusoidal gain fluctuations based on 100 data sets .avmse is the average mean squared error , avcp is the average coverage probability , and aviw is the average 95% interval width .the signal - to - noise ratio is denoted as .ccrcr|rcr|rcr + & & & & & & & & & & + ref . 
&model & avmse & avcp & aviw & avmse & avcp & aviw & avmse & avcp & aviw + 2& & 4.4504 & 0.625 & 2.658 & 4.5213 & 0.628 & 2.660 & 4.7643 & 0.6331 & 2.665 + & & 4.4419 & 0.828 & 3.783 & 4.5127 & 0.823 & 3.783 & 4.7553 & 0.8083 & 3.783 + & & 0.0001 & 1.000 & 21.968 & 0.0010 & 1.000 & 22.295 & 0.0114 & 1.000 & 25.196 + & & 0.1538 & 1.000 & 21.653 & 0.1672 & 1.000 & 21.966 & 0.2896 & 1.000 & 24.729 + & & 0.1732 & 1.000 & 21.669 & 0.1867 & 1.000 & 21.982 & 0.3127 & 1.000 & 24.748 + & & 0.4122 & 1.000 & 26.164 & 0.4253 & 1.000 & 26.553 & 0.5518 & 1.000 & 30.008 + 5& & 4.4504 & 0.625 & 2.647 & 4.5213 & 0.627 & 2.648 & 4.7643 & 0.633 & 2.654 + & & 4.4322 & 0.844 & 3.984 & 4.5029 & 0.838 & 3.984 & 4.7451 & 0.823 & 3.984 + & & 0.0001 & 1.000 & 21.952 & 0.0010 & 1.000 & 22.278 & 0.0114 & 1.000 & 25.178 + & & 0.5799 & 1.000 & 21.359 & 0.6206 & 1.000 & 21.660 & 1.0133 & 1.000 & 24.297 + & & 0.5851 & 1.000 & 21.366 & 0.6256 & 1.000 & 21.667 & 1.0183 & 1.000 & 24.304 + & & 0.4119 & 1.000 & 26.152 & 0.4247 & 1.000 & 26.541 & 0.5513 & 1.000 & 29.995 + [ tab : sine_inter2 ] the results provided in tables [ tab : sine_inter1 ] and [ tab : sine_inter2 ] summarize the performance of the methods when the gain fluctuation is sinusoidal noise .the results for values of 10 , 100 , and 1000 are given in table [ tab : sine_inter1 ] with and 200 given in table [ tab : sine_inter2 ] .when is sinusoidal , the values for the dynamic methods are consistently larger than any of the static methods .for all of the chosen values , the is considerably lower than the opposing methods .the dynamic methods still have average interval widths extremely shorter than any of the static methods .the is constant across the signal - to - noise ratios .+ the simulation study shows that methods and do a good job at estimating calibrated values that are interior to the range of reference measurements .both methods display high coverage probabilities in the presence of drifting parameters .for the three possible gain fluctuations , the interval widths for the dynamic methods were consistently shorter than the static calibration approaches .when fitting data where there is a definite linear relationship the dynamic methods are invariant to the number of reference measurements .when using the proposed methods in this paper , not much will be gained by using more than 2 reference measurements .overall , when interpolating to estimate , the dynamic methods outperform the static bayesian approaches across the different signal - to - noise ratios . in the following section the performance of the dynamic methodsare assessed when the calibrated values fall outside of the range of reference measurements . at this point in the paperwe examine the calibration approaches when the calibrated values are outside of the reference measurements .the range of the measurement references is from 20 to 100 .the true behaved as a random walk bounded between 100 and 110 .we assessed the performance of the dynamic methods under three possible gain fluctuation patterns .first , the simulation study is conducted without the presence of additional gain fluctuation ( i.e. ) ; second , the gain is a stepped pattern influencing the time - varying slope over time ; lastly , a sinusoidal is added to . 
just as the previous results ,the methods are assessed by the average mean square error ( ) , average coverage probability ( ) , and the average interval width ( ) under different signal - noise - ratios .+ [ comparison of calibration approaches when extrapolating to estimate without gain fluctuations ] comparison of calibration approaches when extrapolating to estimate without gain fluctuations based on 100 data sets .avmse is the average mean squared error , avcp is the average coverage probability , and aviw is the average 95% interval width .the signal - to - noise ratio is denoted as .ccrcr|rcr|rcr + & & & & & & & & & & + ref . & model & avmse & avcp & aviw & avmse & avcp & aviw & avmse & avcp & aviw + 2& & 0.0018 & 1.000 & 5.255 & 0.0043 & 1.000 & 5.255 & 0.0309 & 1.000 & 5.255 + & & 0.0016 & 1.000 & 3.910 & 0.0042 & 1.000 & 3.910 & 0.0311 & 1.000 & 3.910 + & & 0.0001 & 1.000 & 1.224 & 0.0012 & 1.000 & 3.869 & 0.0123 & 1.000 & 12.234 + & & 0.0001 & 1.000 & 1.223 & 0.0001 & 1.000 & 3.863 & 0.1019 & 1.000 & 12.168 + & & 0.0001 & 1.000 & 1.225 & 0.0008 & 1.000 & 3.867 & 0.1115 & 1.000 & 12.181 + & & 0.0014 & 1.000 & 1.458 & 0.0139 & 1.000 & 4.606 & 0.1391 & 1.000 & 14.565 + 5& & 0.0019 & 1.000 & 5.233 & 0.0043 & 1.000 & 5.233 & 0.0309 & 1.000 & 5.233 + & & 0.0029 & 1.000 & 4.106 & 0.0054 & 1.000 & 4.106 & 0.0323 & 1.000 & 4.106 + & & 0.0001 & 1.000 & 1.223 & 0.0012 & 1.000 & 3.866 & 0.0123 & 1.000 & 12.224 + & & 0.0001 & 1.000 & 1.222 & 0.0027 & 1.000 & 3.860 & 0.5502 & 1.000 & 12.113 + & & 0.0001 & 1.000 & 1.223 & 0.0030 & 1.000 & 3.862 & 0.5566 & 1.000 & 12.120 + & & 0.0014 & 1.000 & 1.457 & 0.0139 & 1.000 & 4.604 & 0.1389 & 1.000 & 14.558 + [ tab : constant_extrap1 ] [ comparison of calibration approaches when extrapolating to estimate without gain fluctuations ] comparison of calibration approaches when extrapolating to estimate without gain fluctuations based on 100 data sets .avmse is the average mean squared error , avcp is the average coverage probability , and aviw is the average 95% interval width .the signal - to - noise ratio is denoted as .ccrcr|rcr|rcr + & & & & & & & & & & + ref . 
& model & avmse & avcp & aviw & avmse & avcp & aviw & avmse & avcp & aviw + 2& & 0.0031 & 1.000 & 5.253 & 0.0060 & 1.000 & 5.253 & 0.0340 & 1.000 & 5.253 + & & 0.0034 & 1.000 & 3.910 & 0.0064 & 1.000 & 3.910 & 0.0347 & 1.000 & 3.910 + & & 0.0001 & 1.000 & 1.230 & 0.0008 & 1.000 & 3.872 & 0.0107 & 1.000 & 12.236 + & & 0.0001 & 1.000 & 1.229 & 0.0003 & 1.000 & 3.866 & 0.1068 & 1.000 & 12.170 + & & 0.0001 & 1.000 & 1.230 & 0.0010 & 1.000 & 3.871 & 0.1164 & 1.000 & 12.183 + & & 0.0013 & 1.000 & 1.465 & 0.0135 & 1.000 & 4.610 & 0.1376 & 1.000 & 14.567 + 5& & 0.0032 & 1.000 & 5.231 & 0.0060 & 1.000 & 5.231 & 0.0340 & 1.000 & 5.232 + & & 0.0053 & 1.000 & 4.106 & 0.0083 & 1.000 & 4.106 & 0.0365 & 1.000 & 4.106 + & & 0.0001 & 1.000 & 1.228 & 0.0008 & 1.000 & 3.869 & 0.0107 & 1.000 & 12.226 + & & 0.0001 & 1.000 & 1.227 & 0.0035 & 1.000 & 3.863 & 0.5616 & 1.000 & 12.114 + & & 0.0001 & 1.000 & 1.228 & 0.0039 & 1.000 & 3.865 & 0.5680 & 1.000 & 12.121 + & & 0.0013 & 1.000 & 1.462 & 0.0135 & 1.000 & 4.607 & 0.1375 & 1.000 & 14.560 + [ tab : constant_extrap2 ] the results are provided in tables [ tab : constant_extrap1 ] and[tab : constant_extrap2 ] for the statistical calibration methods without gain fluctuations .the performance of the proposed method is stable across the signal - to - noise ratios .a point of interest is the reported values for methods and .we see for and that the is 3 to 5 times wider than those for the static approaches . when and the interval width for all competing methods are relatively close .the dynamic approaches outperform the static methods in noisy conditions such as and .the interval widths for the dynamic methods are considerably shorter than the those for the static methods .the simulation results reveal that when the data is characteristic of having a large signal - to - noise ratio , the dynamic methods , and , will outperform static bayesian approaches and the inverse approach .+ [ comparison of calibration approaches when extrapolating to estimate with stepped gain fluctuations ] comparison of calibration approaches when extrapolating to estimate with stepped gain fluctuations based on 100 data sets .avmse is the average mean squared error , avcp is the average coverage probability , and aviw is the average 95% interval width . the signal - to - noise ratio is denoted as .ccrcr|rcr|rcr + & & & & & & & & & & + ref . 
& model & avmse & avcp & aviw & avmse & avcp & aviw & avmse & avcp & aviw + 2& & 0.0206 & 1.000 & 5.247 & 0.0210 & 1.000 & 5.247 & 0.0412 & 1.000 & 5.247 + & & 0.0225 & 1.000 & 3.910 & 0.0230 & 1.000 & 3.910 & 0.0435 & 1.000 & 3.910 + & & 0.0001 & 1.000 & 9.097 & 0.0004 & 1.000 & 9.817 & 0.0094 & 1.000 & 15.215 + & & 0.0581 & 1.000 & 9.065 & 0.0656 & 1.000 & 9.779 & 0.3191 & 1.000 & 15.098 + & & 0.0634 & 1.000 & 9.075 & 0.0718 & 1.000 & 9.789 & 0.3361 & 1.000 & 15.115 + & & 0.0707 & 1.000 & 10.830 & 0.0826 & 1.000 & 11.687 & 0.2060 & 1.000 & 18.115 + 5& & 0.0209 & 1.000 & 5.226 & 0.0213 & 1.000 & 5.226 & 0.0412 & 1.000 & 5.226 + & & 0.0268 & 1.000 & 4.106 & 0.0273 & 1.00 & 4.106 & 0.0483 & 1.000 & 4.106 + & & 0.0001 & 1.000 & 9.090 & 0.0004 & 1.000 & 9.809 & 0.0094 & 1.000 & 15.203 + & & 0.2274 & 1.000 & 9.041 & 0.2812 & 1.000 & 9.749 & 1.4628 & 1.000 & 14.995 + & & 0.2307 & 1.000 & 9.047 & 0.2851 & 1.000 & 9.755 & 1.4744 & 1.000 & 15.004 + & & 0.0706 & 1.000 & 10.826 & 0.0825 & 1.000 & 11.682 & 0.2058 & 1.000 & 18.106 + [ tab : stepped_extrap1 ] [ comparison of calibration approaches when extrapolating to estimate with stepped gain fluctuations ] comparison of calibration approaches when extrapolating to estimate with stepped gain fluctuations based on 100 data sets .avmse is the average mean squared error , avcp is the average coverage probability , and aviw is the average 95% interval width .the signal - to - noise ratio is denoted as .ccrcr|rcr|rcr + & & & & & & & & & & + ref . & model & avmse & avcp & aviw & avmse & avcp & aviw & avmse & avcp & aviw + 2& & 0.0242 & 1.000 & 5.245 & 0.0250 & 1.000 & 5.245 & 0.0466 & 1.000 & 5.245 + & & 0.0266 & 1.000 & 3.910 & 0.0275 & 1.000 & 3.910 & 0.0494 & 1.000 & 3.910 + & & 0.0001 & 1.000 & 9.106 & 0.0002 & 1.000 & 9.826 & 0.0080 & 1.000 & 15.222 + & & 0.0620 & 1.000 & 9.075 & 0.0698 & 1.000 & 9.788 & 0.3284 & 1.000 & 15.105 + & & 0.0674 & 1.000 & 9.085 & 0.0760 & 1.000 & 9.799 & 0.3447 & 1.000 & 15.121 + & & 0.0710 & 1.000 & 10.842 & 0.0825 & 1.000 & 11.698 & 0.2048 & 1.000 & 18.122 + 5& & 0.0245 & 1.000 & 5.224 & 0.0254 & 1.000 & 5.224 & 0.0427 & 1.000 & 5.226 + & & 0.0315 & 1.000 & 4.106 & 0.0324 & 1.000 & 4.106 & 0.0485 & 1.000 & 4.106 + & & 0.0001 & 1.000 & 9.099 & 0.0002 & 1.000 & 9.818 & 0.0089 & 1.000 & 14.255 + & & 0.2354 & 1.000 & 9.050 & 0.2902 & 1.000 & 9.758 & 1.1896 & 1.000 & 14.078 + & & 0.2388 & 1.000 & 9.056 & 0.2941 & 1.000 & 9.764 & 1.1995 & 1.000 & 14.086 + & & 0.0709 & 1.000 & 10.836 & 0.0824 & 1.000 & 11.693 & 0.1831 & 1.000 & 16.977 + [ tab : stepped_extrap2 ] [ comparison of calibration approaches when extrapolating to estimate with sinusoidal gain fluctuations ] comparison of calibration approaches when extrapolating to estimate with sinusoidal gain fluctuations based on 100 data sets .avmse is the average mean squared error , avcp is the average coverage probability , and aviw is the average 95% interval width .the signal - to - noise ratio is denoted as .ccrcr|rcr|rcr + & & & & & & & & & & + ref . 
& model & avmse & avcp & aviw & avmse & avcp & aviw & avmse & avcp & aviw + 2& & 4.4096 & 0.873 & 5.127 & 4.4800 & 0.872 & 5.127 & 4.7214 & 0.866 & 5.127 + & & 4.4410 & 0.833 & 3.904 & 4.5114 & 0.825 & 3.904 & 4.7530 & 0.813 & 3.904 + & & 0.0001 & 1.000 & 21.988 & 0.0012 & 1.000 & 22.315 & 0.0123 & 1.000 & 25.216 + & & 1.8193 & 1.000 & 21.665 & 1.8636 & 1.000 & 21.978 & 2.7760 & 1.000 & 24.739 + & & 1.8602 & 1.000 & 21.688 & 1.9056 & 1.000 & 22.002 & 2.8312 & 1.000 & 24.766 + & & 0.4127 & 1.000 & 26.178 & 0.4258 & 1.000 & 26.567 & 0.5531 & 1.000 & 30.020 + 5& & 4.4105 & 0.872 & 5.106 & 4.4808 & 0.872 & 5.106 & 4.7222 & 0.866 & 5.107 + & & 4.4889 & 0.842 & 4.100 & 4.5593 & 0.835 & 4.101 & 4.8007 & 0.822 & 4.100 + & & 0.0001 & 1.000 & 21.971 & 0.0012 & 1.000 & 22.297 & 0.0123 & 1.000 & 25.195 + & & 6.9539 & 1.000 & 21.371 & 7.2327 & 1.000 & 21.671 & 11.0337 & 1.000 & 24.306 + & & 6.9852 & 1.000 & 21.383 & 7.2650 & 1.000 & 21.684 & 11.0772 & 1.000 & 24.320 + & & 0.4123 & 1.000 & 26.166 & 0.4254 & 1.000 & 26.555 & 0.5526 & 1.000 & 30.007 + [ tab : sine_extrap1 ] [ comparison of calibration approaches when extrapolating to estimate with sinusoidal gain fluctuations ] comparison of calibration approaches when extrapolating to estimate with sinusoidal gain fluctuations based on 100 data sets .avmse is the average mean squared error , avcp is the average coverage probability , and aviw is the average 95% interval width .the signal - to - noise ratio is denoted as .ccrcr|rcr|rcr + & & & & & & & & & & + ref . & model & avmse & avcp & aviw & avmse & avcp & aviw & avmse & avcp & aviw + 2& & 4.4491 & 0.872 & 5.125 & 4.5199 & 0.871 & 5.125 & 4.7626 & 0.866 & 5.126 + & & 4.4809 & 0.828 & 3.904 & 4.5518 & 0.821 & 3.904 & 4.7948 & 0.807 & 3.904 + & & 0.0001 & 1.000 & 21.976 & 0.0008 & 1.000 & 22.303 & 0.0107 & 1.000 & 25.205 + & & 1.8350 & 1.000 & 21.653 & 1.8796 & 1.000 & 21.966 & 2.7956 & 1.000 & 24.729 + & & 1.8759 & 1.000 & 21.676 & 1.9216 & 1.000 & 21.990 & 2.8508 & 1.000 & 24.756 + & & 0.4123 & 1.000 & 26.164 & 0.4250 & 1.000 & 26.553 & 0.5511 & 1.000 & 30.008 + 5& & 4.4497 & 0.872 & 5.10 & 4.5205 & 0.871 & 5.105 & 4.7633 & 0.865 & 5.105 + & & 4.5292 & 0.836 & 4.100 & 4.6000 & 0.832 & 4.100 & 4.842 & 0.814 & 4.100 + & & 0.0001 & 1.000 & 21.958 & 0.0008 & 1.000 & 22.285 & 0.0107 & 1.000 & 25.185 + & & 6.9764 & 1.000 & 21.359 & 7.2560 & 1.000 & 21.660 & 11.0629 & 1.000 & 24.297 + & & 7.0077 & 1.000 & 21.372 & 7.2882 & 1.000 & 21.673 & 11.1064 & 1.000 & 24.311 + & & 0.4119 & 1.000 & 26.152 & 0.4246 & 1.000 & 26.541 & 0.5507 & 1.000 & 29.995 + [ tab : sine_extrap2 ] next , we impose a stepped gain fluctuation to the data generated and wanted to evaluate the behavior of the calibration methods .the results for the stepped case are given in tables [ tab : stepped_extrap1 ] and [ tab : stepped_extrap2 ] .we see by the values in both tables that the dynamic methods perform better than most static methods .if the calibrated values by chance drift outside of the reference range the dynamic methods will do a good job at capturing it with certainty while having a narrower credible interval than confidence intervals of the static methods .the dynamic approaches outperform all of the static method in terms of .these results of the simulation study do not change much across the number of references used . 
once again , when the relationship is assumed to be linear , there is no benefit to adding more references .
+ lastly , the study is conducted with a sinusoidal gain fluctuation while extrapolating to estimate . the results for the sinusoidal case are given in tables [ tab : sine_extrap1 ] and [ tab : sine_extrap2 ] . the dynamic methods and exhibit the same behavior as before in tables [ tab : sine_inter1 ] and [ tab : sine_inter2 ] , with values ranging from 4.4 to 4.8 . even though their average mean squared errors are larger than those of the static methods when using a 2-reference model , the two dynamic methods outperform the static methods and , which are based on the inverse approach . the dynamic models have average coverage probabilities smaller than the static models across all of the signal - to - noise ratios . we also note that , once again , the are 4 to 6 times shorter than the average interval widths for the static models .
in this example , we apply the dynamic calibration approaches to the calibration of a microwave radiometer for an earth - observing satellite . engineers and scientists commonly use microwave radiometers to measure the electromagnetic radiation emitted by a source or a particular surface , such as ice or land . radiometers are very sensitive instruments that are capable of measuring extremely low levels of radiation . the transmission source of the radiant power is the target of the radiometer 's antenna . when a region of interest , such as terrain , is observed by a microwave radiometer , the radiation received by the antenna is partly due to self - emission by the area of interest and partly due to reflected radiation originating from the surroundings ( ulaby _ et al . _ 1981 ) , such as cosmic background radiation , the ocean surface , or a heated surface used for the purpose of calibration .
+ a basic diagram of a radiometer is shown in figure [ fig : radiometer ] , where the radiant power with equivalent brightness temperature ( the term brightness temperature represents the intensity of the radiation emitted by the scene under observation ) enters the radiometer receiver and is converted to the output signal . the schematic features the common components of most microwave radiometers . as the radiometer captures a signal ( i.e. , a brightness temperature ) , it couples the signal into a transmission line , which then carries the signal to and from the various elements of the circuit . in figure [ fig : radiometer ] , a signal is introduced directly into the antenna and is then mixed , amplified and filtered to produce the output signal . this filtering and amplification is carried out by the following components of the radiometer : an amplifier ; a pre - detection filter ; a square - law detector ; and a post - detection filter . the output of the radiometer is denoted as . see ulaby _ et al . _ ( 1981 ) for a detailed discussion .
+ racette and lang ( 2005 ) state that at the core of every radiometer measurement is a calibrated receiver . calibration is required because current electronic hardware is unable to maintain a stable input / output relationship ; problems such as amplifier gain instability and exterior temperature variations of critical components may cause this relationship to drift over time ( bremer 1979 ) . for space - observing instruments , stable calibration without drifts is key to detecting proper climate trends ( imaoka _ et al . _ 2010 ) .
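the drifting input / output relationship described above is what motivates a dynamic treatment of the calibration line . as a rough illustration of the idea ( and not the exact dlm specification developed in this paper ) , the sketch below tracks an approximately linear response whose intercept and slope evolve as a random walk , using a standard kalman filter , and then inverts the filtered line to estimate the unknown input at each time step ; the noise variances and initial values are assumptions chosen only for illustration .

```python
import numpy as np

def kalman_calibration(V_refs, T_refs, V_unknown, obs_var=1.0, state_var=1e-4):
    """Track a drifting calibration line V = a_t + b_t * T with a random-walk
    Kalman filter, then invert the filtered line to estimate the unknown
    temperature at each time step.

    V_refs    : (n, k) output voltages observed on the k reference targets
    T_refs    : (k,)   known reference temperatures
    V_unknown : (n,)   output voltages observed on the unknown target
    """
    n, k = V_refs.shape
    theta = np.array([0.0, 1.0])      # initial guess for (intercept a, slope b)
    P = np.eye(2) * 1e3               # diffuse initial uncertainty
    W = np.eye(2) * state_var         # random-walk evolution covariance
    T_hat = np.empty(n)
    for t in range(n):
        P = P + W                     # prediction: (a, b) drift as a random walk
        for j in range(k):            # one sequential update per reference pair
            F = np.array([1.0, T_refs[j]])
            S = F @ P @ F + obs_var            # innovation variance
            K = P @ F / S                      # Kalman gain
            theta = theta + K * (V_refs[t, j] - F @ theta)
            P = P - np.outer(K, F @ P)
        a, b = theta
        T_hat[t] = (V_unknown[t] - a) / b      # unstable if the slope b is near zero
    return T_hat
```

note the division by the filtered slope in the last step : as discussed later in the paper , any such inversion becomes unstable if the slope gets close to zero .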
during the calibration process , the radiometer receiver measures the voltage output power , and its corresponding input temperature of a known reference . two or more known reference temperatures are needed for the calibration of a radiometer . ulaby _ et al . _ ( 1981 ) and racette and lang ( 2005 ) state that the relationship between the output , and the input , is approximately linear and can be expressed as where is the estimated value of the brightness temperature and is the observed output voltage . using this relationship , the output value , , is used to derive an estimate for the input , ( racette and lang , 2005 ) .
+ traditional calibration methods use measurements taken from known calibration references ; see , for example , figure [ fig : refdata ] . due to possible cost constraints , it is common to use between two and five references . the reference temperatures are converted to their equivalent power measurements prior to the calibration algorithm . the radiometer outputs are observed when the radiometer measures the reference temperatures , giving an ordered calibration pair . the values are produced by the electronics within the radiometer ( see figure [ fig : radiometer ] ) ( ulaby _ et al . _ 1981 ; racette and lang 2005 ) . through the process of calibration , the unknown brightness temperature is estimated by plugging its observed output into either equation ( [ eq : classeq ] ) or equation ( [ eq : inveq ] ) .
+ it is of interest to develop a calibration approach that can detect gain abnormalities and/or correct for slow drifts that affect the quality of the instrument measurements . to demonstrate the practical appeal of the dynamic approach , the two dynamic methods were used to characterize a calibration target over time for a microwave radiometer . the data used for this example were collected during a calibration experiment conducted on the millimeter - wave imaging radiometer ( mir ) ( racette _ et al . _ 1996 ) . the purpose of the experiment was to validate predictions of radiometer calibration .
+ [ figure fig : ex_volt : time series of mir output voltage measurement data , , and . ] the mir was built with two internal blackbody references , which were used to observe a third stable temperature reference for an extended period of time . the third reference was a custom - designed , cryogenically cooled reference . racette ( 2005 ) conducted the mir experiment under two scenarios : the first experiment , denoted as , examined the calibration predictions when the unknown target is interior ( i.e. , interpolation ) to the reference measurements ; the second set of measurements ( denoted as ) was taken when the unknown temperature to be estimated is outside ( i.e. , extrapolation ) the range of calibration references .
+ for demonstration purposes we consider only the experiment ; for details of the experiment see racette ( 2005 ) . for this run of the experiment , the reference temperatures are as follows : 1 . 2 .
with the unknown target temperature that must be estimated denoted as . each temperature measure has a corresponding observed time series of output measurements , , , and ( see figure [ fig : ex_volt ] ) . therefore , in this example we consider only a 2-point calibration set - up : we use and as the known reference standards and use to derive estimates of for the first 1000 time periods .
+ the results of the dynamic approaches , and , will be compared to the `` inverse '' calibration method ( krutchkoff 1967 ) implemented by racette ( 2005 ) . the method considered by racette ( 2005 ) will be denoted as . as in practice one rarely knows the value of the true temperature to be estimated , the aim of this example is to assess the contribution of the calibration approach to the variability in the measurement estimate . racette 's ( 2005 ) analysis did not consider biases that may exist in calibration ; continuing in the same spirit , the existence of biases will not be considered in this analysis . we apply the , , and approaches to the data to estimate the temperature ; the standard deviation of the estimated time series is used as a measure of uncertainty , including the contribution of the calibration algorithm .
+ [ figure fig : ex_mod1 : time series of calibrated temperature for the mir experiment . black lines indicate the results using the `` inverse '' calibration approach ; the green lines are the results using the dynamic approach , with the 95% credible intervals in purple . ] [ figure fig : ex_mod2 : time series of calibrated temperature for the mir experiment . black lines indicate the results using the `` inverse '' calibration approach ; the red lines are the results using the dynamic approach , with the 95% credible intervals in purple . ] figure [ fig : ex_mod1 ] shows the time series of the temperature estimates for using krutchkoff 's ( 1967 ) `` inverse '' approach and the dynamic approach . the standard deviations for the and approaches are and , respectively . comparing the corresponding standard deviation values , we see that the dynamic model improves the estimation process over the static model : the dynamic model decreased the measurement uncertainty by roughly 33% . in figure [ fig : ex_mod2 ] the time series of the temperature estimates for using the `` inverse '' approach and the dynamic approach is given . the standard deviations for the and approaches are and . again , the dynamic approach outperforms the static model ; in this case , the dynamic model decreased the measurement uncertainty by roughly 34% .
two novel approaches to the statistical calibration problem have been presented in this paper . the simulation results showed that the dynamic approach has benefits over the static methods . if the linear relationship in the first stage of calibration is known to be stable , then the traditional methods should be used . the dynamic methods showed promise in the cases where the signal - to - noise ratio was high . there is also a computational expense to implementing the dynamic methods compared to the static methods , but these methods allow for near real - time calibration and monitoring of the electronics .
+ it is worth noting that the dynamic method shows possible deficiencies when the gain fluctuation is sinusoidal , referring to the results in table [ tab : sine_inter1 ] .
in figure [ fig : sine1 ] it is evident that the largest source of the error is at the beginning of the estimation process , roughly from to . the mse values for the dynamic approaches and were 4.41 and 4.40 , respectively , which is vastly different from those reported for the static methods . this problem can be addressed by extending the burn - in period .
+ we increased the burn - in period to 200 , which allowed the algorithm more time to learn and hence resulted in a lower mse value . in figure [ tab : sine2 ] we see that the estimates fit the true values of better . the mse decreased from 4.41 to 0.64 for and 0.63 for . the increased burn - in period improves the coverage probability , but the interval width is not noticeably affected . the coverage probability increased from 0.628 to 0.722 for and from 0.829 to 0.964 for .
+ [ figure : comparison of calibration approaches and when interpolating to estimate with sinusoidal gain fluctuation . ] for completeness we consider the behavior of the method if crosses zero . it is unreasonable to expect this to happen in practice , because one would test the significance of ( myers 1990 ; montgomery _ et al . _ 2012 ) before using any method where the possibility of dividing by zero could occur . we demonstrate this by generating data where for all time and drifts from 1 to -1 over time , where ( see figure [ fig : beta_cross ] ) . figure [ fig : beta_cross_x0 ] shows that the dynamic method is close to the true values of until gets close to 0 . within the region where the slope crosses , the posterior estimates become _ unstable _ . here we define unstable as meaning that we are within a region where there is division by zero . this instability is only present when , for every . as long as , the dynamic method will perform well when estimating .
+ some calibration problems are not linear or even approximately linear in and . future work is to investigate the dynamic calibration methods in the presence of nonlinearity . in such settings we may not have the ability to use only 2 points as references ; any approach will require more references in order to accurately capture the nonlinear behavior . another area to be explored is semiparametric regression , which also allows for parameter variation across time and could be implemented in a near real - time setting .
imaoka , k. , kachi , m. , kasahara , m. , ito , n. , nakagawa , k. , and oki , t. ( 2010 ) . instrument performance and calibration of amsr - e and amsr2 . _ international archives of the photogrammetry , remote sensing and spatial information sciences _ , * 38 * ( part 8 ) .
racette , p. , adler , r. f. , wang , j. r. , gasiewski , a. j. , and zacharias , d. s. ( 1996 ) . an airborne millimeter - wave imaging radiometer for cloud , precipitation , and atmospheric water vapor studies . _ journal of atmospheric and oceanic technology _ , * 13 * ( 3 ) , 610 - 619 .
the problem of statistical calibration of a measuring instrument can be framed both in a statistical context and in an engineering context . in the first , the problem is dealt with by distinguishing between the `` classical '' approach and the `` inverse '' regression approach . both of these models are static models and are used to estimate `` exact '' measurements from measurements that are affected by error . in the engineering context , the variables of interest are considered to be indexed by the time at which they are observed . the bayesian time series method of dynamic linear models ( dlm ) can be used to monitor the evolution of the measures , thus introducing a _ dynamic _ approach to statistical calibration . the research presented employs bayesian methodology to perform statistical calibration . the dlm framework is used to capture the time - varying parameters that may be changing or drifting over time . two separate dlm - based models are presented in this paper . a simulation study is conducted in which the two models are compared to some well - known static calibration approaches from the literature , from both the frequentist and bayesian perspectives . the focus of the study is to understand how well the _ dynamic statistical calibration _ methods perform under various signal - to - noise ratios , . the posterior distributions of the estimated calibration points as well as the coverage intervals are compared by statistical summaries . these dynamic methods are then applied to a microwave radiometry dataset .
since the publication of the graph pebbling survey there has been a great deal of activity in the subject . there are now over 50 papers by roughly 80 authors in the field ; the web page maintains a current list of these papers . many researchers have asked for an updated survey ; thanks are due to the new york academy of sciences for providing the opportunity . the following is based on the talk , `` _ everything you always wanted to know about graph pebbling but were afraid to ask _ '' , an obvious ripoff / homage to one of new york 's favorite directors . we begin by introducing relevant terminology and background on the subject . here , the term _ graph _ refers to a simple graph without loops or multiple edges . for the definitions of other graph - theoretical terms see any standard graph theory text such as . the pebbling number of a disconnected graph will be seen to be undefined ; henceforth we assume all graphs to be connected . a configuration of pebbles on a graph can be thought of as a function . the value equals the number of pebbles placed at vertex , and the _ size _ of the configuration is the number of pebbles placed in total on . a pebbling step along an edge from to reduces by 2 the number of pebbles at and increases by 1 the number of pebbles at . we say that a vertex can be _ reached _ by if one can repeatedly apply pebbling steps so that , in the resulting configuration , we have ( and for all ) . the _ pebbling number _ , , is defined to be the smallest integer so that any specified _ root _ vertex of can be reached by every configuration of size . a configuration that reaches every vertex is called _ solvable _ , and _ unsolvable _ otherwise . the origins of graph pebbling reside in combinatorial number theory and group theory . a sequence of elements of a finite group is called a _ zero - sum sequence _ if it sums to the identity of . a simple pigeonhole argument ( on the sequence of partial sums ) proves the following theorem . [ origin ] any sequence of elements of a finite group contains a zero - sum subsequence . in fact , a subsequence of consecutive terms can be guaranteed by the pigeonhole argument . furthermore , one can instead stipulate that the zero - sum subsequence has at most terms , where is the exponent of ( i.e. , the maximum order of an element of ) , and this is best possible . initiated in 1956 by erdős , the study of zero - sum sequences has a long history with many important applications in number theory and group theory . in 1961 erdős et al . proved that every sequence of elements of a cyclic group contains a zero - sum subsequence of length exactly . in 1969 van emde boas and kruyswijk proved that any sequence of elements of a finite abelian group contains a zero - sum sequence . in 1994 alford et al . used this result and modified erdős 's arguments to prove that there are infinitely many carmichael numbers . much of the recent study has involved finding davenport 's constant , defined to be the smallest such that every sequence of elements contains a zero - sum subsequence . there is a wealth of results on this problem and its variations , as well as applications to factorization theory and to graph theory . in 1989 kleitman and lemke , and independently chung , proved the following theorem ( originally stated number - theoretically ) , strengthening theorem [ origin ] . let denote the cyclic group of order , and let denote the order of an element in the group to which it belongs . [ cyclic ] for every sequence of elements from there is a zero - sum subsequence such that .
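before continuing with the number - theoretic background , it may help to note that the pebbling definitions above translate directly into a ( brute force ) computation : starting from a configuration , one can search over all configurations reachable by pebbling steps and ask whether the root ever receives a pebble . the sketch below does this and then computes the pebbling number of a small path by exhausting all configurations of a given size ; it is purely illustrative , since the search is exponential and only feasible for very small graphs .

```python
from collections import deque
from itertools import combinations_with_replacement

def reachable(adj, config, root):
    """Breadth-first search over all configurations obtainable by pebbling
    steps (remove two pebbles from a vertex, place one on a neighbour);
    returns True if some reachable configuration has a pebble on the root.
    Exponential in general -- intended for very small graphs only."""
    start = tuple(config)
    if start[root] >= 1:
        return True
    seen, queue = {start}, deque([start])
    while queue:
        c = queue.popleft()
        for v, pebbles in enumerate(c):
            if pebbles < 2:
                continue
            for u in adj[v]:
                nxt = list(c)
                nxt[v] -= 2
                nxt[u] += 1
                if nxt[root] >= 1:
                    return True
                nxt = tuple(nxt)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return False

def pebbling_number(adj):
    """Smallest t such that every size-t configuration reaches every root.
    Brute force over all configurations; feasible only for tiny graphs."""
    n = len(adj)
    t = 1
    while True:
        solvable_everywhere = True
        for placement in combinations_with_replacement(range(n), t):
            config = [0] * n
            for v in placement:
                config[v] += 1
            if any(not reachable(adj, config, r) for r in range(n)):
                solvable_everywhere = False
                break
        if solvable_everywhere:
            return t
        t += 1

# example: a path on four vertices
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(pebbling_number(path4))
```

for a path on four vertices the brute force returns 8 , consistent with the distance - based lower bound discussed below .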
for a sequence the sum known as the _ cross number _ of and is an important invariant in factorization theory . guaranteeing cross number at most 1 strengthens the extension of theorem [ origin ] that , and shows that equality holds if and only if every .the concept of pebbling in graphs arose from an attempt by lagarias and saks to give an alternative ( and more natural and structural ) proof than that of kleitman and lemke ; it was chung who carried out their idea .see also for another extension of this result .kleitman and lemke then conjectured that theorem [ cyclic ] holds for all finite groups . for a subgroup of ,call a sequence of elements of an -_sum sequence _ if its elements sum to an element of .in is proved the following theorem ( the methods of use graph pebbling ) .[ subgroupconj ] let be a subgroup of a finite abelian group with .for every sequence of elements from there is an -sum subsequence such that .the case here gives theorem [ cyclic ] for finite abelian groups , strengthening the van emde boas and kruyswijk result .kleitman and lemke also conjectured that theorem [ subgroupconj ] holds for all finite groups , and verified their conjecture for all dihedral groups ( see ) . for other nonabelian groups ,it has been shown recently to hold for the nonabelian solvable group of order 21 ( see ) .it would be interesting to see whether graph pebbling methods can shed light on the davenport constant for finite abelian groups . in this regard ,one of the most pressing questions is as follows .write , where . then is the _ rank _ of .it is natural to guess that from pigeonhole intuition .this was conjectured in and is true by theorem [ cyclic ] for rank 1 groups .moreover , it was proven in for rank 2 groups and -groups as well .however , it was proven in that the conjecture is false for some groups each rank at least 4 .what remains open is the instance of rank 3 .[ davenport ] if is a finite abelian group of rank then its davenport constant satisfies .there are many known results regarding .if one pebble is placed at each vertex other than the root vertex , , then no pebble can be moved to . also , if is at distance from , and pebbles are placed at , then no pebble can be moved to .thus we have that .graphs that satisfy are known as _class 0 _ graphs , which include the complete graph , the -dimensional cube , complete bipartite graphs , and many others .we will say more about such graphs in section [ dcc ] .any graph with a cut vertex has .( indeed , let and , where and are two components of .define the configuration by , and for every other vertex .then and can not reach . )the path , the cube , the petersen graph , the even cycle , and the line graph of the complete graph are examples of graphs that satisfy , while the odd cycle is an example of a graph not satisfying either lower bound . another standard result is the pebbling number of a tree , which is worked out in .regarding upper bounds , it follows immediately from the pigeonhole principle that a graph on vertices with diameter has pebbling number . it would be interesting to find better general bounds on , especially not involving .the independence number seems not to be useful .for example , there is no function such that every graph of independence number and diameter has pebbling number .indeed , we define a family of graphs which satisfy and , but which have pebbling number as .define , where and the edge set . 
since is a cut vertex we know by the above comment that .however , the domination number ( minimum size of a dominating set ) can be useful .chan and godbole made the following improvements on the general upper bound .[ betterupper ] let denote the domination number of . then 1 . , 2 . , and 3 . .the inequalities in parts 1 and 2 are sharp , and the coefficient of in part 3 can be reduced to in the case of perfect domination .for any two graphs and , we define the _ cartesian product _ to be the graph with vertex set and edge set and or and }. the cube can be built recursively from the cartesian product , and chung s result that ( and more generally theorem [ pathprod ] below ) would follow easily from graham s conjecture , which has generated a great deal of interest .( graham)[graham ] .it is worth mentioning that there are some results which verify graham s conjecture . among these, the conjecture holds for a tree by a tree , a cycle by a cycle , and a clique by a graph with the -pebbling property ( see below ) .recently , feng and kim verified the conjecture for complete bipartite graphs and for wheels or fans .it is also proven in that the conjecture holds when each is a path .let be a path with vertices and for let denote the graph .[ pathprod ] for nonnegative integers , .the conjecture was also verified recently for graphs of high minimum degree , using theorem [ conn ] .[ mindeg ] if and are connected graphs on vertices that satisfy and , then . in particular, there is a constant so that if then is class 0 .we will present probabilistic versions of conjecture [ graham ] in sections [ graphthresh ] and [ pebblingthresh ] .a graph is said to have the _2-pebbling property _ if two pebbles can be moved to any specified vertex when the initial configuration has size , where is the number of vertices with .this property is crucial in the proof of theorem [ pathprod ] . until recently , only one graph ( see figure [ lemke ] ) , due to lemke , was known not to have the 2-pebbling property , although a family of related graphs were conjectured in not to have the property either .although that conjecture is still unresolved , wang proved that a slight modification of snevily s graphs yield an infinite family of graphs , none of which have the 2-pebbling property .also found in is the conjecture that all bipartite graphs have the 2-pebbling property .it is possible that the square of the lemke graph might be a counterexample to graham s conjecture .find .in order to describe the generalization employed by chung to prove the kleitman - lemke theorem , we need to introduce generalized pebbling . a_ -pebbling step _ in consists of removing pebbles from a vertex , and placing one pebble on a neighbor of .we say that a pebbling step from to is _ greedy _ ( _ semigreedy _ ) if ( ) , where is the root vertex , and that a graph is _ ( semi ) greedy _ if for any configuration of pebbles on the vertices of we can move a pebble to any specified root vertex , in such a way that each pebbling step is ( semi ) greedy .let be a product of paths , where . theneach vertex can be represented by a vector , with for each .let , be the standard basis vector .denote the vector by * 0*. then two vertices and are adjacent in if and only if for some integer . if , then we may define * q*-_pebbling _ in to be such that each pebbling step from * u * to * v * is a -pebbling step whenever .we denote the * q*-pebbling number of by .chungs s proof of theorem [ pathprod ] uses the following theorem . 
for integers , , we use as shorthand for the product .[ genpathprod ] suppose that pebbles are assigned to the vertices of and that the root .then it is possible to move one pebble to via greedy * q*-pebbling .in addition , it was shown in that , and moreover , that is greedy . also in the following generalization of graham s conjecture to * q*-pebbling was made .[ gengraham ] . can conjecture [ graham ] be proved in the case when both graphs share certain extra properties such as greediness and tree - solvability ?we say that a graph is _ tree - solvable _ if , whenever is a configuration on of size , it is possible to solve in such a way that the edges traversed by pebbling steps form an acyclic graph .there are graphs which are neither greedy nor tree - solvable .for example , let be the vertices ( in order ) of the cycle , and form from by adjoining new vertices to and , and to and .it is not difficult to show that . with root and , has no greedy solution ( nor does it have a semi - greedy solution ) . with and , has no tree - solution .( it is worth noting that is bipartite . )another illustrative graph is , which is also bipartite .( is the _ star _ with vertices , also denoted ) .easily , and ( these numbers also follow from our classification of diameter two graphs below ) .although each pebbling number is only one more than the number of vertices , is far greater than ; it would be interesting to discover how much greater this gap can be for other graphs .also , notice that , a strict inequality .more importantly , as observed by moews , is not semi - greedy .indeed , think of as three pages of a book , let be the corner vertex of one of the pages , the farthest corner vertex of another page , , and the three vertices of the third page , and let .this shows that even semi - greediness is not preserved by the product of two greedy graphs .probably , the following holds .[ greedyconj ] almost every graph is greedy and tree - solvable .because of the lower bound of on the pebbling number , it is natural to try to classify class 0 graphs , or at least give conditions which either guarantee or prohibit class 0 .this is of course extremely difficult , although some preliminary results have proved quite interesting . as argued above , graphs of connectivity 1 are not class 0 . in find the following theorem .[ diam2 ] if then or . class 0 graphs of diameter 2 are classified in .a particularly crucial graph in the characterization is the graph , built from the bipartite graph by connecting all the vertices of one of the parts of the bipartition to each other . has connectivity 2 and diameter 2 but , as witnessed by the configuration , where , and the root are independent .the following corollary to the characterization appears in .denote the connectivity of a graph by .[ 3conn ] if , and then is of class 0 .from this it follows that almost all graphs ( in the probabilistic sense ) are of class 0 , since almost every graph is 3-connected with diameter 2 .the following result , conjectured in , was proved in .this result was used to prove a number of other theorems , including theorems [ mindeg ] , [ girth ] and [ class ] .[ conn ] there is a function such that if is a graph with and then is of class 0 .moreover , .an upper bound for diameter 3 graphs was recently obtained in .[ diam3 ] if then , which is best possible .another guarantee for class 0 membership may reside in the following , as yet unexplored , question . 
[ greedy ] is it true that every greedy graph is of class 0 ?a nice family of graphs in relation to theorem [ conn ] is the following . for , the _ kneser graph _ , , is the graph with vertices \choose t} ] as .if it turns out that the threshold for class 0 is bigger than that for connectivity , then it would be a very interesting to investigate the pebbling numbers of graphs in for in that range .another important problem , especially in light of the previous conjecture , is the following .[ 2ppthresh ] find the threshold for the 2-pebbling property of the random graph .for this section we will fix notation as follows . the vertex set for any graph on vertices will be taken to be \} ] .that way , any configuration is independent of .( here we make the distinction that is the index of the graph in a sequence , whereas denotes its number of vertices . )let denote the sequence of complete graphs , the sequence of paths , and the sequence of -dimensional cubes .let \rar \n$ ] denote a configuration on vertices .let and for fixed consider the probability space of all configurations of size , we denote by the probability that is -solvable and let .we say that is a _ pebbling threshold _ for , and write , if whenever and whenever .the existence of such thresholds was recently established in .[ existthresh ] every graph sequence has nonempty .the first threshold result is found in .the result is merely an unlabeled version of the so - called `` birthday problem '' , in which one finds the probability that 2 of people share the same birthday , assuming days in a year .[ cliquethresh ] .the same threshold applies to the sequence of stars ( ) .it was discovered in that every graph sequence satisfies , where is meant to signify that for every .the authors also discovered that when is a sequence of graphs of bounded diameter , that , and that .surprisingly , the threshold for the sequence of paths has not been determined .the lower bound found in was improved in to for every , while the upper bound of found in was improved in to for every .finally the lower bound was tightened recently in to nearly match the upper bound .[ lower ] for any constant , we have . frustratingly , this still leaves room for a wide range of possible threshold functions .it is interesting that even within the family of trees , the pebbling thresholds can vary so dramatically , as in the case for paths and stars .diameter seems to be a critical parameter .it is quite natural to guess that families of graphs with higher pebbling numbers have a higher threshold , but this kind of monotonicity result remains unproven .[ monothresh ] if for all then .in fact , this conjecture remains unproven even in the case that and are sequences of trees . moreover , there is some reason to believe the conjecture may be false , since the pebbling number is a worst - case scenario , while the threshold is an average - case scenario .consider the following .for a positive integer and a graph denote by the probability that a randomly chosen configuration of size on solvable . then conjecture [ monothresh ]would follow from the statement that , if then for all we have . unfortunately , although seemingly intuitive , this implication is false . 
using the class 0 pebbling characterization theorem of , in found a family of pairs of graphs , one pair for each , for which the implication fails .however , the implication may yet hold when and are trees .[ trees ] let and both be trees on vertices so that .is it true that , for all ( or those `` near '' one of the thresholds ) , we have ? at the heart of such investigations into monotonicity is the following , most natural conjecture . [ spec ]let be any functions satisfying .then there is some graph sequence such that .this conjecture was proven in in the case that is replaced by .in fact , the family of fuses ( defined in section [ coverpebbopt ] ) covers this whole range .what behavior lives above remains unknown .it is interesting to consider a pebbling threshold version of graham s conjecture . given graph sequences and , define the sequence .suppose that , , and , where , , and .[ prodthresh ] is it true that , for , , and as defined above , we have ?in particular , one can define the sequence of graphs in the obvious way . in onefinds tight enough bounds on to show that the answer to this question is yes for and .another important instance is .boyle proved that .this was improved in , answering question [ prodthresh ] affirmatively for squares of complete graphs .[ squarecliques ] for we have .this result is interesting because , by squaring , the graphs become fairly sparse , and yet their structure maintains the low pebbling threshold .the proof of the result tied the behavior of pebbling in to the existence of large components in various models of random complete bipartite graphs . another interesting related sequence to consider is . when we have , and the best result to date is the following theorem of ( obtained independently in ) . [ cubes ] for the sequence of cubes we have for all .let and and denote by the graph sequence , where .most likely , fixed yields similar behavior to theorem [ cubes ]. [ grids ] for fixed we have .in contrast , the results of show that for fixed .thus it is reasonable to believe that there should be some relationship between the two functions and , both of which tend to infinity , for which the sequence has threshold on the order of .[ gridbalance ] find a function for which .in particular , how does compare to ?finally one might consider the behavior of graphs of high minimum degree .define to be the set of all connected graphs on vertices having minimum degree at least .let denote any sequence of graphs with each . in proven the following .[ dense ] for every function , .in particular , if in addition then .a very recent incarnation of graph pebbling is introduced in , in which the _ critical pebbling number _ of a graph is defined . a configuration of pebbles on is _ minimally solvable _ if it is solvable but the removal of any pebble leaves it unsolvable .while the optimal pebbling number measures the size of the smallest minimally solvable configuration on , the critical pebbling number measures the size of the largest minimally solvable configuration .another variation is developed in , combining ideas from sections [ pebnum ] and [ coverpebbopt ] .the _ support _ of a configuration is the set of vertices having . the _ domination cover pebbling number _ is the minimum number so that from every configuration of pebbles one can reach a configuration whose support is a dominating set of .their motivation stems from transporting devices from initial positions to eventual positions that allow them to monitor the entire graph . 
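the threshold discussion above concerns the probability that a configuration drawn uniformly at random from all configurations of a given size is solvable . for small graphs this probability can be estimated empirically , as in the sketch below , which samples uniform configurations via the stars - and - bars bijection and reuses the reachability search from the earlier sketch ; the graph , sizes and number of trials are arbitrary illustrative choices .

```python
import random

def random_configuration(n, t):
    """Uniform random configuration of t pebbles on n vertices, i.e. a uniform
    random multiset of size t, sampled via the stars-and-bars bijection."""
    bars = sorted(random.sample(range(n + t - 1), n - 1))
    counts, prev = [], -1
    for b in bars + [n + t - 1]:
        counts.append(b - prev - 1)
        prev = b
    return counts

def solvable_fraction(adj, t, trials=200):
    """Monte Carlo estimate of the probability that a uniform random
    configuration of size t is solvable, i.e. reaches every root
    (reuses reachable() from the earlier sketch)."""
    n = len(adj)
    hits = 0
    for _ in range(trials):
        config = random_configuration(n, t)
        if all(reachable(adj, config, r) for r in range(n)):
            hits += 1
    return hits / trials

# example: solvability probability as the configuration size grows on a short path
path5 = {i: [j for j in (i - 1, i + 1) if 0 <= j < 5] for i in range(5)}
for t in (2, 4, 8, 16, 24):
    print(t, solvable_fraction(path5, t))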
because of the superficial similarity of graph pebbling to other positional games on graphs , like `` cops - and - robbers '' and `` chip - firing '' for instance , and the high degree of applicability of many of these games to structural graph theory and theoretical computer science , one should nt neglect the possibility that graph pebbling will have similar impact .for example , one can think of the loss of a pebble during a pebbling step as a toll or as a loss of information , fuel or electrical charge .-pebbling is one generalization of this rate of loss ; another is simply to choose any fixed rate of loss . in any case , instead of restricting the initial configuration to integer values , let range among all nonnegative reals .a pebbling step removes weight from one vertex and places weight at an adjacent vertex , for some fixed .still the objective is to place weight 1 at any prescribed root so that there is enough money , fuel , information , or energy at that location in the network . of course ,all of the questions raised herein may be asked about this more general -_pebbling_. it is conceivable that chip - firing may even come into play as a useful model . for a given graph , one might be able to build an auxiliary graph , so that chip - firing results on can be brought to bear on .this opens up the theory to questions of the eigenvalues of the laplacian of , and so on .999 w. r. alford , a. granville and c. pomerance , there are infinitely many carmichael numbers , _ annals math .ser . 2 _ * 139 * , no . 3 ( 1994 ) , 703722 . n. alon , _ personal communication _ ( 2003 ). n. alon , s. friedland and g. kalai , regular subgraphs of almost regular graphs , _j. combin .theory ser .b _ * 37 * ( 1984 ) , 7991 .r. anderson , l. lovsz , p. shor , j. spencer , e. tardos and s. winograd , disks , balls , and walls : analysis of a combinatorial game , _ amer .monthly _ , * 96 * ( 1989 ) , 481493 . s. arnborg , d. g. corneil and a. proskurowski , complexity of finding embeddings in a -tree , _ siam j. algebraic disc. methods _ * 8 * ( 1987 ) , 277284 .a. bekmetjev , g. brightwell , a. czygrinow and g. hurlbert , thresholds for families of multisets , with an application to graph pebbling , _ discrete math ._ * 269 * ( 2003 ) , 2134 .a. bekmetjev and g. hurlbert , the pebbling threshold of square cliques , _ preprint _b. bollobs and a. thomason , threshold functions , _ combinatorica _ * 7 * ( 1987 ) , 3538. j. boyle , thresholds for random distributions on graph sequences with applications to pebbling , _ discrete math ._ * 259 * ( 2002 ) , 5969 .g. brightwell and e. scheinerman , fractional dimension of partial orders , _ order _ * 9 * ( 1992 ) , 139158 .b. bukh , maximum pebbling number of graphs of diameter three , _ preprint _d. bunde , e. chambers , d. cranston , k. milans and d. west , pebbling and optimally pebbling in graphs , _ preprint _ ( 2005 ) .y. caro , zero - sum problems a survey , _ discrete math . _* 152 * ( 1996 ) , 93113 . m. chan and a. godbole , improved pebbling bounds , _ preprint _s. chapman , on the davenport constant , the cross number , and their application in factorization theory , in : _ zero - dimensional commutative rings , lecture notes in pure appl ._ * 171 * , marcel dekker , new york , 1995 , 167190 .chen and k.w .lih , hamiltonian uniform subset graphs , _ j. combin .theory ser .b _ * 42 * ( 1987 ) , 257263 .f. r. k. chung , pebbling in hypercubes , _siam j. disc ._ * 2 * ( 1989 ) , 467472 .f. r. k. chung and r. 
ellis , a chip - firing game and dirichlet eigenvalues , _ discrete math ._ * 257 * ( 2002 ) , 341355 .t. clarke , pebbling on graphs , _master s thesis _ ,arizona st . univ .t. clarke , r. hochberg and g. hurlbert , pebbling in diameter two graphs and products of paths , _j. graph th . _* 25 * ( 1997 ) , 119128 .b. crull , t. cundif , p. feltman , g. hurlbert , l. pudwell , z. szaniszlo and z. tuza , the cover pebbling number of graphs , _ discrete math . _ * 296 * ( 2005 ) , 1523 .a. czygrinow , n. eaton , g. hurlbert and p. m. kayll , on pebbling threshold functions for graph sequences , _ discrete math ._ * 247 * ( 2002 ) , 93105 .a. czygrinow and g. hurlbert , pebbling in dense graphs , _ austral .j. combin ._ , * 29 * ( 2003 ) , 201208 .a. czygrinow and g. hurlbert , girth , pebbling and grid thresholds ,_ siam j. discrete math ._ , to appear .a. czygrinow and g. hurlbert , on the pebbling threshold of paths and the pebbling threshold spectrum , _ preprint _a. czygrinow , g. hurlbert , h. kierstead and w. t. trotter , a note on graph pebbling , _ graphs and combin . _* 18 * ( 2002 ) , 219225 .a. czygrinow and m. wagner , on the pebbling threshold of the hypercube , _ preprint _t. denley , on a result of lemke and kleitman , _ combin ., prob . and comput ._ * 6 * ( 1997 ) , 3943 . s. elledge and g. hurlbert , an application of graph pebbling to zero - sum sequences in abelian groups , _ integers : elec .j. number theory _* 5(1 ) * ( 2005 ) , # a17 .p. van emde boas and d. kruyswijk , a combinatorial problem on finite abelian groups iii , _report zw-1969 - 007 _ , math .centre , amsterdam .p. erds , on pseudoprimes and carmichael numbers , _ publ .debrecen _ * 4 * ( 1956 ) , 201206 .p. erds , applications of probabilistic methods to graph theory , in _ a seminar on graph theory _ ,holt , rinehart and winston , new york ( 1967 ) , 6061 .p. erds , a. ginzburg and a. ziv , a theorem in additive number theory , _ bull .council israel _ * 10f * ( 1961 ) , 4143 .p. erds and a. rnyi , on random graphs i , _ publ .debrecen _ * 6 * ( 1959 ) , 290297 .r. feng and j. y. kim , graham s pebbling conjecture on product of complete bipartite graphs , _ sci .china ser .a _ * 44 * ( 2001 ) , 817822 .r. feng and j. y. kim , pebbling numbers of some graphs , _ sci .china ser . a _ * 45 * ( 2002 ) , 470478 .j. foster and h. snevily , the 2-pebbling property and a conjecture of graham s , _ graphs and combin . _ * 16 * ( 2000 ) , 231344 .t. friedman and c. wyels , optimal pebbling of paths and cycles , _ preprint _h. l. fu and c. l. shiue , the optimal pebbling number of the complete -ary tree , _ discrete math . _* 222 * ( 2000 ) , 89100 .h. l. fu and c. l. shiue , the optimal pebbling number of the caterpillar , _ elec .notes in disc ._ * 11 * ( 2002 ) .w. gao , on davenport s constant of finite abelian groups with rank three , _ discrete math . _* 222 * ( 2000 ) , 111124 .w. gao and a. geroldinger , zero - sum problems and coverings by proper cosets , _ euro .j. combin . _* 24 * ( 2003 ) , 531549 . w. gao and x. jin , weighted sums in finite cyclic groups , _ discrete math . _* 283 * ( 2004 ) , 243247. w. gao and r. thangadurai , on the structure of sequences with forbidden zero - sum subsequences , _ colloq ._ * 98 * ( 2003 ) , 213222 .j. gardner , a. godbole , a. teguia , a. vuong , n. watson and c. yerger , domination cover pebbling : graph families , _ preprint _a. geroldinger , on a conjecture of kleitman and lemke , _j. number theory _ * 44 * ( 1993 ) , 6065 .a. geroldinger and r. 
schneider , on davenport s constant , _j. combin .theory ser .a _ * 61 * ( 1992 ) , 147152 . c. gibbons , j. laison and e. paul , critical pebbling numbers of graphs ,_ preprint _a. godbole , m. jablonski , j. salzman and a. wierman , an improved upper bound for the pebbling threshold of the -path , _ discrete math ._ * 275 * ( 2004 ) , 367373 .g. gunda and a. higgins , pebbling on directed graphs , _ elec .day _ * 1 * ( 2003 ) , 113 .d. herscovici , graham s pebbling conjecture on products of cycles , _ j. graph theory _ * 42 * ( 2003 ) , 141154 .d. herscovici and a. higgins , the pebbling number of , _ discrete math ._ * 187 * ( 1998 ) , 123135 .g. hurlbert , two pebbling theorems , _ congr .* 135 * ( 1998 ) , 5563 .g. hurlbert , a survey of graph pebbling , _ congr ._ * 139 * ( 1999 ) , 4164 .g. hurlbert , the graph pebbling page , g. hurlbert and h. kierstead , on the complexity of graph pebbling , _ preprint _g. hurlbert and b. munyan , cover pebbling hypercubes , _ bull ._ , to appear .p. lemke and d. j. kleitman , an addition theorem on the integers modulo , _ j. number th ._ * 31 * ( 1989 ) , 335345 .a. lourdusamy and s. somasundaram , pebbling using linear programming , _ j. discrete math .sci . cryptography _ * 4 * ( 2001 ) , 115. l. lovsz , _ combinatorial problems and exercises _ , north holland pub . ,amsterdam , new york , oxford ( 1979 ) .k. milans and b. clark , the complexity of graph pebbling , _ preprint _ ( 2005 ) .d. moews , pebbling graphs , _ j. combin( b ) _ * 55 * ( 1992 ) , 244252 .d. moews , _ personal communication _ ( 1997 ) .d. moews , optimally pebbling hypercubes and powers ._ discrete math ._ * 190 * ( 1998 ) , 271276 .m. nathanson , _ additive number theory : inverse problems and the geometry of sumsets _( graduate texts in mathematics ; 165 ) , springer - verlag , new york , 1996 , 4851 . r. nowakowski and p. winkler ,vertex - to - vertex pursuit in a graph , _ discrete math . _ * 43 * ( 1983 ) , 235239 . j. olson , a combinatorial problem on finite abelian groups i , _ j. number theory _ * 1 * ( 1969 ) , 810. l. pachter , h. s. snevily , and b. voxman , on pebbling graphs , _ congr ._ * 107 * ( 1995 ) , 6580 .n. robertson and p. d. seymour , graph minors _ iii _ : planar tree - width , _ j. combin .theory ser .b _ * 36 * ( 1984 ) , 4964 .e. scheinerman and d. ullman , _ fractional graph theory : a rational approach to the theory of graphs _ , wiley , new york ( 1997 ) .p. d. seymour and r. thomas , graph searching and a min - max theorem for tree - width ,_ j. combin .theory ser .b _ * 58 * ( 1993 ) , 2233 .j. sjostrand , the cover pebbling theorem , _ preprint _ ( 2004 ) . z. sun , unification of zero - sum problems , subset sums and covers of , _ elec .soc . _ * 9 * ( 2003 ) , 5160 . m. tomova and c. wyels , cover pebbling cycles and certain graph products , _ preprint _ ( 2005 ) .a. vuong and i. wyckoff , conditions for weighted cover pebbling of graphs , _ preprint _ ( 2004 ) .m. wagner and m. wokasch , graph pebbling numbers of line graphs , _ preprint _s. wang , pebbling and graham s conjecture , _ discrete math . _* 226 * ( 2001 ) , 431438 .n. watson , the complexity of pebbling and cover pebbling , _ preprint _n. watson and c. yerger , cover pebbling numbers and bounds for certain families of graphs , _ preprint _ ( 2005 ) .d. b. west , _ introduction to graph theory _ , prentice - hall , upper saddle river , nj ( 1996 ) .
the subject of graph pebbling has seen dramatic growth recently , both in the number of publications and in the breadth of variations and applications . here we update the reader on the many developments that have occurred since the original _ survey of graph pebbling _ in 1999 .
to illustrate what is now known as parrondo 's paradox , consider the following games : * game a : the fortune of the player after independent games is * game b : let be the 3-periodic function on such that and . the fortune of the player after games is given by it is well - known that game a is fair if and only if . in fact , if , the markov chain is transient and if , while if . when , is recurrent and for game b , the process can be seen as a particular case of a random walk in a random environment ; in fact , the space of environments has dimension with and . for more details on periodic and almost periodic environments , see , e.g. , ( examples 1 - 2 of ) . the behavior of random walks in a random environment was studied in under the assumption that the environments are i.i.d . , which is not necessarily the case here . following instead ( theorem 2.1 of ) , one concludes that the process is recurrent if and only if , where as a result , . otherwise , when , is transient and if , while if . for example , if and , then . if and , then . suppose now that and . then , according to the previous observations , if a player always plays game a or game b , her fortune will tend to with probability one . however , if she plays game a twice , then game b twice and so on ( game c ) , or if she chooses the game at random with probability ( game d ) , her fortune will tend to . this is parrondo 's paradox , and it is illustrated in figure [ fig : parrondo ] . now consider game c , where the player alternates between game a and game b , i.e. , she plays game a once , then game b once , and so on . what happens in this case ? the answer will be given at the end of example [ ex : periodic2 ] . games a and b are particular cases of random walks in a random environment , while games c and d are examples of regime - switching markov chains in a random environment . the aim of this paper is to study their asymptotic behavior . one of the first rigorous works in this field is , which studied some particular cases of random environment , namely the so - called periodic case , where each random walk is like the one in game b , while the player chooses at random between two games . the author also considers some `` deterministic '' mixtures , namely the cases studied in . for other kinds of random walks in random environments that are not covered by our setting , see , e.g. , . in what follows , the player chooses the game to play according to a finite markov chain , and each game is a random walk in a random environment , extending the work of and . more specifically , the model is described in the next section , together with characterizations of its asymptotic behavior . in section [ sec : rsrwre ] , the model , which is basically a regime - switching random walk in a random environment , is described , and one of the main results ( theorem [ thm : main ] ) is stated , namely that these models behave like simple random walks , in the sense that either they are transient , meaning that they converge to either of with probability , for almost every environment , or they are recurrent , meaning that the limsup and liminf converge respectively to with probability , for almost every environment .
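before turning to the general model , a short simulation makes the paradox described above tangible . the sketch below plays game a ( a fixed win probability ) , game b ( a nearest - neighbour walk whose win probability depends on the current position through a 3-periodic environment ) , game c ( two plays of a followed by two plays of b ) and game d ( the game chosen at random at each step ) . the numerical parameters are the classical illustrative parrondo values , chosen only so that a and b lose individually while the two combinations gain ; they are not the values used in this paper .

```python
import random

def play(chooser, n_steps=200000):
    """Simulate the player's fortune when chooser(step) selects which game to
    play.  Game A wins with a fixed probability; game B is a nearest-neighbour
    walk whose win probability depends on the current position through a
    3-periodic environment.  The parameter values below are the classical
    illustrative Parrondo choices -- not the values used in this paper."""
    eps = 0.005
    p_a = 0.5 - eps                              # game A: slightly losing on its own
    p_b = [0.10 - eps, 0.75 - eps, 0.75 - eps]   # game B: 3-periodic environment
    x = 0
    for step in range(n_steps):
        p = p_a if chooser(step) == "A" else p_b[x % 3]
        x += 1 if random.random() < p else -1
    return x

random.seed(0)
print("always A         :", play(lambda s: "A"))                              # drifts down
print("always B         :", play(lambda s: "B"))                              # drifts down
print("game C (AABB...) :", play(lambda s: "A" if s % 4 < 2 else "B"))        # drifts up
print("game D (random)  :", play(lambda s: "A" if random.random() < 0.5 else "B"))  # drifts up
```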
in section[ sec : hitting ] , the three possible cases are characterized in terms of the rank of the transition matrix and the dimensions of some random subspaces occurring in the famous oseledec s theorem .these results generalize the best known particular cases : ( i ) when the games are chosen independently , the transition matrix has rank , and the results can be recovered as well from standard arguments applied to random walks in random environments ; ( ii ) the periodic choices studied by many authors , starting with , where the transition matrices have full rank . finally , the proofs of the results can be found in a series of appendices .they are inspired by the results of who studied random walks in random environments with bounded increments but no regime switching .the results of were later refined by .we first describe the model and then study its asymptotic behavior .first , let be a complete probability space with a measure preserving transformation , assumed to be -measurable and ergodic , i.e. , the -invariant sigma - field is trivial .next , for any , and any , are -valued variables , where , . hence , the processes are stationary and ergodic . next , for a given , let be the nearest neighbor random walk in a random environment defined by the process , i.e. , these random walks will be the fortunes of the player as she chooses each game .her decision process is based on the irreducible markov chain on , with transition matrix .for example , in game c , one can choose , , , where is the ( deterministic ) process determined by game a , is the stationary ergodic process determined by game b , and . for gamed , and . further note that for game c , then , is the ( deterministic ) process determined by game a , and is the stationary ergodic process determined by game b. finally , set , , i.e. , if , then . under these assumptions ,given , is an homogeneous markov chain on with transition matrix where and .note that for a given environment , called a random chain in a random environment , the sequence playing the role of the `` environment '' .[ rem : bolthausen ] note that our setting is a particular case of a random walk in in a random environment on a strip introduced in .however , their results can not be applied in general here since they assumed that the resulting markov chain has only one communication class ( condition c ) , i.e. , the markov chain is almost surely irreducible ( * ? ? ?* remark 2 ) .this is generally not the case here .for example , in games c and c , there are two closed classes : and . for an explicit example where their main result does not hold , see appendix [ app : bolthausen ] .one is interested in the asymptotic behavior of the process .more precisely , one would like to find conditions under which the so - called parrondo s paradox holds , i.e. , for any , , while here , means that for environment , the process starts from at time ; similarly , means that for environment , the process starts from at at time . for ,define the ( stationary ergodic ) process by , .then the asymptotic behavior of is completely determined by the expectation of , as proven in ( * ? ? ?* theorem 2.1 ) .[ thm : alili ] let be given and suppose that is well defined , with values in ] , , . using , one can write since for every , it follows that the vectors are linearly independent , and they belong to . as a result , .also , , so \in \bar{v}_{0} ] , with .set , , .let the markov chain starting at associated with but absorbed on , and set . 
since , .then is a bounded martingale with .because of its random walk nature , either is absorbed or . as a result , by the martingale convergence theorem, it follows that , so . since the latter is true for any and any , one may conclude that , contradicting the assumption .thus , one may conclude that the last components of are linearly independent .+ ( b ) based on ( a ) , there exists such that its last components are .then , for , define , so ] , .then , using , one can write , . note also that .one will prove that for all .suppose this is not true .then there exists so that . as before , let the markov chain starting at associated with but absorbed on , and set .then is a bounded martingale with , and since on . as a result , .finally , is also a bounded martingale and on . therefore . finally , showing that .hence for any .it then follows that , and so , a.s ., for any . to complete the proof of ( ii ) , suppose now that . then . to see this , note that for every , since and holds .as a result , . since , it follows that .the proofs of ( 3 ) and ( 4 ) are similar .in fact , setting ] , where is the vector composed of the first components of .next , taking linear combinations of the functions , , one can define , for any and any , where is the first components of ] , , , one can write since the vectors are linearly independent for every , it follows that the vectors are linearly independent as well , and they belong to . as a result , .also , , so \in \check{v}_{0}$ ] .then one proceeds as in the proof of theorem [ thm : oseledecfull ] , with a few minor changes .set , and . using notations , for a given and a given stochastic matrix , for , define , where .it is not easy to compute for a small , but if the probabilities are periodic with period for example , then , , , , and . in (* theorem 1 ) , the authors claim that the limit exists and is independent of .one crucial hypothesis for the proof is the existence of only one communication class ( a.e . ) .as noted in remark [ rem : bolthausen ] , this is usually not the case here , especially for games c and c. we show next that ( * ? ? ?* theorem 1 ) does not hold for game c , because either the limit does not exist , or it depends on the initial value .in fact , starting from , the limit does not exist .now , and .next , if one starts from , then .durrett , r. , kesten , h. , and lawler , g. ( 1991 ) . making money from fair games . in _random walks , brownian motion , and interacting particle systems _ , volume 28 of _ progr ._ , pages 255267 .birkhuser boston , boston , ma .pyke , r. ( 2003 ) . on random walks and diffusions related to parrondo s games . in _ mathematical statistics and applications : festschrift for constance van eeden _ , volume 42 of _ ims lecture notes monogr ._ , pages 185216 .inst . math ., beachwood , oh .
combining losing games into a winning game is a century old dream . an illustration of what can be done is the so - called parrondo s paradox in the physics literature . this `` paradox '' is extended to regime switching random walks in random environments . the paradoxical behavior of the resulting random walk is explained by the effect of the random environment . full characterization of the asymptotic behavior is achieved in terms of the dimensions of some random subspaces occurring in the famous oseledec s theorem . basically , these models have the same asymptotic behavior as simple random walks , in terms of transience and recurrence .
inspired by the success of iterative decoding of low - density parity - check ( ldpc ) codes , originally introduced by gallager and later rediscovered in the mid 1990 s by mackay and neal , on a wide variety of communication channels , the idea of iterative , soft - decision decoding has recently been applied to classical algebraically constructed codes in order to achieve low - complexity belief propagation decoding .also , the classical idea of using the automorphism group of the code , , to permute the code , , during decoding ( known as _ permutation decoding _ ( pd ) ) has been successfully modified to enhance the sum - product algorithm ( spa ) in .we will denote this algorithm by spa - pd .furthermore , good results have been achieved by running such algorithms on several structurally distinct representations of .both reed - solomon and bose - chaudhuri - hocquenghem ( bch ) codes have been considered in this context .certain algebraically constructed codes are known to exhibit large minimum distance and a non - trivial .however , additional properties come into play in modern , graph - based coding theory , for instance , sparsity , girth , and trapping sets . structural weaknesses of graphical codes are inherent to the particular parity - check matrix , , used to implement in the decoder .this matrix is a non - unique -dimensional basis for the null space of , which , in turn , is a -dimensional subspace of .although any basis ( for the dual code , ) is a parity - check matrix for , their performance in decoders is not uniform . is said to be in standard form if the matrix has weight-1 columns .the weight of is the number of non - zero entries , and the minimum weight is lower - bounded by , where denotes the minimum distance of .it is well - known that can be mapped into a bipartite ( tanner ) graph , , which has an edge connecting nodes and _ iff _ . here , , refers to the bit nodes ( columns of ) , and , refers to the check nodes ( rows of ) .the local neighborhood of a node , , is the set of nodes adjacent to , and is denoted by .the terms standard form and weight extend trivially to . in the following , we use bold face notation for vectors , and the transpose of written .this paper is a continuation of our previous work on edge local complementation ( elc ) and iterative decoding , in which selective use of elc ( with preprocessing and memory overhead ) equals spa - pd . 
in this work ,we use elc in a truly random , online fashion , thus simplifying both the description and application of the proposed decoder .the key difference from our previous work is that we do not take measures to preserve graph isomorphism , and explore the benefits of going outside the automorphism group of the code .this means that we alleviate the preprocessing of suitable elc locations ( edges ) , as well as the memory overhead of storing and sampling from such a set during decoding .our proposed decoding algorithm can be thought of as a combination of spa - pd and multiple bases belief propagation .we also discuss the modification of the powerful technique of damping to a graph - local perspective .the operation of elc , also known as pivot , is a local operation on a simple graph ( undirected with no loops ) , , which has been shown to be useful both for code equivalence and classification , and for decoding purposes .it has recently been identified as a useful local unitary primitive to be applied to _graph states _ a proposed paradigm for quantum computation .[ elc1 ] shows , the local subgraph of a bipartite graph induced by nodes , and their disjoint neighborhoods which we denote by and , respectively .elc on a bipartite graph is described as the complementation of edges between these two sets ; and , check whether edge , in which case it is deleted , otherwise it is created .finally , the edges adjacent to and are swapped see fig .elc on extends easily to elc on when is in standard form .given a bipartite graph with bipartition , we then have a one - to - one mapping to a tanner graph , with check nodes from the set and bit nodes from .[ pivot ] shows an example , where the bipartition is fixed according to the sets and . in fig .[ gb ] , the left and right nodes correspond to and , respectively , for the simple graph . may be obtained by replacing grey nodes by a check node singly connected to a bit node , as illustrated in fig .[ ga ] . figs .[ gb ] and [ ge ] show an example of elc on the edge . although the bipartition changes ( edges adjacent to 0 and 5 are swapped ) , figs .[ ga ] and [ gd ] show how the map to tanner graphs , in fact , preserves the code . by complementing the edges of a local neighborhood of , elc has the effect of row additions on .the complexity of elc on is .the set of vertex - labeled graphs generated by elc on ( or , equivalently , ) is here called the _ elc - orbit _ of .each information set for corresponds to a unique graph in the elc - orbit . note that this is a code property , which , as such , is independent of the initial parity - check matrix , .the set of structurally distinct ( unlabeled ) graphs generated by elc is here called the _ s - orbit _ of , and is a subset of the elc - orbit .graphs are structurally distinct ( i.e. , non - isomorphic ) if the corresponding parity - check matrices are not row or column permutations of each other .each structure in the s - orbit has a set of isomorphic graphs , comprising an _ iso - orbit _ . 
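The ELC operation itself is easy to state in code. The following sketch (my own, assuming the graph is stored as an adjacency dictionary mapping each node to the set of its neighbours) implements the bipartite version described above: complement the edges between the two neighbourhoods, then swap the remaining edges adjacent to u and v, so that u and v exchange sides of the bipartition while the edge (u, v) itself is kept.

```python
def elc(adj, u, v):
    """Edge local complementation (pivot) on the edge (u, v) of a bipartite graph.

    adj : dict mapping each vertex to the set of its neighbours (modified in place).
    Complements all edges between N(u)\{v} and N(v)\{u}, then swaps the edges
    adjacent to u and v; afterwards u and v have exchanged sides of the bipartition.
    """
    assert v in adj[u] and u in adj[v], "elc is only defined on an existing edge"
    nu = adj[u] - {v}
    nv = adj[v] - {u}
    # complement the edges between the two (disjoint) neighbourhoods
    for a in nu:
        for b in nv:
            if b in adj[a]:
                adj[a].discard(b); adj[b].discard(a)
            else:
                adj[a].add(b); adj[b].add(a)
    # swap the edges adjacent to u and v, keeping the edge (u, v)
    adj[u], adj[v] = (adj[v] - {u}) | {v}, (adj[u] - {v}) | {u}
    for a in nu:                       # a was adjacent to u, now to v
        adj[a].discard(u); adj[a].add(v)
    for b in nv:                       # b was adjacent to v, now to u
        adj[b].discard(v); adj[b].add(u)

# toy Tanner graph in standard form: checks c0, c1 and bits b0, b1, b2
adj = {'c0': {'b0', 'b1'}, 'c1': {'b1', 'b2'},
       'b0': {'c0'}, 'b1': {'c0', 'c1'}, 'b2': {'c1'}}
elc(adj, 'c0', 'b1')   # 'c0' and 'b1' exchange roles; the underlying code is preserved
```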
in the following, we will refer to elc directly on , keeping fig .[ pivot ] in mind .the spa is an inherently local algorithm on , where the global problem of decoding is partitioned into a system of simpler subproblems .each node and its adjacent edges can be considered as a small constituent code , and essentially performs maximum - likelihood decoding ( mld ) based on local information .the key to a successful decoder lies in this partitioning how these constituent codes are interconnected .the summed information contained in a bit node , , is the _ a posteriori _ probability ( app ) , , at codeword position .the vector constitutes a tentative decoding of the received channel vector , .the decoder input is the log - likelihood ratio ( llr ) vector , where is the channel noise standard deviation on an additive white gaussian noise ( awgn ) channel .subtracting the input from the app leaves the extrinsic information , , which is produced by the decoder .the message on the edge from node to , in the direction of , , is computed according to the spa rule on node .the spa computation of all check nodes , followed by all bit nodes , is referred to as one _ flooding _ iteration .classical codes , for which strong code properties are known , are typically not very suitable for iterative decoding mainly due to the high weight of their parity - check matrices , which gives many short cycles in the corresponding tanner graphs .a few recent proposals in the literature have attempted to enhance iterative decoding by dynamically modifying during decoding , so as to achieve diversity and avoid fixed points ( local optima ) in the spa convergence process .efforts to improve decoding may , roughly , be divided into two categories .the first approach is to employ several structurally distinct matrices , and use these in a parallel , or sequential , fashion . these matrices may be either preprocessed , or found dynamically by changing the graph during decoding .however , this incurs an overhead either in terms of memory ( keeping a list of matrices , as well as state data ) , or complexity ( adapting the matrix , e.g. , by gaussian elimination ) . the other approach is to choose a code with a non - trivial , such that diversity may be achieved by permuting the code coordinates . //input : . //output : . . and //. .do flooding iterations , .take the hard decision of into , * stop * if . . draw random permutation . and . 
.an example is spa - pd , listed in algorithm [ rrd ] , where is represented by a small set of generators , and uniformly sampled using an algorithm due to celler _these permutations tend to involve all , or most , of the code coordinates , making it a global operation .note that line 7 in algorithm [ rrd ] is to compensate for the fact that permutations are applied to in line 12 , rather than to the columns of , after which the messages on the edges no longer ` point to ' their intended recipients .this is yet another global stage .the extrinsic information is damped by a coefficient , , in line 10 before being used to re - initialize the decoder .each time is incremented , the decoder re - starts from the channel vector , .our proposed local algorithm is a two - stage iterative decoder , interleaving the spa with random elc operations .we call this spa - elc , and say that it realizes a local diversity decoding of the received codeword .our algorithm is listed in algorithm [ spaelc ] .both spa - pd and spa - elc perform a maximum of iterations .spa update rules ensure that extrinsic information remains summed in bit nodes , such that an edge may be removed from without loss of information .new edges , , should be initialized according to line 13 in algorithm [ spaelc ] . although neutral ( i.e. , llr 0 ) messages will always be consistent with the convergence process , our experiments clearly indicate that this has the effect of ` diluting ' the information , resulting in an increased decoding time and worse error rate performance . //input : .// output : . .do flooding iterations , .take the hard decision of into , * stop * if .select random edge . , + , . .the simple spa - elc decoder requires no preprocessing or any complex heuristic or rule to decide when or where to apply elc .as elc generates the s - orbit of , as well as the iso - orbit of each structure , diversity of structure can be achieved even for random codes , for which is likely to be 1 while the size of the s - orbit is generally very large . however , going outside the iso - orbit means that we change the properties of , most importantly in terms of density and number of short cycles . ideally , the spa - elc decoder operates on a set of structurally distinct parity - check matrices , which are all of minimum weight . with the exception of codes with very strong structure , such as the extended hamming code ,the elc - orbit of a code will contain structures of weight greater than the minimum .spa - elc should take measures against the negative impact of increased weight . in this paper, we adapt the technique of damping to our graph - local perspective .damping with the standard spa , where is fixed , does not work , so we only want to damp the parts of the graph which change .as opposed to spa - pd , only a subgraph of is affected by elc , so we restrict damping to new edges in line 13 . note that spa - elc simplifies to a version without damping , denoted by spa - elc ,when , , and .this is , simply , flooding iterations interspersed with random elc operations , where new edges are initialized with the adjacent app ( line 13 ) .currently , the spa stopping criterion ( i.e. , the parameters used to flag when decoding should stop ) is still implemented globally . however, a reasonable local solution would be to remove the syndrome check ( ) from the stopping criterion , and simply stop after spa - elc iterations , where can be empirically determined . however , this has obvious implications for complexity and latency . 
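The listing above loses its symbols in this text, so the following Python sketch is only meant to convey the structure of the SPA-ELC loop; it is not the authors' implementation. ELC is realised here as a GF(2) pivot of H on a randomly chosen edge, which is the row-addition view of the graph operation mentioned earlier, and messages on edges created by the pivot are seeded with a damped copy of the adjacent APP (the precise initialisation of line 13 is an assumption consistent with the description). Parameter values are placeholders; for BPSK on the AWGN channel the input would be the usual LLRs 2y/sigma^2.

```python
import numpy as np

def spa_elc_decode(H, llr_in, max_iters=40, elc_period=2, alpha=0.3, rng=None):
    """Flooding SPA interleaved with random ELC, in the spirit of algorithm 2.

    H          : binary parity-check matrix (m x n array); a copy is modified
    llr_in     : channel LLRs (length n), positive values favouring bit 0
    elc_period : flooding iterations between two random ELC operations
    alpha      : damping coefficient used to seed messages on edges created by ELC
    """
    rng = np.random.default_rng() if rng is None else rng
    H = (np.array(H) % 2).astype(int)
    llr_in = np.asarray(llr_in, dtype=float)
    m, n = H.shape
    msg = np.zeros((m, n))                       # check-to-bit messages
    hard = np.zeros(n, dtype=int)
    for it in range(max_iters):
        # bit-node update: extrinsic bit-to-check messages
        app = llr_in + msg.sum(axis=0)
        b2c = np.where(H == 1, app[None, :] - msg, 0.0)
        # check-node update (tanh rule), leave-one-out for each edge
        new_msg = np.zeros_like(msg)
        for i in range(m):
            idx = np.flatnonzero(H[i])
            t = np.tanh(np.clip(b2c[i, idx] / 2.0, -19, 19))
            for k, j in enumerate(idx):
                p = np.prod(np.delete(t, k))
                new_msg[i, j] = 2.0 * np.arctanh(np.clip(p, -0.999999, 0.999999))
        msg = new_msg
        app = llr_in + msg.sum(axis=0)
        hard = (app < 0).astype(int)
        if not np.any(H @ hard % 2):             # syndrome check
            return hard, it + 1
        # random ELC, realised as a GF(2) pivot of H on a random edge (i, j)
        if (it + 1) % elc_period == 0:
            edges = np.argwhere(H == 1)
            i, j = edges[rng.integers(len(edges))]
            old = H.copy()
            rows = (H[:, j] == 1) & (np.arange(m) != i)
            H[rows] = (H[rows] + H[i]) % 2
            msg[H == 0] = 0.0                    # removed edges: their information stays in the APP
            new_edges = (H == 1) & (old == 0)    # created edges: seed with a damped copy of the APP
            msg[new_edges] = alpha * np.broadcast_to(app, (m, n))[new_edges]
    return hard, max_iters
```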
in some scenariosa stopping criterion can be dispensed with anyway for instance when using the decoder as some form of distributed process controller , or for a pipelined implementation in which the iterations are rolled out .we have compared spa - elc against standard spa , and spa - pd . extended quadratic residue ( eqr )codes were chosen for the comparison , mainly due to the fact that for some of these codes , can be generated by generators .in fact , our experiments have shown that eqr codes have tanner graphs well - suited to spa - elc , at least for short blocklengths .the codes considered have parameters ] ( eqr48 ) , and $ ] ( eqr104 ) .parity - check matrices for the codes were preprocessed by heuristics to minimize the weight and the number of -cycles .the results are listed in table [ codes ] , where columns marked ` w ' and ` c ' show the weight and the number of -cycles , respectively .columns marked ` initial ' show the weight and the number of -cycles of the initial tanner graph constructions . `reduced ' and ` reduced ip ' refer to optimized tanner graphs , where the latter is restricted to tanner graphs in standard form .entries marked by an asterisk correspond to minimum weight parity - check matrices ..optimization of codes used in simulations [ cols= " < , > , > , > , > , > , > " , ] [ codes ] in figs. [ beery_golay]-[qr104 ] , we show the frame error rate ( fer ) performance and the average number of spa messages of spa , spa - pd , and spa - elc for the extended golay code , the eqr48 code , and the eqr104 code , respectively , on the awgn channel versus the signal - to - noise ratio , . the specific parameters used are indicated in the figure legends .for the extended golay code and the eqr48 code , we set a maximum at iterations , which we increased to to accommodate the larger eqr104 code .for spa - elc we have also included results without damping . since spa - elc changes the weight of , we can not compare complexity by simply counting iterations .since the complexity of one elc operation is much smaller than the complexity of a spa iteration , the total number of spa messages may serve as a common measure for the complexity of the decoders .we have no initial syndrome check , so the number of iterations approaches at high .in the same way , the complexity approaches the average weight of the matrices encountered during decoding .each fer point was simulated until at least frame errors were observed . from the figures , we observe that the spa - elc decoder outperforms standard spa decoding , both in terms of fer and decoding complexity .the extended golay code is a perfect example for demonstrating the benefits of spa - elc .the s - orbit of this code contains only two structures , where one is of minimum weight ( weight ) and the other only slightly more dense ( weight ) , while the iso - orbit of the code is very large .thus , we can extend spa - pd with multiple tanner graphs ( two structures ) while keeping the density low .not surprisingly , spa - elc achieves the fer performance of spa - pd , albeit with some complexity penalty .note that the simple spa - elc decoder , without damping , approaches closely the complexity of spa - pd at the cost of a slight loss in fer .for the larger codes , the sizes of the s - orbits are very large , and many structures are less suited for spa - pd. still , the same tradeoff between fer performance and complexity holds , based on whether or not we use damping . 
for the eqr48 code , we have observed a rich subset of the s - orbit containing minimum weight structures ( weight ) . the optimum value of ( see line 10 in algorithm [ spaelc ] ) was determined empirically .we have described a local diversity decoder , based on the spa and the elc operation .the spa - elc algorithm outperforms the standard spa both in terms of error rate performance and complexity , and compares well against spa - pd , despite the fact that spa - pd uses global operations .ongoing efforts are devoted to further improvements , and include ; selective application of elc , rather than random ; devise techniques such that diversity may be restricted to sparse structures in the s - orbit ; identify a code construction suited to spa - elc , for which the s - orbit contains several desirable structures even for large blocklengths .the authors wish to thank alban goupil for providing the mld curve for the eqr48 code .d. j. c. mackay and r. m. neal , `` good codes based on very sparse matrices , '' in _ proc .5th i m a conf .cryptography and coding ( lecture notes in computer science ) _ , vol . 1025 , royal agricultural college , cirencester , uk , dec .1995 , pp . 100111 .j. jiang and k. r. narayanan , `` iterative soft - input soft - output decoding of reed - solomon codes by adapting the parity - check matrix , '' _ ieee trans .inform . theory _52 , no . 8 , pp . 37463756 , aug .2006 .t. hehn , j. b. huber , s. laendner , and o. milenkovic , `` multiple - bases belief - propagation for decoding of short block codes , '' in _ proc .inform . theory _, nice , france , jun .2007 , pp . 311315 .j. g. knudsen , c. riera , m. g. parker , and e. rosnes , `` adaptive soft - decision decoding using edge local complementation , '' in _ proc .castle meeting on coding theory and appl .( 2icmcta ) ( lecture notes in computer science ) _ , vol .5228 , castillo de la mota , medina del campo , spain , sep .2008 , pp . 8294 .m. hein , w. dr , j. eisert , r. raussendorf , m. van den nest , and h .- j .briegel , `` entanglement in graph states and its applications , '' in _ proc .school of physics `` enrico fermi '' on `` quantum computers , algorithms and chaos '' _ , varenna , italy , jul .[ online ] .available : http://arxiv.org/abs/quant-ph/0602096
in this paper , we propose to enhance the performance of the sum - product algorithm ( spa ) by interleaving spa iterations with a random local graph update rule . this rule is known as edge local complementation ( elc ) , and has the effect of modifying the tanner graph while preserving the code . we have previously shown how the elc operation can be used to implement an iterative permutation group decoder ( spa - pd)one of the most successful iterative soft - decision decoding strategies at small blocklengths . in this work , we exploit the fact that elc can also give structurally distinct parity - check matrices for the same code . our aim is to describe a simple iterative decoder , running spa - pd on distinct structures , based entirely on random usage of the elc operation . this is called spa - elc , and we focus on small blocklength codes with strong algebraic structure . in particular , we look at the extended golay code and two extended quadratic residue codes . both error rate performance and average decoding complexity , measured by the average total number of messages required in the decoding , significantly outperform those of the standard spa , and compares well with spa - pd . however , in contrast to spa - pd , which requires a global action on the tanner graph , we obtain a performance improvement via local action alone . such localized algorithms are of mathematical interest in their own right , but are also suited to parallel / distributed realizations .
[ ls def ] we denote by the symmetric difference between two sets , , and by the symmetric difference distance between these sets .( all the sets considered in this chapter are finite . )the symbols and stand for the disjoint union and the proper inclusion of sets respectively .a _ ( knowledge ) structure _ is a pair where is a non empty set and is a family of subsets of containing and .the latter is called the _ domain _ of .the elements of are called _ items _ and the sets in are _ ( knowledge ) states_. since , the set is implicitly defined by and we can without ambiguity call a knowledge structure .a knowledge structure is _ well graded _ if for any two distinct states and with there exists a sequence such that and for .we call such a sequence a _ tight path _ from to .we say that is a _ knowledge space _ if it is closed under union , or _-closed_. a knowledge structure is a _ learning space _ ( cf . * ? ? ?* ) if it satisfies the following two conditions : learning smoothness . for any two , with and ,there is a chain such that , for , we have for some . learning consistency . if are two sets in such that for some , then .a learning space is also known in the combinatorics literature as an ` antimatroid ' , a structure introduced independently by with slightly different , but equivalent axioms ( cf .also * ? ? ?* ; * ? ? ?another name is ` well graded knowledge space ' ; see our lemma [ echan ] .a family of subsets of a set is a_ partial knowledge structure _ if it contains the set .we do not assume that .we also call ` states ' the sets in .a partial knowledge structure is a _ partial learning space _ if it satisfies axioms [ l1 ] and [ l2 ] .note that is vacuously well - graded and vacuously -closed , with .thus , it a partial knowledge structure and a partial learning space ( cf .lemma [ partialechan ] ) .the following preparatory result will be helpful in shortening some proofs .[ toprovewg ] a -closed family of set is well graded if , for any two sets , there is a tight path from to .suppose that the condition holds .for any two distinct sets and , there exists a tight path and another tight path .these two tight paths can be concatenated .reversing the order of the sets in the latter tight path and redefining we get the tight path , with .as mentioned in our introduction , some knowledge structures may be so large that a splitting is required , for convenient storage in a computer s memory for example .also , in some situations , only a representative part of a large knowledge structure may be needed .the concept of a projection is of critical importance in this respect .we introduce a tool for its construction .[ def sim ] let be a partial knowledge structure with and let be any proper non empty subset of . define a relation on by thus , is an equivalence relation on .when the context specifies the subset , we sometimes use the shorthand for in the sequel .the equivalence between the right hand sides of ( [ def sim eq ] ) and ( [ def sim eq 2 ] ) is easily verified .we denote by ] the partition of induced by .we may say for short that such a partition is induced by the set . in the sequelwe always assume that , so that . [ def projection ]let be a partial knowledge structure and take any non empty proper subset of .the family is called the _ projection _ of on .we have thus as shown by example [ examp projection ] , the sets in may not be states of . for any state in and with ] . )the family is called a _-child _ , or simply a _ child _ of ( _ induced by _ ) . 
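The projection and the induced partition are easy to experiment with. The sketch below (names and the toy example are mine) computes the projection of a family of states on a subset Q' and the corresponding equivalence classes, and checks closure under union and wellgradedness by brute force, so it is only practical for small examples.

```python
from itertools import combinations

def projection(states, Qp):
    """Projection of the family `states` on Q': { K intersect Q' : K in states }."""
    return {frozenset(K & Qp) for K in states}

def classes(states, Qp):
    """Partition induced by K ~ L iff K and L have the same trace on Q'."""
    part = {}
    for K in states:
        part.setdefault(frozenset(K & Qp), set()).add(K)
    return part

def union_closed(F):
    return all((a | b) in F for a, b in combinations(F, 2))

def tight_path_exists(F, a, b):
    """True if there is a tight path from a to b inside F: states differing by a
    single item at each step, with the distance to b decreasing by one each time."""
    frontier = {a}
    while frontier and b not in frontier:
        frontier = {c for c in F for s in frontier
                    if len(c ^ s) == 1 and len(c ^ b) == len(s ^ b) - 1}
    return b in frontier

def well_graded(F):
    F = set(F)
    return all(a == b or tight_path_exists(F, a, b) for a in F for b in F)

# toy learning space on Q = {a, b, c} and its projection on Q' = {a, c}
K  = {frozenset(s) for s in [(), ('a',), ('b',), ('a', 'b'), ('a', 'b', 'c')]}
Qp = frozenset({'a', 'c'})
KQ = projection(K, Qp)
print(sorted(map(sorted, KQ)))                 # [[], ['a'], ['a', 'c']]
print(union_closed(KQ), well_graded(KQ))       # True True: the projection is again a learning space
```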
as shown by the example below, a child of may take the form of the singleton and we may have } = \kkk_{[l]} ] , would result in having all the children being learning spaces or trivial .this is * not * generally true .the situation is clarified by theorem [ projectiontheorem2 ] .[ rem project ] the concept of projection for learning spaces is closely related to the concept bearing the same name for media introduced by .the projection theorems [ projectiontheorem1 ] and [ projectiontheorem2 ] , the main results of this chapter , could be derived via similar results concerning the projections of media ( cf .theorem 2.11.6 in * ? ? ?this would be a detour , however .the route followed here is direct . in the next two lemmas , we derive a few consequences of definition [ def projection ] .[ proj def k struct = > ] the following two statements are true for any partial knowledge structure .the projection , with , is a partial knowledge structure .if is a knowledge structure , then so is .the function \mapsto k\cap q' ] , we have ) = k \cap q ' = h([l ] ) = l\cap q ' = x ] .[ proj def k space = > ] if is a -closed family , then the following three statements are true . ] is the union of states of , we get \in\kkk ] because for all ] .\(ii ) since by hypothesis , is a knowledge structure .lemma [ proj def k struct = > ] ( i ) , implies that is a partial knowledge structure .any subfamily is associated to the family .as is a partial knowledge space , we get , yielding , with thus is a partial knowledge space .this argument is valid for knowledge spaces .\(iii ) take arbitrarily .we must show that is -closed . if , this is vacuously true .otherwise , for any we define the associated family \in \hhh\}.\ ] ] so , ] and ] and } = \big\{\ , \es,\ , \{a\ } , \{a , b\ } \,\big\} ] and } ] and ) ] , with \sb l'\subset m'.\ ] ] since is well - graded , there is a tight path with all its states in ] , .it is clear that ( [ capksbl ] ) and ( [ tightpathchain ] ) imply and it is easily verified that is a tight path from to . applying lemma [ toprovewg ] , we conclude that is well - graded .[ assinges ] in example [ examp projection ] , we had a situation in which the non trivial children of a learning space were either themselves learning spaces , or would become so by the addition of the set .this can happen if and only if the subset of the domain defining the projection satisfies the condition spelled out in the next definition . [ yielding ]suppose that is a partial knowledge structure , with .a subset is _ yielding _ if for any state of that is minimal for inclusion in some equivalence class ] .we recall that ] and ] .since , we must have .because and are sets of ] ( by the projection theorem [ projectiontheorem1 ] ( ii ) ) a tight path we show that this tight path defines a tight path lying entirely in ( actually , in ) . by definition of a tight path, we have for some . defining ] .the existence of the tight path ( [ wg path 2 in ( ii ) ] ) follows by induction .suppose now that .in view of what we just proved , we only have to show that , for any non empty , there is a singleton set with . by definition of , we have ] .take a minimal state in ] .since is yielding , we get | \leq 1 ] , then =\{q\}\sb m ] . 
thus = \es ] , which implies that = n ] established in the projection theorem [ projectiontheorem1](ii ) , there exists some such that .we get thus = ( n + \{p\})\setminus n = \{p\}\sb m\quad\text{with}\quad \{p\}\in\kkp.\ ] ] the tight path ( [ wg path 2 in ( ii ) ] ) from to exists thus in both cases . applying lemma [ toprovewg ], we can assert that is well - graded .we have shown earlier that is a knowledge space .accordingly , the plus child is a learning space .\(ii ) ( i ) . if some equivalence class ] , then | = 0 ] contains more than one minimal state .let be one of these minimal states .thus ] . because ] .performing an assessment in a large learning space may be impractical in view of memory limitation or for other reasons .in such a case , a two - step or an n - step procedure may be applied . on step 1 ,a representative subset of items from the domain is selected , and an assessment is performed on the projection learning space induced by ( cf .projection theorem [ projectiontheorem1](i ) ) .the outcome of this assessment is some knowledge state of which corresponds ( 1 - 1 ) to equivalence class ] .the assessment can then be pursued on of which is a partial learning space ( cf .projection theorem [ projectiontheorem1](ii ) ) .the outcome of step 2 is a set ]containing more than one set can be made into a learning space by a trivial transformation .this property is not critical for the 2-step procedure outlined above .d. albert and j. lukas , editors ._ knowledge spaces : theories , empirical research , applications_. lawrence erlbaum associates , mahwah , nj , 1999 .falmagne , e. cosyn , j .-doignon , and n. thiry . .in b. ganter and l. kwuida , editors , _ formal concept analysis , 4th international conference , icfca 2006 , dresden , germany , february 1317 , 2006 _ , lecture notes in artificial intelligence , pages 6179 .springer - verlag , berlin , heidelberg , and new york , 2006 .
any proper subset of the domain of a learning space defines a projection of on which is itself a learning space consistent with . such a construction defines a partition of having each of its classes either equal to , or preserving some key properties of the learning space , namely closure under union and wellgradedness . if the set satisfies certain conditions , then each of the equivalence classes is essentially , via a trivial transformation , a learning space . we give a direct proof of these and related facts which are instrumental in parsing large learning spaces . # 1 _|q this paper is dedicated to george sperling whose curious , incisive mind rarely fails to produce the unexpected creative idea . george and i have been colleagues for the longest time in both of our careers . the benefit has been mine . learning spaces , which are special cases of knowledge spaces ( cf . * ? ? ? * ) , are mathematical structures designed to model the cognitive organization of a scholarly topic , such as beginning algebra or chemistry 101 . the definition of ` learning space ' is recalled in our definition [ ls def ] . essentially , a learning space is a family of sets , called knowledge states , satisfying a couple of conditions . the elements of the sets are ` atoms ' of knowledge , such as facts or problems to be solved . a knowledge state is a set gathering some of these atoms . each of the knowledge states in a learning space is intended as a possible representation of some individual s competence in the topic . embedded in a suitable stochastic framework , the concept of a learning space provides a mechanism for the assessment of knowledge in the sense that efficient questioning of a subject on a well chosen subset of atoms leads to gauge his or her knowledge state . many aspects of these structures have been investigated and the results were reported in various publications ; for a sample , see . the monograph by contains most of the results up to that date . in practice , in an educational context for example , a learning space can be quite large , sometimes numbering millions of states . the concept of a ` projection ' at the core of this paper provides a way of parsing such a large structure into meaningful components . moreover , when the learning space concerns a scholarly curriculum such as high school algebra , a projection may provide a convenient instrument for the programming of a placement test . for the complete algebra curriculum comprising several hundred types of problems , a placement test of a few dozens problems can be manufactured automatically via a well chosen projection . the key idea is that if is a learning space on a domain , then any subset of defines a learning space on which is consistent with . we call a ` projection ' of on , a terminology consistent with that used by and for media . moreover , this construction defines a partition of such that each equivalence class is a subfamily of satisfying some of the key properties of a learning space . in fact , can be chosen so that each of these equivalence classes is essentially ( via a trivial transformation ) either a learning space consistent with or the singleton . these results , entitled ` projection theorems ' ( [ projectiontheorem1 ] and [ projectiontheorem2 ] ) , are formulated in this paper . they could be derived from corresponding results for the projections of media . direct proof are given here . this paper extends previous results from ( * ? ? ? * theorem 1.16 and definition 1.17 ) and .
we assume that for a given city , we have the matrix where is the number of spatial units that compose the city at the spatial aggregation level considered ( for example a grid composed of square cells of size , see methods ) .this od matrix represents the number of individuals living in the location and commuting to the location where they have their main , regular activity ( work or school for most people ) . by convention ,when computing the numbers of inhabitants and workers in each cell we do not consider the diagonal of the od matrix .this means that we omit the individuals who live and work in the same cell ( considered as ` immobile ' at this spatial scale ) . in order to extract a simple signature of the od matrix, we proceed in two steps .we first extract both the residential and the work locations with a large density the so called ` hotspots ' ( see ) .the number of residents of cell is given by and its number of workers is given by .the hotspots then correspond to local maxima of these quantities .it is important to note that the method is general , and does not depend on how we determine these hotspots .once we have determined the cells that are the residential and the work hotspots ( some cells can possibly be both ) , we proceed to the second and main step of the method .we reorder the rows and columns of the od matrix in order to separate hotspots from non - hotspots .we put the residential hotspots on the top lines , and do the same for columns by putting the work hotspots on the left columns .the od matrix then becomes a 4-quadrants matrix where the flows are spatially positioned in the matrix with respect to their nature : on the top left the individuals that live in hotspots and work in hotspots ; at the top right the individuals that live in hotspots and do not work in hotspots ; at the bottom left individuals that do not live in hotspots but work in hotspots ; and finally in the bottom right corner the individuals that neither live or work in hotspots . for each quadrantwe sum the number of commuters and normalize it by the total number of commuters in the od matrix , which gives the proportion of individuals in each of the four categories of flows . in other words , for a given city , we reduce the od matrix to a matrix where is the proportion of * * i**ntegrated flows that go from residential hotspots to work hotspots ; by construction , we have v\in we note in equation [ eq : vua ] that we remove the intersection of the voronoi area with the sea , indeed , we assume that the number of users calling from the sea are negligeable .now we consider the number of mobile phone users and the associated area of the voronoi cells intersecting the ua ( see supplementary figure s2b ) .we can not directly extract an od matrix between the grid cells with the mobile phone data because each users home and work locations are identified by the voronoi cells .thus , we need a transition matrix to transform the bts od matrix into a grid od matrix .let be the number of voronoi cells covering the urban area and be the number of grid cells .let be the od matrix between btss where is the number of commuters between the bts and the bts . to transform the matrix into an od matrix between grid cells we define the transition matrix where is the area of the intersection between the grid cell and the bts . 
then we normalize by column in order to consider a proportion of the btss areas instead of an absolut value , thus we obtain a new matrix ( equation [ eq : pchap ] ) .a city is divided in cells and the data give us access to its od matrix extracted from the mobile phone data . after straightforward calculationwe obtain the distributions and of the numbers of residents / workers in the cells composing the city .the determination of centres and subcentres is a problem which has been broadly tackled in urban economics ( see for example the references given in the paper and references in ) . starting from a spatial distribution of densities , the goal is to identify the local maxima .this is in principle a simple problem solved by the choice of a threshold for the density : a cell is a hotspot if the density of users .it is however clear that such method based on a fixed threshold introduce some arbitrariness due to the choice of , and also requires prior knowledge of the city to which it is applied to choose a relevant value of . in proposed a generic method to determine hotspots from the lorenz curve of the densities .in the following we quickly introduce the principle of the method and its application to the determination of the residential and work hotspots of each city .we invite the interested readers to refer to for further discussion on this method .we first sort and in increasing order , and denote the ranked values by where is the number of cells .the two lorenz curves of the distribution of residents / workers are constructed by plotting on the x - axis the proportion of cells and on the y - axis the corresponding proportion of commuters with : if all the cells had the same number of residents / workers the lorenz curves would be the diagonal from to .in general we observe a concave curve with a more or less strong curvature . in the lorenz curve ,the stronger the curvature the stronger the inequality and , intuitively , the smaller the number of hotspots .this remark allows us to construct a criterion by relating the number of dominant places ( i.e. those that have a very high number of residents / workers compared to the other cells ) to the slope of the lorenz curve at point : the larger the slope , the smaller the number of dominant values in the statistical distribution .the natural way to identify the typical scale of the number of hotspots is to take the intersection point between the tangent of at point and the horizontal axis ( see suplementary figure s3 ) .this method is inspired from the classical scale determination for an exponential decay : if the decay from were an exponential of the form where is the typical scale we want to extract , this method would give ( see for further discussion ) .on supplementary figure s8 we plot the ,,, values of the 31 spanish urban areas as a function of the density threshold chosen to define hotspots ( here defined relatively to the density value returned by the loubar method see section [ sec : hotspots - def ] ) ) . in the extreme case represented on supplementaryfigure s9 all cells whose number of residents / workers are greater than the mean value of the distribution of residents / workers are tagged as hotspots ( see for a discussion of this criteria and his comparison to the " loubar criteria used in this study ) . with a much broader acceptation of what is an hotspot , it is obvious that the term will increase drastically since we increase both the number of residential hotspots and the number of work hotspots . 
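As an illustration of the tangent criterion described above, here is a small numpy sketch (my own naming; it assumes at least two cells and a strictly positive total density): sort the densities, build the Lorenz curve, take the tangent at the point (1, 1), and mark as hotspots the cells beyond the intersection of that tangent with the horizontal axis.

```python
import numpy as np

def loubar_hotspots(density):
    """Boolean mask of hotspot cells, using the Lorenz-curve tangent criterion.

    density : 1-D array of residents (or workers) per cell; the total is
              assumed to be positive and at least two cells are assumed.
    """
    rho = np.asarray(density, dtype=float)
    n = rho.size
    order = np.argsort(rho)                       # cells sorted by increasing density
    F = np.arange(1, n + 1) / n                   # fraction of cells
    L = np.cumsum(rho[order]) / rho.sum()         # fraction of residents / workers
    slope = (L[-1] - L[-2]) / (F[-1] - F[-2])     # slope of the tangent at (1, 1)
    F_star = 1.0 - 1.0 / slope                    # intersection with the horizontal axis
    start = min(n - 1, max(0, int(np.ceil(F_star * n))))
    mask = np.zeros(n, dtype=bool)
    mask[order[start:]] = True                    # the densest cells are the hotspots
    return mask
```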
still what is important to notice is that we can still observe the qualitative trend observed with the loubar method . as the population size increases , the decrease of the proportion of integrated flowsis accompanied by an increase of the proportion of random flows . in order to evaluate towhat extent the values of a given city are characteristic of its commuting structure , we compare these values to the values returned by a reasonable null model of commuting flows . the guiding idea is to generate od matrices that ( i ) have the same size than the city s od matrix ( i.e that contain the same total number of individuals ) ; ( ii ) that respect the city s static spatial organisation ( i.e. the in- and out - degrees of all nodes should stay constant ) ; and ( iii ) that randomize the flows between the nodes , i.e. with different weights of the edges .such a null model of flows that respects the static organization of the city is indeed more reasonnable and realistic than a null model that would respect the total number of individuals in the matrix but that would modify the in- and out - degrees of the nodes . to generate a random graph that conserves the in- and out- degree of each node of the reference graph, we use the molloy - reed algorithm which complexity is in , where is the sum of the weights of the edges ( i.e. the number of individuals in the od case ) . in order to evaluate the sensitivity of the classification of cities to the number of hotspots selected in cities , for each city we make vary the number of work hotspots between the reference value returned by the loubar method ( see supplementary note 3 ) , and two times the reference value : $ ] . as for the sensitivity to noise in the flows , we evaluate the stability of the classification of cities in groups with the jaccard index ( see supplementary note [ sec : icdr - sens ] ) .the values of as a function of are represented on supplementary figure s13 .
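The four-quadrant signature and the degree-preserving null model described in this section can be sketched as follows (function names are mine; the quadrant layout follows the description given earlier, rows for residence and columns for workplace, and the null model is a simple stub-matching randomisation that does not explicitly forbid within-cell trips). The hotspot masks can be produced, for instance, with the loubar_hotspots sketch given above applied to the row and column sums of the off-diagonal OD matrix.

```python
import numpy as np

def commuting_signature(od, res_hot, work_hot):
    """2x2 signature of an OD matrix: rows = live in / out of a residential
    hotspot, columns = work in / out of a work hotspot, each entry normalised
    by the total number of off-diagonal commuters."""
    od = np.array(od, dtype=float)
    np.fill_diagonal(od, 0.0)                    # 'immobile' individuals are excluded
    r = np.asarray(res_hot, dtype=bool)
    w = np.asarray(work_hot, dtype=bool)
    I = od[np.ix_(r, w)].sum()                   # hotspot  -> hotspot  ('integrated')
    D = od[np.ix_(r, ~w)].sum()                  # hotspot  -> ordinary
    C = od[np.ix_(~r, w)].sum()                  # ordinary -> hotspot
    R = od[np.ix_(~r, ~w)].sum()                 # ordinary -> ordinary ('random')
    return np.array([[I, D], [C, R]]) / od.sum()

def degree_preserving_null(od, rng=None):
    """Random OD matrix with the same weighted in- and out-degrees as `od`
    (stub matching in the spirit of the Molloy-Reed construction)."""
    rng = np.random.default_rng() if rng is None else rng
    od = np.array(od, dtype=int)
    n = od.shape[0]
    origins = np.repeat(np.arange(n), od.sum(axis=1))
    dests = np.repeat(np.arange(n), od.sum(axis=0))
    rng.shuffle(dests)
    null = np.zeros_like(od)
    np.add.at(null, (origins, dests), 1)
    return null
```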
the extraction of a clear and simple footprint of the structure of large , weighted and directed networks is a general problem that has many applications . an important example is given by origin - destination matrices which contain the complete information on commuting flows , but are difficult to analyze and compare . we propose here a versatile method which extracts a coarse - grained signature of mobility networks , under the form of a matrix that separates the flows into four categories . we apply this method to origin - destination matrices extracted from mobile phone data recorded in thirty - one spanish cities . we show that these cities essentially differ by their proportion of two types of flows : integrated ( between residential and employment hotspots ) and random flows , whose importance increases with city size . finally the method allows to determine categories of networks , and in the mobility case to classify cities according to their commuting structure . [ [ sec : intro ] ] the increasing availability of pervasive data in various fields has opened exciting possibilities of renewed quantitative approaches to many phenomena . this is particularly true for cities and urban systems for which different devices at different scales produce a very large amount of data potentially useful to construct a ` new science of cities ' . a new problem we have to solve is then to extract useful information from these huge datasets . in particular , we are interested in extracting coarse - grained information and stylized facts that encode the essence of a phenomenon , and that any reasonable model should reproduce . such meso - scale information helps us to understand the system , to compare different systems , and also to propose models . this issue is particularly striking in the study of commuting in urban systems . in transportation research and urban planning , individuals daily mobility is usually captured in origin - destination ( od ) matrices which contain the flows of individuals going from a point to another ( see ) . an od matrix thus encapsulates the complete information about individuals flows in a city , at a given spatial scale and for a specific purpose . it is a large network , and as such does not provide a clear , synthetic and useful information about the structure of the mobility in the city . more generally , it is very difficult to extract high - level , synthetic information from large networks and methods such as community detection and stochastic block modeling ( see for example and ) were recently proposed . both these methods group nodes in clusters according to certain criteria and nodes in a given cluster have similar properties ( for example , in the stochastic block modeling , nodes in a given group have similar neighborhood ) . these methods are very interesting when one wants to extract meso - scale information from a network , but are unable to construct expressive categories of links and to propose a classification of weighted ( directed ) networks . this is particularly true in the case of commuting networks in cities , where edges represent flows of individuals that travel daily from their residential neighborhood to their main activity area . 
several types of links can be distinguished in these mobility networks , some constitute the backbone of the city by connecting major residential neighborhoods to employment centers , while other flows converge from smaller residential areas to important employment centers , or diverge from major residential neighborhoods to smaller activity areas . in addition , the spatial properties of these commuting flows are fundamental in cities and a relevant method should be able to take this aspect into account . there is an important literature in quantitative geography and transportation research that focuses on the morphological comparison of cities and notably on multiple aspects of polycentrism , ranging from schematic pictures proposed by urban planners and architects to quantitative case studies and contextualized comparisons of cities . so far most comparisons of large sets of cities have been based on morphological indicators ) built - up areas , residential density , number of sub - centers , etc . and aggregated mobility indicators motorization rate , average number of trips per day , energy consumption _ per capita _ per transport mode , etc . , and have focused on the spatial organization of residences and employment centers . but these previous studies did not propose generic methods to take into account the spatial structure of commuting trips , which consist of both an origin and a destination . such comparisons based on aggregated indicators thus fail to give an idea of the morphology of the city in terms of daily commuting flows . we still need some generic methods that are expressive in a urban context , and that could constitute the quantitative equivalent of the schematic pictures of city forms that have been pictured for long by urban planners . in this paper we propose a simple and versatile method designed to compare the structure of large , weighted and directed networks . in the next section we describe this method in detail . the guiding idea is that a simple and clear picture can be provided by considering the distribution of flows between different types of nodes . we then apply the method to commuting ( journey to work ) od matrices of thirty - one cities extracted from a large mobile phone dataset . we discuss the urban spatial patterns that our method reveals , and we compare these patterns observed in empirical data to those obtained with a reasonable null model that generates random commuting networks . finally the method allows determining categories of networks with respect to their structure , and here to classify cities according to their commuting structure . this classification highlights a clear relation between commuting structure and city size .
several large - scale gravitational wave ( gw ) interferometers have achieved long term operation at design sensitivity .different detectors can have very different noise levels and different frequency bandwidth .the directional sensitivity of different detectors can also be very different .for instance , the most sensitive gw detectors in the us , namely the ligo detector at livingston , louisiana , and the two co - located detectors at hanford , washington ( abbreviated as l1 and h1/h2 ) , are designed to be nearly aligned .detectors in europe ( geo600 in germany , and virgo in italy ) and in asia ( tama in japan ) are incidentally nearly orthogonal in directional sensitivities to the ligo detectors .the questions arise on how to best combine data from these detectors in gw data analysis .we present in this paper an application of the singular value decomposition ( svd ) method to data analysis for a network of gw detectors .we show that the svd method provides simple solutions to detection , waveform extraction , source localization , and signal - based vetoing . by means of svd, the response matrix of the detector network can be decomposed into a product of two unitary matrices and a pseudo - diagonal matrix containing singular values .the unitary matrices can be used to form linear combinations of data from all detectors that have one to one correspondence to linear combinations of the gravitational wave signal polarization components .each newly formed data stream has a corresponding singular value representing the network s response to the new signal polarizations .data streams with non - zero singular values represent the signal components while data with zero singular values ( or zero multiplication factors ) represent the null streams .the null streams have null response to gws and can be used for localization of gw sources and for identifying detector glitches from that of real gravitational wave events as proposed by wen and schutz ( 2005) for ground - based gw detectors .the statistical uncertainty in estimating the gw waveforms from the data can be shown to be related to the inverse of the singular values .this can be used to reduce errors in signal extraction by enabling `` bad '' data with relatively small singular values to be discarded .the observed strain of an impinging gw by an interferometric detector is a linear combination of two wave polarizations where is time , and are the detector s response ( antenna beam pattern functions ) to the plus and cross polarizations ( , ) of a gw wave .these antenna beam patterns depend on source sky directions , wave polarization angle , and detector orientation .the subscript is a label for the detector , indicating the dependence of the observed quantities on detectors .data from a gw detector can be written as the sum of the detector s response and noise , , where , ] according to the maximum likelihood ratio principle of `` burst '' gws ( defined as any gws of unknown waveforms ) is where is the number of non - zero singular values .note the components of include two polarization components for each frequencies .the `` standard '' solutions to wave polarizations can be extracted using the svd components of the network response ( eq. [ svd ] , eq .[ new_data ] ) note that the solutions of two signal polarization components at each frequency depends on quantities within that frequency only .faster calculations can therefore be carried out independently at each frequencies . 
in the limit when , the solution given in eq .[ h_k0 ] is equivalent to that written with the moore - penrose inverse , .note that this is also the same as the effective one - detector strain for data from a network of gw detectors introduced by flanagan and hughes ( 1998). the `` standard '' solution from eq .[ h_k0 ] however can possibly lead to unstable solutions in the sense that a small error in the data can lead to large amplified error in the solution .fisher information matrix ( eq . [ fisher ] ) indicates that the best possible statistical variance for the estimated is a linear combination of .the extracted wave polarizations can contain large errors if we include data corresponding to very small singular values .this situation occurs when the response matrix is `` ill - conditioned '' with .the small singular values can result from machine truncation errors instead of zero values or from nearly degenerated solutions to the equations , e.g. , in our case , when antenna beam patterns of two detectors are nearly aligned .regularization is needed in order to have stable solutions to .one simple solution is to apply the `` truncated singular value decomposition '' ( tsvd ) method by omitting data with very small singular values in eq . where is the number of data included .the main problem is then the decision on where to start to truncate the data streams with small singular values .it depends on the accuracy requirement in waveform extraction , type of waveforms and type of constraints that can be put on the solutions .the fractional error due to truncation , defined as the ratio of the sum of error - square and the sum of detector - response square from all frequencies , is , truncation of terms with very small singular values can therefore retain the least square fit of the detector response to the data while greatly reduce the statistical errors when extracting individual signal polarizations .a recent work on introducing the tikhonov regularization to waveform extraction of gw signals can be found in rakhmanov ( 2006). note that the tikhonov regularization is equivalent to introducing a new parameter to filtering out data associated with small singular values .there are at least data streams with zero singular values or zero multiplication factors .these are data streams that have null response to signals .the `` standard '' null streams can be written in terms of the svd of ( eq . [ svd ] ) as , for stationary gaussian noise , follows gaussian distribution of zero mean and unity variance and are statistically independent with each other .consistency check for a detected gw event therefore requires that all the null - streams are consistent with ` noise - only' at the source direction . using null - streams as a tool for consistency check of gw events against signal - like glitches for ground - based gw detectors was first proposed by wen and schutz (2005). further investigations on consistency check using the null - streams and its relation to the null space of the network - response matrix can be found in chatterji et al .( 2006). 
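A compact numpy sketch of these ingredients, for a single frequency bin of noise-whitened data, is given below; it is an illustration rather than a pipeline, and the function names, the tolerance used to decide which singular values count as zero, and the toy numbers in the usage example are mine. It returns the rotated data streams split into signal and null parts, together with a truncated-SVD estimate of the two polarisation components.

```python
import numpy as np

def network_svd(A, d, k=None, tol=1e-10):
    """SVD analysis of one frequency bin for a network of detectors (a sketch).

    A : (n_det, 2) complex response matrix to the (plus, cross) polarisations
    d : (n_det,)  complex, noise-whitened data at the same frequency
    k : number of singular values kept in the truncated-SVD estimate
        (defaults to the numerical rank of A)
    """
    U, s, Vh = np.linalg.svd(A, full_matrices=True)
    rank = int(np.sum(s > tol * s.max())) if s.size else 0
    k = rank if k is None else min(k, rank)
    y = U.conj().T @ d                        # rotated data streams
    signal_streams = y[:rank]                 # respond to the GW polarisations
    null_streams = y[rank:]                   # no response to GWs: veto / localisation
    h_hat = Vh.conj().T[:, :k] @ (y[:k] / s[:k])   # truncated-SVD polarisation estimate
    return h_hat, signal_streams, null_streams, s

# toy example with three detectors (numbers are illustrative only)
A = np.array([[0.8, 0.3], [0.7, 0.4], [0.1, 0.9]], dtype=complex)
d = np.array([0.50 + 0.10j, 0.45 - 0.02j, 0.20 - 0.05j])
h_hat, sig, null, s = network_svd(A, d)
energy_stat = np.sum(np.abs(sig) ** 2)        # energy in the signal streams
null_energy = np.sum(np.abs(null) ** 2)       # should be noise-like for a real GW
```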
there are also `` semi- '' null streams corresponding to data streams with very small but non - zero singular values , where is the starting indexes for .these semi - null streams exist when the equations are nearly degenerated .this happens , for example , for response of the two ligo detectors of h1 ( h2 ) and l1 that are designed to be nearly aligned .the semi - null stream can be also simply caused by combination of weak directional sensitivity and/or high noise level instrument of all detectors in the network .consistency check of gw events , veto against noise , and localization of the source can be further improved by including both the `` standard '' null - streams of analytically zero singular values and also these semi - null streams . usage of semi - null streams depends on waveforms and therefore efficiency studies should be carried out before - hand .a proposal of using the semi - null stream for veto against detector glitches and for source localization can be found in wen and schutz ( 2005) for the two - detector network of h1-l1 .a new data analysis approach based on the singular value decomposition method has been proposed for the data analysis of gws using a network of detectors .we show that singular values of the response matrix of the gw network directly encode the sensitivity of the gw network to signals and the uncertainties in waveform estimation .the svd method is particularly useful for constructing null data streams that have no or very little response to gw signals .we argue that the svd method provide a simple recipe for data analysis of gws for a network of detectors .note that the svd method is widely used in engineering signal processing for image processing , noise reduction and geophysical inversion problems .an application of the svd method to detecting gws from the extreme - mass - ratio - inspiral sources using the space gw detector lisa can be found in wen et al .( 2006). the svd software package is also available in matlab. strategies on construction of the detection statistic , stable solutions to waveforms , and null streams based on the svd method are discussed .we show that the detection statistic should be constructed from data streams of non - zero singular values .we discuss how detection efficiency can be improved from a direct application of mlr by incorporating our knowledge of the network s directional response to gws and our assumptions on distribution of the signal power .we also give expressions of null - streams for arbitrary number of gw detectors using components from the svd of the network - response matrix .the concept of the semi - null streams that are characterized by small singular values is also introduced .we argue that the exploration of the null - space by including semi - null streams can help improving the source localization and consistency check .analytical study for the angular resolution of a network of gw detectors and their relations with the null space will be found in wen ( 2007 ) .we also show how a stable solution to the waveform can be constructed based on information from the singular values .we conclude that a gw event should be identified only when both of the following two conditions are satisfied at the same sky direction , ( 1 ) high probability of detection for the optimal statistic constructed from `` signal '' streams with non - zero singular values ( eq . 
[ new_data ] , sec .[ detection ] ) and ( 2 ) high probability that the null streams corresponding to zero singular values are consistent with noise ( eq . [ new_data ] , eq .[ null_data ] , sec .[ null ] ) .results of an extended investigation will be published elsewhere. are very grateful to yanbei chen for his critical discussions .we also thank wei - tou ni , sergey klimenko , soumya mohanty , and david blair for careful review and helpful comments of this manuscript .this work is under the support by the alexander von humboldt foundation s sofja kovalevskaja programme funded by the german federal ministry of education and research .this paper has been assigned ligo document number ligo - p070012 - 00-z .christian hansen , `` the truncated svd as a method for regularization '' , _ bit numerical mathematics _ , * 27 * , 534 ( 1987 ) .y. grsel and m. tinto , _ phys .d _ , * 40 * , 3884 ( 1989 ) .l. wen and b. f. schutz , _ classical and quantum gravity _, * 22 * , s1321 ( 2005 ) .h. cramer , _ mathematical methods of statistics _ ( princeton university press , princeton , n. j. 1946 ) .p. g. hoel , s. c. port , and c. j. stone , testing hypotheses " in _ introduction to statistical theory _3 ( new york : houghton mifflin , 1971 ) , p. 56 .k. rajesh nayak , s. v. dhurandhar , a. pai and j - y vinet , _ phys .d _ , * 68 * , 122001 ( 2003 ). s. klimenko , s. mohanty , m. rakhmanov and g. mitselmakher , _ phys .d _ , * 72 * , 122002 ( 2005 ) .sergei klimenko , soumya d. mohanty , malik rakhmanov , guenakh , mitselmakher , _ j. phys ._ , * 32 * 12 - 17 ( 2006 ) .soumya d. mohanty , malik rakhmanov , sergei klimenko , guenakh , mitselmakher , _ classical quant .* 23 * , 4799 ( 2006 ) . t. z. summerscales , a. burrows , c. d. ott , and l. s. finn , maximum entropy for gravitational wave data analysis : inferring the physical parameters of core - collapse supernovae , submitted to _ astrophys .j. _ ( 2007 ) .l. wen , data analysis of gravitational waves using a network of detectors , in preparation ( 2007 ) . .flanagan and s. a. hughes , _ phys .d _ , * 57 * 4566 ( 1998 ) .m. rakhmanov , _ classical and quantum gravity _ * 23 * , s673 ( 2006 ) .s. chatterji , a. lazzarini , l. stein , p. j. sutton , a. searle , & m. tinto , _ phys .d _ , * 74 * , 082005 ( 2006 ) .l. wen , y. chen and j. gair , _ aip conf .proc . 873 : laser interferometer space antenna : 6th international lisa symposium _ , * 873 * , 595 ( 2006 ) .l. wen , angular resolution of a network of gravitational - wave detectors , 2007 , apjl , submitted .
several large-scale gravitational wave (gw) interferometers have achieved long-term operation at design sensitivity. questions arise as to how best to combine all available data from detectors of different sensitivities for detection, consistency checking or vetoing, source localization, and waveform extraction. we show that these problems can be formulated using the singular value decomposition (svd) method. we present svd-based techniques for (1) constructing the detection statistic, (2) obtaining stable solutions for the waveforms, (3) constructing null streams for an arbitrary number of detectors, and (4) localizing sources of gws of unknown waveforms.
because of the intensive use of carbon dioxide in industry and research , it has become necessary to determine its thermodynamic , physical and chemical properties on an extended range of temperatures. significant effort has been deployed to build up a database through observations and theoretical calculations . from the former point of view , we mention the case of the accurate measurements due to giauque & egan and from the latter point of view , the derivation based on the classical version of the theory of lattice dynamics , which predicts the heat capacity of carbon dioxide in the range of temperatures 15 k , is in a very good agreement with that obtained through observations . however , such a good agreement is still out of reach for some other properties of carbon dioxide due to difficulties from both experimental and theoretical points of view .for instance , the empirical determination of the latent heat of sublimation at low temperatures remains a major obstacle because of the difficulty in eliminating the superheating of the gas .similarly , by way of example , the lagrangian classical treatment of the two - dimensional rigid rotor is intractable and the theoretical determination of the heat capacity , mentioned above , had been made possible at only sufficiently low temperatures ( k ) when the harmonic approximation is valid . with that said ,much work has to be done in order to determine further properties of carbon dioxide particularly at low temperatures , such properties are still missing in the best compendia .we will exploit the data available in , which we refer to as g&e , and show that it is possible to evaluate the heat of sublimation and vapor pressure at temperatures 5 k. a key prerequisite is the determination of the heat of sublimation at =0 k ( = ) .stull calculated an average value of by the method of least squares using the vapor pressure data measured by different workers and obtained a value of 26.3 kj^-1^ ( = 6286 cal^-1^ ) for 139 k .however , the literature citations listed in show that stull did not extract data from g&e , which is even more accurate and includes data concerning the heat capacity of the solid carbon dioxide and other data that could be used to obtain at different temperatures . by contrast, g&e have evaluated at 194.67 k using partly their measured data and available data for at lower temperatures .they evaluated the integral of the heat capacity of the solid ( change in the enthalpy ) graphically from a smooth curve through their measured data and obtained a value for that is merely 10 cal^-1^ higher than their measured value =6030 cal^-1^ ( 25230 j^-1^ ) .they also evaluated the entropies of the gas and solid at 194.67 k and reached an _excellent _ agreement between experimental data and statistics ( the experimental & spectroscopic values of the entropy of the gas they obtained were 47.59 & 47.55 cal^-1^^-1^ , respectively , constituting a proof of the third law ) .however , this cumbersome procedure had prevented them from carrying out a systematic evaluation of the latent heat and entropy at temperatures covering the range of their measured data .furthermore , this procedure ( the graphical evaluation ) adds a human error , which is an unknown factor . 
in this paperwe will carry out a systematic evaluation of the fore - mentioned physical quantities on a more extended range of temperatures than that of g&e using 1 ) a computer algebra system ( cas ) , which eliminates the human error and allows an excellent adjustment of the parameters in order to achieve a better accuracy , as well as 2 ) an established formula for the vapor pressure .it will be shown below that our reevaluated value of is 6030.4 cal^-1^ ( 25231 j^-1^ ) .the data for the relevant quantities will be tabulated at temperatures incremented by 5 k and plotted . moreover ,the generating codes will be provided , which allow the evaluation of any quantity at any given temperature within minutes of time . in this work , we will be relying on measured data by different workers and on some empirical formulas derived by graphical interpolation .since some of these data are provided without accuracy and some other lack accuracy due to personal error , it will be difficult to assign accuracy to our results , as is the case in most compendia .some values of ( in torr ) will be given with one significant digit while other values with 2 or 3 significant digits .the values of ( of the order of 26000 j^-1^ ) will be given with five digits without decimals , assuming an error not higher than 0.35% .the accuracy of the results for and can be read by comparing with the available measured data .[ [ heat - of - sublimation - at - pmbt0 . ] ] * heat of sublimation at . * + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + throughout this paper , we use the units and symbols recommended by the _ international union of pure and applied chemistry _ ( _ iupac _ ) .the energy is given in j and in cal = 4.184 j , the pressure in torr , and the temperature in k. since the original data were given in calories , we perform our evaluations in this unit , taking =1.98724 cal^-1^^-1^ , then convert the results to joules .the g&e heat capacity measurements , shown in the codes ( appendix ) , extend from 15.52 to 189.78 k. on such a large interval there is no best equation that will represent the data .g&e worked on a smooth curve through the data but did not describe it . in order to represent the data ,the alternative is to subdivide the interval into sufficiently small intervals and represent the data by a polynomial on each sub - interval in such a way that the polynomial pieces blend smoothly making a spline .matlab provides spline curve via the command ` spline(x , y ) ` ( see appendix section ) .it returns the piecewise polynomial form of the cubic spline interpolant with the not - a - knot end conditions , having two continuous derivatives and breaks at all interior data sites except for the leftmost and the rightmost one .the values of the spline at the breaks ` spline(x , y , x(i ) ) ` coincide with the data values ` y(i ) ` .cubic splines are more attractive for interpolation purposes than higher - order polynomials . 
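For readers who prefer an open-source route, the same interpolation and integration steps can be reproduced with scipy. The snippet below is only a sketch: it uses the first few G&E (T, Cp) pairs quoted in the appendix codes, whereas the actual calculation employs the full data set extended toward T = 0 with the Debye formula.
....
# Not-a-knot cubic spline through heat-capacity data, then the enthalpy and
# entropy integrals.  Only the first few G&E points are used, for illustration.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.integrate import quad

T  = np.array([15.52, 17.30, 19.05, 21.15, 23.25, 25.64, 27.72, 29.92])   # K
Cp = np.array([0.606, 0.825, 1.081, 1.419, 1.791, 2.266, 2.676, 3.069])   # cal/(mol K)

cs = CubicSpline(T, Cp, bc_type='not-a-knot')   # same end conditions as MATLAB spline

T1 = 29.92
dH = cs.integrate(T[0], T1)                      # integral of Cp dT   (cal/mol)
dS = quad(lambda t: cs(t) / t, T[0], T1)[0]      # integral of Cp/T dT (cal/(mol K))
print(dH, dS)
....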
we will deal with molar physical quantities labeled by the subscripts & to differentiate between the solid and gaseous phases .we denote by the latent heat of sublimation and by , , , , , ( = ) , the internal energy , free energy , chemical potential , volume , enthalpy , entropy , respectively .we take the zero of rotational energy to be that of the =0 state and the zero of vibrational energy to be that of the ground state , meaning that a molecule at rest in the gas has an energy of zero at vanishing temperature ( =0 ) .let be the heat of sublimation at =0 which is , according to our energy convention , the binding energy of the particles of the solid ( ==== ) .the excellent agreement between the experimental & spectroscopic values of at 194.67 k is due to g&e accurate measurements and to the success of debye s theory at low temperatures .g&e used debye s formula to evaluate for 0 k. however , they did not explain their choice for debye s temperature . in this work ,the energy and entropy of the solid for temperatures below 15.52 k are extrapolated by substitution of the debye heat capacity formula .moreover , we will rely on suzuki & schnepp s assertion that the molar heat capacities of the solid carbon dioxide ( & ) are equal within an error of 10 per cent for such small temperatures .finally , we fix by equating the heat capacity due to debye with that measured by g&e at 15.52 k ( 0.606 cal^-1^^-1^ ) .solving the equation using a cas we find =139.59 k. the matlab codes provided in the appendix are split into three parts . in part ( i ) , ` cd ` represents the debye heat capacity .the vectors ` t ` & ` cp ` show the temperature data sites used by g&e ( 15.52.78 k ) and the corresponding measured heat capacities ( 0.606.05 cal^-1^^-1^ ) , respectively .these g&e data sites are extended by the temperature vector ` u ` and the corresponding debye heat capacity vector ` v ` , respectively .the last two lines evaluate , at the temperature vector ` tn ` , the spline through the extended data sites ( ` t ` , ` cp ` ) , the integrals == ( vector ` i ` ) and = ( vector ` j ` ) , with ` tn ` . the heat of sublimation is determined upon solving the equation = at any given temperature for which the measured is known .the lead we had followed seeking for higher accuracy led us to select the value of = cal^-1^ at k ( * ? ? ?* eucken & donath ) & .we find = cal^-1^ and the calculation is shown below . with = & = ,the equation = reduces to = . upon solving the clapeyron equation for obtain =+p\,v_{s} \\ & \quad , \\ \end{tabular}\ ] ] with =7.575455 in si units ( = ) and =0.561 , =954 , =1890 , =3360 k. we have then =.2 cal^-1^ leading with the previously evaluated terms to =6273.4 cal^-1^. + [ [ vapor - pressure . ] ] * vapor pressure .* + + + + + + + + + + + + + + + + + from now on we will assume =6274 cal^-1^ ( 26250 j^-1^ ) . upon substituting ( [ v ] ) & ( [ fg ] ) into = ( = ) and rearranging the terms we obtain /rt\}\,,\ ] ] where = , =\,({\rm i} ] .assuming that follows berthelot s equation \,p({\rm torr})\ ] ] ( where =6.1 k and , in order to express in cal^-1^ , we take =9.1/(128.8 ) k / torr ) , we have solved numerically both equation ( [ ptw ] ) and its linearized form and the results coincide up to an insignificant error . upon substituting exp ] $ ] ( eq .( [ fg ] ) ) . 
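The determination of the Debye temperature can be reproduced numerically as follows. The sketch assumes the same form of the Debye heat capacity as in part (i) of the appendix codes and solves C_D(15.52 K; theta) = 0.606 cal/(mol K) for theta by a bracketing root finder; the root should be close to the 139.59 K quoted above.
....
# Fix the Debye temperature so that the Debye heat capacity matches the
# measured Cp = 0.606 cal/(mol K) at T = 15.52 K.  R = 1.98724 cal/(mol K).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

R = 1.98724  # cal/(mol K)

def debye_cv(T, theta):
    x = theta / T
    integral = quad(lambda z: z**3 / np.expm1(z), 0, x)[0]
    return 3 * R * (12 / x**3 * integral - 3 * x / np.expm1(x))

# Solve debye_cv(15.52, theta) = 0.606 for theta; the text reports 139.59 K.
theta_D = brentq(lambda th: debye_cv(15.52, th) - 0.606, 100.0, 200.0)
print(theta_D)   # expected to be close to 139.59 K
....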
looking for extreme valueswe can first ignore the correction for gas imperfection then justify it later .we have solved graphically the equation =0 ( = ) and obtained the values 57.829 k for & 6503.58 cal^-1^ for as shown in fig .2 . we will assume =6503.6 cal^-1^ ( 27211 j^-1^ ) .tables [ tab2 ] & [ tab1 ] , however , show that at 57.829 k the vapor behaves as an ideal gas , and this justifies the omission of the correction terms in =0 . substituting ( [ b ] ) into ( [ l ] ) , this latter splits into two equations whether we evaluate the vapor pressure using ( [ gep ] ) or ( [ ptw ] ) \ , p_{\,\rm g\&e}\quad ( 154\leq t\leq 196\,{\rm k}),\label{gel } \\l_{\,\rm tw } & = & \epsilon_{0}-\delta h_{s}+ h_{g}+r\,\ell_{1}[1-(3\ell_{2}/t^{2})]\ , p_{\,\rm tw}\quad ( t\geq 5 { \rm k})\,.\label{ltw}\end{aligned}\ ] ] equations ( [ gel ] ) & ( [ ltw ] ) are plotted in fig .2 . in the codes provided in part(iii ) of the appendix, we evaluate the r.h.s of ( [ ltw ] ) at 160 , 180 & 194.67 k ( 3-vector ` ltw ` ) .the value of the latent heat obtained at 194.67 k is 6030.4 cal^-1^ ( 25231 j^-1^ ) or 6030.6 cal^-1^ ( 25232 j^-1^ ) whether we calculate the r.h.s of ( [ ltw ] ) or ( [ gel ] ) .+ [ tab2 ] 1.0lll 0 & 26250 & na + 5 & 26394 & na + 10 & 26538 & na + 15 & 26676 & na + 20 & 26804 & na + 25 & 26914 & na + 30 & 27005 & na + 35 & 27077 & na + 40 & 27133 & na + 45 & 27172 & na + 50 & 27197 & na + 55 & 27209 & na + 60 & 27210 & na + 65 & 27201 & na + 70 & 27183 & na + 75 & 27158 & na + 80 & 27128 & na + 85 & 27091 & na + 90 & 27048 & na + 95 & 27002 & na + 100 & 26951 & na + 105 & 26896 & na + 110 & 26836 & _ 26836 _ + 115 & 26773 & _ 26773 _ + 120 & 26707 & _ 26707 _ + 125 & 26637 & _ 26637 _ + 130 & 26565 & _ 26565 _ + 135 & 26488 & _ 26488 _ + 140 & 26408 & _ 26408 _ + 145 & 26325 & _ 26325 _ + 150 & 26239(c ) & _ 26239(c ) _ + 155 & 26149(c ) & 26149(c ) + 160 & 26055(c ) & 26055(c ) + 165 & 25958(c ) & 25958(c ) + 170 & 25855(c ) & 25855(c ) + 175 & 25745(c ) & 25745(c ) + 180 & 25629(c ) & 25629(c ) + 185 & 25504(c ) & 25504(c ) + 190 & 25368(c ) & 25368(c ) + 195 & 25221(c ) & 25220(c ) + + + in concluding , it was of interest to further compare our results for the pressure with those used by stull that , as already stated , are less accurate than g&e values . at temperatures 138.8 , 148.7 , 153.6 , 158.7 k , we read from the values 1 , 5 , 10 , 20 torr for the pressure , while our evaluated values ( eq . ( [ ptw ] ) ) are 1.16 , 5.30 , 10.42 , 20.12 torr , respectively .finally , values of the entropy of the solid at the tabulated temperatures =5 k ( 1 , positive integer ) form a sub - vector of ` j ` and are obtainable upon executing the codes ` q=2500:2500:97500 ; j(q ) ` . for instance , =`j(80000)`=14.07 , =`j(90000)`=15.50 and =`j(97335)`=16.52 cal^-1^^-1^ ( 58.87 , 64.85 & 69.12 j^-1^^-1^ , respectively ) .concerning the numerical approach , given the accurate data for the heat capacity at constant pressure of carbon dioxide and some available data for the heat of sublimation , we employed the method of splines to generate and evaluate a smooth curve representing the heat capacity data . 
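As a quick cross-check of the tabulated values, one can spline the L(T) entries around the peak and locate the maximum numerically; with the entries listed above (in J/mol), the maximum should fall close to the roughly 57.8 K and 27211 J/mol reported in the text. The temperature window and the use of a cubic spline are arbitrary choices made for illustration only.
....
# Locate the maximum of the heat of sublimation from the tabulated values.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

T = np.arange(40, 80, 5)                                                  # K
L = np.array([27133, 27172, 27197, 27209, 27210, 27201, 27183, 27158])   # J/mol

cs = CubicSpline(T, L)
res = minimize_scalar(lambda t: -cs(t), bounds=(45, 70), method='bounded')
print(res.x, cs(res.x))   # temperature of maximum and its value
....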
dealing with a large number of data sites, we preferred to use cubic splines , which are more attractive for interpolation purposes than higher - order polynomials .once the curve set , we proceeded to the evaluation of the change of the enthalpy and entropy of the solid .the evaluation of the relevant physical quantities concerning the vapor was rather straightforward using almost fresh formulas from the thermodynamic literature .we used matlab to execute the task and the calculated entities were used in subsequent vapor pressure and heat of sublimation evaluations .now , concerning the theoretical approach , we mainly derived a formula for the vapor pressure including a correction for gas imperfection and effects for internal structure , as well as a formula for the heat of sublimation with same purposes .the author acknowledges comments and suggestions by an anonymous referee , which helped to improve the manuscript . + * appendix * + this section is devoted to provide the main matlab codes , as a part of the numerical method , leading to the results shown in this paper .+ _ * part(i ) * _ + part(i ) shows the data sites used by g&e ( 15.52.78 k ) & ( 0.606.05 cal^-1^^-1^ ) .we evaluate the spline through the extended data sites ( ` t ` , ` cp ` ) , the integrals == ( vector ` i ` ) and = ( vector ` j ` ) , with ` tn ` ..... syms x z real ; f=(12/(x^3))*int((z^3)/(exp(z)-1),z,0,x ) ; g=(3*x)/(exp(x)-1 ) ; a = f - g ; cd=3 * 1.98724*a ; u=0.01:0.01:15.25 ; xn=139.59./u ; v = real(double(subs(cd , x , xn ) ) ) ; t=[0 u 15.52 17.30 19.05 21.15 23.25 25.64 27.72 29.92 32.79 35.99 39.43 43.19 47.62 52.11 56.17 60.86 61.26 66.24 71.22 76.47 81.94 87.45 92.71 97.93 103.26 108.56 113.91 119.24 124.58 130.18 135.74 141.14 146.48 151.67 156.72 162.00 167.62 173.36 179.12 184.58 189.78 ] ; cp=[0 v 0.606 0.825 1.081 1.419 1.791 2.266 2.676 3.069 3.555 4.063 4.603 5.195 5.794 6.326 6.765 7.269 7.302 7.707 8.047 8.370 8.703 8.984 9.189 9.421 9.671 9.893 10.07 10.27 10.44 10.69 10.88 11.08 11.27 11.45 11.64 11.84 12.07 12.32 12.57 12.82 13.05 ] ; tn=0.001:0.002:196.001 ; spcp = spline(t , cp , tn ) ; i=0.002*cumsum(spcp ) ; j=0.002*cumsum(spcp./tn ) ; .... _ * part(ii ) * _ + we evaluate the ideal - gas and real - gas pressures ( eqs .( [ pi ] ) & ( [ ptw ] ) ) at 160 , 180 & 194.67 k. the evaluated pressures are represented by the 3-vectors ` pi ` and ` ptw ` , respectively . ....eps=6274 ; t=[159.999 179.999 194.669 ] ; m=[80000 90000 97335 ] ; ms = i(m)-(t.*j(m ) ) ; pc=7.575455*(10 ^ 5 ) ; l1=9 * 304.1/(128 * 72.8 * 760 ) ; l2=6*(304.1 ^ 2 ) ; s = exp(ms./(1.98724*t ) ) ; ztr=(1/(2 * 0.561))*((t.^(7/2 ) ) . *( ones(size(t))+((0.561/3)./t ) ) ) ; zv=(1./((ones(size(t))-exp(-954./t)).^2 ) ) . *( 1./(ones(size(t))-exp(-1890./t ) ) ) . *( 1./(ones(size(t))-exp(-3360./t ) ) ) ; pi=((760/101325)*pc).*((ztr.*zv ) . *( s.*exp(-eps./(1.98742*t ) ) ) ) ; v=(l1*((ones(size(t))-(l2./(t.^2))).*(pi./t)))+ ones(size(t ) ) ; ptw = pi./v ; t = 160 180 194.67 pi = 23.604 204.845 739.817 ptw = 23.632 206.308 754.942 .... _ * part(iii ) * _ + we evaluate the the heat of sublimation ( eq .( [ ltw ] ) ) 160 , 180 & 194.67 k. the output is the 3-vector ` ltw ` ..... 
it = ones(size(t ) ) ; h1=954./(exp(954./t)-it ) ; h2=1890./(exp(1890./t)-it ) ; h3=3360./(exp(3360./t)-it ) ; hv=1.98724*((2*h1)+h2+h3 ) ; hg=((3.5 * 1.98724).*t)+hv-(((1.98724 * 0.561)/3)*it ) ; gi=(1.98724*l1).*(it-((3*l2)./(t.^2))).*ptw ; ltw = eps - i(m)+hg+gi ; t = 160 180 194.67 ltw(cal / mol ) = 6227.4 6125.5 6030.4 ltw(j / mol ) = 26055 25629 25231 .... 99air liquide ` http://www.airliquide.com/en/business/products/gases/gasdata/ ` ; + j.b .calvert ` http://www.du.edu/~jcalvert/phys/carbon.htm ` .meyers , van dusen ( 1933 ) nat .j. research * 10 * : 381 ; henning , stock ( 1921 ) zeits .f. physik * 4 * : 226 ; siemens ( 1913 ) ann .d. physik * 42 * : 871 ; eucken , donath ( 1926 ) zeits .f. physik .chemie * 124 * : 181 ; heuse , otto ( 1931 ) ann .d. physik * 9 * : 486 ; ( 1932 ) * 14 * : 181 .giauque wf , egan cj ( 1937 ) j. chem .phys . * 5 * : 45 .stull dr ( 1947 ) organic compounds , ind .eng . chem . *39 * : 517 .suzuki m , schnepp o ( 1971 ) j. chem .phys . * 55 * : 5349 .schnepp o , jacobi n ( 1975 ) lattice dynamics of molecular solids . in dynamical properties of solids ,north - holland publishing company , amsterdam .sataty ya , ron a ( 1974 ) j. chem .phys . * 61 * : 5471 .grigoriev is , meilikhov ez ( editors ) ( 1997 ) handbook of physical quantities .crc press boca raton florida .gray de ( coordinating editor ) ( 1972 ( 1982 reissue ) ) american institute of physics handbook 3rd edition .mcgraw - hill book company new york .national institute of standards and technology ` http://webbook.nist.gov/chemistry/ ` .engineering tool box ` http://www.engineeringtoolbox.com/ ` .schaeffer cd , jr ( 1989 ) data for general inorganic , organic , and physical chemistry ` http://wulfenite.fandm.edu/data%20/data.html ` ; + the wired chemist ` http://wulfenite.fandm.edu/data%20/ ` .moore jh , spencer nd ( editors ) ( 2001 ) encyclopedia of chemical physics and physical chemistry .institute of physics publishing bristol & philadelphia .stull dr , westrum ef , sinke gc ( 1969 ) the chemical thermodynamics of organic compounds .john wiley & sons , new york .mills i , cvita t , homann k , kuchitsu k ( 1993 ) quantities , units and symbols in physical chemistry 2^nd^ edition .blackwell scientific publications oxford ; + iupac ` http://www.iupac.org/reports/1993/homann/index.html ` . de boor c ( 2001 ) a practical guide to splines .springer - verlag , new york ; schilling rj , harris sl ( 2000 ) applied numerical methods for engineers . brooks / cole , pacific grove , ca ; meir a , sharma a ( 1973 ) spline functions and approximation theory ( proceedings of a symposium , university of alberta , 1972 ) .basel , stuttgart , birkhuser ; gu c ( 2002 ) smoothing spline anova models .springer - verlag , new york .mcquarrie da ( 1976 ) statistical physics .harper & row publishers new york .herzberg g ( 1989 ) molecular spectra & molecular structure .krieger pub .co. malabar , fla .
we investigate the empirical data for the vapor pressure (154–196 k) and heat capacity (15.52–189.78 k) of solid carbon dioxide. the approach is both theoretical and numerical, using a computer algebra system (cas). from the numerical point of view, we have adopted a cubic piecewise polynomial representation of the heat capacity and reached excellent agreement between the available empirical data and the evaluated values. furthermore, we have obtained values for the vapor pressure and heat of sublimation at temperatures below 195 k, right down to 0 k. the key prerequisites are: 1) the determination of the heat of sublimation, 26250 j mol^-1^, at vanishing temperature, and 2) the elaboration of a 'linearized' vapor pressure equation that includes all the relevant properties of the gaseous and solid phases. it is shown that: 1) the empirical vapor pressure equation derived by giauque & egan remains valid below the assumed lower limit of 154 k (a similar argument holds for antoine's equation), 2) the heat of sublimation reaches its maximum value of 27211 j mol^-1^ at 57.829 k, and 3) the vapor behaves as a (polyatomic) ideal gas at temperatures below 150 k.
we consider a scalar field satisfying the helmholtz equation with frequency in . given a prescribed incident field , a non - singular solution of we are interested in the solution of where , for , , and equals inside the inhomogeneity and outside .we take the inhomogeneity to be a ball of radius .the coordinate system is chosen so that the inhomogeneity is centered at the origin . in other words we assume that both and are real and positive .we assume that the scattered field satisfies the classical silver - mller outgoing radiation condition , given by where , as usual .the purpose of this paper is to provide sharp estimates for the scattered field , for any contrast and any frequency .the norms we use to describe the scattered field are the following . given any , its restriction to the circle be decomposed in terms of the spherical harmonics given by , in the following way and can be measured in terms of the following sobolev norm for any real parameter . by density , this norm can be defined for less regular functions . for radiusindependent estimates , we shall use the norm it is easy to see that this norm is finite for a smooth with bounded radial variations . for a radial function , this is simply the sup norm for .finally , to document the sharpness of our estimates , we will provide lower bounds in terms of the semi - norms where are integers and is a real parameter .these norms are satisfy the following inequality and if for all , only has one non zero spherical harmonic coefficient , they are the natural extension of the norms introduced in for the two dimensional companion problem . when the incident field is a plane wave , where is a unit vector in , for all , whereas for any and , where for all and ( see ) .the motivation from this work comes from imaging . in electrostatics ,the small volume asymptotic expansion for a diametrically bounded conductivity inclusion is now well established , and the first order expansion has been shown to be valid for any contrast . it is natural to ask whether such expansions could also hold for non - zero frequencies , even in a simple case .another inspiration for this work is recent results concerning the so - called cloaking - by - mapping method for the helmholtz equation . in ,the authors show that cloaks can be constructed using lossy layers , and that non - lossy media could not be made invisible to some particular frequencies ( the quasi - resonant frequencies ) . within the range of non - lossy media , one can ask whether such ` cloak busting ' frequencies are a significant phenomenon , that is , would appear with non - zero probability in any large frequency set , or on the contrary if they are contained in a set whose measure tends to zero with .these questions were considered in two dimensions in . in this paperwe show that these results extend , after some adjustments , to the three dimensional case .the proofs presented in this paper are very similar to the ones of the two - dimensional paper , but we believe the results , more than their derivation , could be of interest to researchers in various areas of mathematics . in numerical analysisthey could be used as a validation test for broadband helmholtz solvers , since we provide both upper and lower bounds for the scattering data . in the area of small volume expansion for arbitrary geometries , or in the mathematical developments related to cloaking , they provide a best case scenario which can be used to document the sharpness of more general estimates . 
to make the results of this paper accessible to readers who are not familiar with bessel functions ,the main estimates are written in terms of the norms and introduced above , and powers of . because no unknown constant appears in the results , this paper can be used as a black - box if the reader wishes to do so .bessel functions do appear in one place , to describe quasi - resonances , but they turn out to be of the same nature as the two - dimensional ones , and so we refer to in that case . the main results of the paper are presented in section [ sec : main ] . the proofs are given in section [ sec : proofs ] .to state our results , we introduce the rescaled non - dimensional frequency , and the contrast factor given by the following theorem provides our estimates for either small frequencies or for any frequency .[ thm : estim - all ] for any , when there holds if is the first integer such that the spherical harmonic decomposition coefficient of is non zero for some , then holds for all .furthermore , for any , when , the scattered field also satisfies when there holds naturally , the variant of incorporating the more precise estimate given by for the fourier coefficient corresponding to also holds by linearity .it is easy to verify that the dependence on in is optimal by a taylor expansion around ( or ) for a incident wave with only one ( or two ) non - zero spherical harmonic coefficients for ( and ) .theorem [ thm : estim - all ] shows that this estimate is valid up to rescaled frequencies of order when , and of order when . to prove the optimality of these ranges , we define , for , and for , [ pro : lowerbounds ] when and , when , for any integer , in fact estimate holds if the supremum is taken in the set note that when . to illustrate the sharp contrast between what these estimates show when and when , let us consider the case of a plane wave .when , theorem [ thm : estim - all ] and proposition [ pro : lowerbounds ] show that for any there holds when , theorem [ thm : estim - all ] shows that for any , and for any , the combination of these two estimates do not provide a bound when for the near field .proposition [ pro : lowerbounds ] provides a lower bound in that case . for any and any , this lower bound grows geometrically with the upper bound of the interval of frequencies considered .in particular , for any , any , and , this unbounded behavior of the scattered field is due to the existence of quasi - resonant frequencies , just as in the two - dimensional case . to characterize these quasi - resonances ,bessel functions are required . 
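Ahead of the precise definitions given below, the quasi-resonance phenomenon can be illustrated numerically. The sketch assumes the standard mode-by-mode transmission problem for a penetrable ball — interior wavenumber sqrt(q)*omega, exterior wavenumber omega, with the field and its radial derivative matched at r = epsilon — which may differ in normalization from the coefficients defined later in the text; the mode index n = 1 and the contrast q = 100 are illustrative choices. For a large contrast, |R_1| exhibits sharp, narrow peaks (the quasi-resonances) at rescaled frequencies where it would otherwise be small.
....
# Numerical sketch of quasi-resonances for a penetrable ball.  lam is the
# rescaled frequency omega*epsilon; the assumed transmission conditions are
# continuity of the field and its radial derivative at r = epsilon.
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def reflection(n, lam, q):
    """Reflection coefficient of mode n at rescaled frequency lam, contrast q."""
    s = np.sqrt(q)
    j_e, dj_e = spherical_jn(n, lam),   spherical_jn(n, lam, derivative=True)
    y_e, dy_e = spherical_yn(n, lam),   spherical_yn(n, lam, derivative=True)
    j_i, dj_i = spherical_jn(n, s*lam), spherical_jn(n, s*lam, derivative=True)
    h_e, dh_e = j_e + 1j*y_e, dj_e + 1j*dy_e
    num = dj_e * j_i - s * dj_i * j_e
    den = s * dj_i * h_e - dh_e * j_i
    return num / den

# Sanity check: the reflection coefficient vanishes when there is no contrast.
assert abs(reflection(1, 1.0, 1.0)) < 1e-12

# For a large contrast, |R_1| shows sharp, narrow peaks (quasi-resonances).
lams = np.linspace(0.05, np.pi, 4000)
R1 = np.array([reflection(1, lam, 100.0) for lam in lams])
print(lams[np.argmax(np.abs(R1))], np.abs(R1).max())
....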
for , we denote by the hankel function of the first kind of order .the bessel functions of the first and second kind of order are given by , and .we denote by the m - th positive solution of .we denote by the m - th positive solution of .finally , we write the first positive solution of .[ def : qr ] for any , the triplet is called quasi - resonant if and the following proposition is proved in in the case when is an integer , but the proof is unchanged for any .[ pro : dixon ] for any and , in every interval there exists a unique frequency such that the triplet is quasi - resonant .there are no other quasi - resonances .in particular , no quasi - resonance exists in the interval , or when .since for any there is only a countable number of quasi - resonant triplets , one could hope that outside security sets around the quasi - resonant frequencies , the scattered field could be bounded from above , even in the near field .this means excluding a countable union of intervals : a trade - off occurs between how much in the near - field one wishes to go , and how large the set of authorized frequencies is .the theorem below is the result of such a trade - off .[ thm : broadband ] for all , all , , and ] and by where is the associated legendre polynomial .the reflection and transmission coefficients and are given by the transmission problem on the boundary of the inhomogeneity , that is , at .they are the unique solutions of which are and , after a simplification using the wronskian identity satisfied by and , in , and , the symbol is an equality if the right - hand - side is replaced by its real part , the fields being real . by a common abuse of notations ,in what follows we will identify and with the full complex right - hand - side . to verify that this is the correct solution , we need to check that and are well defined .the fact that there is a unique solution to problem satisfying the radiation condition is well known ( see e.g. ) . is non zero for all , and .assume , for contradiction , .+ then , as and do not have common zeroes ( see e.g. ) , either is non zero , in which case - has the following solution or is non zero , and - has the following solution both and would be solution of problem - without an incident wave .there is of course another solution to that problem , .since problem - is well posed , see , we have a contradiction .we chose the three ( semi-)norms , and because they are compatible with expansions , and .in particular , for any , we have and whereas the other norms are and for any and , the two dimensional results found in are easily translated into three dimensional ones for the following reason .[ pro:2d3d ] let be the reflection coefficient associated to problem posed in a disk of radius in dimension 2 , with the appropriate out - going radiation condition .this reflection coefficient is defined ( by the same formula ) when is an arbitrary positive real number , and is replaced by any real .then , for any and any there holds from this pointonwards , we use the short - hand to represent the number . for we introduce then , the reflection function introduced in two dimensional problem considered in is where . 
for any and any such that , we write the properties of the function were studied in for any .when is an integer , is the reflection coefficient associated to problem in dimension .note that the identities valid for any yield that we introduce the notations then , the reflection coefficient corresponding to is given by if we introduce for any , outside the zeroes of and , we can rewrite , when is not a zero of , or , from it follows that for any , where these functions are defined .this in turns implies that and this identity extends to the zeroes of , or by continuity .the following lemma then follows mostly from .[ lem:7.1 ] for any and , * for there holds * for we have * furthermore , when and when and , * when and , we have * when there exists such that the existence of satisfying follows from proposition [ pro : dixon ] .thanks to proposition [ pro:2d3d ] , and because for any , we have the inequalities , and are proved when in , lemma 7.1 .we will now check that holds when .from there holds since we have for all the zero is given the first positive solution of .it satisfies .we can thus consider only .we have from proposition [ pro:2d3d ] , for and using the wronskian identity satisfied by , and the recurrence relations satisfied by bessel functions , we obtain ( see for details ) that with note that , and have simple analytic formulae .for example in particular , it is easy to show that is decreasing on .when , we have the following bound for all . as a consequence , which concludes the proof of when . for any , and , we have where for .therefore to proceed , note that is negative and decreasing , and for , thus a taylor expansion shows that for , and , we obtain for all , which implies , and since . inserting in together with the values of and for and , we obtain for for all , it is known that for , in particular , for all , we have combining , and we obtain .we may now conclude the proof of theorem [ thm : estim - all ] . for conveniencewe write .formula shows that note that is decreasing ( see e.g. 13.74 ) , therefore for all , estimate in lemma [ lem:7.1 ] shows that when . inserting this bound in we obtain which is estimate . using instead , we obtain for and , inserting this bound in together with we obtain which proves since .let us now turn to .formula shows that when , since is decreasing we define two sets of indices , and . 
since is decreasing , estimate shows that for all , we have on the other hand , , therefore when , , and it is known ( see 13.74 ) that when , is an increasing function of with limit .furthermore , it is also known ( see 15.3 ) that for all therefore , since , we have on the other hand , as it is known ( see ) that for all this shows that we have obtained that , for , inserting this bound in , we obtain .to prove proposition [ pro : lowerbounds ] , we shall use the following intermediate result , related to bessel functions .[ lem : boundhn ] for any , there holds for any there holds for any , , and , there holds we prove this lemma below .we can now conclude the proof of proposition [ pro : lowerbounds ] .let us start with the case .starting as before from formula , we have using now the bounds in lemma [ lem : boundhn ] and in lemma [ lem:7.1 ] , we have therefore as claimed .we now turn to the case .note that for all , using the upper bound given in we see that .therefore , choosing for each the frequency given by lemma [ lem:7.1 ] , we have the conclusion follows from estimate .to estimate the maximum of , we proceed as follows .note that the maximal value occurs at the first positive solution of , which we will denote .we compute that it is known ( see e.g. proposition a.1 ) that for and , therefore .this implies that next , note that from , and the bound we have on the other hand , is an increasing function ( see 15.8 ) , therefore it is well known ( see e.g. ) that for all , .it is also known ( see 15.8 ) that is a decreasing function .note if then . therefore combining these two bounds we obtain that together with this shows that let us now turn to .it is known ( see ) that where is a universal constant , .let us first assume that , where is given by then , using we find it is known ( see ) that for all , therefore where since is increasing for , we have using the definition of , we see that the right hand side of this last inequality is an explicit function of , which is negative when .using , we see that for for all .thus for all , there holds next , we note that is minimal on at for all , where it equals .therefore and , using that we obtain let us now suppose that . since is decreasing when , we obtain an upper bound on by replacing by its lower bound and by its upper bound , given in .the resulting expression is an explicit increasing function of , with limit .it is then possible to verify by inspection on a finite range for that for all , this inequality being automatically satisfied when for example .the maximum occurs near . onthe other hand , and is decreasing , and see e.g. therefore when . the bounds and show that when , we have combining and we have obtained that holds for all , all and all . to conclude , note that using we have the proof of theorem [ thm : broadband ] follows the line of the proof of the corresponding result in the two - dimensional case proved in .the first step is the following proposition .[ pro : ito ] for any and , we define and then where is defined by and where is the set of all positive such that . 
furthermore , when , the same result holds for .first , note that lemma [ lem:7.1 ] shows that .furthermore , we have shown in the proof of theorem [ thm : estim - all ] that when , , thus .thirdly , using the bound , we see that when , we have on , is decreasing , and is a decreasing function of .therefore and we have obtained that next , it is known ( see , proposition 8.3 ) that when and for some , then when .the argument is simple .it turn out that by a simple calculus argument using the formula for , when , , and , , then . when , , , and therefore the inclusion holds .since , , and on , the same is true when .the proof of proposition [ pro : ito ] will be complete once estimate is established , for .since it is proved in proposition 8.2 for , we only need to consider the case , and .we have and . introducing {rcl } ( 0,\frac{\pi}{2})\setminus \cup_k \{k\pi/\lambda\ } & \to & \mathbb{r } \\ x & \to & \displaystyle \frac{g_{\frac{1}{2 } } \left(\lambda x\right)}{k_{\frac{1}{2 } } ( x ) } , \end{array}\ ] ] we have .\ ] ] we first verify that is one - to - one on , for small enough and large enough .differentiating we find when , we have therefore when and , we have , and since . finally , when , thus for any , we have obtained that for all . in particular , if $ ] , we have therefore since is increasing on , and , where is a convenient choice ( but any number smaller than and greater than zero would do to write the riemann sum ) .the distance between two distinct positive solutions of is at least , therefore since , and we have obtained that and this concludes our proof .the second step is to use proposition [ pro : ito ] to derive an estimate for the scattered field , for large enough contrast .[ lem : highcontrast - lemma ] suppose .let be the following decreasing function of the contrast given , for any such that there exists a set depending on and such that and , for any recall that we established in , that for all we have for , and , let be given by if , we have thanks to proposition [ pro : ito ] from proposition [ pro : ito ] , we also know that to conclude , note that the set of excluded frequencies for is . we can now conclude the proof of theorem [ thm : broadband ] .when , theorem [ thm : estim - all ] implies theorem [ thm : broadband ] , with .when .theorem [ thm : estim - all ] shows that for all , we have so we can again select .suppose now , with .then , and we can apply lemma [ lem : highcontrast - lemma ] .choosing we have for all , and there exists a set depending on and such that for any , the size of the set is bounded by since is decreasing when .this work was completed in part while george leadbetter and andrew parker were visiting oxpde during a summer undergraduate research internship awarded by oxpde in 2011 , and they would like to thank the centre for the wonderful time they had there . c. mller , _ foundations of the mathematical theory of electromagnetic waves _ , revised and enlarged translation from the german .die grundlehren der mathematischen wissenschaften , band 155 , springer - verlag , new york , 1969 .mr 0253638 ( 40 # 6852 ) h .-nguyen and m. s. vogelius , _ a representation formula for the voltage perturbations caused by diametrically small conductivity inhomogeneities .proof of uniform validity _ , ann .h. poincar anal .non linaire * 26 * ( 2009 ) , no . 6 , 22832315 .mr 2569895 ( 2011f:78003 ) f. w. j. olver , d. w. lozier , r. f. boisvert , and c. w. clark ( eds . 
) , _ nist handbook of mathematical functions _department of commerce national institute of standards and technology , washington , dc , 2010 , with 1 cd - rom ( windows , macintosh and unix ) . mr 2723248
we consider the solution of a scalar helmholtz equation where the potential ( or index ) takes two positive values , one inside a ball of radius and another one outside . in this paper , we report that the results recently obtained in the two dimensional case in can be easily extended to three dimensions . in particular , we provide sharp estimates of the size of the scattered field caused by this ball inhomogeneity , for any frequencies and any contrast . we also provide a broadband estimate , that is , a uniform bound for the scattered field for any contrast , and any frequencies outside of a set which tends to zero with .
metastable structures are found in a variety of physical systems , _ e.g. _ glasses , amorphous solids of colloids , stalk intermediate structures in biological membrane fusion process , and so on .metastable states correspond to regions of the local free energy minima in phase space . in conventional simulation of the canonical ensemble ,once the system is captured in these regions , the system often remain in these non - equilibrium states for enormously long time .such metastable states are frequently found in macromolecular and colloidal systems , and make the simulation studies on equilibrium states difficult . even if the system can escape from the non - equilibrium states to the ordered equilibrium state , the constant number of particles and the constant system size cause defects in the ordered structure .this situation breaks , in a global scale , the anisotropy of the ordered structure and forces the periodicity of the ordered structure to be commensurate to the system size . in order to find defect - freeequilibrated ordered structures , we should finely tune the system box size as well as the number of particles so that both the anisotropy and the periodicity of the ordered structure , which are not known _ a priori _ , are not violated by the periodic boundary conditions .this fine tuning for the ordered structure is in general a tedious task .advanced simulation techniques , _e.g. _ multicanonical ensemble method and umbrella sampling , allow us to almost homogeneously sample the whole phase space at constant and , with the help of artificial weights that reduce the occurrence probability of non - equilibrium states .however , microscopic states in equilibrium , sampled by these advanced techniques , are restricted to microstates with the given set of constant and constant .free energy landscapes of the system with different sets of and at the same particle density are not searched .these extensive variables simultaneously need manual fine tuning for the purpose of finding the most stable state in these free energy landscapes .for example , both and of perfect crystals should be integer multiples of the unit structure .furthermore these advanced methods also require an advanced programming and large amounts of complicated preparation , _e.g. _ accurate calculation of free energy and the precise adjustments of the artificial weights , prior to the production simulation runs .in addition , such unphysical sampling processes with the artificial weights make it difficult to trace physical trajectories in the phase space . herewe devise molecular monte carlo simulation method of systems connected to three reservoirs ( hereafter we call it `` three - reservoirs method '' ) , chemical potential , pressure , and temperature , for seeking the most stable states of the target systems , _i.e. _ the equilibrium structures . due to gibbs - duhem equation , where is the entropy and is the volume , the number of reservoirs is thermodynamically limited to no more than 2 for single component systems .however , we connect the three reservoirs in order to overcome the above difficulties of the other conventional and advanced simulation techniques . in order to perform this ,we introduce a method for adjusting these three reservoirs and obtaining thermodynamically stable states . 
the total number of particles and the system box size are additional degrees of freedom of the system connected to these three reservoirs .these additional degrees of freedom correspond to additional dimensions of the phase space , which provide shortcuts from the non - equilibrium state to the equilibrium state .in addition , unlike the other simulation techniques utilizing 2 or fewer reservoirs , these degrees of freedom allow the system itself to simultaneously tune and , so that the system reaches the true equilibrium ordered structure . furthermore , our method requires fewer efforts for the preliminary simulation prior to the production simulation runs .guggenheim formally introduced boltzmann factor ( statistical weight ) of the ensemble with the three reservoirs .prigogine and hill also studied the same ensemble later .these early works , however , focused on mathematical aspects of the partition function , _i.e. _ mathematical formalism of the ensemble , since their goal was to discover a universal and generalized expression for a partition function applicable to any thermodynamically acceptable ensembles .by contrast , physical aspects of the ensemble were wholly left for the future . in the present work ,we study the physical aspects of this ensemble intuitively and thought - experimentally .in addition , we also analytically solve maximization problems of entropy densities , which corroborates our intuitive and thought - experimental study . finally , we design three - reservoirs method based on these physical aspects of the ensemble and show simulation results on non - trivial globally - anisotropic defect - free ordered structures of colloidal systems . in spite of gibbs - duhem equation, a system connected to the three reservoirs can be realized in experiments .for example , we can imagine a system that obeys the grand canonical ensemble ( _ i.e. _ constant , , and ) , and replace one of its walls with a free piston facing to a reservoir of pressure , _ i.e. _ the 3rd reservoir . 
in the present article ,we theoretically construct thermodynamics and statistics of the system connected to the three reservoirs .based on euler equation of thermodynamics , we also propose a method for measuring entropy and free energy directly from the simulations of the systems connected to the three reservoirs .this measurement needs fewer computational efforts than the other free energy calculation methods using molecular simulation .we design the algorithms of the three - reservoirs method based on conventional monte carlo ( mc ) simulation methods of the grand canonical ensemble ( -ensemble ) and the isothermal - isobaric ensemble ( -ensemble ) .we give a brief description of these conventional molecular mc techniques in appendix [ sec : molecularmctechniqueinmuvtandnpt - ensembles ] .thermodynamics , statistical mechanics , and simulation methods of the system with the three reservoirs are studied in section [ sec : muptensemble ] .finally , we summarize the present work in section [ sec : conclusions ] .here we discuss the basic formalism of the three - reservoirs method , where the method for adjusting the three reservoirs is also developed .thermodynamic properties of this system are discussed in section [ subsec : thermodynamicsinmuptensemble ] .a microscopic formulation of three - reservoirs method based on statistical mechanics will be given in section [ subsec : statisticalmechanicalpropertiesofparticlesinmuptensemble ] , where we solve the maximization problem of the statistical entropy per volume .algorithms for the simulation based on this statistical formulation are constructed in section [ subsec : mcsimulationmethodinmuptensemble ] .simulation results to demonstrate efficiency and stability of three - reservoirs method are given in section [ subsec : examinationof3-reservoirsmethod ] .finally , a new and simple technique to measure the entropy and the free energy of the system is proposed in section [ subsec : entropyandfreeenergycalculationinmuptensemble ] . as an example of thermodynamic systems , we consider a gas contained in a diathermal box with a free piston .this box is placed in an environment at constant and constant .these two intensive variables determine the other intensive variables of this system , _e.g. _ the chemical potential , the number density of particles , and the free energy per particle .this means that thermodynamic degrees of freedom of this system are equal to 2 , which results from gibbs - duhem equation , eq . .in conventional simulation methods , is also fixed at some value ( -ensemble ) , whereas states and phases of the system are independent of , a change of only scales the extensive variables of the system . in other words ,phase diagrams constructed in -plane are independent of the extensive variables . instead of fixing this insignificant , we connect this system to a reservoir of , whose value is determined by through gibbs - duhem equation , eq . .since gibbs - duhem equation is satisfied , this third reservoir does not affect the equilibrium state of the system .this condition corresponds to a thermodynamically stable point based on gibbs - duhem equation , which results in an equation of state that links , and . at a thermodynamically stable point of this ensemble , the extensive variables of the system , _e.g. _ and , are freely scaled , _i.e. _ indeterminate and fluctuating , while the system keeps all the intensive variables fixed . 
in simulating systems connected to the three reservoirs at the thermodynamically stable points , we can choose simulation runs at small , which are computationally advantageous .gibbs free energy per particle , which must be minimized in -ensemble before adding the 3rd reservoir of constant , is unchanged even after this 3rd reservoir is connected to the system . in the above example, is given from the outside of the system ; is adjusted according to and connected to the system as an additional reservoir .two other combinations and , and and also work in a similar manner .therefore , in addition to gibbs free energy per particle , both grand potential per volume and the thermodynamic potential of -ensemble per , _ i.e. _ , are simultaneously minimized in the system connected to the three reservoirs , where and are entropy and internal energy respectively .gibbs - duhem equation explains this simultaneous minimization of the 3 free energy densities .when we add the 3rd reservoir to the system and set the system at the thermodynamically stable point , the intensive variable of the 3rd reservoir needs to be adjusted , in advance of the connection of this additional reservoir . through this adjustment of the 3rd intensive variable ,the corresponding free energy density is minimized .the same system at the same thermodynamically stable point is also constructed by the two other combinations of the 3 intensive variables . in other words , , , and -ensembles simultaneously underlie the ensemble with the three reservoirs .this results in the simultaneous minimization of the 3 free energy densities . in the other ensembles ,however , any sets of corresponding 3 external parameters , _e.g. _ in the canonical ensemble ( -ensemble ) , can be selected arbitrarily and the adjustment of the external parameters is not demanded .therefore , the corresponding free energy , _ e.g. _ helmholtz free energy in -ensemble , is minimized in a conventional ensemble , whereas other free energies are not minimized due to the absence of underlying ensembles .a system connected to the three reservoirs is sketched in fig .[ fig : sketchmuptreservoirs ] .the reservoirs 1 and 2 define the values of and of the system respectively .the particle density of these 2 reservoirs , denoted by and respectively , determine the and .the system and the reservoirs 1 and 2 are connected to a thermostat at , the reservoir 3 . in appendix[ sec : molecularmctechniqueinmuvtandnpt - ensembles ] , we give a derivation of this , and its explicit expression is given in eq . . ,where of the system is a dynamic variable . between the reservoir 2 and the system ,a free piston is placed . this piston moves and changes the volume to fix the pressure of the system at .the system and these reservoirs are connected to a thermostat at , the reservoir 3 . ] as another simple example , we thermodynamically consider a system composed of a single - component ideal gas . in this example , we assume that the reservoirs are also composed of the same ideal gas . at the thermodynamically stable point ,a relation holds and the particle number density of the system , , also equals these particle number densities of the reservoirs ; _ i.e. _ . 
however , when , both and diverge , since the reservoir 1 continues to increase of the system aiming for a large value and the reservoir 2 increases aiming for a small value of .when , both and vanish .therefore , the system reaches equilibrium only at the thermodynamically stable point .in other words , outside the thermodynamically stable point , the system is always in non - equilibrium and both the intensive and the extensive variables are indeterminate .these results also apply to systems composed of interacting particles ( _ e.g. _ non - ideal gases ) .we utilize these divergence and vanishment of the target system as a criterion for the equilibration , which can be used for the automatic adjustment of the three intensive variables , _e.g. _ by a bisection method , to determine the thermodynamically stable point .the system quickly diverges or vanishes outside the vicinity of the thermodynamically stable points .the speed of the divergence and the vanishment increases with the difference in the intensive parameter sets from the thermodynamically stable point .on the other hand , when we need a long simulation run in the vicinity of the thermodynamically stable point , indeterminate could cause a computational problem , since the extensive variables could become extremely large or vanish .however , an appropriate choice of and makes the system last for a long time , within which good statistics of simulation results , _ e.g. _ particle density and lattice constants of crystals , are obtained .we can determine such simulation results with accuracy enough to obtain the equilibrated structure of the system with the three reservoirs .if we need a far longer simulation run , the ensemble can be switched to one of the conventional methods , _e.g. _ -ensemble or -ensemble . as these switched ensemblesare free from the problem of the indeterminate , they allow us to perform a longer simulation run of the equilibrated structure obtained via three - reservoirs method . in the present article, we tentatively call the ensemble of the systems connected to the three reservoirs -ensemble , since this is obtained as an equilibrium condition between the three intensive variables and as a combination of , , and -ensembles . herethe statistical mechanical properties of particles in -ensemble are discussed . according to gibbs - duhem equation, the thermodynamic potential of -ensemble is identically equivalent to zero in the thermodynamic limit , whereas systems obeying this ensemble have certain degrees of freedom in the phase space .this means that boltzmann factor fluctuates in statistical mechanics and that the partition function of this ensemble , _ i.e. _ summation of statistical weights over the whole phase space , is defined .this boltzmann factor is calculated in the present section as a natural extension of those for and -ensembles , and determines detailed balance conditions necessary for designing three - reservoirs simulation method . for determining this boltzmann factor, we solve a maximization problem of the statistical entropy per volume in -ensemble .the statistical entropy is defined as , where the suffix , , represents microstates of the system , denotes corresponding occurrence probability , and is boltzmann constant . because the extensive variables of -ensemble is indeterminate ,we choose the entropy density to be maximized . 
with the use of eq ., the statistical entropy per volume is given by , the equilibrium probability distribution , , which maximizes eq .under the constraints in -ensemble , is determined by : where is the thermal average specified according to the reservoirs .the constraint of eq . represents the normalization condition .the two constraints given by eq . instead of three constraintsare due to the thermodynamic degrees of freedom , which equal 2 . with the use of lagrange multipliers , , , and ,this maximization problem is reduced to , .\end{aligned}\ ] ] this reduced problem , eq ., is solved by a partial derivative with respect to .the solution is : ,\ ] ] where , , and .equations - , and give the statistical entropy per volume determined as thermal average as : equating eq . with the thermodynamic relation for the thermodynamic potential of -ensemble , denoted by , obtain : in the thermodynamic limit , the right - hand side of eq .is essentially zero compared with the other extensive variables in eq . .this result coincides with the euler equation in thermodynamics .furthermore , for finite , the last term of eq . is monotonically increasing with decreasing .this indicates that due to the principle of maximizing entropy a finite size system as is used in the computer simulation has a tendency to shrink even at the thermodynamically stable point . finally , from eq ., the probability distributions of the microstates are obtained as , .\ ] ] we confirmed that these results are also obtained by maximizing the statistical entropy per particle or the statistical entropy per internal energy instead of the statistical entropy per volume as has been done in eq . .the boltzmann factor of -ensemble is , ,\ ] ] where denotes the total kinetic energy of the system , the potential energy of the system , and is the spatial coordinates of the particle . as we have seen above ,this is a result of a natural extension of and -ensembles .this boltzmann factor determines the statistical properties of systems equilibrated with the three reservoirs and is equivalent to the boltzmann factor of -ensemble at fixed , and is equivalent to the boltzmann factor of -ensemble at fixed .guggenheim formally derived the boltzmann factors ( statistical weight ) of various ensembles . assuming that the ensemble averages of extensive variables , e.g. and , were determined in -ensemble, guggenheim also introduced statistical weight of this ensemble , based on analogy between other conventional ensembles .however , in the calculation of the maximization of entropy density discussed in section [ subsec : statisticalmechanicalpropertiesofparticlesinmuptensemble ] , guggenheim s assumption corresponds to keeping the averages and fixed , instead of the constraints eq . .guggenheim s assumption contradicts the indetermination of the extensive variables , as was pointed out by prigogine and further discussed by sack .prigogine showed that the summation of the resulting boltzmann factor over the phase space diverges and therefore concluded that the resulting partition function of -ensemble does not have any physical meanings .the thermodynamic potential of -ensemble , , determined from such partition function could be indefinite while it should identically equal zero in the thermodynamic limit .the true thermodynamic potential that dominates this ensemble , similar to the helmholtz free energy in -ensemble , remains unknown since guggenheim s article . 
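For orientation, the statistical weight obtained above can be collected into a single expression (using the same notation as the appendix, with K the total kinetic energy, U the total potential energy, and beta = 1/k_B T):

\[
W(\Gamma; N, V)\;\propto\;\exp\!\bigl[-\beta\,\bigl(K + U + P V - \mu N\bigr)\bigr]
\;\;\longrightarrow\;\;
\frac{1}{\varlambda^{3N}\,N!}\,\exp\!\bigl[-\beta\,\bigl(U + P V - \mu N\bigr)\bigr]
\]

after the momenta are integrated out, which is the configurational form that reappears in the appendix on ensemble averages at each N and V. Holding V fixed it reduces to the usual muVT weight, and holding N fixed it reduces to the NPT weight, as stated above.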
in the following, we will answer prigogine s criticism and give an explicit expression of the thermodynamic potential for -ensemble . in conventional ensembles ,the statistical weight takes non - zero values only in the vicinity of the averages or in the phase space . outside this vicinity, the statistical weight quickly decreases to zero .this suppresses the divergence of the partition functions , _i.e. _ the summation of the statistical weights , in the case of conventional ensembles . on the other hand , in -ensemble , there is no such limitation because of the indeterminate extensive variables .the statistical weight of -ensemble at each microstate keeps non - zero values at any or .this results in the divergence of the partition function of -ensemble .however , ratios of the statistical weights between any pair of microstates are still defined .this feature guarantees the physical validity of -ensemble . in this case , the trajectory of the system in the phase space is similar to a free random walk in infinitely large space without boundaries .moreover , in the present study , equations and calculated with the constraints eqs . and indicate that the summation of the statistical weights equals for -ensemble .therefore , the corresponding thermodynamic potential , eq . , is negligibly small compared with the other extensive variables in eq . and vanishes in the thermodynamic limit .furthermore , by the thermodynamic consideration given in section [ subsec : thermodynamicsinmuptensemble ] , we have shown that -ensemble is obtained by combining 3 underlying ensembles each with 2 reservoirs , _i.e. _ , , and -ensembles .this thermodynamic consideration means that free energy densities of these underlying ensembles , _i.e. _ gibbs free energy per particle , grand potential per volume , and , are simultaneously minimized in -ensemble , rather than .this corresponds to the minimization of helmholtz free energy in -ensemble .the present simulation method is constructed based on conventional simulation methods in the grand canonical ensemble ( -ensemble ) and -ensemble .the simulation algorithms of the particle insertion and deletion in -ensemble ( see appendix [ subsec : mcsimulationmethodinmuvtensemble ] ) and the system size change in -ensemble ( see appendix [ subsec : mcsimulationmethodinnptensemble ] ) are directly utilized in our method .this compatibility between the present method and the conventional methods demonstrates that the algorithms of the present method satisfy detailed balance condition .one simulation step of the present method is composed of the following 4 trial steps : a. with probability , trial particle insertion into the system , b. with probability , trial particle deletion from the system , c. with probability , trial system size change , d. with probability , trial displacement by metropolis algorithm _i.e. _ perturbation of one particle , is chosen , where and are constants fixed in an interval . with the use of the simulation algorithms of trial particle insertion and deletion in -ensemble , the insertion and deletion of the present method ,steps i ) and ii ) , are performed . 
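The step-selection logic of i)-iv) can be sketched in a few dozen lines. The code below is a schematic illustration only, not the article's implementation: it works in a two-dimensional square box, recomputes the total energy by brute force, absorbs the thermal de Broglie wavelength into a dimensionless chemical potential as is done later in the article, and uses textbook muVT and NPT acceptance rules (with a random walk in ln V for the box move); all function and parameter names are ours. In practice one would wrap such a step in short runs and, as described further below, adjust mu (or P) toward the thermodynamically stable point by bisection, using whether N diverges or vanishes as the search criterion.

import math
import random

def mupt_step(coords, box, mu, P, beta, u_pair,
              p1=0.1, p2=0.1, dmax=0.1, dlnv=0.01):
    # One schematic muPT Monte Carlo step in a 2D periodic square box of side
    # `box`. coords is a list of (x, y) tuples and u_pair(r) is the pair
    # potential; the thermal de Broglie wavelength is absorbed into mu.
    N = len(coords)
    V = box * box

    def utotal(cfg, L):
        # brute-force total pair energy with the minimum-image convention
        e = 0.0
        for i in range(len(cfg)):
            for j in range(i + 1, len(cfg)):
                dx = abs(cfg[i][0] - cfg[j][0]) % L
                dy = abs(cfg[i][1] - cfg[j][1]) % L
                e += u_pair(math.hypot(min(dx, L - dx), min(dy, L - dy)))
        return e

    def accept(log_acc):
        return log_acc >= 0.0 or random.random() < math.exp(log_acc)

    u_old = utotal(coords, box)
    r = random.random()
    if r < p1:                                   # i) trial insertion (muVT rule)
        new = coords + [(random.uniform(0, box), random.uniform(0, box))]
        du = utotal(new, box) - u_old
        if accept(math.log(V / (N + 1)) + beta * (mu - du)):
            coords = new
    elif r < 2 * p1 and N > 0:                   # ii) trial deletion (muVT rule)
        j = random.randrange(N)
        new = coords[:j] + coords[j + 1:]
        du = utotal(new, box) - u_old
        if accept(math.log(N / V) - beta * (mu + du)):
            coords = new
    elif r < 2 * p1 + p2:                        # iii) trial box rescaling (NPT rule)
        newbox = box * math.exp(0.5 * dlnv * random.uniform(-1.0, 1.0))
        scale = newbox / box
        new = [(x * scale, y * scale) for x, y in coords]
        du = utotal(new, newbox) - u_old
        dv = newbox * newbox - V
        if accept(-beta * (du + P * dv) + (N + 1) * math.log(newbox * newbox / V)):
            coords, box = new, newbox
    elif N > 0:                                  # iv) Metropolis displacement of one particle
        j = random.randrange(N)
        x, y = coords[j]
        cand = ((x + random.uniform(-dmax, dmax)) % box,
                (y + random.uniform(-dmax, dmax)) % box)
        new = coords[:j] + [cand] + coords[j + 1:]
        du = utotal(new, box) - u_old
        if accept(-beta * du):
            coords = new
    return coords, box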
during this particle exchange between the system and the reservoir 1 ,the system size is fixed .this particle exchange in the present method satisfies the detailed balance condition because it is guaranteed in -ensemble .the trial system size change , step iii ) , is performed with use of the simulation algorithms in -ensemble , during which is fixed .this system size change in the present method also satisfies the detailed balance condition . unlike the conventional mc simulations in -ensemble based on mcdonald s method , , and independently changed in the present method . the trial move of one particle , step iv ) ,is performed by metropolis algorithm in -ensemble , which also satisfies the detailed balance condition .therefore , the present mc simulation method for -ensemble fulfills the principle of detailed balance .see also section [ subsubsec : detailedbalanceconditionandergodicityof3-reservoirsmethod ] .our algorithm indicates that a short - time average of an intensive physical quantity in -ensemble is approximated by ensemble averages of and -ensembles at corresponding and , which is discussed in appendix [ sec : ensembleaverageateachnandvinmuptensemble ] .our simulation is performed in a rectangular system box with independently changing system size . since any crystals fit into rectangular boxes with periodic boundary conditions, we do not have to introduce parrinello - rahman method , which allows the change in the shape of the simulation box .our simulation algorithm is similar to gibbs ensemble technique , which is utilized for simulation of phase equilibria in -ensemble , where the phases coexisting in the same system box in -ensemble exchange both volume and particles .a system connected to the three reservoirs at a thermodynamically stable point corresponds to this gibbs ensemble , when one of the coexisting phases in gibbs ensemble is assumed to be infinitely large that plays the role of the two reservoirs and . in this section, the detailed balance condition and the ergodicity of three - reservoirs method are discussed .steps i ) and ii ) change according to the detailed balance condition as in the same way that the -ensemble does .step iii ) changes according to the detailed balance condition as in the -ensemble .the particle coordinates are updated in step iv ) , by the standard metropolis algorithm of -ensemble . as a result ,three - reservoirs method satisfies the principle of detailed balance and ergodicity , based on the conventional ensembles which fulfill ergodicity .this indicates that , , and the particle coordinates are simultaneously updated in the phase space , so that the system realizes the equilibrium state .this also means that statistics of -ensemble contradicts none of the three underlying ensembles with 2 reservoirs , _i.e. _ , , and -ensembles . in this section, we demonstrate the efficiency and stability of the three - reservoirs method using several examples .the first example is a model polymer - grafted colloidal system , which was observed to show various exotic metastable phases each of which has a long life time .colloidal particles are made from metals , polymers , _ etc ._ and are often modeled with hard spheres . 
on the other hand ,owing to van der waals attraction acting on particle surfaces , colloids are aggregating and make precipitates after a long time .for the purpose of stabilizing colloidal dispersions against the precipitation , linear polymer chains are often grafted onto the surfaces of the colloids .these are called polymer - grafted colloids . depending on the physical and chemical properties of the grafted polymers , interaction between polymer - grafted colloidschanges significantly .polymer - grafted colloids have several industrial applications due to such useful characteristics , for example filler particles immersed in a polymer matrix , and the particles in electro and magnet rheological fluids . in our previous work , we studied the phase behavior of colloidal particles onto which diblock copolymers are grafted .pair interaction potential between these polymer - grafted colloids was numerically determined via self - consistent field calculation as a function of the distance between centers of the particles , .this potential has been approximated by spherically symmetrical repulsive square - step potential with a rigid core of diameter and a square - step repulsive potential of diameter and height as : where the step potential is originated from the grafted polymer brushes .this interaction potential is purely repulsive . simulating particles interacting via in -ensemble ,we have studied phase behavior of these colloidal systems .these mc simulation results show that , at low temperature , high pressure , and , our particles self - assemble into string - like assembly .the positions and the mean - square displacement of the particles show that the string - like assembly is observed in disordered solid phases .actually , such a string - like assembly has been observed recently in experiments .in addition to such string - like assembly , various structures , _ e.g. _ dimers and lamellae , and also glass transition are observed in the same model system .it was also shown that the particles interacting via continuous repulsive potential similar to the above show these string - like and other various assemblies . in these recent studies using -ensemble at finite in both 2 and 3-dimensions, it was shown that this string - like assembly with a local alignment in the same direction but with a global isotropy is a metastable structure .although a variety of ground states of the same model system at zero temperature have been discovered via genetic algorithms , equilibrium states at finite temperature have not been understood yet .we simulate the equilibrated states of our model system at finite via three - reservoirs method . in the present simulation work , and taken as the unit length and the unit energy , respectively .simulation is performed on 2 dimensional systems .we define dimensionless chemical potential as , the thermal de broglie wave length , defined in eq . in appendix[ sec : molecularmctechniqueinmuvtandnpt - ensembles ] , is removed from in eq ., since simulation results are independent of , which is discussed in appendix [ sec : molecularmctechniqueinmuvtandnpt - ensembles ] . in the present article, is fixed .in the initial state , particles are arranged on a homogeneous triangular lattice in a square system box , , with the periodic boundary condition . for the trial move of the particles , _i.e. _ metropolis algorithm , a particle is picked at random and given a uniform random and isotropic trial displacement within a square whose sides have length . 
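The pair potential just described takes only a few lines; the sketch below uses illustrative names and default values (sigma for the rigid-core diameter, sigma_step for the outer step diameter, eps for the step height), which are not the article's parameters, and it can be passed as u_pair to the step routine sketched in the previous section.

import math

def square_step_potential(r, sigma=1.0, sigma_step=2.0, eps=1.0):
    # Spherically symmetric repulsive square-step potential: a rigid core of
    # diameter sigma plus a square repulsive shoulder of height eps out to
    # sigma_step, modelling the grafted polymer brushes; zero beyond that.
    if r < sigma:
        return math.inf
    if r < sigma_step:
        return eps
    return 0.0

When beta times eps is large the shoulder becomes effectively impenetrable and the model reduces to hard disks of diameter sigma_step, which is the limit exploited in the discussion of the Alder transition below.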
is fixed at .the probability and . with this ,the computational time for the simulation is about twice as long as the simulation in -ensemble. 1 monte carlo step ( mcs ) is defined as simulation steps .the mersenne twister algorithm is adopted as a uniform random number generator for our simulation . in our previous -ensemble simulation with the potential step width , , we found that the string length diverges at and . at this low , using three - reservoirs method , we simulate the system . in the present simulation , the density is initially set at 0.451 for system .first , simulating the system at various values of , we search for the thermodynamically stable point at fixed .the given is , 1 ) : , 2 ) : , 3 ) : , 4 ) : , and 5 ) : . as an example, a snapshot of the system at 1 ) is presented in fig .[ fig : frog2dmupts20snapshots](a ) . despite different ,all the systems of 1 ) to 5 ) show similar well - aligned globally - anisotropic defect - free string - like assembly , though the system in -ensemble at and presents the globally - isotropic string - like assembly as is shown in fig .[ fig : frog2dmupts20snapshots](c ) . only small and short - lived defects caused by the thermal fluctuationcan be generated in the systems simulated with -ensemble , whereas many long - lived defects are observed in simulation .time evolutions of and at 1 ) to 5 ) are given in fig .[ fig : frog2dmupts20t012forarticle - rhoandn ] . all the data for the time evolution of shown in fig .[ fig : frog2dmupts20t012forarticle - rhoandn](a ) are fluctuating in the vicinity of , regardless of .it would be worth noting that the simulations started from different initial conditions , _e.g. _ different initial particle density and different system aspect ratio of the simulation box , also reach the same results as long as the intensive variables of the reservoirs , , , and are the same . furthermore , outside this range of from 1 ) to 5 ) , the system diverges ( ) or vanishes ( ) just after the simulation starts .these results illustrate that this region of is located in the vicinity of the thermodynamically stable point at this pressure , . , ( a ) , and time evolution of , ( b ) , simulated at and various , .lines 1 ) to 5 ) are simulation results at .1 ) : .2 ) : .3 ) : .4 ) : .5 ) : . lines 6 ) to 8) are simulation results at .6 ) : .7 ) : .8) : . ]on the other hand , time evolution of , plotted in fig .[ fig : frog2dmupts20t012forarticle - rhoandn](b ) , indicates that the total system size depends on in a systematic manner . at small mcs , line 1 )indicates the tendency of the vanishment and lines 2 ) to 5 ) the tendency of the divergence .this means that , in a short computational time , the thermodynamically stable point is expected to lie between 1 ) : and 2 ) : , which corresponds to a relative error of some percent .the abrupt time evolution of stops at less than mcs , whereas the system slowly continues diverging or vanishing at larger mcs .the mcs needed for the divergence or the vanishment becomes larger when we choose parameter sets close to the exact values at the thermodynamically stable point .this is utilized as a criterion for measuring the convergence of the thermodynamic intensive variables in -ensemble .although we stop the simulation with this relative error of some percent , the accuracy of the thermodynamically stable point could be raised , _e.g. 
_ by the bisection method improving the accuracy of .lines 6 ) to 8) in fig .[ fig : frog2dmupts20t012forarticle - rhoandn ] show the results of a similar series of simulations for a fixed value of ( the same value as that for line 2 ) ) and changing the value of .this is consistent with the thermodynamically stable point at obtained above within the relative error of some percent .the given of lines 6 ) to 8) ranges to 1.1 . a snapshot of the system at 8) is presented in fig . [ fig : frog2dmupts20snapshots](b ) .despite different intensive parameter sets , all the systems of 6 ) to 8) show the globally - anisotropic defect - free string - like assembly similar to fig .[ fig : frog2dmupts20snapshots](a ) .long - lived defects are absent in these systems .all the lines 6 ) to 8) in fig .[ fig : frog2dmupts20t012forarticle - rhoandn](a ) are fluctuating around lines 1 ) to 5 ) , _ i.e. _ the vicinity of .for the values of outside this range , quickly diverges or vanishes . from these data , we recognize that the present thermodynamically stable point lies between 7 ) : and 8) : .this thermodynamically stable point is equal to the one we obtained above within a relative error of some percent , as is expected .once the thermodynamically stable point is determined accurately in -ensemble , we can exchange the ensemble to a conventional one , _ e.g. _ -ensemble or -ensemble , and can perform a longer simulation run , which is free from the divergence or the vanishment of .as is expected , we can perform longer simulation runs even with the -ensemble if the thermodynamically stable point is determined with higher accuracy .next , we check the stability of our simulation method . starting from the instantaneous microstate shown in fig .[ fig : frog2dmupts20snapshots](a ) , we resume simulation runs after disconnecting the reservoirs 1 or 2 , which corresponds to simulations with or -ensemble . snapshots of the system and the time evolution of in these resumed simulation runs are given in figs .[ fig : frog2dmupts20snapshotsinvariousensembles ] and [ fig : frog2dmupts20rd001e14p10t012innptmuvt - rho ] respectively .both of these simulation runs preserve the same string - like assembly . in -ensemble case, a value of that is close to that for three - reservoirs method is also obtained .therefore , the equilibrium state obtained via three - reservoirs method does not change even after the ensemble is switched .these results justify our method for determining the thermodynamically stable point with -ensemble . .black dots represent the centers of the particles and grey lines denote networks of overlaps between the particles . from the instantaneous state of fig .[ fig : frog2dmupts20snapshots](a ) , simulations are resumed after disconnecting the reservoir 1 or 2 , i.e. simulations are resumed with or -ensemble .( a ) : simulation result at mcs , after resuming the simulation with -ensemble .( b ) : simulation result at mcs , after resuming the simulation with -ensemble . ] in simulations with and -ensembles that are resumed from the instantaneous state shown in fig .[ fig : frog2dmupts20snapshots](a ) . in these simulation runs ,the number of the particles is with minor fluctuations . for reference ,the result of the simulation with -ensemble is also shown . 
] here , one can recognize occurrence of a few defects and undulation of the string - like assemblies after switching the ensembles .this should be attributed to the difference in the nature of the thermal fluctuations of a finite system between different ensembles .for example , when a particle is removed from a perfect triangular crystal in -ensemble and simultaneously the ensemble is exchanged to -ensemble , long - lived defects are created in the crystal .figure [ fig : frog2dmupts20snapshotsinvariousensembles](a ) shows this finite size effect . in the thermodynamic limit where the system size becomes infinitely large, such a difference should vanish .we also recognize a slight drop in in -ensemble by approximately 2% .this is also related to the finite size effect of the -ensemble , which will be further discussed in section [ subsubsec : aldertransitionoftheoutercores ] .the undulation of the string - like assembly shown in fig .[ fig : frog2dmupts20snapshotsinvariousensembles](b ) corresponds to zigzag instability typically observed in convection rolls in a fluid slab when its natural periodicity is suddenly changed .this zigzag instability is consistent with the slight drop in , _i.e. _ a slight increase in the system size , in -ensemble shown in fig .[ fig : frog2dmupts20rd001e14p10t012innptmuvt - rho ] .these simulation results show that the physical properties of the system , in conventional ensembles , sensitively depend on and the system box size while they do not strongly depend on in -ensemble . at extremely low temperature ,the repulsive square - step of becomes far higher than the thermal energy , . due to this extremely high potential energy barrier ,the phase behavior of the system is almost identical to the behavior of hard particle systems with diameter if the system volume exceeds the close - packed volume ( area ) of the outer cores of the particles , denoted by ( in 2 dimensions ) .therefore , crystalization of the hard particles at high density , called alder transition , occurs in our system at and low temperature . herewe simulate this triangular crystal of the outer cores of our colloids with the diamter at low temperature , where we fixed the parameters that are determined via the iterative refinement of and .the stability of this thermodynamically stable point is discussed later in the present section .-ensemble at and at mcs .the initial system volume is set at .black circles represent the inner cores of the particles and white ones denote the outer cores . ] , ( a ) , and , ( b ) and ( c ) , in the simulations with the -ensemble at .black lines denote the results of that are started from the initial condition with and and grey lines from that with and . ] simulation with the -ensemble is started from an initial state with and .we prepared this initial configuration by removing the particles from the equilibrium configuration with particles arranged on a homogeneous triangular lattice , which results in an inhomogeneous particle configuration .such an inhomogeneous configuration is swiftly equilibrated in -ensemble as is shown in fig .[ fig : frog2dmupts20rd01e09p045t01inin1089iniv4778.38_009000000mcs ] . 
a snapshot of the system simulated with the -ensemble , which is presented in fig .[ fig : frog2dmupts20rd01e09p045t01inin1089iniv4778.38_009000000mcs ] , shows the defect - free triangular crystal .temporary small defects due to thermal fluctuation are sometimes found in the system , whereas long - lived defects are absent .time evolution of and is plotted in fig .[ fig : frog2dmupts20rd01e09p045t01inin1089iniv4778.38inin1221iniv5647.18-rhoandn ] . small fluctuation in indicates the high stability of this defect - free crystalline state .different initial particle configuration also results in a similar defect - free crystalline state with the same average .time evolutions of and for simulation on the system with different and initial are also plotted in fig .[ fig : frog2dmupts20rd01e09p045t01inin1089iniv4778.38inin1221iniv5647.18-rhoandn ] , which show the same defect - free structure .this demonstrates the high stability of the defect - free crystalline state obtained via three - reservoirs method .next , we compare physical characteristics of -ensemble with these of the conventional ensembles . for this purpose , we perform simulations also with the conventional ensembles , _ i.e. _ and -ensembles , with the same parameters . different from the -ensemble case , in the present -ensemble case , we have to manually tune the value of so that the perfect ordered equilibrium structure can be obtained .for this reason , we perform a series of simulations with -ensemble for all the values of within an interval , where and 1 mcs = simulation steps are fixed and the initial system volume is still .the only parameter that is changed from the above -ensemble simulation is .typical examples of the time evolution of in these simulation runs are plotted in fig .[ fig : frog2dnpts20p045t01n - rho ] . for reference ,the result with the parameter optimized in -ensemble , , is also plotted in this figure .the system with this optimized parameter shows the defect - free triangular crystal and its value of is close to the results of the -ensemble .however , with any value of in the above interval , long - lived defects , mostly point defects , appear in the system , as is shown in fig .[ fig : frog2dnpts20p045t01n1082n1089_020000000mcs ] .moreover , the average value of changes with the change in slightly , nonmonotonically , and sensitively .this behavior is different from the results of the -ensemble .this means that , although the external intensive variables are fixed , physical properties of the system in -ensemble sensitively depend on the external extensive variable , . in -ensemble at .] in -ensemble , the system size has to manually be tuned .actually , we try such a manual tuning by performing the -ensemble simulation with various system sizes in an interval .aspect ratio of the system box is kept at a typical value for molecular simulation , .simulation parameters that are changed from those of the corresponding -ensemble simulation are and .we assume that 1 mcs = simulation steps .all the systems we have simulated show the defect - free triangular crystals , since point defects are directly removed by the particle insertion and deletion processes .long - lived defects are not found in these simulation runs with -ensemble . however , the average value of sensitively and nonmonotonically depends on the value of .typical examples of time evolution of are given in fig .[ fig : frog2dmuvts20rd01e09t01iniv - rho ] . 
although the intensive variables are specified by the reservoirs , physical properties of the system in -ensemble significantly depend on the extensive variable . the data for the simulation with , which is the optimized value obtained in the three - reservoirs method , are also plotted by a black line .we can confirm that the optimized value of is preserved during the simulation run .this result justifies the simulation results obtained using three - reservoirs method .obtained with -ensemble at .the black line denotes the result at , which is the optimized value obtained with three - reservoirs method . ] in the -ensemble simulations , the particle density can be adjusted only discretely because of the discrete nature of , and therefore the behavior of in -ensemble is rather abrupt and sensitive to the other parameters .however , in this -ensemble case , the point defects can rather easily be removed even in high - density states _e.g. _ in a triangular crystal . on the other hand in -ensemble , is changed by the change in the continuous dynamic variables , .this results in a smaller change in the average compared with that in -ensemble .however , long - lived defects are frequently found in -ensemble .three - reservoirs method overcomes these disadvantages of and -ensembles and , at the same time , it inherits the advantages of these ensemble methods . in -ensemble ,both point defects and line defects can be removed easily , and the extensive variables can finely and spontaneously be tuned .the simulation results in section [ subsec : examinationof3-reservoirsmethod ] show that it is a tedious task to perform the manual tuning of and the system size in conventional ensembles , whereas these extensive variables are automatically and finely tuned in our -ensemble . in the conventional ensembles whereat least one extensive variable is fixed , physical properties of the system are significantly dependent on the value of such a fixed extensive variable .this illustrates that , for example , the equilibrium state of a system in -ensemble depends on even though the intensive variables and are fixed . as another example , the equilibrium state of a system in -ensemble changes with and even though the external intensive variables and are fixed .the equilibrium state in conventional ensembles is specified by the parameter sets of the external extensive variables as well as the external intensive variables . in the present article, we tentatively define the local equilibrium state as the equilibrium state at each parameter set of the external extensive variables with the fixed external intensive variables . in -ensemble ,however , the extensive variables are finely and spontaneously tuned and the most stable state over the local equilibrium states at the given is automatically obtained . in the present article , we tentatively call this equilibrium state in -ensemble the global equilibrium state .as these dependences of the physical properties on the external extensive variables in conventional ensembles are regarded as a finite size effect , the three - reservoirs method is a technique to remove the finite size effect of the conventional ensemble methods .any local equilibrium states are , in the thermodynamic limit , identical to the global equilibrium state . 
in molecular simulations , physical quantities of the simulation system are defined through the ensemble average over the probability distribution of the microstates in the phase space .the evaluation of the free energy of such a system , however , is equivalent to the evaluation of its partition function . as the partition function is not an averaged quantity over the ensemble , its evaluation demands a high dimensional integral over the whole phase space , resulting in unrealistically large amounts of computational cost . instead of evaluating the partition function directly , a derivative of the free energy with a control parameter , _e.g. _ pressure in the canonical ensemble , is calculated for the sake of evaluating the free energy in standard molecular simulation .this technique , which is usually called the thermodynamic integration , gives the free energy difference between two different thermodynamic states .however , with this technique one can not go across a first - order phase transition .for example , when the two thermodynamic states are located in a solid phase and in a fluid phase respectively , a first - order transition occurs in the middle of the integration path . due topossible hysteresis at this transition point , forward and backward integration paths between the two states in general gives different values for the free energy difference .this problem also affects the other free energy calculation techniques , _e.g. _ histogram reweighting technique and the method of expanded ensembles . in order to overcome this difficulty, we need to find appropriate reference states , whose free energy values are already known , _einstein solid for the free energy calculation of crystals .these reference states provide reversible integration paths .if we can not find such reference states , we have to find integration paths that bypasses the first - order transition line , _e.g. _ an integration path that is arranged with the help of artificially introduced external fields . after setting a reversible integration path using these techniques , we run simulations at a large number of state points along the integration path , and integrate the derivative of the free energy along this path .in addition to the discretization error in the integration along the path , occurrence of defects in the system also affects the accuracy of the free energy evaluation of the ordered structures .here we propose , based on euler equation in thermodynamics , a convenient method for the free energy calculation using the systems connected to the three reservoirs .the entropy of the system per particle , denoted by , satisfies euler equation , where denotes the total kinetic energy of the system , denotes the total potential energy , and is the ensemble average at . in the right - hand side of this equation , the 3 intensive variables , , , and of the system , relax to the equilibrium values that are equal to those given by the reservoirs . is determined via the equipartition theorem , _ e.g. _ for monatomic molecules .the ensemble averages and can directly and readily be evaluated through the simulation runs of the three - reservoirs method .therefore , according to this euler equation , is determined from our simulation at one state point , which means that this free energy evaluation method requires far smaller amounts of computation than the other standard methods do .the free energy of the system , _e.g. 
_ helmholtz free energy , gibbs free energy , and grand potential , is also obtained from this result in a similar manner . sinceour entropy and free energy calculation method is free from any thermodynamic integration paths , the first - order phase transition does not affect our evaluation method .in addition , with the three - reservoirs method we can easily eliminate the defects which is the main origin of the error in the free energy evaluation for the ordered phases .this raises the accuracy of the evaluation . in our molecular simulation, includes the thermal de broglie wave length , as is discussed in appendix [ sec : molecularmctechniqueinmuvtandnpt - ensembles ] , _ i.e. _ eq . . and in when the free energy difference between two different state points is calculated .therefore , we do not have to take care of these and .although the above method based on euler equation is , in principle , applicable to simulations with the other ensembles , _e.g. _ and -ensembles , one has to measure intensive variables and/or as ensemble averaged values .such a procedure requires a computationally expensive analysis .for example , the measurement of , _i.e. _ gibbs free energy per particle , demands a vast amount of simulation , especially in high particle density regions .however , in the -ensemble or -ensemble , we do not have to evaluate because it is already given by the reservoir .same is true for the evaluation of the pressure . as a result , with the use of the -ensemble, we can skip tedious evaluations of the intensive variables because all the essential intensive variables , , and are already specified by the reservoirs .in addition , metastable structures and defects , which frequently appear in conventional ensembles , affect the results of this entropy and free energy evaluation .when the system is in metastable states or outside the global equilibrium state , the intensive variables given from the reservoirs are inconsistent with the values of these variables in the simulation system .this means that both and need to be analyzed in the simulation rather than to use the specified value by the reservoirs .however , as the defects are quickly eliminated and the system reaches the global equilibrium in the simulations with the -ensemble , the evaluation of the free energy is free from the above problem associated with the metastable states and the local equilibrium states . when one tries to construct the equilibrium phase diagrams using conventional simulation methods , candidates for the equilibrium structure have to be chosen prior to the simulation and the free energy of each candidate should be measured and compared with each other with high accuracy , for example , with a typical error level of to . a variety of phases , _e.g. _ crystals , the string - like assemblies , and the other ordered and disordered structures should be considered the candidates for the equilibrium phase . in actual calculation, we empirically select some of these potential candidates and discard the others .however , we can not deny the possibility that we have discarded the true equilibrium structure in this selection process .in addition , there is another possibility that the equilibrium structure is a totally new structure which has not been discovered yet . 
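As a sketch of the measurement described earlier in this section: given sampled values of the kinetic energy, potential energy, volume and particle number from an equilibrated muPT run, together with the reservoir values mu, P and T, the per-particle entropy follows from the Euler relation E = T S - P V + mu N (with E = <K> + <U>), and the free energies follow in turn. The routine below only illustrates this bookkeeping; the names and the unit convention (k_B = 1) are ours, not the article's analysis code.

def free_energies_from_mupt(K, U, V, N, mu, P, T):
    # Entropy and free energies per particle from muPT averages via the Euler
    # relation E = T*S - P*V + mu*N, where E = <K> + <U>.
    # K, U, V, N are sequences sampled after equilibration; mu, P, T come from
    # the reservoirs.
    mean = lambda xs: sum(xs) / len(xs)
    e = (mean(K) + mean(U)) / mean(N)   # internal energy per particle
    v = mean(V) / mean(N)               # volume per particle
    s = (e + P * v - mu) / T            # entropy per particle
    f = e - T * s                       # Helmholtz free energy per particle
    g = f + P * v                       # Gibbs free energy per particle (consistency check: equals mu)
    return {"s": s, "f": f, "g": g, "grand_potential_per_volume": (f - mu) / v}

If only configurational data are stored, <K> can be replaced by its equipartition value, (d/2) <N> k_B T in d dimensions for monatomic particles, as noted above.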
with the use of -ensemble ,however , one can obtain the global equilibrium structure directly as a result of the fine tuning of the extensive variables .therefore , the above - mentioned problem encountered in the construction of the phase diagram using the standard ensembles can be avoided when we use -ensemble .we have studied thermodynamics , statistical mechanics , and molecular mc simulation algorithms of -ensemble .guggenheim formally introduced boltzmann factor ( statistical weight ) of this ensemble with an assumption that the averages of extensive variables can be determined . however, this assumption contradicts the indetermination of extensive variables in -ensemble .in addition , other characteristics of this ensemble were totally absent in guggenheim s discussion and other early works .these early researchers concentrated on the formalism , i.e. mathematical aspects of the partition function of this ensemble , only at the thermodynamically stable point .physical aspects of the ensemble have not seriously been discussed . in the present work ,we have shed light on these problems and have discovered thermodynamic and statistical mechanical characteristics of -ensemble , _e.g. _ the thermodynamically stable point , thermodynamic degrees of freedom , thermodynamics outside the thermodynamically stable point , maximization of entropy density , quick equilibration due to shortcuts in the phase space , _ etc ._ we have also shown that -ensemble is built as a combination of the 3 ensembles each of which is combined to 2 reservoirs , _i.e. _ , , and -ensembles .the 3 corresponding free energy densities , _i.e. _ gibbs free energy per particle , grand potential per volume , and , are simultaneously minimized in -ensemble , rather than the thermodynamic potential of -ensemble , denoted by .we have proposed a molecular mc simulation method based on -ensemble ( three - reservoirs method ) .we can show that this three - reservoirs method gives a physically acceptable ensemble , which allows us to trace the physical trajectories in the phase space . since three - reservoirs method is built as a combination of conventional and -ensembles , programming is lighter than other advanced techniques .in addition , the thermodynamically stable point is determined according to gibbs - duhem equation in a short computational time .these features mean that three - reservoirs method requires a small amount of preparation and that we can quickly start production simulation runs , although other advanced techniques demand a large quantity of complicated preparation , _e.g. _ advanced programming , the precise adjustments of the artificial weights necessary for multicanonical technique , and accurate free energy measurement essential for the expanded ensemble technique .these advantages over other simulation techniques facilitate and reduce the total work flow of our three - reservoirs method compared with the conventional methods such as , , or multicanonical method .furthermore , only with the three - reservoirs method , we can _ simultaneously _ and _ automatically _ tune the number of particles and the system size to obtain the equilibrium ordered state . this unique advantage of the three - reservoirs method could enhance the understanding of those systems that were obtained via the other standard simulation techniques .for example , for perfect crystals , both and must be integer multiples of the unit structure of the crystal structure , which is in general not known _ a priori_. 
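As a concrete instance of this commensurability constraint (our illustration, for the two-dimensional triangular lattice of the example above): with nearest-neighbour spacing a and a rectangular periodic box, a defect-free crystal requires

\[
L_x = n_x\,a,\qquad L_y = n_y\,\frac{\sqrt{3}}{2}\,a,\qquad N = n_x\,n_y ,
\]

with n_x and n_y integers and n_y even, so that both N and the box aspect ratio are tied to the lattice spacing, which is itself not known in advance.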
in principle , by measuring the free energy density of various structures at each set of and ,these extensive parameters can manually be tuned in simulation of -ensemble , -ensemble , multicanonical ensemble , _ etc ._ in practice , however , this manual tuning requires much computational effort . on the other hand , with the use of our three - reservoirs method, we can automatically achieve such optimization . for a solid at finite temperature ,the system with the -ensemble reaches a globally - anisotropic defect - free ordered state as the equilibrium state by crossing the metastable states through the shortcuts in the phase space due to the additional degrees of freedom . on the other hand , in conventional ensembles , physical properties of the systemsensitively and discretely depend on and/or even though external intensive variables are fixed .this results in a requirement of the manual tuning of these extensive variables to obtain the global equilibrium state .these results illustrate that our method can be applied to a variety of physical systems for the sake of studying ordered structures in equilibrium at finite , _e.g. _ lamellae composed of diblock copolymers , smectic phase of liquid crystals , and fluid bilayer membranes .three - reservoirs method can also be applied to numerical calculation of the equation of state that relates , , and .we have also shown that the entropy and the free energy can quickly be evaluated in -ensemble based on euler equation .this feature is essentially important in the construction of the phase diagram of condensed materials .the authors wish to thank professor komajiro niizeki and mr masatoshi toda for helpful suggestions and discussions .we also thank the anonymous referees for their valuable suggestions and comments .this work is partially supported by a grant - in - aid for science from the ministry of education , culture , sports , science , and technology , japan .in this appendix , we show molecular mc simulation technique for single component systems based on the and -ensembles .we use the following notations : denotes the spatial coordinates of the particle , mass of a particle , the thermal energy , the potential energy of the system , and planck s constant .the chemical potential of a system consisting of real particles in the canonical ensemble ( -ensemble ) is given by , where and here , denotes the chemical potential of an ideal gas in -ensemble and is called the thermal de broglie wave length .the quantity denotes the excess chemical potential that is originated from the interaction between the real particles .simulation results in -ensemble are independent of .this means that only appears in the chemical potential at the reference point in -ensemble simulation , which is discussed in section [ subsec : mcsimulationmethodinmuvtensemble ] . in the present section ,we discuss the mc method of a single component system in the grand canonical ensemble , which is also called -ensemble because , , and are kept fixed . in simulation with the -ensemble , particles are inserted into and deleted from the system in addition to metropolis trial displacement of particles . in one simulation for this ensemble, these steps are included in : a. with probability , trial particle insertion into the system b. with probability , trial particle deletion from the system c. with probability , trial displacement based on metropolis algorithm , _i.e. 
_ perturbation to one particle is chosen , where is a constant fixed in an interval .algorithms of trial particle insertion and deletion are discussed in the following .we assume that one particle is inserted into the system that is composed of particles .the position of this inserted particle is chosen uniformly over the system box .the coordinates of the particles , , are fixed during this particle insertion step .the state after the insertion , _i.e. _ the state of particles , is accepted as a new state with the probability , \right ) , \\ & u^{\text{ins}}_{\text{excess } } : = u \left ( \rvector_1 , \dots , \rvector_{n+1 } \right ) - u \left ( \rvector_1 , \dots , \rvector_n \right ) . \notag\end{aligned}\ ] ] a function returns the smaller of two arguments , and .if the trial insertion is rejected , the state before the insertion is kept for the next simulation step .we assume that one particle is randomly chosen and attempts to be removed from the present system composed of particles .this chosen particle , denoted by index , is removed from the system with the probability , } \right ) , \\u^{\text{del}}_{\text{excess } } : = u \left ( \rvector_1 , \dots , \rvector_n \right ) - u \left ( \rvector_1 , \dots , \rvector_{j-1 } , \rvector_{j+1 } , \dots , \rvector_n \right ) .\notag\end{gathered}\ ] ] if the trial deletion is rejected , the state before the deletion is kept for the next simulation step .when is substituted for in eqs . and , these acceptance criteria are independent of .this illustrates that simulation results in -ensemble are free from the actual value of .therefore , appears only in the chemical potential at the reference point in the -ensemble simulation .mc simulation method in isobaric - isothermal ensemble , also called -ensemble , is briefly summarized in this section .in addition to and , the system pressure , denoted by , is given from the outside , in this ensemble .in simulation of -ensemble , the system size is changed , in addition to the trial move of particles . in one simulation step , we include the following steps : a. with probability , trial system size change b. with probability , trial displacement based on metropolis algorithm , _ i.e. _ perturbation of one particle is chosen , where is a constant fixed in an interval .algorithms of trial system size change are discussed in the following .to change the system box size , we use the following 4-step algorithm .we assume that the system box size , denoted by , is changed to a new system size , .unlike the conventional mc simulations in -ensemble based on mcdonald s method , each element of the system size is independently changed in our algorithm . 1 .the new system size is chosen , where is a small constant length , , and are random numbers uniformly distributed over an interval , and denotes the volume of the new system box .coordinates of all the particles before the system size change , , are homogeneously scaled , this is new particle coordinates after the change .3 . the potential energies of the systems before and after the system size change , and respectively , are calculated .the new system size and particle coordinates are accepted with the probability , \right ) . 
\end{gathered}\ ] ] if this trial system size change is rejected , the state before the trial is kept for the next simulation step .the ensemble average at each in -ensemble is equivalent to the average in -ensemble at the same .we illustrate this in the present section .we assume that a finitely long simulation run is performed in -ensemble at a thermodynamically stable point .after the equilibration of the simulation system , a finitely large number of microstates of the system are visited .unnormalized boltzmann factor of the system is denoted by , where the suffix represents these microstates . is kronecker delta . .after the equilibration of the system , the ensemble average of an intensive physical quantity , , obtained in the present simulation run is : } { \sum_i \delta_{n_i , n } \frac{1 } { \varlambda^{3n } \ , n ! } \exp \left [ - \beta \left ( p v_i - \mu n + u_i \right ) \right ] } \notag \\ & \quad = \sum_{n=0}^{\infty } f(n ) \frac { \sum_i \delta_{n_i , n } a_i \exp \left [ - \beta \left ( p v_i + u_i \right ) \right ] } { \sum_i \delta_{n_i , n } \exp \left [ - \beta \left ( p v_i + u_i \right ) \right ] } \notag \\ & \quad \cong \sum_{n=0}^{\infty } f(n ) \ , \langle a \rangle _ { t , p , n},\end{aligned}\ ] ] where the summation runs over the microstates visited in the simulation , and is defined as , and denotes the ensemble average of in -ensemble . denotes the occurrence probability of in the simulation run .an expression similar to eq . also holds for the ensemble average of in -ensemble .this result illustrates that the short - time average of in -ensemble is approximated by the ensemble average in -ensemble or -ensemble .when the finite size effect of the system , which has been discussed in sections [ subsec : examinationof3-reservoirsmethod ] and [ subsec : globalequilibrium ] , is small , is dependent on and independent of .therefore , eq . 
yields .a similar relation , , also holds , where denotes the ensemble average of in -ensemble . therefore , .
in conventional molecular simulation , metastable structures often survive over considerable computational time , resulting in difficulties in simulating equilibrium states . in order to overcome this difficulty , here we propose a newly devised method : molecular monte carlo simulation of systems connected to three reservoirs of chemical potential , pressure , and temperature . the gibbs - duhem equation thermodynamically limits the number of reservoirs to 2 for single - component systems . however , in conventional simulations utilizing 2 or fewer reservoirs , the system tends to be trapped in metastable states . even if the system is allowed to escape from such metastable states in conventional simulations , the fixed system size and/or the fixed number of particles result in the creation of defects in ordered structures . this breaks the global anisotropy of ordered structures and forces the periodicity of the structure to be commensurate with the system size . here we connect the three reservoirs to the system to overcome these difficulties . a method of adjusting the three reservoirs and obtaining thermodynamically stable states is also designed , based on the gibbs - duhem equation . unlike conventional simulation techniques utilizing no more than 2 reservoirs , our method allows the system itself to simultaneously tune the system size and the number of particles to the periodicity and anisotropy of ordered structures . our method requires less effort for preliminary simulations prior to production runs , compared with other advanced simulation techniques such as the multicanonical method . a free energy measurement method suitable for the system with the three reservoirs is also discussed , based on the euler equation of thermodynamics . this measurement method needs less computational effort than other free energy measurement methods do .
model selection is central to statistics , and the most popular statistical techniques for model selection are the akaike information criterion ( aic ) , the bayesian information criterion ( bic ) and minimum description length ( mdl ) . the basic idea behind the mdl principle is to equate compression with finding regularities in the data . since learning often involves finding regularities in the data , learning can be equated with compression as well . hence , in mdl , we try to find the model that yields the maximum compression for the given observations . the first mdl code introduced was the two - part code . it was shown that the two - part code generalizes the maximum entropy principle . however , in the past two decades , the normalized maximum likelihood ( nml ) code , which is a version of mdl , has gained popularity among statisticians . this is particularly because of its minimax properties , as stated in , which the earlier versions of mdl did not possess . efficient methods for computing nml codelengths for mixture models have been proposed in . in both these papers , the aim of model selection is to decide the optimum number of clusters in a clustering problem . in a maximum entropy approach to density estimation one has to decide , _ a priori _ , on the amount of ` information ' ( for example , the number of moments ) that should be used from the data to fix the model . the minimax entropy principle states that , for given sets of features for the data , one should choose the set that minimizes the maximum entropy . however , it can easily be shown that if there are two feature subsets and with , the minimax entropy principle will always prefer the larger set over the smaller one . hence , though the minimax entropy principle is a good technique for choosing among sets of features with the same cardinality , it can not decide when the sets of features have varying cardinality . in this paper , we study the nml codelength of maximum entropy models . towards this end , we formulate the problem of selecting a maximum entropy model , given various feature subsets and their moments , as a model selection problem . we derive the nml codelength for maximum entropy models and show that our approach is a generalization of the minimax entropy principle . we also compute the nml codelength for discriminative maximum entropy models . we apply our approach to the gene selection problem for the leukemia data set and compare it with the minimax entropy method . let be a family of probability distributions on the sample space . denotes the sample space of all data samples of size . denotes an element in , where is a vector in . a two - part code encodes the data sample by first encoding a distribution and then the data . the best hypothesis to explain the data is then the one that minimizes the sum . according to the mdl principle , \[ l(d ) = \min_{h \in \mathcal{h } } \bigl [ l(h ) + l(d \mid h ) \bigr ] , \label{crude } \] where l(d ) is the codelength of the data , l(h ) is the codelength of the distribution and l(d \mid h ) is the codelength of the data given the distribution . l(d \mid h ) is a measure of the error of the data with respect to the distribution . hence , if the distribution approximates the data well enough , this term should be small and vice versa . by kraft s inequality , there exists a codelength function on given by . we can use it directly , since it is the unique minimizer of the expected codelength when it is indeed the true distribution . the normalized maximum likelihood code is one of the several ways for constructing .
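Before formalizing NML, here is a minimal numerical illustration of the two-part criterion above, for the simplest case of a Bernoulli model whose parameter is quantized to a uniform grid; the construction and names are ours, not the papers'.

import math

def two_part_codelength(x, grid_size=10):
    # Crude two-part codelength (in nats) of a binary sequence x under a
    # Bernoulli model with the parameter quantized to grid_size values:
    # L(H) = log(grid_size) names the quantized parameter, and
    # L(D|H) = -log p(x | theta) for the best parameter on the grid.
    n, k = len(x), sum(x)
    best = float("inf")
    for i in range(grid_size):
        theta = (i + 0.5) / grid_size          # avoid theta = 0 or 1
        loglik = k * math.log(theta) + (n - k) * math.log(1.0 - theta)
        best = min(best, -loglik)
    return math.log(grid_size) + best

# a strongly biased sequence compresses better than a balanced one
print(two_part_codelength([1] * 18 + [0] * 2))
print(two_part_codelength([1, 0] * 10))

The NML code discussed next avoids this explicit discretization of the parameter.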
let be a model on and let be a probability distribution on .the regret of with respect to for data sample is defined as \enspace .\ ] ] the regret is nothing but the extra number of bits needed in encoding using instead of the optimal distribution for in .the worst case regret denoted by is defined as the maximum regret over all sequences in \right ] \enspace .\ ] ] our aim is to find the distribution that minimizes the maximum regret . to this end, we define the complexity of a model as where denote the maximum likelihood estimate ( mle ) of the parameter for the model for the data sample . in the above equation and subsequent sections , the integralis defined subject to existence .also , we assume that mle is well defined for the model .furthermore , the error of a model is defined as \enspace .\label{eq : error_def}\ ] ] the following result is due to . if the complexity of a model is finite , then the minimax regret is uniquely achieved by the normalized maximum likelihood distribution given by the corresponding codelength also known as the stochastic complexity of the data sample is given by be a random variable taking values in .let be a set of functions of . the resultant linear family is given by the set of all probability distributions that satisfy the constraints where is the empirical estimate of for the data .the resulting maximum entropy model contains such that where . here, is the normalizing constant . given a set of maximum entropy models characterized by their function set , we use nml code to choose the model that best describes the data .the nml codelength of data for a given model is composed of two parts : ( i ) the error codelength and ( ii ) the complexity of the model .error codelength of data sequence for the maximum entropy model is n times the maximum entropy of the corresponding linear family where is the maximum entropy distribution of given by . first , we compute the error codelength of the data for the model . by definition , using definition of from equation , we get \notag \\ = & \inf_{\lambda } \left [ n\lambda_0 + \sum_{k=1}^m \lambda_k \sum_{i=1}^n \phi_k(\mathbf{x}^{(i ) } ) \right]\notag \\ = & \inf_{\lambda}\left [ n\lambda_0 + \sum_{k=1}^m \lambda_k \left(n\bar{\phi}_k(\mathbf{x}^n)\right ) \right ] \notag \\= & n\left[\inf_{\lambda } \left ( \lambda_0 + \sum_{k=1}^m \lambda_k \bar{\phi}_k(\mathbf{x}^n)\right ) \right ] \label{eq : error_lambda } \enspace , \end{aligned}\ ] ] where is the sample estimate of for the data and . using lagrange multipliers , it is easy to see that the maximum entropy distribution for the linear family has the form for some .since maximum entropy distribution always exists for a linear family , the parameters can be obtained by maximizing the log likelihood function . \notag \\ = & { \operatornamewithlimits{argmin}}_{\lambda } \left [ \lambda_0 + \sum_{k=1}^m \lambda_k \bar{\phi}_k(\mathbf{x}^n ) \right ] \label{eq : lambda_star } \enspace .\end{aligned}\ ] ] where we remove the negative sign to change to .the notation is used to denote the empirical mean of for the data .the corresponding entropy is given by where the last equality follows from the definition of in equation . by combining equations and and using the fact that maximum entropy distribution always exists for a linear family, we get \label{eq : entropy_lambda } \enspace .\ ] ] by combining equations and , we get for fixed , the error depends on the function set through the above equation .as the no . 
of functions in the function setis increased , the size of the linear family decreases .hence , the entropy of the maximum entropy distribution also decreases , since we are restricted to search for the maximum entropy distribution in a smaller space .hence , error of the model decreases .complexity of the maximum entropy model is given by where is the maximum entropy distribution of .we have where we have used the definition of error in and its relationship with entropy in to get the result . by similar arguments as above , it is easy to see that the complexity of the model increases by increase in the number of constraints . by using andwe get the desired result .hence , the nml codelength ( also known as stochastic complexity ) of for the model is given by in this section , we show that the presented nml formulation for maximum entropy is a generalization of the minimax entropy principle , where this principle has been used for feature selection in texture modeling .let be sets of functions from to the set of real numbers .corresponding to each set , there exists a maximum entropy model and vice - versa . the mdl principle states that given a set of models for the data, one should choose the model that minimizes the codelength of the data .here , the codelength that we are interested in is the nml codelength ( also known as stochastic complexity ) .since , there exists a one - one relationship between the maximum entropy models and the function sets , the model selection problem can be reframed as \enspace .\ ] ] if we assume that all our models have the same complexity , then the second term in r.h.s can be ignored . since n , the size of data is a constant , the model selection problem becomes this is the classical minimax entropy principle given in .hence , the minimax entropy principle is a special case of the mdl principle where the complexity of all the models are assumed to be the same and the models assumed are the maximum entropy models .discriminative methods for classification , model the conditional probability distribution , where is the class label for data .maximum entropy based discriminative classification tries to find the probability distribution with the maximum conditional entropy subject to some constraints , where is the class variable .initially , information is extracted from the data in the form of empirical means of functions .these empirical values are then equated to their expected values , thereby forming a set of constraints .the classification model is constructed by finding the maximum entropy distribution subject to these sets of constraints .we use mdl to decide the amount of information to extract from the data in the form of functions of features .a straightforward application of this technique is feature selection .the maximum entropy discriminative model , where is the set of all probability distributions of the form let us denote the denominator in above equation as .since , we are not interested in modelling , we use the empirical distribution approximate .the empirical distribution is given by and otherwise .hence , the constraints become as discussed in , the sender - receiver model assumed here is as follows .both sender and receiver have the data .the sender is interested in sending the class labels .if he sends the class labels without compression , he needs to send bits .if , however , he uses the data to compute a probability distribution over the class labels , and then compress using that distribution , he may get a shorter codelength 
for .his goal is to minimize this codelength , such that the receiver can recover the class labels from this code .the error codelength of for the conditional model is equal to n times the maximum conditional entropy of the model .error of the conditional model is given by \label{eq : error_lambda_disc}\!\!\ !\enspace .\end{aligned}\ ] ] where we have used similar reasoning as in to get the last statement .also , the maximum conditional entropy distribution can be obtained by maximizing the corresponding log - likelihood function .hence , correspondingly , the corresponding conditional entropy is given by \nonumber \\= & \frac{1}{n}\!\ ! \left[\sum_{k=1}^m \lambda_k^*\!\ ! \left(\sum_{i=1}^n\phi_k(\mathbf{x}^{(i ) } , c^{(i)})\right ) \right .\!\!\!+\!\!\ ! \left .\sum_{i=1}^n \log{z_\lambda(\mathbf{x}^{(i)})}\right ] \label{eq : entropy_lambda_disc } \!\!\!\enspace .\end{aligned}\ ] ] here the first equality follows from the definition of conditional entropy as used in .we use the definition of to convert the integral to a summation . by using and the fact that must sum up to 1 ,we obtain the fourth equality . using and, we obtain \label{eq : entropy_lambda_disc1 } \!\!\!\!\enspace .\ ] ] replacing the above equation in , we get the desired result . the complexity of the conditional model is given by use gene selection as an example to illustrate discriminative model selection for maximum entropy models .the dataset used is leukemia dataset available publicly at http://www.genome.wi.mit.edu .the dataset was also used in to illustrate nml model selection for discrete regression .the data set consists of two classes : acute myeloid leukemia ( aml ) and acute lymphoblastic leukemia ( all ) .there are 38 training samples and 34 independent test samples in the data .the data consists of 7129 genes .the genes are preprocessed as recommended in . assuming the sender - receiver model discussed above, the sender needs 38 bits or 26.34 nats in order to send the class labels of training data to receiver .if the nml code is used , the sender needs 24.99 nats .since the sender and receiver both contain the microarray data , the sender can use the microarray data to compress the class labels much more than can be obtained wihout the microarray data . specifically , we are interested in finding the genes which gives the best compression , or the minimum nml codelength .[ ht ] genes , title="fig:",scaledwidth=45.0%,scaledwidth=20.0% ] [ ht ] genes , title="fig:",scaledwidth=45.0%,scaledwidth=20.0% ] [ ht ] [ ht ] for the purpose of our algorithm , we quantize the genes to various levels . we claim that quantizing a gene reduces the risk of overfitting of the model to data .to support our claim , we have also plotted the change in accuracy with quantization level in figure [ accuracy ] for the top 25 genes . as can be seen from the graph , increasing quantizaton level from to results in a decrease in accuracy .we have also plotted a graph for change in average nml codelength with quantization level for the top 25 genes in figure [ nml_codelength ] .an interesting observation is that the minima of nml codelength coincides exactly with the maxima of accuracy .a similar trend was obsrved when the number of genes were changed . hence , we quantize each gene to 5 levels .other than the advantages of quantization mentioned above , quantization is also necessary for the current problem as the problem of calculating complexity can become intractable even for moderate n. 
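as a toy version of the construction just described (using a quantized gene to compress the binary class labels), the sketch below fits a conditional maximum entropy model with moment features; with features of the form c times x^k the model reduces to a logistic model, which is what the code implements. it then reports the resulting error codelength, the negative conditional log-likelihood in nats, against the n log 2 baseline. the synthetic data, the small ridge penalty used for numerical stability, and the optimizer are assumptions for illustration, and the parametric-complexity term is omitted here.

```python
import numpy as np

def error_codelength(x, c, m, steps=3000, lr=0.1):
    """Conditional maximum entropy (binary logistic) model with moment
    features x**k, k = 1..m.  Returns -sum_i log p(c_i | x_i) in nats at
    the fitted parameters (gradient ascent with a tiny ridge penalty)."""
    n = len(x)
    Phi = np.column_stack([np.ones(n)] + [x ** k for k in range(1, m + 1)])
    lam = np.zeros(Phi.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Phi @ lam))            # p(c = 1 | x)
        lam += lr * (Phi.T @ (c - p) / n - 1e-3 * lam)  # ascent + ridge
    p = 1.0 / (1.0 + np.exp(-Phi @ lam))
    return -np.sum(c * np.log(p) + (1 - c) * np.log(1 - p))

rng = np.random.default_rng(0)
n = 38                                    # same sample size as the training set above
x = rng.integers(0, 5, size=n) / 4.0      # a gene quantized to 5 levels (synthetic)
c = (rng.random(n) < 0.2 + 0.6 * x).astype(float)   # synthetic class labels
print(n * np.log(2))                      # codelength without using the gene (nats)
for m in (1, 2, 3):
    print(m, error_codelength(x, c, m))   # error codelength using m moments
```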
the constraints that we use are moment constraints , that is .we vary the value of m from 1 to 7 to get a sequence of maximum entropy models .the nml codelength of the class labels is calculated for each such model .the model that results in the minimum nml codelength is selected for each gene .it was observed that for most genes , the nml codelength decreased sharply when m was increased from 1 to 2 .the change in values of nml codelength was less noticeable for .the variation of nml codelength of class labels for a typical gene are shown in figure [ nml_vs_m ] . in order to make the changes in nml codelength more visible, we skip the nml codelength for m=1 .our approach for ranking genes is as follows . for each gene , we select the value of m that gives the minimum nml codelength .we then sort the genes in increasing order of their minimum nml codelengths .the minimum codelength achieved is 8.35 nats , which is much smaller than 24.99 nats achieved without using the microarray data .since compression is equated with finding regularity according to minimum description length principle , hence , it can be stated that the topmost gene is able to discover a lot of regularity in the data .finally , we use mdl to build a classifier .the amount of information to use for each gene is decided , by using mdl to fix the number of moments .mdl is used to rank the features .then , we use class conditional independence among features to build a maximum entropy classifier .the number of genes used for the classifier are varied from 1 to 130 .the resultant graph is compared with other maximum entropy classifiers in figure [ classifier ] , where the amount of information used per gene is the same for all genes .finding appropriate feature functions and the number of moments is important to any maximum entropy method . in this paper, we pose this problem as a model selection problem and develop an mdl based method to solve this problem .we showed that this approach generalizes minimax entropy principle of .we derived nml codelength in this respect , and extended it to discriminative maximum entropy model selection .we tested our proposed method for gene selection problem to decide on the quantization level and number of moments for each gene .finally , we selected the genes based on the codelength of the class labels and compared the simulation results with minimax entropy method .the bottleneck for using mdl for model selection in discriminative classification is the computation of complexity .more efficient approximations to calculate the complexity need to be developed to employ this approach for problems involving larger data sets .p. kontkanen , p. myllymki , w. buntine , j. rissanen , and h. tirri , `` an mdl framework for data clustering , '' in _ advances in minimum description length _ , p. g. grnwald , i. j. myung , and m. a. pitt , eds .mit press , cambridge , ma , 2005 . s. hirai and k. yamanishi , `` efficient computation of normalized maximum likelihood coding for gaussian mixtures with its applications to optimal clustering , '' in _ information theory proceedings ( isit ) , 2011 ieee international symposium on_. ieee , 2011 , pp . 10311035 .
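as a concrete illustration of the complexity computation that the conclusion above identifies as the bottleneck, the sketch below evaluates the parametric complexity of a full multinomial model over k quantization levels exactly, by summing the maximized likelihood over all count vectors, and shows how it grows with k. this saturated multinomial model is used only as a stand-in: for a maximum entropy model with m moment constraints the constrained mle would replace the empirical frequencies, which is precisely what makes the exact computation expensive.

```python
import math
from itertools import combinations

def log_multinomial_complexity(n, K):
    """log of the parametric complexity  sum_{x^n} p(x^n | theta_hat(x^n))
    for a K-ary multinomial, computed exactly by summing over all count
    vectors (n_1, ..., n_K) with n_1 + ... + n_K = n (stars and bars)."""
    total = 0.0
    for bars in combinations(range(n + K - 1), K - 1):
        prev, counts = -1, []
        for b in list(bars) + [n + K - 1]:
            counts.append(b - prev - 1)
            prev = b
        log_term = math.lgamma(n + 1)
        for nj in counts:
            log_term -= math.lgamma(nj + 1)        # multinomial coefficient
            if nj > 0:
                log_term += nj * math.log(nj / n)  # maximized likelihood
        total += math.exp(log_term)
    return math.log(total)

for K in (2, 3, 5):
    print(K, log_multinomial_complexity(n=20, K=K))  # complexity grows with K
```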
in this paper, we treat the problem of selecting a maximum entropy model, given various feature subsets and their moments, as a model selection problem, and present a minimum description length (mdl) formulation to solve it. to this end, we derive the normalized maximum likelihood (nml) codelength for these models. furthermore, we show that the minimax entropy method is a special case of maximum entropy model selection in which the complexities of all the models are assumed to be equal. we extend our approach to discriminative maximum entropy models. finally, we apply our approach to the gene selection problem, using it to select the number of moments for each gene and thereby fix the model.
let be a bounded smooth domain in , and fix . we suppose that the boundary consists of two nonempty open subsets , that is , .we are concerned with the non - stationary incompressible navier - stokes equations in : u + ( u)u -u + p = f & in , [ a.3 ] + u = 0 & in , with the initial condition here , , , , and denote a viscosity constant , velocity field , pressure , and external force respectively ; means the time derivative . as for the boundary condition, we impose the adhesive b.c . on : on the other hand , we consider one of the following nonlinear b.c . on : which is called the _ slip boundary condition of friction type _( sbcf ) , and which is called the _ leak boundary condition of friction type _ ( lbcf ) . here, is an outer unit normal vector defined on , and we write and .the stress tensor is given by , being kronecker delta .we define the stress vector as , and write and .one can easily see that may depend on , whereas does not .the function , given on and assumed to be strictly positive , is called a _ modulus of friction_. its physical meaning is the threshold of the tangential ( resp .normal ) stress .in fact , if ( resp . ) then ( [ a.1 ] ) ( resp .( [ a.2 ] ) ) implies ( resp . ) , namely , no slip ( resp .leak ) occurs ; otherwise non - trivial slip ( resp .leak ) can take place .we notice that if we make formally , ( [ a.1 ] ) and ( [ a.2 ] ) reduce to the usual slip and leak b.c . respectively . in summary ,sbcf and lbcf are non - linearized slip and leak b.c. obtained from introduction of some friction law on the stress .it should be also noted that the second and third conditions of ( [ a.1 ] ) ( resp .( [ a.2 ] ) ) are equivalently rewritten , with the notation of subdifferential , as though we will not pursue this matter further , one can refer to for the navier - stokes equations with general subdifferential b.c .see also , which considers the motion of a bingham fluid under b.c . with nonlocal friction against slip .sbcf and lbcf are first introduced in for the stationary stokes and navier - stokes equations , where existence and uniqueness of weak solutions are established .generalized sbcf is considered in .the - regularity for the stokes equations is proved in . in terms of numerical analysis , deal with finite element methods for sbcf or lbcf .applications of sbcf and lbcf to realistic problems , together with numerical simulations , are found in . for non - stationary cases , study the time - dependent stokes equations without external forces under sbcf and lbcf , using a nonlinear semigroup theory .the solvability of nonlinear problems are discussed in for sbcf , and in for a variant of lbcf .they use the stokes operator associated with the linear slip or leak b.c . 
, and do not take into account a compatibility condition at .the purpose of this paper is to prove existence and uniqueness of a strong solution for ( [ a.3])([a.5 ] ) with ( [ a.1 ] ) or ( [ a.2 ] ) .we employ the class of solutions of ladyzhenskaya type ( see ) , searching such that there are several reasons we focus on this strong solution .first , from a viewpoint of numerical analysis , we would like to construct solutions in a class where uniqueness and regularity are assured also for 3d case .second , we desire an -estimate with respect to time for , which may not be obtained for weak solutions of leray - hopf type ( cf .* proposition iii.1.1 ) ) .third , in lbcf , it is not straightforward to deduce a weak solution because of ( [ a.4 ] ) below .similar difficulty already comes up in the linear leak b.c .( see ) the rest of this paper is organized as follows .basic symbols , notation , and function spaces are given in section 2 . in section 3, we investigate the problem with sbcf .the weak formulation is given by a variational inequality , to which we prove uniqueness of solutions . to show existence, we consider a regularized problem , approximate it by galerkin s method , and derive a priori estimates which allow us to pass on the limit to deduce the desired strong solution . using the compatibility condition that must satisfy sbcf , we can adapt to the regularized problem , which makes an essential point in the estimate .section 4 is devoted to a study of the problem with lbcf .there are two major differences from sbcf .first , as was pointed out in the stationary case ( * ? ? ?* remark 3.2 ) , we can not obtain the uniqueness of an additive constant for if no leak occurs , namely , on .second , under lbcf , the quantity need not vanish because can be non - zero .this fact affects our a priori estimates badly , and we can extract a solution only when the initial leak is small enough . incidentally , if we use the so - called bernoulli pressure instead of standard , the mathematical difficulty arising from ( [ a.4 ] ) are resolved ; nevertheless the leak b.c . involving the bernoulli pressure is known to cause an unphysical effect in numerical simulations ( see ) .thereby we employ the usual formulation .finally , in section 5 we conclude this paper with some remarks on higher regularity .throughout the present paper , the domain is supposed to be as smooth as required . for the precise regularity of which is sufficient to deduce our main theorems , see remarks [ rem c.10 ] and [ rem d.10 ] .we shall denote by various generic positive constants depending only on , unless otherwise stated .when we need to specify dependence on a particular parameter , we write as , and so on .we use the lebesgue space , and the sobolev space for a nonnegative integer , where means . is also defined for a non - integer ( e.g. ( * ? ? ?* definition 1.2 ) ) .we put . for spaces of vector - valued functions , we write , and so on . the lebesgue and sobolev spaces on the boundary , , or , are also used . means , and we put , where denotes the surface measure . for a positive function on , the weighted lebesgue spaces and are defined by the norms respectively .the dual space of is ( see ( * ? ? ?* lemma 2.1 ) ) . the usual trace operator is defined from onto .the restrictions , of , are also considered , and we simply write to indicate them when there is no fear of confusion .in particular , and means and respectively , for . 
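since the displayed formulas in the preceding discussion were lost in extraction, it may help to record one standard way the two friction-type boundary conditions described in the introduction are written (following fujita's formulation); the notation below is an assumption and may differ in detail from the source, but it is consistent with the verbal description: the first condition is the constraint on the normal (resp. tangential) velocity, and the remaining two are the threshold and complementarity conditions, equivalent to the subdifferential inclusion.

```latex
% slip boundary condition of friction type (SBCF) on \Gamma:
u_n = 0, \qquad |\sigma_\tau(u,p)| \le g, \qquad
\sigma_\tau(u,p)\cdot u_\tau + g\,|u_\tau| = 0
\quad\Longleftrightarrow\quad
-\sigma_\tau(u,p) \in g\,\partial|u_\tau| \ \ \text{on } \Gamma,

% leak boundary condition of friction type (LBCF) on \Gamma:
u_\tau = 0, \qquad |\sigma_n(u,p)| \le g, \qquad
\sigma_n(u,p)\,u_n + g\,|u_n| = 0
\quad\Longleftrightarrow\quad
-\sigma_n(u,p) \in g\,\partial|u_n| \ \ \text{on } \Gamma .
```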
note that and because is smooth on .the inner product of is simplified as , while other inner products and norms are written with clear subscripts , e.g. , or . for a banach space , we denote its dual space by and the dual product between and by . moreover , we employ the standard notation of bochner spaces such as , . for function spaces corresponding to a velocity and pressure , we introduce closed subspaces of or as follows : to indicate a divergence - free space , we set .we use the notation , , , and .let us define bilinear forms , , and a trilinear form by the bilinear forms are continuous , and from korn s inequality ( ( * ? ? ? *lemma 6.2 ) ) there exists a constant such that concerning the trilinear term , we obtain the following two lemmas .\(i ) when , for all it holds that \(ii ) when or , for all it holds that in particular , we see from ( [ b.6 ] ) that by the sobolev embedding ( resp . ) which is valid for ( resp . ) , combined with an interpolation inequality between and , we have therefore , since by hlder s inequality , we conclude ( [ b.5 ] ) ( resp .( [ b.6 ] ) ) .[ lem b.2 ] ( i ) for all and , .\(ii ) for all and , , and where is a constant depending only on . by integration by parts, we have from which the conclusion of ( i ) and the first assertion of ( ii ) follow . combining the hlder inequality , the sobolev embedding , and the continuity of the trace operator , we derive ( [ b.10 ] ) . whether is small or not , especially when compared to in ( [ b.3 ] ) , is a very crucial point in our a priori estimates for lbcf ( see proposition [ prop d.1 ] ) .this is why we distinguish from other constants and do not combine with them . as ( i )above shows , this problem does not happen when we consider sbcf .furthermore , we introduce nonlinear functionals and by where is a modulus of friction mentioned in section [ sec a ] .they are obviously nonnegative and positively homogeneous .in addition , they are lipschitz continuous when for a.e . .the followings , which are readily obtainable consequences of standard trace and ( solenoidal ) extension theorems ( ( * ? ? ?* theorems i.1.5 - 6 , lemma i.2.2 ) , see also ( * ? ? ?* section 5.3 ) ) , are frequently used in subsequent arguments .[ lem b.1 ] ( i ) for , it holds that .\(ii ) for satisfying , there exists such that on and .[ lem b.10 ] ( i ) for , it holds that .\(ii ) for ( resp . ) , there exists ( resp . ) such that on and .the definition of given in section [ sec a ] becomes ambiguous when has only lower regularity , say .thus we propose a redefinition of it , based on the following green formula : [ def b.1 ] let , , , .if ( [ a.3 ] ) holds in the distribution sense for a.e . , that is , then we define by where is given by .the above is well - defined by virtue of the trace and extension theorem .it coincides with the previous definition when is sufficiently smooth .in addition , by lemmas [ lem b.1 ] and [ lem b.10 ] , and are characterized by and respectively . by lemma [ lem b.1](ii ) , actually does not depend on .throughout this section , we assume , , and . further regularity assumptions on these data will be given before theorem [ thm c.1 ] .in addition , the barrier term is simply written as .a primal weak formulation of ( [ a.3])([a.5 ] ) with ( [ a.1 ] ) is as follows : pde - sbcf for a.e . , find such that , , is well - defined in the sense of definition [ def b.1 ] , a.e . on , and a.e .on .more precisely , " implies that actually belongs to with .in particular , . 
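for reference, the bilinear and trilinear forms introduced above are typically the following standard choices; this is an assumption consistent with the surrounding text (a is built from the symmetric deformation tensor so that korn's inequality gives coercivity, b couples velocity and pressure, and the trilinear form is the convection term), since the original displayed formulas were lost in extraction.

```latex
a(u,v) = 2\nu \int_\Omega D(u) : D(v)\,dx,
\qquad D(u) = \tfrac12\bigl(\nabla u + (\nabla u)^{T}\bigr),

b(v,q) = -\int_\Omega q\,\operatorname{div} v\,dx,
\qquad
a_1(u,v,w) = \int_\Omega \bigl((u\cdot\nabla)v\bigr)\cdot w\,dx,

% coercivity via Korn's inequality:
a(v,v) \;\ge\; \alpha_0\,\|v\|_{H^1(\Omega)}^{2}
\qquad \text{for all admissible } v .
```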
throughout this section, we refer to problem pde - sbcf just as problem pde .similar abbreviation will be made for other problems .one can easily find that a classical solution of ( [ a.3])([a.5 ] ) with ( [ a.1 ] ) solves problem pde , and that a sufficiently smooth solution of problem pde is a classical solution .as the next theorem shows , problem pde is equivalent to the following variational inequality problem .vi-sbcf for a.e . , find such that , , and [ thm c.2 ] problems and are equivalent .the precise meaning of equivalent " is that if solves problem pde , solves problem vi ; if solves problem vi , there exists unique such that solves problem pde .hereafter we will frequently use the terminology equivalent " in a similar sense .let be a solution of problem pde .then it follows that using this equation together with and , we have for all .hence is a solution of problem vi .next , let be a solution of problem vi .taking as a test function in ( [ c.4 ] ) , with arbitrary , we find that by a standard theory ( see ( * ? ? ?* propositions i.1.1 and i.1.2 ) ) , there exists unique such that ( [ b.50 ] ) holds .therefore , is well - defined , and thus combining this equation with ( [ c.4 ] ) , we obtain and as a result of triangle inequality , for . in view of lemma [ lem b.1](ii ), this implies that for by a density argument , we can extend to an element of such that since , we conclude . then follows from ( [ c.5 ] ) with .hence is a solution of problem pde .we are now in a position to state our main theorem .we assume : 1. .[ s1 ] 2 . with .[ s2 ] 3 . , and sbcf is satisfied at , namely , [ s3 ] note that can be defined in a usual sense .[ thm c.1 ] under , when there exists a unique solution of problem such that when , the same conclusion holds on some smaller time interval .we call the solution in the above theorem a _ strong solution _ of problem vi .first we prove the uniqueness of a strong solution. the existence will be proved in section [ sec 3.4 ] after some additional preparations .[ prop c.4 ] if and are strong solutions of problem , then .taking and in ( [ c.4 ] ) for and that for respectively , and adding the resulting two inequalities , for a.e . we obtain we deduce from ( [ b.6 ] ) , together with young s inequality , that combining ( [ b.3 ] ) and these estimates with ( [ c.12 ] ) , we have by gronwall s inequality , we conclude since .( note that remains finite because thus . in the case of sbcf here , the last term of ( [ c.12 ] )vanishes , according to lemma [ lem b.2](i ) .we did not use that fact because we would like to make our proof of uniqueness remain unchanged when we deal with lbcf .concerning the associated pressure , we find : [ prop c.5 ] under the assumptions of theorem , let be the strong solution of problem , and be the associated pressure obtained in the proof of theorem . then . for a.e . , the well - known inf - sup condition ( see ( * ? ? ?* i.(5.14 ) ) ) , together with ( [ c.2 ] ) , ( [ b.7 ] ) , and a.e . on ,yields since rhs is bounded uniformly in , is in . to prove the solvability of problem vi, we consider a regularized problem vi , which is shown to be equivalent to a variational equation problem , denoted by problem ve . before stating those problems in detail , for fixed we introduce where is a regularization of having the following properties : 1 . is a nonnegative convex function . 2 . 
for all , it holds that 3 .if denotes , for all it holds that 4 .let denote the hessian of , namely , for .then is semi - positive definite , that is , where means the transpose of .this is a consequence of the convexity of .such does exist ; for example , let be given by if , if .then some elementary computation shows that enjoys all of ( a)(d ) above .one could use the moreau - yoshida approximation of as , which is considered in , but it is only in , not in .since is differentiable , the functional is gteaux differentiable , with its derivative computed by for .we are ready to state the regularized problems mentioned above .vi-sbcf for a.e . , find such that , and ve-sbcf for a.e . , find such that , and here , is a perturbation of the original initial velocity .the way one obtains from is described later . by an elementary observation ( e.g. ( * ? ? ?* section 3.3 ) or ( * ? ? ?* lemma 3.3 ) ) , we see that : [ prop c.1 ] problems and are equivalent. now we focus on the construction of a perturbed initial velocity .since satisfies sbcf by ( s[s3 ] ) , it follows from the green formula , for , that & \hspace{9 cm } ( \forall v\in v_{n,\sigma } ) .\label{c.34}\end{aligned}\ ] ] here we consider the regularized problem : find such that & \hspace{9.5 cm } ( \forall v\in v_{n,\sigma } ) , \label{c.25}\end{aligned}\ ] ] which is equivalent to ( cf . proposition [ prop c.1 ] ) by a standard theory of elliptic variational inequalities , ( [ c.25 ] ) admits a unique solution , which is the perturbation of in question . with this setting ,we find : [ lem c.1 ] ( i ) when , strongly in .\(ii ) and \(i ) taking in ( [ c.25 ] ) and in ( [ c.34 ] ) , adding the resulting two inequalities , applying korn s inequality , and using ( [ c.28 ] ) , we conclude \(ii ) since by ( s[s2 ] ) , we can directly apply the regularity result ( * ? ? ?* lemma 5.2 ) to the elliptic variational inequality ( [ c.25 ] ) , and obtain ( [ c.27 ] ) .though our and are different from those of , it makes no difference in the proof of that lemma .[ rem c.10 ] ( i ) as a result of ( i ) above , for sufficiently small we have \(ii ) concerning the regularity of the domain , assumes that and are class of and respectively , which is sufficient for our theory as well . in , dealing with the stationary problem , the author stated that was enough to derive and . however , it turned out that his proof presented there worked only for ; see the errata by the same author .this is why we have assumed in ( s[s2 ] ) , not . due to proposition [ prop c.1 ] ,we concentrate on solving problem ve . in doing so ,we construct approximate solutions by galerkin s method .since is separable , there exist members , linear independent to each other , such that dense in . here is fixed , and thus we may assume .ve-sbcf find )\ , ( k=1 , . ., m) ] for some .the a priori estimate below shows can be taken as , so that we write instead of from the beginning .[ prop c.2 ] let ( s[s1])(s[s3 ] ) be valid and be small enough so that holds .\(i ) when , and are bounded independently of and .\(ii ) when , the same conclusion holds for some smaller interval , which can be taken independently of and .due to space limitations , we simply write , , , . .instead of , , , . 
.and so on .\(i ) multiplying ( [ c.17 ] ) by , and adding the resulting equations for , we obtain where we have used lemma [ lem b.2](i ) .it follows from ( [ b.3 ] ) and ( [ c.18 ] ) that which gives consequently , for , since by assumption , we find that and are bounded by independently of and .next , we differentiate with respect to , which is possible because s are in ) g>0 ] such that defined by satisfies and & \hspace{9 cm } ( k=1 , . . . ,\label{d.10 } \end{aligned}\ ] ] since , there exist unique solutions )\ , ( k=1 , . ., m) ] of such that if this inequality holds for all , we take . noting , we find from ( [ d.13 ] ) that hence is bounded independently of , .next , differentiating ( [ d.10 ] ) , multiplying the resulting equation by , and adding them , we obtain here , we estimate each term in ( [ d.14 ] ) as follows : collecting these estimates , we derive from ( [ d.14 ] ) that for combining the technique used in proposition [ prop c.2 ] with ( [ d.17 ] ) and ( [ d.55 ] ) , we observe that , , and are bounded by .it remains to show that is bounded from below independently of .if and thus , we can extend beyond and repeat the above discussion until we reach either in the former case . in the latter case ,we have hence is bounded from below , and we complete the proof for .second let us consider the case .what changes from is that ( [ d.15 ] ) is replaced with where can be arbitrarily small .we choose satisfying , so that by virtue ( [ d.20 ] ) .let be the maximum value of ] , we set .such does exist , and if then .therefore , setting , instead of ( [ d.16 ] ) we get as a consequence , we see that , , are bounded by . now ,if or then are bounded from below as follows : when and , we can extend beyond and repeat the above discussion .this completes the proof of proposition [ prop d.1 ] .the last step of the proof , namely , passing to the limits and can be carried out by the same way as proposition [ prop c.3 ] , with replaced by and vice versa .this proves that a solution of problem vi exists , which , combined with the uniqueness result , completes the proof of theorem [ thm d.1 ] . at first glanceone may think theorem [ thm d.1 ] , where we get only a time - local solution in spite of a smallness assumption on even if , is too poor .however , in view of the fact that we obtain only time - local solutions in 2d case under the linear leak b.c .( see ( * ? ? ?* theorem 6 ) or ) , such limitations can not be avoided to some extent . under additional smallness assumptions on the data , we can derive global existence results for both and .by the discussion presented above , we have established the existence and uniqueness , while we did not get in touch with higher regularity such as this is because some regularity results for the elliptic cases are not available .for instance , problem vi-sbcf is rewritten as with for some . if we prove this elliptic variational inequality has a unique solution in when , then a technique similar to ( * ? ? ?* theorems iii.3.6 and iii.3.8 ) allows us to deduce .thereby , we need to extend the regularity theory of to cases .the author would like to thank for professor norikazu saito for encouraging him through valuable discussions .this work was supported by grant - in - aid for jsps fellows and crest , jst .
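as a small numerical complement to the regularized functional of section 3, the sketch below checks properties (a)-(d) for a huber-type smoothing of the absolute value, namely x^2/(2*eps) for |x| at most eps and |x| - eps/2 otherwise. this specific choice is an assumption in the spirit of the example whose formula was lost in extraction, not necessarily the one used in the paper.

```python
import numpy as np

def rho_eps(x, eps):
    """Huber-type smoothing of |x| (an assumed choice of the regularization)."""
    ax = np.abs(x)
    return np.where(ax <= eps, x**2 / (2 * eps), ax - eps / 2)

def drho_eps(x, eps):
    """Derivative of the smoothing."""
    return np.where(np.abs(x) <= eps, x / eps, np.sign(x))

eps = 1e-2
x = np.linspace(-1.0, 1.0, 20001)
r = rho_eps(x, eps)

print(np.all(r >= 0))                                # (a) nonnegative
print(np.max(np.abs(r - np.abs(x))) <= eps)          # (b) uniformly close to |x|
print(np.max(np.abs(drho_eps(x, eps))) <= 1.0)       # (c) gradient bounded by 1
print(np.all(np.diff(drho_eps(x, eps)) >= -1e-12))   # (d) derivative nondecreasing, i.e. convex
```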
strong solutions of the non-stationary navier-stokes equations under non-linearized slip or leak boundary conditions are investigated. we show that the problems are formulated as variational inequalities of parabolic type, for which uniqueness is established. using galerkin's method and deriving a priori estimates, we prove global and local existence for the 2d and 3d slip problems respectively. for the leak problems, under a no-leak assumption at the initial time we prove local existence in the 2d and 3d cases. compatibility conditions for the initial states play a significant role in the estimates.
consider a minimal - delay space - time coded rayleigh quasi - static flat fading mimo channel with full channel state information at the receiver ( csir ) .the input output relation for such a system is given by where is the channel matrix and is the additive noise .both and have entries that are i.i.d .complex - gaussian with zero mean and variance 1 and respectively .the transmitted codeword is and is the received matrix .the ml decoding metric to minimize over all possible values of the codeword is [ ld_stbc_def ] a linear stbc : a linear stbc over a real ( 1-dimensional ) signal set , is a finite set of matrices , where any codeword matrix belonging to the code is obtained from , by letting the real variables take values from a real signal set where are fixed complex matrices defining the code , known as the weight matrices .the rate of this code is complex symbols per channel use .we are interested in linear stbcs , since they admit sphere decoding ( sd ) which is a fast way of decoding for the variables .a further simplified version of the sd known as the fast sphere decoding ( fsd ) ( also known as conditional ml decoding ) was studied by biglieri , hong and viterbo .the quadratic form ( qf ) approach has been used in the context of stbcs in to determine whether quaternion algebras or biquaternion algebras are division algebras , an aspect dealing with the full diversity of the codes .this approach has not been fully exploited to study the other characteristics of stbcs . in this paper, we use this approach to study the fast sphere decoding ( fsd ) complexity of stbcs ( a formal definition of this complexity is given in subsection [ fsdc ] ) . designing stbcs with low decoding complexity has been studied widely in the literature .orthogonal designs with single symbol decodability were proposed in , , .for stbcs with more than two transmit antennas , these came at a cost of reduced transmission rates .to increase the rate at the cost of higher decoding complexity , multi - group decodable stbcs were introduced in , , .fast decodable codes ( codes that admit fsd ) have reduced sd complexity owing to the fact that a few of the variables can be decoded as single symbols or in groups if we condition them with respect to the other variables .fast decodable codes for asymmetric systems using division algebras have been recently reported . golden code and silver code are also examples of fast decodable codes as shown in and .the properties of fast decodable codes and multi - group decodable codes were combined and a new class of codes called fast group decodable codes were studied in . in this subsectionwe define the hurwitz radon quadratic form ( hrqf ) on any stbc .we first recall some basics about quadratic forms .more details can be seen in .[ quad_form_def ] let be a field with characteristic not 2 , and be a finite dimensional -vector space .a quadratic form on is defined as a map such that it satisfies the following properties .* for all and all . * the map ] .hence , we can associate a matrix with the quadratic form such that [ hrqf_def ] the hurwitz radon quadratic form is a map from the stbc to the field of real numbers , i.e. , given by where is an element of the stbc and [ hrqf_is_qf_thm ] the map defined by is a quadratic form . the map needs to satisfy the conditions as defined in definition [ quad_form_def ] .we have and \ ] ] should be bilinear and symmetric where and . 
substituting and simplifying ,we get .\ ] ] it is clearly seen that this map is bilinear and symmetric .we can associate a matrix with the hrqf .if we define the matrix where such that , then we can write the hrqf as where ] , ] and ^{t } = \textbf{u}\left [ s_{3 } , s_{4}\right]^{t}, ] let all the variables take values from a signal set of cardinality .if we order the variables ( and hence the weight matrices ) as ] , then the matrix for sd has the following structure ,\ ] ] where denotes non zero entries . with this ordering , the fsd complexity increases to .the contributions of this paper are as follows : * we give a formal definition of the fsd complexity of a linear stbc ( subsection [ fsdc ] . ) * with the help of hrqf , it is shown that the fsd complexity of the code depends only on the weight matrices of the code with their ordering , and not on the channel realization ( even though the equivalent channel when sd is used depends on the channel realization ) or the number of receive antennas . * a best ordering ( not necessarily unique ) of the weight matrices provides the least fsd complexity for the stbc .we provide an algorithm to be applied to the hrqf matrix which outputs a best ordering .the remaining of the paper is organized as follows : in section [ sec2 ] the known classes of low ml decodable codes , the system model and the formal definition of the fsd complexity of a linear stbc are given . in section [ sec3 ] , we show that the fsd complexity depends completely on the hrqf and not on the channel realization or the number of receive antennas . in section [ sec4 ] , we present an algorithm to modify the hrqf matrix in order to obtain a best ordering of the weight matrices to obtain the least fsd complexity . concluding remarks constitute section [ sec5 ] ._ notations : _ throughout the paper , bold lower - case letters are used to denote vectors and bold upper - case letters to denote matrices .for a complex variable , and denote the real and imaginary part of , respectively .the sets of all integers , all real and complex numbers are denoted by and , respectively .the operation of stacking the columns of one below the other is denoted by .the kronecker product is denoted by , and denote the identity matrix and the null matrix , respectively . for a complex variable ,the operator acting on is defined as follows .\ ] ] the operator can similarly be applied to any matrix by replacing each entry by , resulting in a matrix denoted by .given a complex vector ^{t}, ]for any linear stbc with variables given by ( [ ld_stbc ] ) , the generator matrix is defined by where ^{t} ] with each drawn from a 1-dimensional ( pam ) constellation . 
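before turning to the decoding metric, a concrete illustration of the hrqf matrix introduced earlier in this section may be useful. in the sketch below the entry m(i, j) is taken to be the squared frobenius norm of a_i a_j^h + a_j a_i^h, which vanishes exactly when the two weight matrices satisfy the hurwitz-radon orthogonality condition; this concrete choice is an assumption made for illustration, since the displayed definition was lost in extraction. the example uses the alamouti code, whose four weight matrices are pairwise hurwitz-radon orthogonal, and then reads off groups of mutually orthogonal variables from the zero pattern of the matrix.

```python
import numpy as np

def hrqf_matrix(weights):
    """M[i, j] = ||A_i A_j^H + A_j A_i^H||_F^2 (assumed concrete form of the
    HRQF matrix): the entry is zero exactly when A_i and A_j are
    Hurwitz-Radon orthogonal."""
    k = len(weights)
    M = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            S = weights[i] @ weights[j].conj().T + weights[j] @ weights[i].conj().T
            M[i, j] = np.linalg.norm(S, "fro") ** 2
    return M

def hr_groups(M, tol=1e-12):
    """Connected components of the graph with an edge wherever M[i, j] != 0.
    Variables in different components are mutually HR-orthogonal, so they can
    be decoded in separate groups (cf. the multi-group decodability lemma)."""
    k, seen, groups = M.shape[0], set(), []
    for s in range(k):
        if s in seen:
            continue
        stack, comp = [s], []
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(v)
            stack.extend(u for u in range(k) if u != v and M[v, u] > tol)
        groups.append(sorted(comp))
    return groups

j = 1j
# Alamouti code: X = [[x1 + j*x2, x3 + j*x4], [-(x3 - j*x4), x1 - j*x2]]
A = [np.array([[1, 0], [0, 1]], dtype=complex),
     np.array([[j, 0], [0, -j]], dtype=complex),
     np.array([[0, 1], [-1, 0]], dtype=complex),
     np.array([[0, j], [j, 0]], dtype=complex)]

M = hrqf_matrix(A)
print(M)             # diagonal matrix: every pair is HR-orthogonal
print(hr_groups(M))  # four singleton groups -> single real-symbol decodable
```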
using the above equivalent system model ,the ml decoding metric can be written as using decomposition of , we get where is an orthonormal matrix and is an upper triangular matrix .using this , the ml decoding metric now changes to if we have , ] with as column vectors and ] , then the hrqf matrix and the matrix will have the following structure : ,\ ] ] ,\ ] ] where denotes the non - zero entries .as it can be seen , the upper triangular portion of the matrix , has a structure that admits fast decodability which is conditionally -group decodable if considered as the matrix .we now turn to the class of fast group decodable codes .[ hrqf_fast_group_decode_lemma ] consider an stbc .let denote the hrqf matrix of this stbc .if there exists an ordered partition of into non - empty subsets with cardinalities such that whenever and and , and if any group admits fast decodability , i.e. , there exists an ordered partition of where , into non - empty subsets such that whenever and and , , then the code is fast group decodable .the proof follows from the proofs of lemmas [ hrqf_multi_group_lemma ] and [ hrqf_fast_decode_lemma ] .we now consider an example to illustrate the above lemma .[ hrqf_fast_group_decode_ex ] consider the fast group decodable stbc given in .\ ] ] ' '' '' let the ordering of the variables ( and hence the weight matrices ) be ] , we get the following hrqf matrix and the matrix for this ordering : ;\ ] ] .\ ] ] the fsd complexity for this ordering is . andthis ordering does not admit fast decoding as well .when we run the algorithm on the given hrqf matrix , the two sets and are formed , which are hr orthogonal with each other . in this case , and .the conditioned variables will be present in the set .the hrqf matrix at this stage is as given by .the ordering of the variables at the end of this stage is ] .the matrix for this ordering is given by \ ] ] \ ] ] the fsd complexity for this matrix is which is the best possible complexity for the silver code .[ algo_multi_group ] consider the fast group decodable code presented in example [ hrqf_fast_group_decode_ex ] .if we order the variables as ] .the and the matrix for this ordering is as shown below .,\ ] ] .\ ] ] this ordering gives us the fsd complexity of .in this paper we have analysed the fsd complexity of an stbc using quadratic forms .we have shown that the hrqf completely categorizes the fsd complexity of an stbc and hence it is independent of the channel and the number of receive antennas .we have provided an algorithm to obtain a best ordering of weight matrices to get the best decoding performance from the code .this work was supported partly by the drdo - iisc program on advanced research in mathematical engineering through a research grant , and partly by the inae chair professorship grant to b. s. rajan .k. p. srinath and b. s. rajan , low ml - decoding complexity , large coding gain , full - rate , full - diversity stbcs for 2x2 and 4x2 mimo systems , ieee journal of selected topics in signal processing : special issue on managing complexity in multiuser mimo systems , vol .3 , no . 6 , pp .916 - 927 , dec . 2009 .t. p. ren , y. l. guan , c. yuen and r. j. shen , fast - group - decodable space - time block code , proceedings ieee information theory workshop , ( itw 2010 ) , cairo , egypt , jan .6 - 8 , 2010 , available online at http://www1.i2r.a-star.edu.sg/cyuen/publications.html .o. tirkkonen , a. boariu and a. 
hottinen , minimal non - orthogonality rate 1 space - time block code for 3 + tx antennas , proceedings of ieee international symposium on spread - spectrum techniques and applications , new jersey , pp .429 - 432 , sep .6 - 8 , 2000 .
decoding of linear space-time block codes (stbcs) with sphere decoding (sd) is well known. a fast version of the sd, known as fast sphere decoding (fsd), has recently been studied by biglieri, hong and viterbo. viewing a linear stbc as a vector space spanned by its defining weight matrices over the real number field, we define a quadratic form (qf), called the hurwitz-radon qf (hrqf), on this vector space and give a qf interpretation of the fsd complexity of a linear stbc. it is shown that the fsd complexity is only a function of the weight matrices defining the code and their ordering, and not of the channel realization (even though the equivalent channel when sd is used depends on the channel realization) or the number of receive antennas. it is also shown that the fsd complexity is completely captured in a single matrix obtained from the hrqf. moreover, for a given set of weight matrices, an algorithm to obtain a best ordering of them, leading to the least fsd complexity, is presented. the well-known classes of low-fsd-complexity codes (multi-group decodable codes, fast decodable codes and fast group decodable codes) are presented in the framework of the hrqf.
let be infinite dimensional hilbert spaces and a bounded linear operator .if , the range of , is not closed it is well known that the linear operator equation is ill - posed , in the sense that , the moore - penrose generalized inverse of , is not bounded .the moore - penrose generalized inverse is strongly related to the least - squares ( ls ) solutions of ( [ eq:0 ] ) .in fact equation ( [ eq:0 ] ) has a ls solution if and only if belongs to , the domain of , which is defined as . in that case , is the best approximate solution ( i.e. the ls solution of minimum norm ) and the set of all ls solutions of ( [ eq:0 ] ) is given by . if the problem is ill - posed , then does not depend continuously on the data .hence if instead of the exact data , only an approximation is available , with , where is the noise level or observation error , then it is possible that does not exist or , if it exists , then it will not necessarily be a good approximation of , even if is very small .this instability becomes evident when trying to approximate by standard numerical methods and procedures .thus , for instance , except under rather restrictive conditions ( , ) , the application of the standard ls approximations procedure on a sequence of finite dimensional subspaces of , whose union is dense in , will result in a sequence of ls approximating solutions which does not converge to ( see ) . moreover, this divergence can occur with arbitrarily large speed ( see ) .ill - posed problems must be regularized before pretending to successfully attack the problem of numerically approximating their solutions .regularizing an ill - posed problem such as ( [ eq:0 ] ) essentially means approximating the operator by a parametric family of continuous operators , where is called the regularization parameter .more precisely , for with ] there holds .let be a parametric family of functions defined for all .we shall say that is a `` spectral regularization method '' ( srm ) , if it satisfies the following hypotheses : _ h1_. for every fixed is piecewise continuous with respect to , for ; _ h2_. there exists a constant ( independent of ) such that for every ; _h3_. for every , it can be shown that if is a srm then the family of operators defined by is a fro for ( , theorem 4.1 ) . in this casewe shall say that is a spectral regularization family " for .the use of this terminology has to do with the fact that each one of its elements is defined in terms of an integral with respect to the spectral family associated to the operator .note that given the operator , it is sufficient that be defined for ] be an increasing function .it is said that the regularization method has qualification if there exists a constant such that }{\sup}{\left\vert1-\lambda g_\alpha(\lambda)\right\vert}\rho(\lambda)\leq \gamma \,\rho(\alpha)\quad \forall \;\alpha \in ( 0,a].\ ] ] in this article we generalize the previous concept , mainly by allowing the function appearing in the left hand side of ( [ eq : calif - mathe ] ) to be substituted by a general function with similar properties .it is important to point out that in the classical qualification " of a method was defined to be the number in definition [ def : calif - clasica ] ( even in the case ) . however , from our point of view the generalized qualification " of a method will not be a number but rather a function of the regularization parameter as an order of convergence in the sense of definition [ def : calif - mathe ] . 
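as a small numerical illustration of the classical notion just recalled, the sketch below checks on a grid whether the supremum over lambda of |1 - lambda g_alpha(lambda)| lambda^mu stays bounded by a constant times alpha^mu for the tikhonov-phillips family g_alpha(lambda) = 1/(lambda + alpha). the ratio remains bounded for mu at most 1 and blows up for mu greater than 1, which is the familiar statement that tikhonov-phillips regularization has classical qualification of order 1. the grid and the specific family are illustrative choices.

```python
import numpy as np

def sup_ratio(mu, alpha, a=1.0, npts=5000):
    """sup_{lambda in (0, a]} |1 - lambda*g_alpha(lambda)| * lambda**mu,
    divided by alpha**mu, on a log-spaced grid, for the Tikhonov-Phillips
    family g_alpha(lambda) = 1/(lambda + alpha)."""
    lam = np.logspace(-12, np.log10(a), npts)
    residual = alpha / (alpha + lam)          # = 1 - lambda * g_alpha(lambda)
    return np.max(residual * lam**mu) / alpha**mu

for mu in (0.5, 1.0, 1.5, 2.0):
    ratios = [sup_ratio(mu, alpha) for alpha in (1e-2, 1e-4, 1e-6)]
    print(mu, ["%.3g" % r for r in ratios])
# the ratio stays bounded for mu <= 1 and grows without bound for mu > 1,
# i.e. Tikhonov-Phillips has classical qualification of order 1
```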
in the case of srms with classical qualification of positive finite order , the corresponding generalized qualification will be shown to be the function , coinciding with the classical approach . since in the extreme cases and that function does not define an order of convergence , we have preferred to exclude them from the definition of classical qualification ( definition [ def : calif - clasica ] ) and , accordingly , we shall say that the method does not have classical qualification .the organization of this article is as follows . in section 2 the concepts of weak and strong source - order pair and of order - source pairare defined and three qualification levels for srm are introduced : weak , strong and optimal . a sufficient condition for the existence of weak qualification is provided and necessary and sufficient conditions for an order of convergence to be strong or optimal qualification are given .in section 3 , examples of all qualification levels are provided and the relationships between them and with the classical qualification and the qualification introduced in are shown .in particular , srms having qualification in each one of the three levels and not having classical qualification are presented .finally several implications of this theory in the context of orders of convergence , converse results and maximal source sets for inverse ill - posed problems are shown in section 4 .it is well known that there exist srms for which the corresponding given in definition [ def : calif - clasica ] is infinity , e.g.truncated singular value decomposition ( tsvd ) , landweber s method and showalter s method .however , a careful analysis leads to observe that the concept of qualification as optimal order of convergence of the regularization error remains alive underlying most of these and many other methods . in this sectionwe generalize the definition of qualification introduced by math - pereverzev in and thereby the notion of classical qualification of a srm .also three different levels of qualification are introduced : weak , strong and optimal .these levels introduce natural hierarchical categories for the srms and we show that the generalized qualification corresponds to the lowest of these levels . moreover , a sufficient condition which guarantees that a srm possesses qualification in the sense of this generalization is provided and necessary and sufficient conditions for a given order of convergence to be strong or optimal qualification are found .we denote with the set of all non decreasing functions such that and with the set of all continuous functions satisfying and such that for every if moreover is increasing , then it is an _index function _ in the sense of math - pereverzev ( ) .let .we say that `` precedes at the origin '' and we denote it with , if there exist positive constants and such that for every . [def : order ] let .we say that `` and are equivalent at the origin '' and we denote it with , if they precede each other at the origin , that is , if there exist constants , , such that for every . 
clearly , " introduces an order of equivalence in .analogous definitions and notation will be used for .let be a srm , , and * i ) * we say that is a `` weak source - order pair for '' if it satisfies * ii ) * we say that is a `` strong source - order pair for '' if it is a weak source - order pair and there is no for which in ( [ eq : o ] ) can be replaced by .that is , if ( [ eq : o ] ) holds and also * iii ) * we say that is an `` order - source pair for '' if there exist a constant and a function with , such that in the previous definitions we shall refer to the function as the order of convergence " and to as the source function " .the reason for using this terminology will become clear in section 4 when we shall see applications of these concepts in the context of direct and converse results for regularization methods .the following observations follow immediately from the definitions . 1 .if is a weak source - order pair for which is not a strong source - order pair , then there exists such that and therefore can not be an order - source pair for . thus if is an order - source pair and is a weak source - order pair , then is further a strong source - order pair in the sense of * _ ii_)*. 2 .let .if is a weak source - order pair for and then is also a weak source - order pair for .if is a weak source - order pair for and is such that there exists for which for every , then is also a weak source - order pair for . in the following definitionwe introduce the concept of generalized qualification and three different levels of it .[ def : calif-3 ] let be a srm . * i ) * we say that is `` weak or generalized qualification of '' if there exists a function such that is a weak source - order pair for .* ii ) * we say that is `` strong qualification of '' if there exists a function such that is a strong source - order pair for .* iii ) * we say that is `` optimal qualification of '' if there exists a function such that is a strong source - order pair for ( it is sufficient that be a weak source - order pair ) and is an order - source pair for .it is important to observe that weak qualification generalizes the concept of qualification introduced by math and pereverzev in and therefore , the notion of classical qualification .in fact , if has continuous qualification in the sense of definition [ def : calif - mathe ] and , then the function is weak qualification of . however ,these two notions are not equivalent .we shall see later on that it is possible for a function to be weak qualification of a srm and not be qualification according to definition [ def : calif - mathe ] ( see comments at the end of section 3 ) .it is timely to note here that if has classical qualification of order , then is weak qualification of and moreover is a weak source - order pair for for every ] .let .then for every ] from which it follows that for every ] then for every , from what it follows that .then , for any , therefore , is a weak source - order pair for , which implies that is weak qualification of .( note that in this case any is weak qualification of . ) * b ) * let be a srm such that for every , is positive and monotone decreasing for .for we define , where since for every , , it follows that given there exists such that for every . then , moreover $ ] for every and therefore , for every . 
on the other hand ,since for every , is decreasing for , it follows immediately that is strictly increasing .furthermore , since is bounded , it has countably many jump discontinuity points .therefore , it is possible to assume , without loss of generality , that is continuous ( since , if it is not , we can redefine it in such a way that it be continuous , by subtracting the jumps at the discontinuity points ) .thus is continuous , strictly increasing with .therefore , its inverse function exists over the range of and it is strictly increasing and continuous with .it is possible to extend to in such a way that it preserves all these properties .we shall denote with this extension . for , we define since for every , is positive for all , it follows that is also positive .since for every , , the definition of implies that for every , or equivalently , for every .then for every and the fact that implies that .if further is a non decreasing function , then and it suffices to define . on the contrary , since is bounded and positive with , there always exists a function such that for every , as we wanted to show .> from the previous theorem , it follows that the srms such that for every , is decreasing for and for every , is positive and decreasing for , do possess weak qualification .it is important to observe that most of the usual srms do in fact satisfy these conditions . in particularthis is so for landweber s and showalter s methods .now given the srm and , we define note that in the next three results we will see that the characteristics of a given function , as a possible strong or optimal qualification of a srm , can be determined from properties of that function . [ teo : cond - calif](necessary and sufficient condition for strong qualification . )a function such that is strong qualification of if and only if suppose that is strong qualification of .then there exists a function such that is a strong source - order pair for .then , for every , thus ( [ eq : cond - calif ] ) follows from ( [ eq : o ] ) and ( [ eq : no - o ] ) .conversely , suppose now that for every .we will show that is strong qualification of .for that let us see that is a strong source - order pair for . since for every , it follows that then , verifies ( [ eq : o ] ) and ( [ eq : no - o ] ) , which , together with the fact that , implies that is a strong source - order pair and thus is strong qualification of .[ teo : nu < nu - rho ] let be strong qualification of and . then is a strong source - order pair for if and only if there exists such that for every .since is strong qualification , by proposition [ teo : cond - calif ] it follows that for every .suppose now that is a strong source - order pair for .then there exist positive constants and such that for every , .then , for every and therefore for every .conversely , suppose that there exists such that for every .since , it then follows that that is , is a weak source - order pair for .moreover since and are positive for all , it follows that verifies ( [ eq : no - o ] ) and therefore is , furthermore , a strong source - order pair for .[ teo : cond - calopt ] ( necessary and sufficient condition for optimal qualification . )a function such that is optimal qualification of if and onlyif verifies and . 
suppose that is optimal qualification .then is strong qualification and it follows from proposition [ teo : cond - calif ] that verifies ( [ eq : cond - calif ] ) .moreover since is optimal qualification , there exists such that is a strong source - order pair and is an order - source pair . from the latterit follows that there exist a constant and a function with , such that on the other hand , since is a strong source - order pair for , it follows from proposition [ teo : nu < nu - rho ] that there exists such that > from ( [ eq:2 ] ) and ( [ eq:3 ] ) it follows that that is , satisfies ( [ eq:4.50 ] ) as we wanted to show .conversely , suppose that verifies ( [ eq:4.50 ] ) and ( [ eq : cond - calif ] ) . by proposition [ teo : cond - calif ]we have that is a strong source - order pair for and ( [ eq:4.50 ] ) implies that is an order - source pair .then , is optimal qualification of .next we will show the uniqueness of the source function .[ teo : unica nu ] if is optimal qualification of then there exists at most one function ( in the sense of the equivalence classes induced by definition [ def : order ] ) such that is a strong source - order pair and is an order - source pair for .moreover if , then is such a unique function . given that is optimal qualification of , there exists at least one function such that is a strong source - order pair and is an order - source pair for .suppose now that there exist and such that and are strong source - order pairs and and are order - source pairs for .then there exist and a function with , such that for every .then , on the other hand , since is a strong source - order pair , there exist positive constants and such that from ( [ eq:4 ] ) and ( [ eq:5 ] ) it follows that since we have that for every .analogously , by interchanging and it follows that there exists such that for every and therefore , .suppose now that .since is optimal qualification of it follows from theorem [ teo : cond - calopt ] that verifies ( [ eq:4.50 ] ) and ( [ eq : cond - calif ] ) .then , is the unique function such that is a strong source - order pair and is an order - source pair for .the following is a result about the uniqueness of the order .if and are strong source - order pairs for and there exists , then .suppose that and are strong source - order pairs for .we will first show that suppose that since is a strong source - order pair we have that and it follows from ( [ eq:6 ] ) and ( [ eq:7 ] ) that the on the right - hand side of the previous expression must be equal to zero , which is a contradiction .then , similarly , it is shown that since there exists , we then have that and . then , and , that is , , as we wanted to show .in this section we present several examples which illustrate the different qualification levels previously introduced as well as the relationships between them and with the concept of classical qualification and the qualification introduced in .although some of these examples are only of academic interest and nature , they do serve to show the existence of regularization methods possessing qualification in each one of the levels introduced in this article .* example 1 .* tikhonov - phillips regularization method , where has classical qualification of order ( ) .we will see that is optimal qualification in the sense of definition [ def : calif-3 ] * _ iii)_*. in fact , for , and if then , that is , verifies ( [ eq : cond - calif ] ) . also since we have that verifies ( [ eq:4.50 ] ) . 
from theorem [ teo : cond - calopt ] it then follows that is optimal qualification of .* example 2 .* let be the family of functions associated to the truncated singular value decomposition ( tsvd ) , it follows that , where is as in definition [ def : calif - clasica ] .therefore , tsvd does not have classical qualification . in this casewe have that let and .then then , it follows from theorem [ teo : cond - caldebil].a ) that any function is weak qualification of the method .however , tsvd does not have strong qualification .in fact , for any function we have that for every .proposition [ teo : cond - calif ] implies then that is not strong qualification of the method . in was observed that tsvd has arbitrary qualification in the sense of definition [ def : calif - mathe ] .* example 3 . * for define it can be immediately verified that satisfies the hypotheses _ h1-h3 _ and therefore is a srm . since for all , it follows that for every , then , does not have classical qualification ( more precisely , where is as in definition [ def : calif - clasica ] ) .we will now show that is optimal qualification of .since for every , it follows from proposition [ teo : cond - calif ] that is strong qualification of .moreover since it follows that verifies ( [ eq:4.50 ] ) .theorem [ teo : cond - calopt ] then implies that is optimal qualification of .* example 4 . * for with ,define clearly , satisfies hypotheses _ h1-h3 _ and therefore is a srm .since for all , it follows that for every , then , and therefore does not have classical qualification .however , we will show that is optimal qualification of .in fact , since for every and it follows from theorem [ teo : cond - calopt ] that is optimal qualification of .* example 5 .* let be the tikhonov - phillips regularization method , which , as previously mentioned , it has classical qualification of order .in example 1 we saw that is optimal qualification of this method and therefore it is also weak qualification of it . since it follows from definition [ def : calif-3].i ) and observation 2.a ) that is also weak qualification .however , is not strong qualification of the method .in fact , for any , we have that * example 6 .* let be the srm defined in example 4 .this method does not have classical qualification since .we proved that is optimal qualification and therefore , it is also weak qualification . 
since , just like in the previous example , it follows immediately that is weak qualification .let us show now that is not strong qualification of the method .for any , we have that it is important to observe that if is strong qualification of a srm then it follows immediately from the definition of strong source - order pair that the method has classical qualification of order .the converse , however , is not true as the next example shows .hence it is the weak and not the strong qualification what generalizes the classical notion of this concept .* example 7 .* for with define and in this case , one can immediately show that is a srm with classical qualification of order .however , is not strong qualification of the method .in fact , for any , we can see that and therefore condition ( [ eq : no - o ] ) is not satisfied .srms possessing strong but not optimal qualification , have very peculiar properties .thus for instance , it is possible to show that if is strong qualification which is not optimal , then , the function it is not of bounded variation as a function of in any neighborhood of .even so , the following three examples show the existence of srm having strong but not optimal qualification and they show that strong qualification in no case implies optimal qualification .* example 8 . * given , for define so that it can be immediately checked that is a srm with classical qualification of order .with we have that , since , from proposition [ teo : cond - calif ] it follows that is a strong source - order pair and is strong qualification of the method .however , for every , \;=\;0.\end{aligned}\ ] ] therefore equation ( [ eq:4.50 ] ) does not hold and is not optimal qualification of the method . * example 9 . * for define as follows : so that it can be immediately verified that is a srm which does not have classical qualification ( ) .however , with we have that }\\ & = & \lambda^{\frac12}.\end{aligned}\ ] ] since , by proposition [ teo : cond - calif ] is a strong source - order pair and is strong qualification of the method .however , we have that and therefore ( [ eq:4.50 ] ) does not hold and is not optimal qualification of the method .* example 10 .* for and define so that just like in examples 8 and 9 it can be easily checked that is a srm which does not have classical qualification ( ) , that is strong but not optimal qualification of the method and that is a strong source - order pair with . note that examples 2 , 3 , 4 , 6 , 9 and 10 correspond to srms which do not have classical qualification but , however , they do have generalized qualification , falling in some of its three different levels . also landweber s method and showalter s method , which as previously pointed out do not have classical qualification ( in both cases ) , are srms defined by ( where and , respectively .it can be easily proved , by using theorem [ teo : cond - caldebil ] , that is weak qualification of landweber s method and is weak qualification of showalter s method . however , in this last case it can be easily shown that does not satisfy condition ( [ eq : calif - mathe ] ) and therefore is not qualification in the sense of definition [ def : calif - mathe ] . 
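The saturation behaviour that separates a method with finite classical qualification (such as Tikhonov-Phillips) from methods with infinite classical qualification (such as TSVD or Landweber) can be probed numerically. The sketch below is our own illustration of the classical notion only; it does not implement the generalized source-order pairs defined above, whose exact form is not reproduced in this text. It evaluates the residual function r_alpha(lambda) = 1 - lambda * g_alpha(lambda) of each method on a grid and monitors sup_lambda r_alpha(lambda) * lambda^mu relative to alpha^mu as alpha decreases; a ratio that stays bounded indicates classical qualification of order at least mu. The spectral grid and the exponents mu in {0.5, 1, 2} are arbitrary choices made for illustration.

```python
import numpy as np

# Illustrative residual functions r_alpha(lambda) = 1 - lambda * g_alpha(lambda)
# for three standard spectral regularization methods (assumed textbook forms,
# not taken from the article):
#   Tikhonov-Phillips:                 r(l) = alpha / (alpha + l)
#   TSVD:                              r(l) = 1 if l < alpha else 0
#   Landweber (k steps, alpha ~ 1/k):  r(l) = (1 - l)^k
def r_tikhonov(l, alpha):
    return alpha / (alpha + l)

def r_tsvd(l, alpha):
    return (l < alpha).astype(float)

def r_landweber(l, alpha):
    k = int(round(1.0 / alpha))
    return (1.0 - l) ** k

lam = np.linspace(1e-8, 1.0, 200_000)   # spectral grid on (0, 1]

for mu in (0.5, 1.0, 2.0):
    print(f"mu = {mu}")
    for name, r in (("tikhonov", r_tikhonov), ("tsvd", r_tsvd), ("landweber", r_landweber)):
        ratios = []
        for alpha in (1e-2, 1e-3, 1e-4):
            # sup over the grid of r_alpha(lambda) * lambda^mu, compared with alpha^mu;
            # a bounded ratio as alpha -> 0 indicates classical qualification >= mu.
            sup_val = np.max(r(lam, alpha) * lam ** mu)
            ratios.append(sup_val / alpha ** mu)
        print(f"  {name:10s} ratios: {[f'{x:.3g}' for x in ratios]}")
```

For the Tikhonov filter the printed ratios stay bounded for mu up to 1 and grow beyond that value, while for TSVD and Landweber they remain bounded for every mu tested; this is the numerical counterpart of the finite versus infinite classical qualification discussed in the examples above.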
the different qualification levels introduced in this article and the relationships between them are visualized in figure 1 .the generalization of the concept of qualification of a srm introduced in the previous sections is strongly related with and it has a broad spectrum of applications in the context of orders of convergence , converse results and maximal source sets for inverse ill - posed problems .we present next some results in this direction .however , we point out that this is not the main objective of the present article . for that reason ,some of this results will be stated without proof .more detailed results in this regard will appear in a forthcoming article .let be infinite dimensional hilbert spaces and a bounded , linear invertible operator such that is not closed . for , the set , will be referred to as the source set associated to the function and the operator " . in all that follows ,the hypothesis can be replaced by continuous on and , where is the set of all functions which are measurable with respect to the measures for every .the following direct result , whose proof follows immediately from the concept of weak source - order pair , states that if the exact solution of the problem belongs to the source set and is a weak source - order pair for , then the regularization error has order of convergence . for brevity reasons we do not give the proof here .[ teo : gen-4.3 ] let be weak qualification of and such that is a weak source - order pair for .if then for .it is important to note here that the previous result can be viewed as a generalization of theorem 4.3 in , to the case of srm with weak qualification and general source sets .in fact , that result corresponds to the particular case in which has classical qualification of order .the following converse result states that if the regularization error has order of convergence and is an order - source pair , then the exact solution belongs to the source set given by the range of the operator .[ teo : gen-411 ] if is an order - source pair for and for , then the proof follows immediately from the definition of order - source pair for the srm .it is interesting to note that theorem [ teo : gen-411 ] can also be viewed as a generalization of theorem 4.11 in .in fact , this corresponds to the particular case in which y .if moreover is optimal qualification then the reciprocal of theorem [ teo : gen-411 ] also holds .this is proved in the following theorem . if is optimal qualification of and , then for if and only if let be optimal qualification of and .then by theorem [ teo : unica nu ] , is an order - source pair for and since for , it follows from theorem [ teo : gen-411 ] that conversely , if , since by virtue of theorem [ teo : unica nu ] is a strong source - order pair , theorem [ teo : gen-4.3 ] implies that for .an important result regarding existence and maximality of source sets is the following : if is strong qualification of a srm and it follows from proposition [ teo : nu < nu - rho ] that is a maximal source set where is order of convergence of the regularization error .more precisely we have the following result .[ teo : rango ] let be strong qualification of such that and .if is a strong source - order pair for and then .under the hypotheses of the proposition [ teo : nu < nu - rho ] , there exists such that for every , which implies that .if moreover is optimal qualification the following stronger result is obtained . 
if is optimal qualification of and , then is the only source set where is order of convergence of the regularization error of .this result follows immediately from theorem [ teo : unica nu ] .* examples : * \1 . for the tikhonov - phillips regularization method the only source set where is optimal qualification is , since in this case . \2 . in example 3 of section 3 we saw that is optimal qualification of and .since it follows that is the only source set where is order of convergence of the regularization error .in example 8 of the previous section , for we have that .since is strong qualification of this srm , it follows that is a maximal source set where is order of convergence of the regularization error .as pointed out at the end of section 3 , is weak qualification of showalter s method .it can be easily shown that for every , is a weak source - order pair for the method .therefore , it follows from theorem [ teo : gen-4.3 ] that the regularization error has order of convergence whenever .. happens with landweber s method and .in this article we have extended the definition of qualification for spectral regularization methods introduced by math and pereverzev in .this extension was constructed bearing in mind the concept of qualification as the optimal order of convergence of the regularization error that a method can achieve ( , , , ) .three different levels of generalized qualification were introduced : weak , strong and optimal .in particular , the first of these levels extends the definition introduced in and a srm having weak qualification which is not qualification in the sense of definition [ def : calif - mathe ] was shown .sufficient conditions for a srm to have weak qualification were provided , as well as necessary and sufficient conditions for a given order of convergence to be strong or optimal qualification .examples of all three qualification levels were provided and the relationships between them as well as with the classical concept of qualification and the qualification introduced in were shown .several srms having generalized qualification in each one of the three levels and not having classical qualification were presented . in particular , it was shown that the well known tsvd , showalter s and landweber s methods do have weak qualification .finally several implications of this theory in the context of orders of convergence , converse results and maximal source sets for inverse ill - posed problems , were briefly shown .more detailed results on these implications will appear in a forthcoming article .
the concept of qualification for spectral regularization methods ( srm ) for inverse ill - posed problems is strongly associated to the optimal order of convergence of the regularization error ( , , , ) . in this article , the definition of qualification is extended and three different levels are introduced : weak , strong and optimal . it is shown that the weak qualification extends the definition introduced by math and pereverzev ( ) , mainly in the sense that the functions associated to orders of convergence and source sets need not be the same . it is shown that certain methods possessing infinite classical qualification , e.g. truncated singular value decomposition ( tsvd ) , landweber s method and showalter s method , also have generalized qualification leading to an optimal order of convergence of the regularization error . sufficient conditions for a srm to have weak qualification are provided and necessary and sufficient conditions for a given order of convergence to be strong or optimal qualification are found . examples of all three qualification levels are provided and the relationships between them as well as with the classical concept of qualification and the qualification introduced in are shown . in particular , srms having extended qualification in each one of the three levels and having zero or infinite classical qualification are presented . finally several implications of this theory in the context of orders of convergence , converse results and maximal source sets for inverse ill - posed problems , are shown . * keywords . * qualification , regularization method , inverse ill - posed problem .
analyses of high - throughput genomic data often produce ranked lists of genomic loci .two examples are lists of differentially expressed genes from an rna - seq or microarray experiments and lists of transcription factor ( tf ) binding peaks from chip - seq data . in each list , loci are ranked based on scores such as p - values , false discovery rates ( fdr ) or other summary statistics .when two such lists are available , a common problem is to characterize the degree of concordance between them .below are two examples . *_ characterizing co - binding of two transcription factors _ :chip - seq data are collected for two different tfs . for each tf ,an initial data analysis yields a list of peaks along the genome representing its putative binding regions . in order to characterize whether the two tfs collaborate and how they interact with each other ,one wants to compare the two peak lists to answer the following questions : ( 1 ) what proportion of the true binding sites are shared by the two tfs ? ( 2 ) how does this proportion change as one moves from high quality peaks to low quality ones ? * _ assessing reproducibility of scientific findings _ : gene expression data for the same biological system are collected independently by two different laboratories .each lab collects the data using its own platform and protocol .the data from each lab contain gene expression profiles for two biological conditions , each with multiple replicate samples .each lab analyzes its own data to generate a list of differentially expressed genes .one wants to compare the differential gene lists from the two labs to determine which differential genes are likely to be reproducible by other labs . in both these scenarios ,perhaps the best way to compare two datasets is to model it at the raw data level .whenever possible , directly comparing or modeling the raw data may allow one to keep most of the information . however , this is not always easy or feasible .for instance , sometimes genomic rank lists are published without releasing the raw data to protect confidentiality of research subjects .sometimes , one may want to compare his / her own data with thousands of other datasets in public repositories such as encode , modencode , and gene expression omnibus ( geo ) . analyzing all the raw data in these databasesis a huge undertaking that requires significant amount of resources .this is often beyond the capacity of an individual investigator , and it may not be justified based on the return . in those situations , comparing two datasets based on the readily available rank lists may be preferred . sometimes , this may be the only solution .this article considers analysis issues in this scenario . as an exploratory tool ,the simple venn diagram approach is widely used to show the overlap between two genomic loci lists . however, this approach does not consider the concordance or correlation of ranks between the two lists .a feature commonly seen in genomic rank lists is that the top ranked loci are more likely to be true signals .signals are more likely to be reproduced in independent studies than noise ; therefore , they tend to be correlated between different datasets .because of this , the concordance between the two rank list is a function that changes with the rank of the loci .this information is not reflected in a venn diagram . 
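As a small self-contained illustration of this limitation (ours, not taken from the methods discussed below): a Venn diagram fixes one cutoff per list, whereas the agreement between two ranked lists typically changes as the cutoff is relaxed. The sketch below assumes that each list is simply an ordered sequence of locus identifiers (the names are made up) and reports, for several rank cutoffs, the fraction of the top-ranked loci of one list that appear anywhere in the other.

```python
def overlap_by_rank(list_a, list_b, k):
    """Fraction of the top-k loci of list_a that also appear anywhere in list_b.

    list_a, list_b: locus identifiers ordered from most to least significant.
    """
    set_b = set(list_b)
    top = list_a[:k]
    return sum(locus in set_b for locus in top) / len(top)

# toy example (made-up identifiers): the top of each list is dominated by
# shared loci and the tail by list-specific ones, so the overlap decays with rank
shared = [f"locus_{i}" for i in range(200)]
list_a = shared[:150] + [f"a_only_{i}" for i in range(300)] + shared[150:]
list_b = shared[:150] + [f"b_only_{i}" for i in range(300)] + shared[150:]

for k in (50, 100, 250, 500):
    print(f"top-{k} of list A also found in list B: {overlap_by_rank(list_a, list_b, k):.2f}")
```

In this toy construction the overlap is perfect near the top of the lists and decays down the ranking, which is exactly the rank-dependent information that a single overlap count cannot convey.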
to address this limitation , li et al .recently proposed a method to measure the concordance of two rank lists as a function of rank .they developed a gaussian copula mixture model to assign a reproducibility index , irreproducible discovery rate ( idr ) , to each locus .the idr analysis produces a concordance curve rather than a scalar number to measure the overlap between two lists .this approach is semiparametric and invariant to monotone transformations of the scores used for ranking . in principle , idr is a model based version for one minus the correspondence at the top ( cat ) plot proposed by irizarry et al. .the original authors of idr demonstrated their method using an application where they evaluated the reproducibility of different chip - seq peak callers by comparing the peak calling results from two replicate experiments . although the idr approach represents a significant advance compared to the simple venn diagram analysis , it also has limitations .importantly , the gaussian copula mixture model in the original idr approach requires one to know the ranks of each locus in both lists . however , many loci occur only in one list . as a result , to perform the idr analysis , li et al .first filtered out all loci that were reported in only one rank list .loci are included in the idr analysis only if they appear in both lists .as li et al . reported , for the real data they analyzed ( which are peak lists from replicate chip - seq experiments ) , only `` 23 - 78% of peaks are retained for this analysis '' . as such , the original idr analysis only characterizes the signal concordance for a subset of loci that are reported in both lists .attempting to interpret the resulting idr as a reproducibility measure for the whole dataset could be misleading .it is possible that the two original loci lists have little overlap and , therefore , low reproducibility , but the loci in their overlapping set ( i.e. , the loci shared by both lists ) are highly correlated in terms of their relative ranking .in such a situation , the idr computed using only the overlapping loci may misleadingly suggest high reproducibility of the two datasets .this is a limitation caused by ignoring list - specific loci , and can only be addressed by bringing them back into the analysis . herewe propose a survival copula mixture model , scop , to tackle the general problem of comparing two genomic rank lists .this new approach allows one to include the list - specific loci in the analysis when evaluating the signal concordance between the two datasets . for loci that occur only in one list , we treat the scores used for ranking ( e.g. , p - values or fdrs ) in the other list as censored data . in this way, we translate the problem into a bivariate survival problem .although many works have been done in the area of estimating correlation structure of bivariate failure times in survival analysis , none of them considered the issue specific to genomic data . in genomic applications , the higher ranked loci are more likely to be true signals .thus , adopting the traditional survival analysis terminology , earlier failure time is of higher interest . built upon li et al , our survival copula mixture model attempts to borrow strength from both the copula mixture model and the survival analysis .the benefit is that it can better characterize the overlap and concordance between two rank lists .the article is organized as follows . 
in section 2, we introduce the survival copula mixture model and discuss its connection to survival analysis .section 3 uses simulations to demonstrate our method and compare it with the other alternatives .we apply our method to two real chip - seq datasets example in section 4 .we then conclude the article with discussions in section 5 .consider two genomic loci lists such as lists of differentially expressed genes from two rna - seq experiments or lists of transcription factor binding regions from two chip - seq experiments . in each list ( ) , loci are rank ordered based on a score such as a p - value or an fdr .let be the score for locus in list .without loss of generality , we assume that smaller score ( e.g. , smaller fdr ) represents a higher significance .often , a locus is reported in list only when its score passes a cutoff .thus , all loci in list 1 satisfy , and any locus with is not reported .similarly , list 2 contains loci for which .a locus may be reported in both lists , in one list only , or in none of the lists .each list may contain a certain amount of noise or false positives in addition to signals . by comparing the two lists ,the goal is to characterize the degree of concordance of the signals from the two datasets , and how the concordance varies as one moves from the top ranked loci to those lower ranked . to analyze these data, we borrow the idea of idr .however , instead of excluding loci that occur in only one list from the analysis , we retain all loci that occur in any of the two lists . if a locus does not appear in one list , its score in that list is labeled as missing .this creates missing data , but the data here are not missing completely at random .for example , if rank list 1 uses an fdr cutoff of 0.1 , then we know that for any loci in list 2 but not in list 1 , their missing fdr in list 1 are indeed greater than 0.1 . in other words , the data we observe are right truncated .this naturally translates the problem into a survival problem with right censoring data .figure 1(a ) shows a numerical example .the figure displays two chip - seq peak lists ranked according to fdr .region 2 passes the fdr cutoff for both lists , but region 1 only appears in the peak list for tf a. it is absent in the peak list for tf b since its fdr in that dataset is higher than the 0.1 cutoff . rather than excluding region 1 from the analysis , we retain it and encode the data using `` observed survival time '' and `` censoring indicator '' adopting the terminology in survival analysis .the `` observed survival time '' is defined as , and the `` censoring indicator '' is defined as . in this example , the observed survival time for region 1 in peak list b is 0.1 , and the censoring indicator is equal to zero indicating that the data is censored .intuitively , the original idr approach by only models the red points ( i.e. , cases with complete data ) in figure 1(b ) , whereas our new approach attempts to use information from all data points regions ii , iii , and iv .later we will show that compared to the original idr calculation which excludes the list - specific loci , including them as censored data in our model will provide more information .let be the probability density function for . is the corresponding survival function . for any bivariate random variables, there exists a copula , which is invariant under monotone transformation for the marginal distribution .based on this , we use two latent random variables and to characterize the relationship between and . 
for each , is assumed to follow a gaussian mixture distribution . represents the survival function for the latent variable .the latent variables ( and ) and the observed scores ( and ) are linked through a monotone transformation .let denote the density function for .it is assumed that this density is a mixture of a noise component and a signal component , where .similarly , the density function for , , is assumed to be a mixture of noise and signal where .the data are assumed to be generated as below ( see figure 1(c ) for a cartoon illustration ) : 1 . a random indicator is first assigned to each locus . * if , then locus is noise in both lists . * if , then locus is signal in list 1 but noise in list 2 . * if , then locus is signal in list 2 but noise in list 1 . * if , then locus is signal in both lists .+ thus , represents the co - existing pattern of signals .it is usually called `` frailty '' in survival analysis .the is assumed to be assigned according to the probability vector , where .2 . given , latent variables and are generated according to and , respectively .3 . and are truncated using and as cutoffs .4 . the truncated pseudo data and are monotone transformed to observed data and based on , which yields .correspondingly , . also note that where and .since is truncated at and the censoring time is a constant , the censoring time is independent of the underlying true failure time and contains no information about and . as a result , the contribution of each locus to the likelihood can be represented by one of the four formulas below : * : .* : .* : .* : .collect the data and latent variables into three sets , and , and define .the full likelihood can be derived as : to fit the model , we use an iterative em algorithm similar to the one proposed by li et al. to estimate . 1 .intialize using random values .2 . use the kaplan - meier estimator to estimate the marginal survival functions for and .3 . given the initial , obtain pseudo - data , .estimate parameters based on the pseduo - data and using an em algorithm . 5 .update the and using the newly estimated , and update the pseudo - data and using the new and .iterate between steps 3 and 4 until the change in log - likelihood between the two nearby iterations is less than a pre - specified threshold .details of the algorithm are given in the appendix .once the model parameters are estimated , a coexistence probability ( also called probability for having reproducible signals ) can be computed for each locus as : using these coexistence probabilities , we define two coexistence curves ( cop curves ) as : intuitively , indicates that among the loci whose scores in list 1 are less than , the proportion that are true signals in both lists .similarly , shows among the loci ranked higher than locus in list 2 , the fraction that represents signals reproducible in both lists . from these two cop curves, one can see how the co - existence strengths between the two lists change from the most significant loci to the least significant ones . to facilitate the comparison with the idr approach in , we also define : represents the fraction of noise or non - concordant ( non - reproducible ) signals among loci whose score in list 1 does not exceed . 
can be interpreted similarly .our model and measures here allows asymmetry of the signals in the two lists .for instance , if one list is obtained from a poor quality experiment with low signal - to - noise ratio and the other list is from a high - quality experiment with high signal - to - noise ratio , the two cop curves will be different .in contrast , the original idr approach only produces one idr curve to show the concordance . as a result, it can not show the difference between two asymmetric datasets .in this section , we use simulations to illustrate scop and compare it with the venn diagram and idr approach .case i illustrates why scop is better at characterizing the degree of concordance between two rank lists .consider two lists , each with 10,000 loci .since the copula model is invariant to monotone transformation of marginal scores , we generated the simulation data by first generating latent random variables and and then transforming them to p - values , denoted as , from a one - sided -test for vs . specifically , where follows the standard normal distribution . for both lists ,a normal distribution was used as the signal component for the latent variable , and was used as the noise component .the mixture proportions of the four possible co - existence patterns were .in other words , for the full lists without truncation , only 10% of the loci represent signals in both lists , and the other 90% of the loci are noise .all values greater than -1.65 , corresponding to p - value in a one - sided z - test , were truncated .the p - values were then generated according to the process described in section 2.2 . under this setting ,the two lists are symmetric in terms of their signal - to - noise ratio . to reflect the scenario in real applications , lociwhose p - values were greater than in both lists were excluded from the analysis. meanwhile , all the other loci , either censored in only one or neither of the two lists , were retained . as shown in figure 2 ( a ) ,a total of 1,872 loci passed the p - value cutoff in either list 1 or list 2 . among them , 56.1% ( 1,050 ) were reported in both lists .nevertheless , the venn diagram does not characterize the rank concordance between the two lists . for case i. ( d ) for case i. 
( e)-(f ) the venn diagram , idr by li et al(2011 ) , and for case ii.,scaledwidth=80.0% ] we then applied the idr approach to the 1,050 loci reported in both lists , consistent with how the idr analysis was performed by .figure 2 ( b ) shows the corresponding idr curve .based on the curve , the idr analysis would claim high reproducibility between the two datasets .however , this is clearly not the case , since figure 2 ( a ) shows that 43.9% of the 1,872 reported loci were not shared by the two lists .this illustrates why ignoring the list - specific loci in the idr analysis can be misleading .the high reproducibility that the method reports only describes the degree of concordance among the loci common in both lists .it does not characterize the concordance or the reproducibility of the whole lists .this has important implications .idr is widely used in the encode project to measure the reproducibility of replicate experiments , and to evaluate the performance of data analysis algorithms in terms of how consistent they perform when applied to replicate experiments .the example here shows that idr can be very misleading if one wants to measure the global reproducibility of two replicate experiments , or to evaluate if a data analysis algorithm is stable .this is caused by ignoring the list - specific loci , which is not allowed in the original idr model . finally , we applied scop to the simulated data .figures 2 ( c ) and ( d ) show the corresponding and curves , together with the underlying truth curve .the and curves ( red dashed lines ) match the underlying truth curves ( black solid lines ) very well . as a benchmark comparison, we also counted among the top ranked loci in one list how many of them were absent from the other list , and created the corresponding curves called `` naivevenn '' ( dark green dotted lines ) hereafter . from another perspective , naivevenn curves were constructed by fixing one circle in the venn diagram , varying the other circle with different rank cutoffs , and counting the overlap proportions .to certain extent , `` naivevenn '' can be viewed as a naive estimation for and .however , naivevenn underestimates the irreproducibility for loci occurring in only one list , whereas scop is able to borrowing information from all loci , complete or missing in one list , to better estimate the signal and noise proportions in the data . and curves clearly demonstrate that the fraction of concordant signal in the two lists is not high , and the irreproducible loci consist of 40% of both observed lists . and curves also show that the signal concordance decreases as one moves from top ranked loci to lower ranked loci , a trend not directly revealed by the venn diagram approach . with these curves , one may adjust the cutoff for calling signals based on the degree of reproducibility between two independent experiments , which is a function not usually provided by the venn diagram approach . in caseii , each rank list contains 1,000 loci .the mixture proportions of the co - existence patterns were .for both lists , the noise and signal components for generating latent random variables were again assumed to follow and respectively . were truncated at -1.65 as well . among the 1,083 loci that passed the cutoff in either list , 98.8% ( 1,070 ) were not found in both lists . figures 2(k ) and ( l ) show that scop accurately characterizes the degree of signal concordance between the two lists ( compare the red and the black curves ) . 
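For readers who wish to reproduce the flavour of these simulations, a minimal data-generating sketch is given below. It follows the recipe described above (a co-existence pattern drawn for each locus, latent values from a noise or signal component, truncation of the latent values at -1.65, one-sided z-test p-values, and the censored encoding of loci missing from one list), but several ingredients are placeholders of ours: the signal-component mean and standard deviation are not reproduced in this text, so arbitrary values are assumed, and any correlation between the two lists within the joint-signal component is omitted for brevity. The mixture proportions correspond to the case i description (10% joint signal, 90% joint noise).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

n_loci = 10_000
# mixture proportions over the four co-existence patterns
# (noise/noise, signal/noise, noise/signal, signal/signal), as in case i
pi = np.array([0.90, 0.00, 0.00, 0.10])
# placeholder signal component: the article's exact parameters are not shown in
# this text, so a shifted normal is assumed here purely for illustration
signal_mean, signal_sd = -3.0, 1.0
cutoff_latent = -1.65                       # truncation point, p ~ 0.05 one-sided
cutoff_p = norm.cdf(cutoff_latent)

k = rng.choice(4, size=n_loci, p=pi)        # co-existence pattern per locus
sig1 = np.isin(k, (1, 3))                   # locus is signal in list 1
sig2 = np.isin(k, (2, 3))                   # locus is signal in list 2

def draw(is_signal):
    y = rng.normal(0.0, 1.0, size=n_loci)                # noise component
    y[is_signal] = rng.normal(signal_mean, signal_sd, size=is_signal.sum())
    return y

y1, y2 = draw(sig1), draw(sig2)
p1, p2 = norm.cdf(y1), norm.cdf(y2)         # one-sided p-values (smaller = stronger)

in1, in2 = p1 <= cutoff_p, p2 <= cutoff_p   # reported in each list
reported = in1 | in2                        # drop loci missing from both lists
# censored encoding: the observed score is capped at the cutoff when the locus
# is absent from that list, and the indicator records whether it was observed
obs1 = np.where(in1, p1, cutoff_p)[reported]
obs2 = np.where(in2, p2, cutoff_p)[reported]
delta1, delta2 = in1[reported], in2[reported]

print("reported in either list  :", reported.sum())
print("reported in both lists   :", (in1 & in2).sum())
print("list-1 only / list-2 only:", (in1 & ~in2).sum(), "/", (~in1 & in2).sum())
```

A full analysis would next fit the survival copula mixture model to (obs1, obs2, delta1, delta2) via the em procedure of section 2; the sketch stops at the simulated, censored input data.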
comparing figure 2(b ) in case i and figure 2(f ) in case ii , one can see that the original idr approach would claim high reproducibility in both cases .however , figure 2(g)(h ) clearly demonstrates that these two cases are different . in casei , 40% of all loci are claimed as noise ( figure 2 ( c)(d ) ) for the observed lists , whereas only 1.5% of all loci in case ii are estimated as noise ( figure 2 ( g)(h ) ) . in summary ,the two simulations above show that the overlap revealed by venn diagrams does not contain all information about the degree of signal concordance and how it changes when rank changes .they also show that the idr computed using the loci present in both lists can be misleading for characterizing global concordance or reproducibility . by incorporating the censoring data into the analysis, scop addresses both issues and can provide a better characterization of concordance or reproducibility . for case iii .( d ) for case iii.,scaledwidth=80.0% ] unlike the idr approach which only produces one idr curve , scop creates two curves , one for each rank list . using these two curves, one can explore characteristics specific to each rank list .for instance , besides measuring the overall concordance of two rank lists , idr is also used to determine where to cut the rank lists to keep only the loci that are likely to be reproducible in independent experiments , that is , it serves a role similar to false discovery rate ( fdr ) . in many real applications, one rank list is obtained from a high quality experiment , whereas the other list is obtained from a noisy dataset ; thus , the two rank lists may be asymmetric in terms of their signal - to - noise ratio . in such a scenario, one may want to have a more detailed view of each list .since the idr approach produces only one idr curve for loci shared by both lists , it does not reveal the asymmetry between the two lists . using this idr curve to choose cutoff forces the same cutoff to be applied to both lists .this may result in decreased power in detecting reproducible loci .in contrast , scop allows one to estimate idr separately for each list and to observe the asymmetry of data quality for the two lists . to demonstrate , we generated scores for 10,000 loci and created two rank lists using a procedure similar to case i in section 3.1 .for both lists , the signal and noise components were and respectively .the latent variables in both lists were censored at -1.65 . the mixture proportions of the four co - existence patterns were set as . in this case , 70% of loci in the complete list 1 are signals , of which only 20% are also reproducible in complete list 2 , corresponding to signal proportion of 96% and 28% in the observed lists 1 and 2 .this simulation is referred to as case iii . 
when the idr approach was applied to analyze the loci present in both lists , the idr estimates were very conservative compared to the true fdr ( figure 3(b ) ) .this is because the asymmetry of the two lists lead to high variability , inflating the error rate estimates .in contrast , the and curves produced by scop accurately estimated the proportion of irreproducible signals in each list ( figure 3(c)(d ) ) and indicated that the two lists have asymmetric signals .thus , a separate analysis rather than a pooled one is needed for these two lists .moreover , figure 3(d ) once again illustrates that naivevenn can underestimate the irreproducibility .the reason is that it fails to distinguish between the signals and noise in the overlap part of the venn diagram , and hence count both of them in the calculation .we downloaded two replicate chip - seq experiments for transcription factor nf - kb in cell line gm10847 from the encode together with their corresponding input data ( table 1 ) .peak list a was called using cisgenome by comparing sample 1 with sample 3 and 4 at cutoff of fdr=0.01 ; similarly , peak list b was called by comparing sample 2 with sample 3 and 4 . for each peak, we extracted the 150bp window centered at the peak summit to ensure the same length in peaks .we then compared the two peak lists .the venn diagram in figure 4(a ) shows that the majority of the loci in these two peak lists were different .only 39.0% of loci in list a and 20.5% of loci in list b were found in the other list .nevertheless , the idr analysis of the shared loci gives a low idr estimate of 0.015 , misleadingly suggesting high reproducibility between the two replicates ( figure 4(b ) ) .in contrast , scop was able to show that the two replicate experiments have low reproducibility and high idrs ( figure 4(c)(d ) ) . for replicate a. ( d ) for replicate b.,scaledwidth=80.0% ]foxa transcription factors are a key family of tfs that regulate gene activities in liver cancer .biologists are interested in how members in this tf family interact with each other and whether different members bind to the same genomic loci in liver cancer cells .the encode project has generated chip - seq data for both foxa1 and foxa2 in a liver cancer cell line hepg2 .these data can be used to answer the questions raised in section 1 . using cisgenome , we called 65,535 binding peaks for foxa1 ( comparing sample 5 and 6 with sample 9 and 10 ) and 48,503 peaks for foxa2 ( comparing sample 7 and 8 with sample 9 and 10 ) , respectively at the fdr=0.01 cutoff . 
finally , we applied scop to characterize the concordance between the two lists .figure 5(a ) shows that among the top ranked foxa1 peak regions , about 60% are also bound by foxa2 .as one moves to the lower ranked foxa1 peaks , a lower percentage are simultaneously bound by foxa2 .thus , robust foxa1 binding seems to require foxa2 binding at the same location .in contrast , the very top ranked foxa2 peaks are more likely to be foxa2 specific and less likely to be shared for foxa1 binding .the middle ranked foxa2 peaks are more often bound by foxa1 .the co - binding proportion drops again for foxa2 peaks with low ranking which are increasingly more likely to be noise .this suggests that foxa2 may play its regulatory role in a different mode compared to foxa1 .this information is not immediately revealed by the venn diagram and idr approach .in summary , scop offers a new solution for comparing two rank lists .scop takes into account both the overall proportion of overlap shared by the two lists and the consistency of ranks along them .this overcomes the shortcomings of the venn diagram and the idr approach , and allows better characterizing of the concordance and global reproducibility between two datasets .our simulation studies show that drawing conclusions on concordance from venn diagrams may not reveal all the information in the data .the same degree of overlap may correspond to different signal - to - noise ratio .idr , on the other hand , is limited in terms of characterizing the global reproducibility between two datasets since it focuses on analyzing loci shared by both lists . in light of these results , the scop curves should provide a better solution to assessing data quality ( e.g. , reproducibility between replicate chip - seq samples ) and computational algorithms ( e.g. , evaluate consistency of the results when a method is applied to two replicate experiments ) in projects such as encode .our current model considers the problem of comparing two rank lists .an interesting future research topic is how to extend it to comparing multiple rank lists .currently , one can apply scop to compare each pair of lists .however , this pairwise comparison approach does not directly reveal higher order relationships .for instance , with three datasets , one can also ask how many loci are shared by all three lists in addition to asking how many loci are shared by each pair of lists . for rank lists , there are combinatorial signal coexistence patterns . as increases , the complexity of the problem increases exponentially .efficient ways to perform the comparison and summarize results , similar to those in , need to be developed in order to solve this problem .currently , an r package for scop is available upon request .the package will soon be submitted to bioconductor .here we present the details of the iterative algorithms used to estimate . 1 . initialize parameters .2 . estimate the survival function using the kaplan - meier estimator .3 . compute the pseudo - data . since does not have a closed form , first computed on a grid of 5,000 points over the range $ ] . is then obtained through linear interpolation on the grid .4 . run em algorithm to search for that maximizes the log - likelihood of pseudo data .the resulting is denoted as .iterate between steps 3 and 4 until the change in log - likelihood between the two nearby iterations is less than a pre - specified threshold .below are details of the em algorithm in step 4 . 
in the e - step ,one evaluates the q - function here the expectation is taken with respect to probability distribution . therefore , in the m - step , one finds that maximize the q - function .denote them by .solving we have : recall : and can be computed by replacing with accordingly .only in equation a.3 involves .because , the tail probability of a normal distribution , has no close form , we use the r function with the `` l - bfgs - b '' option to obtain the values that maximize . are searched in a similar fashion to maximize .the authors thank professor mei - cheng wang and members from the hopkins slam and genomics working group for their helpful discussions and suggestions . 9 barrett t , wilhite se , ledoux p , evangelista c , kim if , tomashevsky m , marshall ka , phillippy kh , sherman pm , holko m , yefanov a , lee h , zhang n , robertson cl , serova n , davis s , soboleva a ( 2013 ) ncbi geo : archive for functional genomics data sets update ._ nucleic acids research _ * 41 * , ( d1 ) : d991d995 .benjamini , y and hochberg , y ( 1995 ) controlling the false discovery rate : a practical and powerful approach to multiple testing . _ journal of the royal statistical society , series b ( methodological ) _ * * 57 * * ( 1 ) : 289300 celniker se , dillon la , gerstein mb , gunsalus kc , henikoff s , karpen gh , kellis m , lai ec , lieb jd , macalpine dm , micklem g , piano f , snyder m , stein l , white kp , waterston rh , modencode consortium ( 2009 ) unlocking the secrets of the genome ._ nature _ * 459 * , 92730 .irizarry ra , waren d , spencer f , kim if , biswal sh , frank bc , gabrielson e , garcia jgn , geoghegan j , germino g , griffin c , hilmer sc , hoffman e , jedlicka ae , kawasaki e , martinez - murillo f , morsberger l , lee h , petersen d , quackenbush j , scott a , wilson m , yang y , ye sq , yu w ( 2005 ) multiple - laboratory comparison of microarray platforms . _ nature methods_. * 2 * , ( 5 ) 345349 landt sg , marinov gk , kundale a , kheradpour p , pauli f , batzoglou s , bernstein be , bickel p , brown jb , cayting p , chen y , desalvo g , epstein c , fisher - aylor ki , euskirchen g , gerstein m , gertz j , hartemink aj , hoffman mm , iyer vr , jung yl , karmakar s , kellis m , kharchenko pv , li q , liu t , liu xs , ma l , et al.(2012 ) chip - seq guidelines and practices of the encode and modencode consortial . _ genome research_. * 22 * ( 9):181331. mikkelsen ts , ku m , jaffe db , issac b , lieberman e , giannoukos g , alvarez p , brockman w , kim tk , koche rp , lee w , mendenhall e , odonovan a , presser a , russ c , xie x , meissner a , wernig m , jaenisch r , nusbaum c , lander es , bernstein be ( 2007 ) genome - wide maps of chromatin state in pluripotent and lineage - committed cells ._ nature _ * 448 * , 55360 nan b , lin xh , lisabeth ld , harlow s ( 2006 ) piecewise constant cross - ratio estimation for association of age at a marker event and age at menopause . _journal of american statistician association_. * 101 * ( 437):6577
analyses of high - throughput genomic data often lead to ranked lists of genomic loci . how to characterize concordant signals between two rank lists is a common problem with many applications . one example is measuring the reproducibility between two replicate experiments . another is to characterize the interaction and co - binding between two transcription factors ( tf ) based on the overlap between their binding sites . as an exploratory tool , the simple venn diagram approach can be used to show the common loci between two lists . however , this approach does not account for changes in overlap with decreasing ranks , which may contain useful information for studying similarities or dissimilarities of the two lists . the recently proposed irreproducible discovery rate ( idr ) approach compares two rank lists using a copula mixture model . this model considers the rank correlation between two lists . however , it only analyzes the genomic loci that appear in both lists , thereby only measuring signal concordance in the overlapping set of the two lists . when two lists have little overlap but loci in their overlapping set have high concordance in terms of rank , the original idr approach may misleadingly claim that the two rank lists are highly reproducible when they are indeed not . in this article , we propose to address the various issues above by translating the problem into a bivariate survival problem . a survival copula mixture model is developed to characterize concordant signals in two rank lists . the effectiveness of this approach is demonstrated using both simulations and real data . yy wei and hk ji genomic rank lists comparison * keywords * genomics ; high - throughput experiments ; mixture model ; survival copula ; em algorithm ; reproducibility ; co - binding of transcription factors .
consider an sde of the form where is a -dimensional wiener process , is a bounded measurable mapping from to .according to there exists a unique strong solution to equation ( [ eq_main ] ) .it is well known that if is continuously differentiable and its derivative is bounded , then equation ( [ eq_main ] ) generates a flow of diffeomorphisms .it turns out that this condition can be essentially reduced , and a flow of diffeomorphisms exists in the case of possible unbounded hlder continuous drift vector .recently the case of discontinuous drift was studied in and the weak differentiability of the solution to ( [ eq_main ] ) was proved under rather weak assumptions on the drift .the authors of consider a drift vector belonging to for some such that they establish the existence of the gteaux derivative in ;{\mathds{r}^d}) ] where , are positive constants that depend only on and denote by the kernel built on the transition density of the wiener process , i.e. it is easily to see ( , ch . 8 , 1 ) that for all where therefore , the kernel has a singularity if ( for ) and the integral is not well defined for all measures .a measure is a measure of kato s class if it follows from ( [ eq_gaussian_estimates ] ) that a measure satisfies the condition ( [ cond_a ] ) if and only if it belongs to kato s class .[ cond_a_equiv ] a measure satisfies the condition ( [ cond_a_prime ] ) if and only if the proof is a slight modification of that for the case of given in , theorem 4.5 ( see also , exercise 1 on p. 12 ) . here is a non - negative borel measurable function .we use the representation ( [ eq_character_2 ] ) in the proof .[ example_local_time ] let . for each ,the measure belongs to kato s class and corresponds to the w - functional }\left(w_s\right)ds,\ ] ] which is called the local time of a wiener process at the point .assume that is a measure of kato s class .this means now that )<\infty. ] ( if there is no misunderstanding , sometimes we will consider as a function on ,{\mathds{r}^d}) ] almost surely .the main result on differentiability of a flow generated by equation ( [ eq_main ] ) with respect to the initial conditions is given in the following theorem .[ theorem_main ] let be such that for all is a function of bounded variation on .put .assume that the measures belong to kato s class .let , be a solution to the integral equation where is the -identity matrix , the integral on the right - hand side of ( [ eq_derivative_main ] ) is the lebesgue - stieltjes integral with respect to the continuous function of bounded variation .then is the derivative of in -sense : for all , , , , where is a norm in the space .moreover , where is the lebesgue measure on . the differentiability was proved in .we give a representation for the derivative .note that the sobolev derivative is defined up to the lebesgue null set .consider the non - homogeneous sde similarly to the arguments given in section [ section_w_functionals ] a theory of non - homogeneous additive functionals of non - homogeneous markov processes can be constructed .all the formulations and proofs can be literally rewritten with natural necessary modifications .unfortunately , there are no corresponding references , therefore we did not carry out the corresponding reasonings . consider examples of functions for which are measures of kato s class . [ remark_cond_for_smooth_functions ]let for all , be a lipschitz function . 
by rademacher s theorem the frecht derivatives exist almost surely w.r.t .the lebesgue measure .it is easy to verify that they are bounded and the frecht derivative coincides with the derivative considered in the generalized sense .then belongs to kato s class .let now be a bounded domain in with boundary .put .it follows from example [ example_w_manifold ] that for all is a measure of kato s class because ( cf . ) where is the outward unit normal vector at the point condition ( [ cond_a_prime ] ) is also satisfied by the measure generated by being a linear combination of the form where , is a bounded domain in with boundary .further examples of can be obtained as the limits of sequences of the functions of form ( [ eq_func_a ] ) . in one - dimensional caseall the functions of bounded variation generate measures belonginig to kato s class ( see example [ example_local_time ] ) .see also example [ example_w_hausd_meas ] showing that if are `` hausdorff - type '' measures with a parameter greater than , then satisfies assumptions of the theorem .the idea of the proof of theorem [ theorem_main ] is to approximate the solution of equation ( [ eq_main ] ) by solutions of sdes with smooth coefficients .the definition and properties of approximating equations are given in sections [ section_approximation ] , [ section_convergence_derivatives ] .the proof of the theorem itself is presented in section [ section_proof ] .for let be a non - negative function such that , and .put where the function satisfies the assumptions of theorem [ theorem_main ] .note that and in passing to subsequences we may assume without loss of generality that for almost all w.r.t .the lebesgue measure .consider the sde put .denote by the matrix of derivatives of in , i.e. , then satisfies the equation where is the -dimensional identity matrix .[ lemma_converg_solutions ] _ for each , _ 1 . for all and any compact set , 2 . for all where is a norm in the space 1 ) follows from the uniform boundedness of the coefficients and the finiteness of the moments of a wiener process ; 2 ) is proved in , theorem 3.4 . for put by the properties of convolution of a generalized function ( see , ch .2 , 7 ) , for each , put and ( c.f .remark [ remark_hj_dec ] ) .then , according to theorem [ theorem_sufficient_condition ] , there exist w - functionals of a wiener process on which correspond to the measures and have characteristics of the form the functional is given by the formula ( see example 2 ) . [ lemma_converg_w_functionals ]_ for each _ , , , , where is the distribution of the process the following simple proposition used for the proof of lemma [ lemma_converg_w_functionals ] is easily checked .[ proposition_convolution_characteristics ] let be from the kato class , be the characteristics of the corresponding w - functionals of a wiener process , and the representation hold true. then the relation is fulfilled . to prove the convergence of functionals in mean squareit is sufficient to show that for each , ( see theorem [ theorem_convergence_characteristics ] ) . then the uniform convergence in probability follows from proposition [ proposition_uniform_convergence ] . for each we have because of the condition ( [ cond_a_prime ] ) , for each , we can choose so small that is less then . 
to obtain the same estimate for , note that by the associative , distributive and commutative properties of convolution ( see , ch .ii , 7 ) , we get consider .the functions are equicontinuous in for ] , is the wiener measure , and put , is the distribution of the process ,,{\mathds{r}^d}) ] , , .then is a sequence of random elements in the space taking values on ,{\mathds{r}^d}) ] .this implies the first assertion of proposition [ proposition_kulik ] . according to lemma [ lemma_converg_w_functionals ] , as in probability measure uniformly in . ] in measure .so the second assertion of proposition [ proposition_kulik ] is justified .the absolute continuity of the distribution of w.r.t .the measure follows from girsanov s theorem .the density is defined by the formula as where is a norm in , we have that for each , ( cf . , theorem 6.1 ) .the uniform integrability of the family follows from the estimate valid for .thus all the assertions of proposition [ proposition_kulik ] are fulfilled and we have in probability . the lemma is proved .recall that are the solutions of equations ( [ eq_derivative_main ] ) , ( [ eq_derivative ] ) , respectively . in this sectionwe show the convergence of the sequence in probability uniformly in .this together with lemma [ lemma_converg_solutions ] allow us to prove theorem [ theorem_main ] .[ lemma_converg_derivatives ] 1 . for all , , , 2 . for all where for the proof we need the following two propositionsthe first one is a version of the gronwall - bellman inequality and can be obtained by a standard argument .[ prop_gronwall_lemma ] let be a continuous function on , be a non - negative continuous function on , be a non - negative , non - decreasing function , and .if for all , then [ proposition_moments_a ] for all , there exists a constant such that the statement of the proposition follows from lemma [ lemma_exponent_moment ] and the inequalities ( [ eq_gaussian_estimates ] ) , which allow us to obtain the estimates uniform in . for all , define the variation of on ] , and ). ] as then }\left|\int_0^t f(s)dg_n(s)-\int_0^t f(s)dg(s)\right|\to 0 , \n\to\infty.\ ] ] we get consider the first summand in the right - hand side of ( [ eq_hhhh ] ) .put , and then lemma [ lemma_converg_functionals ] , proposition [ proposition_moments_a ] , and proposition [ proposition_monot_convergence ] provide that in probability .similarly it is proved that the second summand in the right - hand side of ( [ eq_hhhh ] ) tends to as .this and statement 1 ) entail statement 2 ) of the lemma .define approximating equations by ( [ eq_main_n ] ) , where are determined by ( [ eq_a_n ] ) . from lemma [ lemma_converg_solutions ] andthe dominated convergence theorem we get the relation }\int_u|\varphi_{n , t}^i(x)-\varphi_t^i(x)|^pdx\to 0 , \ n\to \infty,\ ] ] valid for any bounded domain , , and so for each there exists a subsequence such that }\int_u|\varphi_{n_k^i , t}^i(x)-\varphi_{t}^i(x)|^pdx\to 0 \ \mbox{a.s . 
as }\ k\to \infty.\ ] ] without loss of generality we can suppose that }\int_u|\varphi_{n , t}^i(x)-\varphi_{t}^i(x)|^pdx\to 0 \ \mbox{a.s .n\to \infty.\ ] ] arguing similarly and taking into account lemma [ lemma_converg_derivatives ] we arrive at the relation }\int_u|y_{n , t}^{ij}(x)-y_{t}^{ij}(x)|^pdx\to 0 , \n\to\infty , \ \mbox{almost surely},\ ] ] which is fulfilled for all since the sobolev space is a banach space , the relations ( [ eq_convergence_solutions ] ) , ( [ eq_convergence_derivatives ] ) mean that is the matrix of derivatives of the solution to ( [ eq_main ] ) .let us verify .we have for all it follows from lemmas [ lemma_converg_solutions ] and [ lemma_converg_derivatives ] that the following lemma implies the relation in probability and hence in all .this completes the proof of the theorem , as and implies .let be a measure of kato s class .then for any , , , for , denote by the shift of the measure by the vector , i.e. for each , then note that for fixed and the process can be considered as a markov process starting from , and its distribution is equivalent to the distribution of the wiener process starting from .indeed , where similarly to the proof of lemma 5 it can be checked that the family of the radon - nikodym densities are uniformly integrable with respect to . by proposition [ lemma_converg_solutions ] and lemma [ proposition_kulik ] to prove ( [ eq_continuity_w_functionals ] ) it suffices to verify that by denote the restriction of the measure to the ball put , .then it is easy to see that the function is uniformly continuous in \times { \mathds{r}^d}. ]this together with entails ( [ eq_convergence_w_functionals_2 ] ) .note that is a w - functional .let us estimate its characteristic . from the estimates ( [ eq_gaussian_estimates ] ) we obtain ( see also the proof of theorem 6.6 . in ) where .taking into account ( [ eq_additive ] ) , we get by proposition [ prop_gikhman_skorokhod ] , therefore , the second moment of is bounded uniformly in .this implies the uniform integrability and , consequently the convergence in holds in ( [ eq_girsanov_theorem ] ) .then the characteristic of the functional is equal to if we show that then the statement of the lemma follows from proposition [ proposition_uniquely_defined ] .we have , for each , consider . arguing as in the proof of ( [ eq_similar ] ) we arrive at the inequality making use of ( [ eq_gaussian_estimates ] ) and changing the variableswe get for each , the condition ( [ cond_a_prime ] ) allows us to choose so small that further , the measure converges weakly to the -measure at the point .the function is equicontinuous in for , \x\in{\mathds{r}^d}$ ] .so uniformly in and .besides , from ( [ eq_gaussian_estimates ] ) by the dominated convergence theorem , now the equality ( [ eq_f_tilde ] ) follows from ( [ eq_i_ii ] ) and ( [ eq_iii ] ) .the lemma is proved .the authors thank prof . le jan and prof .kulik for fruitful discussions .we are appreciate prof .portenko for his useful remarks to the manuscript .we also are grateful to the anonymous referee for his thorough reading and valuable comments which helped to improve essentially the exposition .
we consider a d - dimensional sde with an identity diffusion matrix and a drift vector that is a vector function of bounded variation . we give a representation for the derivative of the solution with respect to the initial data .
the purpose of this work is to introduce a model which innately and efficiently represents the salient properties of receptive fields of neurons in the v1 to mt motion processing stream . as one traces the neural pathway connecting the input layers of v1 to area mt , neurons become increasingly specialized for motion detection . in ascending order , attributes such as orientation selectivity , direction selectivity , speed tuning , increasing preferred speeds , component spatial frequency selectivity , and pattern ( plaid ) spatial frequency selectivity are sequentially acquired . furthermore , the temporal frequency filtering properties of the neuronal population change in a particular way along this pathway . specifically , the proportion of bandpass temporal frequency filtering neurons to lowpass temporal frequency filtering neurons increases . existing receptive field models do not represent this fundamental emergent property . our specific aim is to introduce and describe a model for the receptive fields of v1 to mt neurons which innately and efficiently represents the aforementioned properties .
the detection of moving objects in our environment is critical both to our experience of the world and to our very survival . imagine a hunter in rapid pursuit of a small agile animal . both predator and prey detect abrupt changes in velocity occurring over an interval of less than a tenth of a second . the precision of our visual motion detection system is undoubtedly striking . the middle temporal ( mt ) area of the mammalian brain plays a key role in the visual processing of motion . the importance of area mt in motion processing is supported by a growing wealth of electrophysiological evidence which was initiated by dubner and zeki s 1971 report . for purposes of motion processing , the main input signals into mt originate in specialized cells of the primary visual cortex ( v1 ) . kuffler s early studies of retinal ganglion cell response properties led to similar studies of simple cortical cells by hubel and wiesel . together , their work generated great interest in neuronal receptive fields . the functional forms of these receptive fields are at once beautifully simple yet enormously complex . hence mathematical models have been used in step with electrophysiological studies to advance our understanding of their properties . initially , focus was predominantly on their spatial structure . now , however , their temporal structure is increasingly studied in tandem . in particular , for most neurons in the v1 to mt processing stream , it is now appreciated that spatial and temporal features can not be studied separately . they are spatiotemporally inseparable entities . indeed , motion is encoded by orientation in the spectral domain , and spatiotemporally - oriented filters are therefore motion detectors . at first glance , then , it may seem an easy matter to mathematically model motion detecting neurons . the challenge , however , is to develop physiologically sound receptive field models which reflect the hierarchical structure of the motion processing stream . various models do exist which are spatiotemporally - oriented filters , and are therefore motion detectors from a mathematical standpoint . however , they fail to represent one of the most salient characterizing attributes of the motion processing stream : the lowpass to bandpass distribution of temporal frequency filtering properties along the v1 to mt specialization hierarchy .
this feature is discussed further in the next subsection .in addition to the above temporal frequency filter property profile , the neurons of the v1 to mt motion processing stream have certain characteristics which set them apart and prescribe bounds on models of their spatiotemporal receptive field structure .namely , they are tuned to specific orientations , directions , and speeds ; and they are more likely than other v1 cells to exhibit `` end - stopping '' phenomena . of the v1 cells ,the subclass with direct synaptic projections from v1 to mt are the most specialized towards motion and are thought to be the main channels of motion substrates incident into mt neurons .for instance , churchland et al showed that v1 and mt neurons lose direction selectivity for similar values of spatial disparity , suggesting v1 is mt s source of direction selectivity . additionally , these v1 to mt neurons cluster motion substrate properties such as strong direction selectivity and tuning to high speeds .for example , foster et al found that highly direction selective neurons in the macaque v1 and v2 were more likely to be tuned to higher temporal frequencies and lower spatial frequencies , i.e. higher speeds . in other words , neurons that are highly direction selective are more likely to also be tuned to higher speeds .such neurons are higher up in the v1 to mt hierarchy .similarly , mclean and palmer studied direction selectivity in cat striate cortex , and found that space - time inseparable cells were direction selective while space - time separable cells were not direction selective . again reflecting the hierarchy of the motion processing stream .the distribution of some distinct anatomical and histological features have been shown to correlate with the motion - specialization hierarchy .for instance , shipp and zeki s retrograde tracer studies showed that the majority of direct v1 to mt projecting neurons originate within layer 4b .based on their results , they suggested that layer 4b contained a functionally subspecialized anatomically segregated group whose members each have direct synaptic connections with mt cells .the exact properties of the v1 to mt projectors are likely to be intermediate between those of cells in v1 layer 4c and mt neurons .this is because input into mt appears to be largely constituted of magnocellular predominant streams from 4c , while parvocellular predominant streams may play a much less role .progressive specialization along the v1 to mt motion processing stream is also observable in the morphological characteristics of the neurons .for instance , distinguishing attributes of the v1-mt or v2-mt projectors such as size , arborization patterns , terminal bouton morphology , and distribution have been observed .sincich and horton demonstrated that layer 4b neurons projecting to area mt were generally larger than those projecting to layer v2 .overall , neurons in the v1 to mt motion processing stream are highly and progressively specialized .hence description of their receptive fields requires adequately specialized models which reflect not only their individual attributes , but also the emergent hierarchical properties of the network .next we discuss one of the most fundamental of such emergent network properties : the particular distribution of temporal frequency filtering types along the stream .hawken et al found that direction - selective cells were mostly bandpass temporal frequency filters , while cells which were not direction - selective were equally 
distributed into bandpass and lowpass temporal frequency filtering types .foster et al found a similar phenomenon in macaque v1 and v2 neurons .v1 neurons were more likely to be lowpass temporal frequency filters , while their more specialized downstream heirs , v2 neurons , were more likely to be bandpass temporal frequency filters .less specialized neurons located anatomically upstream ( lower down in the hierarchy ) are more likely to have lowpass temporal frequency filter characteristics , while more specialized neurons located anatomically downstream ( higher up in the hierarchy ) are more likely to display bandpass temporal frequency filter characteristics .for example , lgn cells are at best only weakly tuned to direction and orientation ( , ) and are hence equally distributed into lowpass and bandpass categories .layer 4b v1 cells and mt cells on the other hand , are almost all direction selective and hence are mostly bandpass temporal frequency filters .we firmly believe this classification is not arbitrary , but instead is a direct manifestation of the particular spatiotemporal structure of the v1 to mt motion processing stream .consequently , the receptive field model must reflect this increased tendency for bandpass - ness with ascension up the hierarchy .in other words , the representation scheme must be one that is inherently more likely to deliver bandpass - ness to a more specialized cell ( such as in layer 4b or mt ) and lowpass - ness to a less specialized cell ( such as in layer 4a , 4c , or 4c ) .however , none of the existing receptive field models reflect this underlying spatiotemporal structure . in contrast , as will be seen in section ( [ sec : gabor - einstein wavelet ] ) below , the gabor - einstein wavelet s wave carrier is a sinc function which directly confers the aforementioned salient property .individual gabor - einstein basis elements are lowpass temporal frequency filters ; and bandpass temporal frequency filters can only be obtained by combinations of basis elements .a study by deangelis and colleagues also corroborates the above .they found that temporally monophasic v1 cells in the cat were almost always low pass temporal frequency filters , while temporally biphasic or multiphasic v1 cells were almost always bandpass temporal frequency filters .the explanation for this finding is inherent and explicit in the gabor - einstein wavelet basis , where bandpass temporal character necessarily results from biphasic or multiphasic combination .all monophasic elements on the other hand , are lowpass temporal frequency filters .however , we will see that according to the model , the converse is not true : i.e. not all lowpass temporal frequency filters are monophasic , and not all multiphasic combinations yield bandpass temporal frequency filters .perceptual and electrophysiological studies reveal intriguing similarities between how space and time are mixed in the visual cortex and how they are mixed in the special theory of relativity ( str ) .for instance , within certain limits , binocular neurons can not distinguish a temporal delay from a spatial difference .as a result , inducing a monocular time delay by placing a neutral density filter over one eye but not the other , causes a pendulum swinging in a 2d planar space to be perceived as swinging in 3d depth space .this is known as the pulfrich effect .also , most neurons in the v1 to mt stream are maximally excited only by stimuli moving at that neuron s preferred speed .they are _ speed tuned_. 
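the lowpass / bandpass dichotomy attributed to the sinc carrier above rests on a standard fourier pair ; the following is a purely temporal sketch that ignores the spatial envelope and the amplitude normalization of the actual basis elements :

\frac{\sin(\Omega t)}{t}\;\xrightarrow{\ \mathcal F\ }\;\pi\,\mathbf 1_{\{|\omega|\le\Omega\}}\qquad\text{(a box through the origin : a lowpass temporal filter),}

and hence , for \Omega_2>\Omega_1 ,

\frac{\sin(\Omega_2 t)}{t}-\frac{\sin(\Omega_1 t)}{t}\;\xrightarrow{\ \mathcal F\ }\;\pi\,\mathbf 1_{\{\Omega_1<|\omega|\le\Omega_2\}}\qquad\text{(a band away from the origin : a bandpass filter).}

a single carrier therefore always passes zero temporal frequency , while suppression of low temporal frequencies can only come from combining carriers of different widths — the property the model uses to place lowpass elements early in the hierarchy and bandpass combinations later .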
in this subsection , we briefly review and summarize the special theory of relativity .we identify the stroboscopic pulfrich effect and speed tuning as cortical analogues of str s joint encoding of space and time . and finally , we explain how str is used in ( and motivates ) the design of the gabor - einstein wavelet . in 1905 , albert einstein proposed the special theory of relativity .he was motivated by one thing : a firm belief that maxwell s equations of electromagnetism are _ laws of nature_. maxwell s equations are a classical description of light . according to maxwell, light is the propagation of electric and magnetic waves interweaving in a particular way perpendicular to each other .the speed of propagation in a vacuum is the constant m / s .the other governing constraint on str is the _relativity principle_. it has been around and generally accepted since the time of galileo , and it states that the laws of nature are true and the same for all non - accelerating observers .einstein believed maxwell s equations to be laws of nature , and therefore to satisfy the relativity principle .in other words , einstein believed the speed of light must be the same to all non - accelerating observers regardless of their velocity relative to each other . for the speed of light to be fixed , something(s ) had to yield and be unfixed . what had to yield were the components of the definition of speed , i.e. space and time .space and time had to become functions of relative velocity and of each other , to ensure the speed of light remains constant to all non - accelerating observers .the transformation rule of space and time coordinates between two observers moving with constant velocity relative to each other is called the lorentz equations .it specifies how space and time are mixed in str .more specifically , the lorentz equations are the transformation rule between the ( spacetime ) coordinates of two inertial reference frames . for two inertial reference frames , and , moving exclusively in the direction with constant velocity relative to each other ,the lorentz transformation is given by , where ( ) are the coordinates of , ( ) are the coordinates of , is the speed of light , and is the lorentz factor and is given by , we demonstrate below that the stroboscopic pulfrich effect is analogous to str in one respect , and speed tuning of neurons is analogous to str in another respect . in the classical pulfrich effect , a monocular temporal delay results in a perception of depth in pendular swing .it has a simple geometric explanation . by the time the temporally delayed eye `` sees '' the pendulum at a given retinal position ,the pendulum has advanced to a further position in its trajectory , hence the visual cortex always receives images that are spatially offset between the eyes . the spatial disparity results in stereoscopic depth perception .this form of pulfrich phenomena is termed _ classical _ to distinguish it from the stroboscopic pulfrich effect . 
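for reference , the transformation quoted earlier in this subsection ( whose symbols did not survive extraction ) has the standard form for a boost with constant speed v along the x direction :

x' = \gamma\,(x - v\,t),\qquad y'=y,\qquad z'=z,\qquad t' = \gamma\Big(t-\frac{v\,x}{c^{2}}\Big),\qquad \gamma=\frac{1}{\sqrt{1-v^{2}/c^{2}}} .

the comparison drawn below between the perceptual depth transformation and one of these equations is then easier to follow .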
in the stroboscopic pulfrich effect , the pendulum is not a continuously moving bulb , but instead is a strobe light which samples the trajectory of the classical pendulum in time and space . the stroboscopic pulfrich effect can not be explained by the simple geometric illustration that explains the classical pulfrich effect . this is because depth is still perceived in sequences involving a temporal delay in flash between eyes , but specifically lacking an interocular spatial disparity in the flash . in stereoscopic depth perception , a planar ( x , y ) difference in retinal image position ( ) _ transforms into _ ( is perceived as ) a displacement along the direction . in the stroboscopic pulfrich effect , a temporal difference ( ) between eyes in retino - cortical transmission of the image signal results in a planar pendulum trajectory being perceived as ( transforming into ) depth . in str , the transformation is between two inertial reference frames moving relative to each other . by analogy , we assert that for the stroboscopic pulfrich effect , there is a transformation between the interocular retinal space ( ) and the perceptual depth space ( ) . we identify the variables ( ) of the interocular space as the difference between eyes of the corresponding variables in the retino - cortical space , i.e. is the difference between the left and right eye retinal positions of an image ; and is the difference between the right and left eyes in retino - cortical image transmission times . the transformation may take the form , where is proportional to pendulum velocity and is a function of . note the similarity between and the equation for in the lorentz transformation , . this analogy suggests that similar mathematical structures underlie str and the joint encoding of space and time in the visual cortex . the stroboscopic pulfrich effect is not by itself direct evidence of joint encoding of spatial and temporal disparities ; however , direct electrophysiological evidence does exist for such linkage in cortical encoding . this mathematical similarity influences our design of the gabor - einstein wavelet in a particular way which we describe in section ( [ gabor_einstein_design ] ) below . but first , we discuss another neuronal phenomenon , speed tuning , which is analogous to str in a certain physical sense . most motion processing stream neurons in v1 or area mt are speed tuned . they have a preferred speed , which is the speed of moving stimuli to which they maximally respond . other speeds also elicit a response , but with a lower rate of action potentials . when presented with moving sine wave grating stimuli , truly speed tuned neurons respond maximally to their preferred speed independent of the spatial or temporal frequency of the stimuli . the velocity , , of a moving wave is , where is temporal frequency and is spatial frequency . in higher dimensions , where is the spatial frequency in the direction , is the spatial frequency in the direction , is the component of velocity , and is the component of velocity . from both equations above , it is clear that for a speed tuned neuron , a change in the spatial frequency of the stimuli would necessarily require a complementary change in the temporal frequency . hence a broad range of stimuli with widely varying spatial and/or temporal frequencies can maximally excite the neuron , so long as the above equations are satisfied for the neuron s preferred speed . this is analogous to how the special theory of relativity is motivated by fixing the speed of light .
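the stripped relations above have a standard form ( signs depend on the direction convention ; these are textbook identities rather than expressions recovered from the paper ) : for a grating drifting at speed v the temporal and spatial frequencies are locked together , and in two spatial dimensions the spectrum of any rigidly translating pattern lies on a plane through the origin ,

v=\frac{\omega_t}{\omega_x}\,,\qquad\qquad \omega_t+v_x\,\omega_x+v_y\,\omega_y=0 .

this is also the precise sense in which motion is orientation in the spectral domain : a speed - tuned neuron can be driven maximally by any combination of spatial and temporal frequencies on its preferred plane , so a change in stimulus spatial frequency must be met by a compensating change in temporal frequency , as stated above .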
in the case of strthe variables which necessarily change in complementary fashion are the space and time coordinates as described in the lorentz equations .we constrained the gabor - einstein model by requiring the following properties : 1 .the minimum possible number of parameters 2 .relativistic - invariance of the wave carrier in addition to orientation in the spatiotemporal frequency domain , localization in the space , time , spatial frequency , and temporal frequency domains are the essential features of the v1 to mt neuron receptive field .these are the most basic attributes of the receptive field . as such, they correspond with the minimum number of model parameters needed .specifically , speed tuning , the spatial and the temporal localization envelopes , the amplitude modulation parameter , and the spatial and temporal frequencies must all be represented .if one adopts the notion of a gaussian spatial localizer , the minimum number of parameters add up to eight : four for the spatial envelope and four for the wave carrier .the four necessary spatial envelope parameters are : amplitude factor , ; variance in the direction , ; variance in the direction , ; and spatial orientation of envelope , .the four necessary wave carrier parameters are : temporal frequency , ; spatial frequency in the x direction , ; spatial frequency in the direction , and the phase , .the orientation of the wave carrier is not an independent parameter .we use it in computer simulations below only for convenience .the essential feature that is not yet explicitly accounted for in the above parameter tally is the temporal envelope of the receptive field .this can be implemented either through the gaussian envelope or through the wave carrier , and in either case may conceivably involve an additional parameter .the next design criteria , _ relativistic - invariance of the wave carrier _ , resolves this ambiguity without adding another parameter .given the analogies of spatiotemporal mixing in the visual cortex to spatiotemporal mixing in the special theory of relativity , it is reasonable to conjecture that their endpoints are within proximity of each other .einstein eventually sought a lexicon in which physical laws are expressed the same way in any two inertial reference frames .that lexicon is called relativistic - invariance ( or lorentz - invariance ) . by analogy, we ultimately seek a lexicon in which physical laws can be expressed the same way in both the interocular and perceptual spaces .lorentz - invariance of physical law is the specification ( or endpoint ) arising out of str .hence , requiring lorentz - invariance of the receptive field s wave carrier is desirable . moreover, this immediately resolves the above ambiguity regarding implementation of the temporal envelope profile .the temporal envelope must be implemented via the wave carrier for relativistic invariance to hold . andsince the receptive field amplitude eventually falls , the sinc function emerges as a natural descriptor .furthermore , the sinc function has the distinct advantage of not introducing any additional parameters .the way space and time are mixed in the visual cortex bears resemblance to the way they are mixed in the special theory of relativity .we demonstrated that the stroboscopic pulfrich effect and speed tuning are cortical analogues of the spacetime mixing mechanics of str . 
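the design constraint of relativistic invariance appealed to above concerns quantities that keep the same value in every inertial frame ; two standard examples ( textbook physics , not notation recovered from the paper ) are the spacetime interval and the energy - momentum relation ,

s^{2} \;=\; c^{2}t^{2}-x^{2}-y^{2}-z^{2},\qquad\qquad E^{2} \;=\; (pc)^{2}+\big(mc^{2}\big)^{2} .

this is the sense in which a sinc carrier `` with energy - momentum relation as argument '' ( see the concluding section ) can be said to be relativistically invariant .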
in str, the mixture of space and time is described by the lorentz transform which relates the spacetime coordinates of inertial reference frames moving with constant velocity relative to each other . in the striate and extrastriate motion processing stream , the analogous transformation is between the 3d interocular space ( x , y planar inter - retinal space + inter - retinocortical time ) and the 3d perceptual space ( x , y , z 3d space ) .though on the surface , str and cortical spacetime encoding appear to be unrelated processes , we conjecture and have partly shown a shared underlying mathematical structure .this structure can be exploited to deepen our understanding of cortical encoding of motion , depth , and more fundamentally , the joint encoding of space and time in the visual cortex . here , we proceeded by requiring relativistic - invariance of the receptive field s wave carrier .we simultaneously required that the model be constrained to the minimum possible number of parameters . under these constraints , the sinc function with energy - momentum relation as argument emerged as a natural descriptor of the receptive field s wave carrier .furthermore , the gabor - einstein wavelet explains a number of salient physiological attributes of the v1 to mt spectrum .chief amongst these being the distribution of bandpass to lowpass temporal frequency filter distribution profile ; which we postulate is a fundamental manifestation of the way space and time are mixed in the visual cortex .the remainder of this paper is organized as follows : section ( [ sec : gabor - einstein wavelet ] ) introduces the gabor - einstein wavelet .section ( [ sec : simulations ] ) presents computer simulations .section ( [ sec : discussion ] ) is the discussion .it includes the following subsections : subsection ( [ subsec : model_framework ] ) introduces a simple framework for classifying and naming the components and hierarchical levels of receptive field models .subsection ( [ subsec : model_framework ] ) also identifies the gabor - einstein s place within this larger framework of receptive field modeling .subsection ( [ subsec : related_work ] ) briefly discusses related work .subsection ( [ subsec : higher_order ] ) discusses some phenomena which can not be explained by models such as the native gabor - einstein wavelet in isolation , i.e. models of single neuron receptive fields early in the visual pathway .specifically , it discusses some higher order phenomena and non - linearities which necessarily arise from neuronal population and network interactions .section ( [ sec : conclusion ] ) concludes the paper .in this section , we present the gabor - einstein wavelet .it has the following properties : * like the gabor function , it is a product of a gaussian envelope and a sinusoidal wave carrier . *it differs from the gabor function in that its wave carrier is a relativistically - invariant sinc function whose argument is the energy - momentum relation . * its gaussian envelope contains only spatial arguments . i.e. its spatial envelope is not time - dependent . *it has the minimum possible number of parameters . *its fourier transform is the product of a mixed frequency gaussian and a temporal frequency step function . 
* like the gabor function, it generates a quasi - orthogonal basis .we define the gabor - einstein wavelet as follows , where the constant multiplier is amplitude ; and are the gaussian variances in the x and y directions ; and are the respective and coordinates of the gaussian center ; , , and are the frequencies of the wave carrier in the , , and directions respectively ; is the sinusoid phase ; and the _ sinc _ function is defined as , we have rotated our coordinates by an angle from a reference state to a state which we denote for notational simplicity . the transformation , is given by , we proceed here with the following instance of the gabor - einstein wavelet , where we have set equal to one .we can always do so for one reference neuron by simply defining the unit of time as , the duration the neuron s frame cycle . as we will see , this value , , is the shortest frame duration to which the neuron can respond . in the above equation ,we have also set , and .the fourier transform is as follows , where is given by , and the _ sign _ function is defined as , the above fourier transform was obtained using mathematica symbolic software .the fourier transform of the general case will likely be challenging to obtain analytically either by hand or symbolic software . hence , we anticipate numerical methods may have an important role to play .the maximum magnitude of the above response function , equation ( [ gabor - einstein - fourier ] ) , is attained where the argument of the exponent is zero , i.e. where , the neuron s preferred spatial frequency , , is dependent on the temporal frequency , , of the stimulus , and is given by the above bivariate quadratic equation s solution , the location where the response maximum is attained is a parametrized curve in 3d frequency space .it is given by , although the location in space where the neuron s maximum response is attained depends on , the magnitude of the maximum response is itself a constant .it is given by , the half magnitude response is attained at values of satisfying , taking the natural logarithm of the above equation yields , defining and , equation ( [ half_max ] ) becomes , in the above form , one readily recognizes this as the ellipse centered at , whose principal axes radii are and in the and directions respectively .the long axis in the frequency domain is the short axis in the spatial domain and vice versa . without loss of generality, we can assume that prior to rotation of the spatial axes by angle , the long axis of the receptive field envelope is parallel to the x axis .then the orientation of the on - off bands , i.e. planes of the spatial wave , are aligned parallel to the long axis of the envelope when the rotation angle , , is related to the polar angle by the relationship , on the other hand , the on - off bands are perpendicular to the long axis of the envelope when , in the case of equation ( [ parallel ] ) , the half magnitude frequency bandwidth is readily seen to equal , the length of the short axis , while in the case of equation ( [ perpendicular ] ) , the half magnitude frequency bandwidth equals , the length of the short axis .the cases of skewed alignment take on values between and and are also computable from the geometry .unlike the preferred spatial frequency , the half - magnitude frequency bandwidth is not dependent on the temporal frequency of the stimulus . 
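the construction can also be sketched numerically . the sketch below is an illustration only : the defining equation ( [ gabor - einstein ] ) did not survive extraction , so the carrier argument used here — a plane - wave combination of the spatial and temporal frequencies inside a sinc — is a simplifying assumption made for this sketch , as are all numerical values . the sketch does , however , reproduce the two properties emphasised above : a single element is a lowpass temporal frequency filter , and a suitable difference of two elements is bandpass .

```python
# illustrative sketch of a gabor-einstein-like element: gaussian spatial envelope times a
# sinc carrier. the linear argument of the sinc and all parameter values are assumptions
# made here; the paper's exact carrier argument (the energy-momentum relation) is not used.
import numpy as np

def sinc(u):
    # unnormalized sinc, sin(u)/u, with value 1 at u = 0
    return np.sinc(u / np.pi)

def element(x, y, t, sigma_x=1.0, sigma_y=1.5, omega_x=2.0, omega_y=0.0, omega_t=4.0, phase=0.0):
    envelope = np.exp(-0.5 * ((x / sigma_x) ** 2 + (y / sigma_y) ** 2))  # purely spatial gaussian
    carrier = sinc(omega_t * t - omega_x * x - omega_y * y + phase)      # assumed carrier argument
    return envelope * carrier

# temporal profile probed at the spatial centre of the receptive field
t = np.linspace(-8.0, 8.0, 4096)
low1 = element(0.0, 0.0, t, omega_t=4.0)        # single element
low2 = element(0.0, 0.0, t, omega_t=12.0)       # single element with a faster carrier
band = 12.0 * low2 - 4.0 * low1                 # weights chosen so the zero-frequency parts cancel

def temporal_spectrum(signal):
    spec = np.abs(np.fft.rfft(signal))
    return spec / spec.max()

for name, sig in [("single element", low1), ("difference of elements", band)]:
    print(f"{name:24s} relative power at zero temporal frequency: {temporal_spectrum(sig)[0]:.3f}")
# the single element keeps essentially full power at zero temporal frequency (lowpass), while
# the weighted difference has little power at zero and a peak away from it (bandpass).
```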
in summary ,the gabor - einstein wavelet spatiotemporal receptive field model predicts that the magnitude of a v1 to mt neuron s preferred spatial frequency is linearly dependent on the temporal frequency of the stimulus as shown in equation ( [ eqn : f_0 ] ) . in the next section, we present some receptive field simulations using the gabor - einstein wavelet .+ [ figur_gaussian ]in this section , we present computer simulations demonstrating the gabor - einstein wavelet s properties .we start with simulations which show the model s adherence to basic physiological form .we then present simulations which demonstrate how the gabor - einstein model innately represents the temporal frequency filtering property distribution along the v1 to mt neuronal hierarchy .the following notations are used in the figure captions : is the angle of rotation of the axes of the gaussian envelope as described in equations ( [ eqn : envelope_axes_rotate_1 ] ) and ( [ eqn : envelope_axes_rotate_2 ] ) . is the angle of rotation of the axes of the wave carrier .ctr is the 2-component vector consisting of the and coordinates of the gaussian envelope center respectively . is the 2-component vector consisting of the gaussian variances of the envelope in the and directions respectively . is phase of the wave carrier . is time .and is the 3-component vector consisting of the temporal , x - spatial , and y - spatial frequencies of the wave carrier respectively .for succinctness , , for instance , is taken to be equivalent to .the same short - hand notation applies to the other multi - component vectors .figure ( [ fig : gaussians ] ) shows receptive field envelopes with varying aspect ratios and centers ; figure ( [ fig : wavecarriers ] ) shows the sinc function wave carriers at varying times and at one different orientation ; figures ( [ fig : gabor - einstein - center ] ) and ( [ fig : gabor - einstein - center-3d ] ) show gabor - einstein wavelets with varying envelope centers ; figure ( [ fig : gabor - einstein - spatial - freq ] ) shows gabor - einstein wavelets with varying spatial frequencies ; and figure ( [ fig : gabor - einstein - spatial - phase ] ) shows gabor - einstein wavelets of varying phase .+ [ figur_sinc ] + + [ figur_ge_space ] in figure ( [ fig : gabor - einstein - center ] ) , one sees the anisometry that is conferred by the sinc function wave carrier .the anisometry increases from fig ( [ fig : gabor - einstein - center]a ) through ( [ fig : gabor - einstein - center]d ) , due to the rapid fall off in the sinc function amplitude outside of the central peak over the two half cycles on each side of zero .in contrast , fig ( [ fig : gabor - einstein - center]e ) and fig ( [ fig : gabor - einstein - center]f ) appear similar to each other in form , and each look more isometric than fig ( [ fig : gabor - einstein - center]d ) . this illustrates the presence of both isometric and anisometric basis elements which differ only in a single parameter ( phase , or time , or envelope center ) .this expressiveness in the basis should allow for economical representation of a visual scene . 
of note , the amplitudes of the more isometric elements figs ( [ fig : gabor - einstein - center]e ) and ( [ fig : gabor - einstein - center]f ) are much smaller than those of the more anisometric preceding elements . if desired , this can be readily modulated by the constant amplitude factor parameter . figure ( [ fig : gabor - einstein - center-3d ] ) shows 3-dimensional plots illustrating the above discussed isometry modulation feature of the gabor - einstein basis . figure ( [ fig : gabor - einstein - spatial - freq ] ) shows receptive fields with various spatial frequency tuning . figure ( [ fig : gabor - einstein - spatial - phase ] ) shows various receptive fields which are phase - shifted relative to each other . for any given spatial location in the receptive field , the peak amplitude of a cycle is phase dependent . this is a feature distinctive of the gabor - einstein model . and like the anisometry - modulation feature , it confers expressiveness to the basis family . this should allow for efficient representation of natural visual scenes . in the model , time has a similar effect on peak amplitude . this behavior is in agreement with space - time inseparable receptive field profiles in cat striate cortex , where a time dependent phase drift has been shown . figure ( [ fig : sinc and fourier ] ) shows the essential property which the gabor - einstein wavelet inherits from the sinc function . the sinc function s fourier transform is a lowpass filter . higher frequency sinc functions have wider bandwidth . bandpass filters are formed by taking the difference between sinc functions of different frequency , as illustrated in figure ( [ fig : sinc and fourier ] ) . the gabor - einstein wavelet s fourier transform has a step function factor inherited from the sinc function . this allows it to explain the particular distribution of lowpass to bandpass temporal frequency filter properties of v1 to mt neurons ( foster et al 1985 ; deangelis et al 1993b ; hawken et al 1996 ) in a manner innately representative of the motion - processing stream s neuronal hierarchy . for simplicity of illustration , figure ( [ fig : ge_fourier ] ) shows a 2d ( x , t ) gabor - einstein element . its fourier transform is shown in figs ( [ fig : ge_fourier]a ) and ( [ fig : ge_fourier]b ) , while its ( x , t ) domain representation is shown in fig ( [ fig : ge_fourier]c ) . the lowpass temporal frequency nature of the fourier representation is apparent , as the support straddles zero . figure ( [ fig : ge_band ] ) shows the construction of temporal frequency bandpass gabor - einstein wavelets . as illustrated , the temporal frequency bandpass gabor - einstein element results from the difference of two temporal frequency lowpass gabor - einstein elements . fig ( [ fig : ge_band]a ) and fig ( [ fig : ge_band]b ) are plots of equation ( [ gabor - einstein - fourier ] ) , i.e. they are the fourier transform of the gabor - einstein wavelet described by equation ( [ gabor - einstein ] ) . we label this basis element `` '' , meaning it is the zero - centered lowpass temporal frequency filter whose temporal frequency bandwidth is one . accordingly , fig ( [ fig : ge_band]c ) and fig ( [ fig : ge_band]d ) are plots of the `` '' gabor - einstein basis element , i.e.
they are plots of the zero - centered lowpass temporal frequency filter whose temporal frequency bandwidth is three .fig ( [ fig : ge_band]e ) and fig ( [ fig : ge_band]f ) plot the bandpass temporal frequency filter basis element obtained by taking the difference of the `` '' and `` '' basis elements .the left - hand column of figure ( [ fig : ge_striatotemporal spectrum ] ) plots the gabor - einstein basis elements of fig ( [ fig : ge_band ] ) , while the right hand column plots the corresponding temporal response probed at the spatial origin of the receptive field .there is an increasing complexity of the temporal waveform structure as one progresses from lowpass to bandpass basis element .all pure gabor - einstein basis elements are lowpass temporal frequency filters . on the other hand , the bandpass temporal frequency filter property necessarily results from summation of gabor - einstein basis elements .hence bandpass temporal elements are necessarily complex ( not pure ). however , not all lowpass temporal frequency filters are pure ; and not all basis summations yield bandpass temporal frequency filters .for instance , figure ( [ fig : ge_sum ] ) shows the sum of two pure gabor - einstein basis elements which yield another lowpass temporal frequency filter .the temporal waveform of the composite element is indeed more complex than that of its two pure constituents , however , it appears less complex than the composite temporal waveform of fig ( [ fig : ge_band ] ) .+ + [ figur_sts ] +the neurons in the v1 to mt dorsal stream are highly specialized towards motion detection .furthermore , they are embedded in a hierarchical order of increasing motion specialization .the earliest ( least specialized ) members of the chain are the non - speed tuned v1 cells , while the most specialized members are the mt cells .naturally , lgn cells precede input layer v1 cells both anatomically as well as in level of motion specialization .similarly , cells located in the medial superior temporal ( mst ) area and the parietal lobe are further along than mt cells both anatomically and in motion specialization .we have focused our current study on the v1 to mt spectrum .we set out to obtain a good mathematical representation for the receptive fields of these neurons .the term _ good _ here implies that the representation should faithfully capture the salient features of the motion specialization spectrum and serve as an effective label or address schema for neurons in the motion processing stream .for instance , it is known that mt neurons receive input from v1 neurons and most directly from v1-to - mt projectors , therefore in a good basis , the representation for an mt neuron should result from some combination of representations of some v1 neuron(s ) .furthermore , the resulting mt representation should naturally manifest known attributes of mt neurons .appropriate comparison and discussion of the current work and related work requires a more precise language than currently available .hence , we begin this discussion by introducing a simple framework for classifying and naming the components and hierarchical levels of receptive field models . in the literature, there is currently no clear categorization of the various mathematical descriptions of receptive fields .ironically , the laudable increase in discovery of receptive field attributes and modulation mechanisms may have led to further conceptual entanglement of the mathematical forms of receptive fields . 
here , we propose a simple framework to organize and classify receptive field descriptions and briefly review some basic concepts .this should help place the current work , the gabor - einstein model , in the appropriate context ; and furthermore , should help organize and direct receptive field research .we will identify a receptive field by its constituents and method of construction .a receptive field can be constructed using two essential things : 1 .primary input receptive fields 2 . input combination scheme the primary input receptive field is the response function of the neuron at the lowest level on the hierarchy of study .for instance , when studying the motion - processing stream between v1 and mst , the primary input receptive field is the response function of a v1 input layer neuron .an input combination scheme is a model for the network of neurons projecting onto the study neuron .for instance , a common scheme is linear summation of weighted v1 responses to yield an mt neuron .the input combination scheme can be classified according to the stage at which input processing , if any , occurs .three classes arise : 1 .pre - processing 2 .intra - processing 3 .post - processing pre - processing is modification of the input responses prior to combining them . for instance, v1 elements are squared in the motion - energy model , after which they are summed to yield the mt response .intra - processing is modification of the input responses during the combination process .for example , the soft max scheme .post - processing is modification of the response after input combination .for instance , divisive normalization schemes .the divisive normalization scheme , proposed by heeger in 1992 , is a neuron population based model of v1 to mt function . in that model ,each neuron s behavior is modulated by that of a population module of which it was a member .specifically , its manifest response is its native response divided by the net sum of the responses of the other neurons in its population module plus a regularization factor .the model bears resemblance to receptor models in pharmacokinetics , where the regularization factor plays the role of a half - saturation constant .the third categorization attribute is the hierarchical level from which input originates relative to the study neuron .the three following classes arise : 1 .feed - forward 2 .isostratal 3 .feedback feed - forward implies the input comes in from a lower hierarchical level .for instance v1 projecting to mt .isostratal implies that the input comes in from the same level .for instance , normalization of mt response by neighborhood neuron pool .feedback implies the input comes in from a higher level in the hierarchy . of note, most receptive fields likely receive contributions from all three types of inputs . in light of the above framework ,the gabor - einstein model is a primary input receptive field for the v1 to mt motion processing stream .it is compatible with any of the aforementioned input combination schemes , hierarchical input routing schemes , and input processing schemes .the work presented in this paper focused mainly on the primary input receptive field , and for simplicity of illustration , assumed a basic ( i.e. linear ) input combination scheme .certain important phenomena can not be explained at the level of early ( e.g. 
input layer v1 ) primary input receptive fields alone .next , we discuss some aspects of such phenomena including end - stopping , a short selection of non - linearity mechanisms , and contrast - modulated speed tuning .a key distinguishing feature of the gabor - einstein model is how it naturally represents the distribution of lowpass to bandpass temporal frequency filtering properties along the v1 to mt motion specialization hierarchy .ratio models are a class of models with a different objective .the ratio models aim to mechanistically model the way speed tuned units arise from combinations of non - speed tuned bandpass and low pass temporal frequency filters .these models are based on the idea that stimulus temporal frequency is proportional to the ratio of outputs of spatiotemporally separable bandpass to lowpass neuronal outputs .various summation - based input combination schemes along the v1 to mt stream have been used .heeger and colleagues proposed a two stage weighted linear summation model in which mt fields resulted from weighted sums of v1 fields .sereno as well as nowlan & sejnowski proposed multilevel neural network reinforcement learning models in which mt neuron response was a function of a linear summation of v1 responses .tsui et al proposed a model in which mt response was obtained by a soft - max weighted summation of v1 inputs .each of these schemes differs fundamentally in approach from the gabor - einstein model .namely , we focus on the receptive field of a single neuron along the v1 to mt pathway , implicitly representing the network spatiotemporal structure in the inherent attributes of the basis itself . specifically , the natural representation of temporal frequency filter distribution along the v1 to mt spectrum . in the gabor - einstein basis ,the basis itself mandates summation as a means of ascending up the specialization hierarchy .this is a gratifying result , and supports the notion that the sinc function is a natural way to describe the v1 to mt spectrum . here, we discuss certain higher order phenomena whose description exceeds the scope of single early neuron receptive fields .end - stopping is a center - surround type phenomenon in which for instance a full length bar stimulus results in a submaximal response , while some shorter bar stimulus generates maximal response .in other words , the excitatory portion of the receptive field is shorter than the full length of the receptive field. typically , the periphery of the receptive field , i.e. its _ end _ , elicits suppression ( or `` stopping '' ) .hence neurons exhibiting such behavior are said to be `` end - stopped '' .jones and colleagues found that majority ( 94% ) of v1 cells in the macaque were end - stopped to various degrees .moreover , the most prominently end - stopped v1 cells have been found in layer 4b which is known to be the dominant projection layer to mt .furthermore , tsui et al have argued a role for v1 end - stopping in mt motion integration and solution to the aperture problem .it seems therefore that a complete model of v1 to mt neurons must account for end - stopping .end - stopping phenomena would be difficult to explain at the single input layer v1 neuron stage , since it likely involves isostratal interactions .tsui and colleagues modeled end - stopping by specifying the normalization pool as surround cells arranged along the orientation axis of a center cell . 
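written schematically , the divisive normalization rule described above ( and applied to such a normalization pool ) takes the form

R_i \;=\; \frac{r_i}{\,\sigma + \sum_{j\in\mathrm{pool}(i)} r_j\,} ,

where r_i denotes the neuron s native ( unnormalized ) response , pool(i) its population module , and \sigma the regularization ( half - saturation ) constant ; the symbols are our own shorthand , and heeger s original 1992 formulation squares the responses , a detail omitted in this sketch .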
with this spatially - specific implementation of normalization ,surround cell activity yielded end - stopping .the response was then fit to a saturation kinetics model equivalent to heeger s divisive normalization model .regarding center - surround phenomena such as end - stopping , the gabor - einstein wavelet can be considered a _ classical receptive field_. a scheme similar to that of tsui et al can be used to implement end stopping using the gabor - einstein wavelet as both center and surround , or in hybrid with other wavelet transform as either center or surround . of note, other schemes have also been used to implement end - stopping .for instance , skottun proposed an orientation - modulated cancellation strategy .the gabor - einstein wavelet is compatible with this scheme as well .the perception of motion is likely a higher order phenomenon that emerges out of the elaborate nonlinear connection network of which mt is only a part .some of these non - v1 areas from which mt receives input include lgn , superior colliculus , and extrastriate regions .some combination of these non - v1 inputs may explain the persistence of visual responsiveness and direction selectivity after v1 lesions or cooling .indeed several researchers have drawn specific attention to the necessity of non - linear models . moreover , these nonlinearities are not just a feature of late ( specialized ) visual neurons , but have been demonstrated very early in the visual pathway .for example , schwartz et al pointed out the need for nonlinear retinal ganglion cell receptive field models . androsenberg et al showed that lgn y cells utilize a nonlinear mechanism to represent interference patterns .fine mechanistic modeling of the single v1 to mt neuron s receptive field s most salient features is certain to yield valuable physiological insight into the myriad non - v1 connections to mt .contrast - modulated speed tuning is likely unexplainable at the level of early primary input receptive fields of single neurons .existing explanations of such phenomena appear to necessarily involve nonlinear isostratal interactions .for instance , divisive normalization . in general, the effects of contrast on the receptive field of v1 to mt neurons can safely be assumed to be highly modulated by neighboring neurons .significant data exists to support such population encoding .for instance , deangelis et al showed that end and side stopping were strongest along the preferred excitatory orientation , yet superimposed suppressive stimuli was much more broadly tuned than the classical receptive field , suggesting a modulatory pool of neurons .such neuron pools also likely mediate the effects of contrast on speed tuning .krekelberg et al found that most mt cells in alert macaques preferred lower speeds for lower stimulus contrast .priebe et al found that a significant population of v1 cells have contrast - modulated speed tuning curves .livingstone and conway found that the speed tuning curves of most v1 neurons in alert macaques were dependent on contrast stimuli .specifically , for lower contrast stimuli , v1 cells shifted their tuning curves to lower speeds ; while for higher contrast stimuli , the opposite was observed .psychophysical studies also show that both high contrast stimuli and high spatial frequency stimuli are associated with higher perceived speeds .the increase in preferred speed with contrast is a phenomenon that likely involves nonlinear isostratal interactions . 
as such , early single neuron receptive field models in isolation are unable to explain such phenomena .nonetheless the primary input receptive field is the theoretical and physiological foundation of higher order phenomena .hence it is essential to have a physiologically sound model such as the gabor - einstein wavelet which naturally generates the hierarchical specialization structure of the motion stream .the gabor - einstein wavelet can be readily plugged - in to divisive normalization schemes which may be necessary to explain contrast - modulated phenomena .in this paper , we introduced the gabor - einstein wavelet , a new family of functions for modeling the receptive fields of neurons in the v1 to mt motion processing stream .we showed that the way space and time are mixed in the visual cortex has analogies to the way they are mixed in the special theory of relativity .we therefore constrained the gabor - einstein model to a relativistically invariant wave carrier and to the minimum possible number of parameters .these constraints yielded a sinc function wave carrier with energy - momentum relation as argument .the model innately and efficiently represents the temporal frequency filtering property distribution along the motion processing stream .specifically , on the v1 end of the stream , the neuron population has an equal proportion of lowpass to bandpass temporal frequency filters ; whereas on the mt end , they have mostly bandpass temporal frequency filters . from our analysis and simulations, we showed that the distribution of temporal frequency filtering properties along the motion processing stream is a direct effect of the way the brain jointly encodes space and time .we uncovered this fundamental link by demonstrating an analogous mathematical structure between the special theory of relativity and the joint encoding of space and time in the visual cortex .the gabor - einstein model and the experiments it motivates will provide new physiological insights into how the brain represents visual information .the author thanks rudi weikard , marius nkashama , john mayer , ian knowles , xiaobai sun , and peter blair for helpful suggestions on how to approach the fourier transform of the sinc function of multidimensional argument .he thanks greg schwartz and ari rosenberg for helpful discussion on nonlinearities in visual receptive fields .he thanks his mentors and role models in ophthalmology at howard university and the washington area for their dedication to training residents : robert a. copeland jr . , leslie s. jones , bilal khan , earl kidwell , janine smith - marshall , david katz , william deegan iii , melissa kern , ali ramadan , frank spellman , reggie sanders , and michael rivers , emily chew , brian brooks , and wai wong .he thanks claude l. cowan jr whose excellence inspired him to pursue a career in medical retina .he thanks his co - residents at howard ophthalmology for the wonderful experience which resulted from their enthusiasm for patient care and collegial learning : sir gawain dyer , salman j. yousuf , animesh petkar , ninita brown , mona kaleem , mikelson mompremier , saima qureshi , nikisha richards , neal desai , chris burris , natasha pinto , usiwoma abugo , chinwe okeagu , and katrina del fierro .he thanks richard mooney , michael platt , pate skene , fan wang , vic nadler , and kafui dzirasa for supporting his membership to the society for neuroscience .he thanks susan elner , mark w. johnson , john r. heckenlively , paul p. 
lee and all the amazing faculty at the university of michigan ann arbor ( w.k .kellogg eye center ) for awarding me the clinical fellowship training position in medical retina .he thanks stuart fine and the board of directors of the heed ophthalmic foundation ( nicholas j. volpe , stephen mcleod , joan miller , eduardo alfonso , david wilson , julia heller , and froncie gutman ) for inviting him to participate in the 8th annual heed resident s retreat in chicago illinois .it was a delight to be at that retreat .he thanks eydie miller - ellis , mildred olivier , and the the rabb venable excellence in research program board for selecting him as a participant and for encouraging a career in academia . 90 [ 1]#1 [ 1]`#1 ` urlstyle [ 1]doi : # 1 e. h. adelson and j. r. bergen .spatiotemporal energy models for the perception of motion .a _ , 20 ( 2):0 284299 , 1985 .e. h. adelson and j. r. bergen .the extraction of spatiotemporal energy in human and machine vision . in _ proc .ieee workshop on visual motion , charleston _ ,pages 151156 , 1986 .j. c. anderson and k. a. c. martin . ._ journal of comparative neurology _, 4430 ( 1):0 5670 , 2002 . j. c. anderson , t. binzegger , k. a. c. martin , and k. s. rockland . ._ the journal of neuroscience _ , 180 ( 24):0 1052510540 , 1998 .a. anzai , i. ohzawa , and r. d. freeman .joint - encoding of motion and depth by visual cortical neurons : neural basis of the pulfrich effect ._ nature neuroscience _ , 40 ( 5):0 513518 , 2001 .p. azzopardi , m. fallah , c. g. gross , and h. r. rodman . ._ neuropsychologia _ , 410 ( 13):0 17381756 , 2003 . r. a. berman and r. h. wurtz . ._ the journal of neuroscience _ , 300 ( 18):0 63426354 , 2010 . r. a. berman and r. h. wurtz . . _ the journal of neuroscience _ , 310 ( 2):0 373384 , 2011 .d. boussaoud , l. g. ungerleider , and r. desimone .pathways for motion analysis : cortical connections of the medial superior temporal and fundus of the superior temporal visual areas in the macaque ._ journal of comparative neurology _, 2960 ( 3):0 462495 , 1990 .k. r. brooks , t. morris , and p. thompson . ._ journal of vision _ , 110 ( 14 ) , 2011 .d. c. burr and j. ross .how does binocular delay give information about depth ?_ vision research _ , 190 ( 5):0 523532 , 1979 .t. carney , m. a. paradiso , and r. d. freeman .a physiological correlate of the pulfrich effect in cortical neurons of the cat ._ vision research _ , 290 ( 2):0 155165 , 1989 .m. m. churchland , n. j. priebe , and s. g. lisberger .comparison of the spatial limits on direction selectivity in visual areas mt and v1 ._ journal of neurophysiology _ , 930 ( 3):0 12351245 , 2005 .j. g. daugman .uncertainty relation for resolution in space , spatial frequency , and orientation optimized by two - dimensional visual cortical filters ._ optical society of america , journal , a : optics and image science _ , 20 ( 7):0 11601169 , 1985 .g. c. deangelis , i. ohzawa , and r. d. freeman . ._ journal of neurophysiology _ , 690 ( 4):0 10911117 , 1993 . g. c. deangelis , i. ohzawa , and r. d. freeman . ._ journal of neurophysiology _ , 690 ( 4):0 11181135 , 1993 . g. c. deangelis , r. d. freeman , and i. ohzawa . length and width tuning of neurons in the cat s primary visual cortex. _ journal of neurophysiology _ , 710 ( 1):0 347374 , 1994 .r. dubner and s. m. zeki .response properites and receptive fields of cells in an anatomically defined region of the superior temporal sulcus in the monkey ._ brain research _ , 1971. a. einstein . 
on the electrodynamics of moving bodies ._ annalen der physik _ , 170 ( 891):0 50 , 1905 .d. j. felleman and j. h. kaas .receptive - field properties of neurons in middle temporal visual area ( mt ) of owl monkeys ._ journal of neurophysiology _ , 520 ( 3):0 488513 , 1984 . d. ferster and k. d. miller .neural mechanisms of orientation selectivity in the visual cortex ._ annual review of neuroscience _ , 230 ( 1):0 441471 , 2000 .k. h. foster , j. p. gaska , m. nagler , and d. a. pollen . ._ the journal of physiology _ , 3650 ( 1):0 331363 , 1985 .d. gabor . ._ electrical engineers - part iii : radio and communication engineering , journal of the institution of _ , 930 ( 26):0 429441 , 1946 .p. girard , p. a. salin , and j. bullier . ._ journal of neurophysiology _ , 670 ( 6):0 14371446 , 1992 .m. j. hawken , r. m. shapley , and d. h. grosof .temporal - frequency selectivity in monkey visual cortex ._ visual neuroscience _ , 130 ( 03):0 477492 , 1996. d. j. heeger .model for the extraction of image flow ._ josa a _ , 40 ( 8):0 14551471 , 1987 .d. j. heeger .normalization of cell responses in cat striate cortex ._ visual neuroscience _ , 90 ( 02):0 181197 , 1992 .d. j. heeger .modeling simple - cell direction selectivity with normalized , half - squared , linear operators ._ journal of neurophysiology _ , 700 ( 5):0 18851898 , 1993 .d. h. hubel .tungsten microelectrode for recording from single units ._ science _ ,1250 ( 3247):0 549550 , 1957 . d. h. hubel .single unit activity in striate cortex of unrestrained cats . _ the journal of physiology _ , 1470 ( 2):0 226238 , 1959 .d. h. hubel and t. n. wiesel .receptive fields of single neurones in the cat s striate cortex . _ the journal of physiology _ ,1480 ( 3):0 574591 , 1959 . h. e. jones , k. l. grieve , w. wang , and a. m. sillito . ._ journal of neurophysiology _ , 860 ( 4):0 20112028 , 2001 .b. krekelberg , r. j. a. van wezel , and t. d. albright .interactions between speed and contrast tuning in the middle temporal area : implications for the neural code for speed . _ the journal of neuroscience _ , 260 ( 35):0 89888998 , 2006 .s. w. kuffler .discharge patterns and functional organization of mammalian retina ._ j. neurophysiol ._ , 160 ( 1):0 3768 , 1953 . i. lampl , d. ferster , t. poggio , and m. riesenhuber .intracellular measurements of spatial integration and the max operation in complex cells of the cat primary visual cortex ._ journal of neurophysiology _ , 920 ( 5):0 27042713 , 2004 . d. n. lee . a stroboscopic stereophenomenon ._ vision research _ , 100 ( 7):0 587593 , 1970 .m. livingstone and d. hubel .segregation of form , color , movement , and depth : anatomy , physiology , and perception ._ science _ , 2400 ( 4853):0 740749 , 1988 . m. s. livingstone and b. r. conway . ._ journal of neurophysiology _ , 970 ( 1):0 849857 , 2007 .s. marelja .mathematical description of the responses of simple cortical cells ._ josa _ , 700 ( 11):0 12971300 , 1980 . j. h. maunsell and d. c. van essen .the connections of the middle temporal visual area ( mt ) and their relationship to a cortical hierarchy in the macaque monkey . _ the journal of neuroscience _ , 30 ( 12):0 25632586 , 1983 .j. h. maunsell and d. c. van essen . ._ journal of neurophysiology _ , 490 ( 5):0 11271147 , 1983 .j. h. maunsell , t. a. nealey , and d. d. depriest . ._ the journal of neuroscience _ , 100 ( 10):0 33233334 , 1990 .j. h. r. maunsell and w. t. newsome . 
visual processing in monkey extrastriate cortex ._ annual review of neuroscience _ , 100 ( 1):0 363401 , 1987 . j. c. maxwell . ._ the london , edinburgh , and dublin philosophical magazine and journal of science _ , 210 ( 141):0 338348 , 1861 . j. c. maxwell ._ the london , edinburgh , and dublin philosophical magazine and journal of science _ , 210 ( 140):0 281291 , 1861 . j. c. maxwell . ._ the london , edinburgh , and dublin philosophical magazine and journal of science _ , 230 ( 152):0 8595 , 1862 .j. c. maxwell . ._ the london , edinburgh , and dublin philosophical magazine and journal of science _ , 230 ( 151):0 1224 , 1862 .j. mclean and l. a. palmer .contribution of linear spatiotemporal receptive field structure to velocity selectivity of simple cells in area 17 of cat . _ vision research _ , 290 ( 6):0 675679 , 1989 .m. j. morgan .pulfrich effect and the filling in of apparent motion ._ perception _ , 50 ( 2):0 187195 , 1976. m. j. morgan .perception of continuity in stroboscopic motion : a temporal frequency analysis . _ vision research _ , 190 ( 5):0 491500 , 1979 .j. a. movshon and w. t. newsome . ._ the journal of neuroscience _ , 160 ( 23):0 77337741 , 1996 .j. j. nassi , d. c. lyon , and e. m. callaway . ._ neuron _ , 500 ( 2):0 319327 , 2006 .s. j. nowlan and t. j. sejnowski . a selection model for motion processing in area mt of primates . _ the journal of neuroscience _ , 150 ( 2):0 11951214 , 1995 .c. c. pack , r. t. born , and m. s. livingstone .two - dimensional substructure of stereo and motion interactions in macaque visual cortex ._ neuron _ , 370 ( 3):0 525535 , 2003 . j. a. perrone . a visual motion sensor based on the properties of v1 and mt neurons ._ vision research _ , 440 ( 15):0 17331755 , 2004. j. a. perrone .economy of scale : a motion sensor with variable speed tuning ._ journal of vision _ , 50 ( 1 ) , 2005 .j. a. perrone and a. thiele . a model of speed tuning in mt neurons . _ vision research _ , 420 ( 8):0 10351051 , 2002 . n. j. priebe , s. g. lisberger , and j. a. movshon .tuning for spatiotemporal frequency and speed in directionally selective neurons of macaque striate cortex . _ the journal of neuroscience _ , 260 ( 11):0 29412950 , 2006 .carl pulfrich .die stereoskopie i m dienste der isochromen und heterochromen photometrie ._ naturwissenschaften _ , 100 ( 35):0 751761 , 1922 .n. qian . computing stereo disparity and motion with known binocular cell properties . _ neural computation _ , 60 ( 3):0 390404 , 1994 .n. qian and r. a. andersen . a physiological model for motion - stereo integration and a unified explanation of pulfrich - like phenomena ._ vision research _ , 370 ( 12):0 16831698 , 1997 . n. qian and r. d. freeman .pulfrich phenomena are coded effectively by a joint motion - disparity process . _ journal of vision _ , 90 ( 5 ) , 2009 .n. qian , r. a. andersen , and e. h. adelson . ._ the journal of neuroscience _ , 140 ( 12):0 73817392 , 1994 . j. c. a. read and b. g. cumming .. _ journal of neurophysiology _ , 940 ( 2):0 15411553 , 2005 . j. c. a. read and b. g. cumming .the stroboscopic pulfrich effect is not evidence for the joint encoding of motion and depth ._ journal of vision _ , 50 ( 5 ) , 2005 .j. c. a. read and b. g. cumming .all pulfrich - like illusions can be explained without joint encoding of motion and disparity . _journal of vision _ , 50 ( 11 ) , 2005 .m. riesenhuber and t. poggio .hierarchical models of object recognition in cortex ._ nature neuroscience _ , 20 ( 11):0 10191025 , 1999 .k. s. rockland . 
._ , 30 ( 2):0 15570 , 1989 .k. s. rockland . ._ journal of comparative neurology _ , 3550 ( 1):0 1526 , 1995 .h. r. rodman and t. d. albright .single - unit analysis of pattern - motion selective properties in the middle temporal visual area ( mt ) ._ experimental brain research _ , 750 ( 1):0 5364 , 1989 .h. r. rodman , c. g. gross , and t. d. albright . ._ the journal of neuroscience _ , 100 ( 4):0 11541164 , 1990 .a. rosenberg , t. r. husson , and n. p. issa .subcortical representation of non - fourier image features ._ the journal of neuroscience _ , 300 ( 6):0 19851993 , 2010 .m. p. sceniak , m. j. hawken , and r. shapley . visual spatial characterization of macaque v1 neurons . _journal of neurophysiology _ , 850 ( 5):0 18731887 , 2001 .g. w. schwartz , h. okawa , f. a. dunn , j. l. morgan , d. kerschensteiner , r. o. wong , and f. rieke . the spatial structure of a nonlinear receptive field . _ nature neuroscience _ , 150 ( 11):0 15721580 , 2012. m. e. sereno ._ neural computation of pattern motion : modeling stages of mation analysis in the primate visual cortex_. the mit press , 1993 .s. shipp and s. zeki . ._ european journal of neuroscience _ , 10 ( 4):0 309332 , 1989 . e. p. simoncelli and d. j. heeger . ._ vision research _ , 380 ( 5):0 743761 , 1998 . l. c. sincich , k. f. park , m. j. wohlgemuth , and j. c. horton . ._ nature neuroscience _ , 70 ( 10):0 11231128 , 2004 . b. c. skottun . a model for end - stopping in the visual cortex . _ vision research _ , 380 ( 13):0 20232035 , 1998 .m. a. smith , n. j. majaj , and j. a. movshon .dynamics of motion signaling by neurons in macaque area mt . _ nature neuroscience _ , 80 ( 2):0 220228 , 2005 .d. j. tolhurst , c. r. sharpe , and g. hart .the analysis of the drift rate of moving sinusoidal gratings ._ vision research _ , 130 ( 12):0 25452555 , 1973 . j. m. g. tsui , j. n. hunter , r. t. born , and c. c. pack .the role of v1 surround suppression in mt motion integration . _journal of neurophysiology _ , 1030 ( 6):0 31233138 , 2010 .l. g. ungerleider and r. desimone .cortical connections of visual area mt in the macaque ._ journal of comparative neurology _ ,2480 ( 2):0 190222 , 1986 .d. c. van essen , j. h. r. maunsell , and j. l. bixby .the middle temporal visual area in the macaque : myeloarchitecture , connections , functional properties and topographic organization ._ journal of comparative neurology _ , 1990( 3):0 293326 , 1981 .a. b. watson and a. ahumada . _ a look at motion in the frequency domain _ ,volume 84352 . national aeronautics and space administration , ames research center , 1983 .a. b. watson and a. j. ahumada , jr . model of human visual - motion sensing ._ journal of the optical society of america . a , optics and image science _ , 20 ( 2):0 322341 , 1985 .x. xu , j. ichida , y. shostak , a. b. bonds , and v. a. casagrande ._ visual neuroscience _ , 190 ( 1):0 97108 , 2001 . n. h. yabuta , a. sawatari , and e. m. callaway .two functional channels from primary visual cortex to dorsal visual cortical areas . _science _ , 2920 ( 5515):0 297300 , 2001 .s. m. zeki .cells responding to changing image size and disparity in the cortex of the rhesus monkey . _ the journal of physiology _ ,2420 ( 3):0 827841 , 1974 . s. m. zeki .the response properties of cells in the middle temporal area ( area mt ) of owl monkey visual cortex ._ proceedings of the royal society of london .series b. biological sciences _ , 2070 ( 1167):0 239248 , 1980 .
our visual system is astonishingly efficient at detecting moving objects . this process is mediated by the neurons which connect the primary visual cortex ( v1 ) to the middle temporal ( mt ) area . interestingly , since kuffler s pioneering experiments on retinal ganglion cells , mathematical models have been vital for advancing our understanding of the receptive fields of visual neurons . however , existing models were not designed to describe the most salient attributes of the highly specialized neurons in the v1 to mt motion processing stream ; and they have not been able to do so . here , we introduce the gabor - einstein wavelet , a new family of functions for representing the receptive fields of v1 to mt neurons . we show that the way space and time are mixed in the visual cortex is analogous to the way they are mixed in the special theory of relativity ( str ) . hence we constrained the gabor - einstein model by requiring : ( i ) relativistic invariance of the wave carrier , and ( ii ) the minimum possible number of parameters . from these two constraints , the sinc function emerged as a natural descriptor of the wave carrier . the particular distribution of lowpass to bandpass temporal frequency filtering properties of v1 to mt neurons ( foster et al 1985 ; deangelis et al 1993b ; hawken et al 1996 ) is clearly explained by the gabor - einstein basis . furthermore , it does so in a manner innately representative of the motion - processing stream s neuronal hierarchy . our analysis and computer simulations show that the distribution of temporal frequency filtering properties along the motion processing stream is a direct effect of the way the brain jointly encodes space and time . we uncovered this fundamental link by demonstrating that analogous mathematical structures underlie str and the joint cortical encoding of space and time . this link will provide new physiological insights into how the brain represents visual information .
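the lowpass / bandpass distinction that the abstract attributes to the v1 - to - mt hierarchy can be illustrated with a few lines of code . the sketch below is not the gabor - einstein wavelet itself ( its exact space - time argument is not reproduced in this text ) ; it simply contrasts a gaussian - windowed cosine carrier ( the classic gabor temporal profile , which is bandpass ) with a gaussian - windowed sinc carrier , and classifies each by comparing its response at 0 hz with its peak response . the envelope width , carrier frequency and band edge are placeholder values chosen only for illustration .

```python
# Illustrative sketch (not the paper's model): classify 1-D temporal receptive
# field profiles as lowpass or bandpass from their amplitude spectra.
import numpy as np

def amplitude_spectrum(profile, dt):
    """Return (frequencies, |FFT|) of a 1-D temporal profile."""
    spec = np.abs(np.fft.rfft(profile))
    freqs = np.fft.rfftfreq(profile.size, d=dt)
    return freqs, spec

def is_lowpass(profile, dt, ratio=0.5):
    """Call a filter lowpass if its response at 0 Hz is at least `ratio`
    times its peak response (an operational criterion, assumed here)."""
    _, spec = amplitude_spectrum(profile, dt)
    return spec[0] >= ratio * spec.max()

dt = 1e-3                      # 1 ms sampling
t = np.arange(-0.5, 0.5, dt)   # 1 s support
envelope = np.exp(-t**2 / (2 * 0.08**2))

gabor_profile = envelope * np.cos(2 * np.pi * 8.0 * t)   # 8 Hz cosine carrier
sinc_profile = envelope * np.sinc(2 * 4.0 * t)           # sinc carrier, ~4 Hz band edge

for name, profile in [("gabor (cosine carrier)", gabor_profile),
                      ("sinc carrier", sinc_profile)]:
    print(name, "-> lowpass" if is_lowpass(profile, dt) else "-> bandpass")
```

under these placeholder parameters the cosine - carrier profile comes out bandpass and the sinc - carrier profile lowpass ; whether a given profile is classified one way or the other depends only on how much energy it retains at 0 hz , which is the operational criterion used in the classification function above .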
this is the second part of the study devoted to hypothesis testing problems in the case when the observations are inhomogeneous poisson processes .the first part was concerned with the regular ( smooth ) case , while this second part deals with non regular ( singular ) situations .we suppose that the intensity function of the observed inhomogeneous poisson process depends on the unknown parameter in a non regular way ( for example , the fisher information is infinite ) .the basic hypothesis is always simple ( ) and the alternative is one - sided composite ( ) . in the first part we studied the asymptotic behavior of the score function test ( sft ) , of the general likelihood ratio test ( glrt ) , of the wald test ( wt ) and of two bayes tests ( bt1 and bt2 ) .it was shown that the tests sft , glrt and wt are locally asymptotically uniformly most powerful . in the present work we study the asymptotic behavior of the glrt , wt , bt1 and bt2 in two non regular situations .more precisely , we study the tests when the intensity functions has a cusp - type singularity or a jump - type singularity . in both casesthe fisher information is infinite .the local alternatives are obtained by the reparametrization , .the rate of convergence depends on the type of singularity . in the cusp case , where is the order of the cusp , and in the discontinuous case .our goal is to describe the choice of the thresholds and the behavior of the power functions as .the important difference between regular and singular cases is the absence of the criteria of optimality .this leads to a situation when the comparison of the power functions can be only done numerically .that is why we present the results of numerical simulations of the limit power functions and the comparison of them with the power functions with small and large volumes of observations ( small and large ) .recall that is an inhomogeneous poisson process with intensity function , , if and the increments of on disjoint intervals are independent and distributed according to the poisson law we suppose that the intensity function depends on some one - dimensional parameter , that is , .the basic hypothesis is simple : , while the alternative is one - sided composite : .the hypothesis testing problems for inhomogeneous poisson processes were studied by many authors ( see , for example , , , and the references therein ) .we consider the model of independent observations of an inhomogeneous poisson process : , where , , are poisson processes with we use here the same notations as in . in particular , is one - dimensional parameter and is the mathematical expectation in the case when the true value is .the intensity function is supposed to be separated from zero on ] and equal to for all . now the random function is defined on .let us fix some and denote the space of continuous functions on with the property . introduce the uniform metric on this space and denote the corresponding borel -algebra .when we study the likelihood ratio process under hypothesis , we take and consider the corresponding measurable space . 
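before turning to the likelihood ratio analysis , it may help to make the observation model concrete . the sketch below simulates n independent inhomogeneous poisson processes on [ 0 , t ] by thinning ( lewis - shedler ) . the cusp - type intensity lam ( t ) = a + b | t - theta | ^ kappa with kappa in ( 0 , 1/2 ) is a hypothetical stand - in for the ( unspecified here ) intensity of the paper s example , chosen only because it has the kind of singularity discussed below .

```python
# Minimal simulation sketch for n independent inhomogeneous Poisson processes
# on [0, T], generated by thinning.  The cusp-type intensity is a placeholder.
import numpy as np

rng = np.random.default_rng(0)

def make_intensity(theta, a=1.0, b=2.0, kappa=0.25):
    return lambda t: a + b * np.abs(t - theta) ** kappa

def simulate_poisson_path(lam, T, lam_max, rng):
    """One path of an inhomogeneous Poisson process on [0, T] by thinning."""
    n_prop = rng.poisson(lam_max * T)                 # candidates from a homogeneous process
    props = np.sort(rng.uniform(0.0, T, size=n_prop))
    keep = rng.uniform(0.0, lam_max, size=n_prop) < lam(props)
    return props[keep]                                # event times of the inhomogeneous process

theta0, T, n = 0.4, 1.0, 50
lam = make_intensity(theta0)
lam_max = lam(0.0) + lam(T) + 1.0                     # any upper bound on lam over [0, T] works
paths = [simulate_poisson_path(lam, T, lam_max, rng) for _ in range(n)]
print("mean number of events per path:", np.mean([len(p) for p in paths]))
```

the only requirement on lam_max is that it bounds the intensity on [ 0 , t ] ; a crude bound simply makes the thinning step reject more candidate points .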
under the alternative , , we will use this space with .let be the measure induced on the measurable space by the stochastic processes , and be the measure induced ( under the true value ) on the same space by the processes .the continuity with probability of the random functions follows from the inequality below and the kolmogorov theorem .suppose that we already proved the following weak convergence then the distribution of any continuous in the uniform metric functional converge to the distribution of .in particular , if we take we obtain therefore the test .let us note , that we do not know an analytic solution of the equation defining the constant , that is why below we turn to numerical simulations ( see section [ ssc ] ) .note also that and does not depend on . to study the power function we consider the same likelihood ratio process but under the alternative .we can write with an obvious notation .the difference between and is that the `` reference value '' in the first case is fixed ( is equal to ) and in the second case it is `` moving '' ( is equal to ) .the random variable converge in distribution to . for the stochastic process we have a similar convergence , and so , for any fixed , we have now , let be the measure induced on the measurable space by the stochastic processes , and be the measure induced ( under the true value ) on the same space by the stochastic processes .suppose that we already proved the weak convergence then for the power function we can write > \ln h_\varepsilon \!\right\}\\ & \quad={\mathop{\mathbf{\kern 0pt p}}\nolimits}\left\{\sup_{u > 0 } \left[w^h\left(u\right)-\frac{\left|u - u _ * \right|^{2h}}{2}\right ] > \lnh_\varepsilon -\frac{\left|u_*\right|^{2h}}{2}\right\}= \hat\beta \left(u_*\right ) . \ ] ] this limit power function is obtained below with the help of numerical simulations ( see section [ ssc ] ) .let us also note that the limit ( under the alternative ) of the likelihood ratio process is the process defined by to finish the proof we need to verify the convergence . to do this we follow the proof of the convergence given in .we introduce the following relations . 1 ._ the finite - dimensional distributions of converge to those of ._ there exists a positive constant such that _ 3 ._ there exists a positive constant such that _ proofs of these relations are slight modifications of the proofs given in .note that the characteristic function of the vector can be written explicitly and the convergence of this characteristic function to the corresponding limit characteristic function can be checked directly ( see lemma 5 of ) .the inequalities and follow from the lemma 6 and lemma 7 of respectively .these relations allow us to obtain the weak convergence by applying the theorem 1.10.1 of .note that the convergence is a particular case of with .recall that the mle is defined by the equation the wald test ( wt ) has the following form : where is the solution of the second of the equations .introduce as well the random variable as solution of the equation the wt belongs to and its power function in the case of local alternatives , , has the following limit : the mle ( under hypothesis ) converges in distribution hence . 
for the proofsee .recall that this convergence is a consequence of the weak convergence .let us study this estimator under the alternative , .we have here , as before , now , the limit of the power function of the wt is deduced from this convergence : which concludes the proof .let us note , that we can also give another representation of the limit power function using the process : where is solution of the equation the threshold and the power function are obtained below by numerical simulations ( see section [ ssc ] ) .suppose that the parameter is a random variable with _ a priori _density , .this function is supposed to be continuous and positive .we consider two bayes tests .the first one is based on the bayes estimator , while the second one is based on the averaged likelihood ratio .the first test , which we call bt1 , is similar to wt , but is based on the bayes estimator ( be ) rather than on the mle .suppose that the loss function is quadratic .then the be is given by the following conditional expectation : we introduce the test bt1 as where the constant is solution of the equation introduce as well the function the bt1 belongs to and its power function in the case of local alternatives , , has the following limit : the bayes estimator is consistent and has the following limit distribution ( under hypothesis ) ( for the proof see ) .hence .for the power function we have let us study the normalized difference .we can write ( using the change of variables ) hence ( since and ) .the detailed proof is based on the properties 13 of the likelihood ratio ( see or ( * ? ? ? * theorem 1.10.2 ) ) .let us note , that we can also give another representation of the limit power function using the process : where . the second test , which we call bt2 , is given by here and is solution of the equation introduce as well the function the bt2 belongs to and its power function in the case of local alternatives , , has the following limit : let us first recall how this test was obtained . introduce the mean error of the second kind under alternative of an arbitrary test as where is the double mathematical expectation , that is , the expectation with respect to the measure if we consider the problem of the minimization of this mean error , we reduce the initial hypothesis testing problem to the problem of testing of two simple hypotheses then , by the neyman - pearson lemma , the most powerful test in the class minimizes the mean error is where the averaged likelihood ratio and is chosen from the condition , it is clear that the bt2 coincides with the test if we put . in the proof of the convergence in distribution of the bayes estimator it is shown ( see ( * ? ? ?* theorem 1.10.2 ) and ) that therefore ( under hypothesis ) , and the test belongs to the class . using a similar argument, we can verify the convergence under the alternative , which concludes the proof .let us consider the following example .we observe independent realizations , where \right) ] .a realization of the normalized likelihood ratio , ] , under the hypothesis are given in figure [ z_n_u_cusp ] .to find the thresholds of the glrt and of the wt , we need to find the point of maximum and the maximal value of this function . 
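since the threshold equation has no analytic solution , a monte carlo approximation is the natural route , and it also makes explicit what finding the point of maximum and the maximal value amounts to in practice . the sketch below simulates the data under the basic hypothesis , maximizes the log - likelihood ratio over a grid of one - sided alternatives , and takes the empirical ( 1 - eps ) quantile of the maxima as the threshold . the cusp - type intensity , the grid and the replication counts are all placeholder choices , not the paper s .

```python
# Monte Carlo sketch of a GLRT threshold for n observed inhomogeneous Poisson
# processes, with a hypothetical cusp-type intensity standing in for the example.
import numpy as np

rng = np.random.default_rng(1)
a, b, kappa, T, n, eps = 1.0, 2.0, 0.25, 1.0, 20, 0.05
theta1 = 0.3                                    # value specified by the basic hypothesis

def lam(t, theta):
    return a + b * np.abs(t - theta) ** kappa

def simulate_path(theta, rng, lam_max=6.0):
    n_prop = rng.poisson(lam_max * T)
    props = rng.uniform(0.0, T, size=n_prop)
    return props[rng.uniform(0.0, lam_max, size=n_prop) < lam(props, theta)]

def log_lr(paths, theta, grid_t):
    """Log-likelihood ratio of theta against theta1 for the n observed paths."""
    events = np.concatenate(paths)
    jump_part = np.sum(np.log(lam(events, theta) / lam(events, theta1)))
    diff = lam(grid_t, theta) - lam(grid_t, theta1)
    integral = np.sum(0.5 * (diff[1:] + diff[:-1]) * np.diff(grid_t))   # trapezoid rule
    return jump_part - n * integral

grid_t = np.linspace(0.0, T, 2001)              # quadrature grid for the compensator term
theta_grid = np.linspace(theta1, 0.9, 121)      # one-sided alternatives theta >= theta1

sup_stats = []
for _ in range(1000):                           # Monte Carlo replications under H0
    paths = [simulate_path(theta1, rng) for _ in range(n)]
    sup_stats.append(max(log_lr(paths, th, grid_t) for th in theta_grid))

threshold = np.quantile(sup_stats, 1.0 - eps)
print("approximate GLRT threshold (eps = %.2f): %.3f" % (eps, threshold))
```

the same simulation loop , run under a fixed alternative instead of the basic hypothesis , gives a monte carlo estimate of the corresponding power .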
in the case of the chosen intensity function , the maximum is attained at one of the cusps of the likelihood ratio ( that is , on one of the events of one of the observed poisson processes ) . it is interesting to note that if the intensity function has the same singularity but with a different sign : , then it is much more difficult to find the maximum ( see figure [ z_n_u_cusp_invers ] ) . the thresholds of the glrt , of the wt and of the bt1 are presented in table [ thr_cusp ] .

.[thr_cusp] thresholds of glrt , wt and bt1

it is interesting to compare the studied tests with the neyman - pearson test ( n - pt ) corresponding to a fixed value of . of course , it is impossible to use this n - pt in our initial problem , since ( the value of under alternative ) is unknown . nevertheless , its power ( as a function of ) shows an upper bound for the power functions of all the tests , and the distances between it and the power functions of the studied tests provide important information . let us fix some value and introduce the n - pt where and are solutions of the equation denoting , we can rewrite this equation as here is a poisson random variable with parameter , and so the quantities and can be computed numerically . a similar calculation yields the limit power of the n - pt : where is a poisson random variable with parameter . the results of simulations are presented in figure [ pf_dis_rho_3 ] for two cases : and . in both cases the limit power function of the glrt is the closest one to the limit power of the n - pt , and the limit power function of the bt1 arrives faster to than the others . this study was partially supported by russian science foundation ( research project no . 14 - 49 - 00079 ) . the authors thank the referee for helpful comments .
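the numerical computation of the neyman - pearson constants mentioned above reduces , once the test statistic is recognized as poisson under the null , to solving p ( x > c ) + q p ( x = c ) = eps for the critical value c and the randomization probability q . the sketch below does this with scipy ; the null mean mu0 and the level eps are placeholder values , since the text does not fix them here .

```python
# Solve P(X > c) + q * P(X = c) = eps for a Poisson(mu0) statistic.
from scipy.stats import poisson

def np_test_constants(mu0, eps):
    c = int(poisson.ppf(1.0 - eps, mu0))        # smallest c with P(X > c) <= eps
    while poisson.sf(c, mu0) > eps:             # sf(c) = P(X > c); safety loop
        c += 1
    q = (eps - poisson.sf(c, mu0)) / poisson.pmf(c, mu0)
    return c, q

c, q = np_test_constants(mu0=25.0, eps=0.05)
print("reject if X > %d; if X == %d reject with probability %.3f" % (c, c, q))
```

the limit power at a given alternative is then obtained by evaluating the same tail probabilities under the alternative mean .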
we consider the problem of hypothesis testing in the situation where the first hypothesis is simple and the second one is local one - sided composite . we describe the choice of the thresholds and the power functions of different tests when the intensity function of the observed inhomogeneous poisson process has two different types of singularity : cusp and discontinuity . the asymptotic results are illustrated by numerical simulations . msc 2010 classification : 62m02 , 62f03 , 62f05 . _ key words : _ hypothesis testing , inhomogeneous poisson processes , asymptotic theory , composite alternatives , singular situations .
i have tried in this paper to separate mathematics from physical interpretation . in this closing passagehowever , i will bring the two together again .there is no limitation in bell s theorem on the space in which the hidden variables live .the `` measurement functions '' could be imagined to `` perform '' or `` implement '' calculations in any suitable algebraic or other mathematical framework , and hidden variables can include elements of any exotic mathematical space .moreover there is no objection whatever within bell s theorem to allow the outcomes , though of necessity encoded as and , to be thought of being members of a larger mathematical space than the set . in the context of chsh, all hidden variables can be reduced to , or subsumed in , the outcomes of the other measurements to the two which were actually done .`` realism '' , however it is defined , comes down , effectively , to the mathematical existence of the outcomes of unperformed experiments , alongside of those which were actually performed .`` locality '' refers to the attempt to `` locate '' those counterfactual outcomes in the `` obvious '' region of space and time . alongside of the assumptions of realism and locality ( the second only being meaningfulgiven the first ) we need an assumption of freedom : the freedom of the experimenter to perform either measurement .this does not need to involve metaphysical assumptions either of free will or of existence of true randomness .it just involves the assumption that the physical processes going on at one measurement location ca nt have access to the measurement choice made at the other location , till after the measurement outcome has been committed to . andof course , bell s theorem applies to ordinary correlations computed in the ordinary way on christian s outcomes and , which as we have seen are actually always perfectly anti correlated , whatever the measurement settings .no violation of bell s theorem ( chsh inequality version ) . in real experiments ,ordinary correlations are computed in the ordinary way on binary outcomes and _ do _ violate the chsh inequality .so even if christian s algebra had been correct , what relevance does it have to the real world ?as we have seen , christian s work applies to correlations obtained by dividing the raw correlation between measurement outcomes by the pure bivectors and , and within his own model would lead to standardized correlations which are not even real numbers .this simply has got nothing at all to do with bell s own programme , as far as it is usually interpreted .some of those writing critical evaluation of christian s work have expressed the hope that it might at least provide a mathematical framework for the theoretical side of quantum mechanics , in which the usual structure of hilbert spaces , projection operators , and so on , could be entirely replaced by a mathematical structure having a much closer connection with , for instance , the geometry of the real world .it seems to this author that that is indeed a legitimate quest .however in view of the failure of this particular attempt , those wanting to do this job are going to have to look elsewhere .there remains a psychological question , why so strong a need is felt by so many researchers to `` disprove bell '' in one way or another ? 
at a rough guess , at least one new proposal comes up per year . many pass by unnoticed , but from time to time one of them attracts some interest and even media attention . having studied a number of these proposals in depth , i see two main strategies of would - be bell - deniers . but please notice , i do not mean to imply that these strategies are deliberate : i believe they are found `` accidentally '' ; i have no doubt of the sincerity of the proposers . the first strategy ( the strategy , i would guess , in the case in question ) is to build elaborate mathematical models of such complexity and exotic nature that the author him or herself is probably the only person who ever worked through all the details . somewhere in the midst of the complexity a simple mistake is made , usually resulting from suppression of an important index or variable . there is a hidden and non - local hidden variable . the second strategy is to simply build elaborate versions of detection loophole models . sometimes the same proposal can be interpreted in both ways at the same time . interpreting the proposal as the result of a hidden mistake or as a detection loophole model are both interpretations of the reader , not of the writer . according to the anna karenina principle of evolutionary biology , in order for things to succeed , everything has to go exactly right , while for failure , it suffices if any one of a myriad factors is wrong . since errors are typically accidental and not recognized , an apparently logical deduction which leads to a manifestly incorrect conclusion does not need to allow a unique diagnosis . if every apparently logical step had been taken with explicit citation of the mathematical rule which was being used , and in a specified context , one could say where the first misstep was taken . but mathematics is almost never written like that , and for good reasons .
the writer and the reader , coming from the same scientific community , share a host of `` hidden assumptions '' which can safely be taken for granted , as long as no self - contradiction occurs .saying that the error actually occurred in such - and - such an equation at such - and - such a substitution depends on various assumptions .the author who still sincerely believes in his result will therefore claim that the diagnosis is wrong because the wrong context has been assumed .we can be grateful for christian that he has had the generosity to write his one page paper with a more or less complete derivation of his key result in a more or less completely explicit context , without distraction from the author s intended physical interpretation of the mathematics .the mathematics should stand on its own , the interpretation is `` free '' .my finding is that in this case , the mathematics does not stand on its own .after posting this paper on arxiv.org i became aware that others , not surprisingly , had already published interesting critiques on christian s work .in particular , florin moldoveanu has published a comprehensive review , moldoveanu ( 2011 ) , which also cites many other works .his `` error 1 '' is the same as the error on which i have focussed .i had the advantage of the availability of christian s `` one page paper '' , and restrict myself strictly to that particular presentation .moldoveanu has carried out a review of the whole corpus of works , which is complicated by the fact that different notations and different definitions are used in different papers .i also found it useful to strictly separate mathematical consistency from physical and metaphysical interpretation .if the mathematics is fatally flawed at the outset , then we need not spend energy on tracking down further errors , or on debating the soundness of the ideas .after my own preprint was posted on arxiv.org , christian has published a refutation of it , see christian ( 2012a ) .also , his magum opus , the book christian ( 2012b ) , has now been published .the first twenty - five pages ( which consists of 14 pages of front - matter and 11 numbered pages ) are freely available as a pdf file .the reader will find the identical error as in the one - page paper on page number 10 of the book , in the transition from formula ( 1.23 ) to ( 1.24 ) , where christian appeals to the multiplication table ( 1.8 ) .christian argues that he is not making an error at this point , but introducing a new postulate . in that caseit is curious that he does not draw attention to the fact that he is introducing a daring new ingredient into his model , especially in view of the fact that the new postulate contradicts the earlier made ( and used ) postulates .christian s work has stimulated a number of interesting discoveries and inventions .i would especially like to draw attention to sascha vongehr s `` quantum randi challenge '' on science20.com , see vongehr ( 2011 ) .the idea is to insist that those who believe bell got it all wrong , to deliver by providing computer programs which simulate their local realistic violation of bell s inequality .a successful simulation will get the attention of science journalists and science communicators and educators , and thereby of the whole scientific community , without having to pass the barrier of hostile peer review .christian , j. , ( 2011 ) , _ disproof of bell s theorem _ ,arxiv:1103.1879 , + http://arxiv.org/abs/1103.1879 christian , j. 
, ( 2012a ) , _ refutation of richard gill s argument against my disproof of bell s theorem _ , arxiv:1203.2529 , http://arxiv.org/abs/1203.2529 christian , j. , ( 2012b ) , _ on the origins of quantum correlations _ , arxiv:1201.0775 , + http://arxiv.org/abs/1201.0775 .this preprint reproduces the first chapter of the book `` disproof of bell s theorem . illuminating the illusion of entanglement '' by j.j .christian , published by brown walker , + http://www.brownwalker.com/book.php?method=isbn&book=1599425645 moldoveanu , f. ( 2011 ) , _ disproof of joy christian s `` disproof of bell s theorem '' _ ,arxiv:1109.0535 , http://arxiv.org/abs/1109.0535 vongehr , s. ( 2011 ) , the official quantum randi challenge , + http://www.science20.com/alpha_meme/official_quantum_randi_challenge-80168* the body of the one page paper arxiv:1103.1879v1 *
i point out a simple algebraic error in joy christian s refutation of bell s theorem . in substituting the result of multiplying some derived bivectors with one another by consultation of their multiplication table , he confuses the generic vectors which he used to define the table , with other specific vectors having a special role in the paper , which had been introduced earlier . the result should be expressed in terms of the derived bivectors which indeed do follow this multiplication table . when correcting this calculation , the result is not the singlet correlation any more . moreover , curiously , his normalized correlations are independent of the number of measurements and certainly do not require letting converge to infinity . at the same time , his unnormalized or raw correlations are identically equal to , independently of the number of measurements ! correctly computed , his standardized correlations are the bivectors , and they find their origin entirely in his normalization or standardization factors . i conclude that his research program has been set up around an elaborately hidden but trivial mistake . in at least 11 papers on quant - ph author joy christian proposes a local hidden variables model for quantum correlations which disproves bell s theorem `` by counterexample '' in a number of different settings , including the famous chsh and ghz versions . fortunately one of these papers is just one page long and concentrates on the mathematical heart of his work . unfortunately for his grand project , this version enables us to clearly see a rather shorter derivation of the desired correlations , which exposes an error in his own derivation . the error is connected with an unfortunate notational ambiguity at the very start of the paper . in a nutshell : the same symbols are used to denote both a certain fixed basis in terms of which two other bases are defined , and , as well as to express the generic algebraic multiplication rules which these latter two bases satisfy . this tiny ambiguity , though harmless locally , is probably the reason why later on , when apparently using the multiplication tables for the new bases , he silently shifts from the derived bases to the original special basis . the reader will need a copy of the one page paper christian ( 2011 ) , arxiv:1103.1879 ( the body of this paper is reproduced in the appendix ) . the context in which he works is so - called geometric algebra , which in this case means that we are working within the so - called even sub - algebra of standard clifford algebra , or if you prefer , with quaternions . at the start of the paper the author fixes a _ bivector basis _ satisfying ( usual kronecker and levi - civita symbols , ) the multiplication rules he writes that , , are _ defined _ by this multiplication table , but that is not exactly true . one can say that the _ algebra _ of bivectors is defined by the multiplication table , but the bivectors themselves are clearly not , since different bases can have the same multiplication table . the _ bivector algebra _ is the algebra of formal real linear combinations of real numbers and , , . it is therefore a four dimension real vector space , with on top of the vector space structure a multiplication operation or _ bivector product_. this is defined by combining ordinary real multiplication with the multiplication table of the bivector basis into the obvious multiplication table for the four vector - space basis elements , , , and . 
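the algebra just described is small enough to implement directly , which makes the later bookkeeping easy to check by machine . the sketch below stores an element of the even subalgebra as a ( scalar , 3 - vector ) pair and multiplies with the usual quaternion rule ; it assumes the right - handed convention beta_j beta_k = - delta_jk + eps_jkl beta_l ( the sign of the epsilon term is one of the formulas not legible in this extraction , and flipping it only flips the sign of the wedge part ) .

```python
# Sketch of the even subalgebra (quaternion) arithmetic, under the assumed
# right-handed basis convention beta_j beta_k = -delta_jk + eps_jkl beta_l.
import numpy as np

def gmul(p, q):
    """Product in the bivector algebra: p = (s1, v1), q = (s2, v2)."""
    s1, v1 = p
    s2, v2 = q
    return (s1 * s2 - np.dot(v1, v2),
            s1 * v2 + s2 * v1 + np.cross(v1, v2))

def pure(v):
    return (0.0, np.asarray(v, dtype=float))

# the basis bivectors and a check of the multiplication table
beta = [pure([1, 0, 0]), pure([0, 1, 0]), pure([0, 0, 1])]
assert np.isclose(gmul(beta[0], beta[0])[0], -1.0)          # beta_1 squared = -1
assert np.allclose(gmul(beta[0], beta[1])[1], [0, 0, 1])    # beta_1 beta_2 = beta_3

# product of two pure bivectors built from unit vectors a and b:
# (a . beta)(b . beta) = -(a . b) + (a x b) . beta, a real part plus a pure part.
rng = np.random.default_rng(2)
a = rng.normal(size=3)
a /= np.linalg.norm(a)
b = rng.normal(size=3)
b /= np.linalg.norm(b)
s, v = gmul(pure(a), pure(b))
assert np.isclose(s, -np.dot(a, b)) and np.allclose(v, np.cross(a, b))
print("real part:", s, " pure part:", v)
```

the final assertion is the algebraic fact used repeatedly below : the product of two pure bivectors built from unit vectors a and b has real part - a.b and a pure part proportional to a x b , which vanishes only when a and b are parallel .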
the bivector product is associative but not commutative . non - zero elements have multiplicative inverses , left and right inverses coincide . vector - space scalar multiplication of elements of the algebra by real numbers is identical with algebraic multiplication of elements of the algebra , either from the left or the right , by elements in the one dimensional subspace generated by . all this is no more than a standard definition of the quaternionic number system , which contains a unique copy of the real number system as well as many overlapping copies of the complex number system . every quaternion has a real part and a quaternionic part . if the latter part is zero we call the quaternion real ; if the former part is zero we call it purely quaternionic . if we prefer to talk about elements of the bivector algebra one can correspondingly identify within the algebra two special kinds of elements which we call _ real _ and _ purely bivectorial _ respectively . i will use the word `` bivector '' as synonym for `` element of the bivector algebra '' . according to my terminology , each _ bivector _ can be uniquely decomposed into the sum of a _ real number _ and a _ pure bivector_. christian next defines new sets of bivectors , where or . these can also each be considered to form a bivector basis , but now have multiplication tables which depend on . to be precise : he somewhat dangerously writes that these new bivectors are `` defined '' by the new algebraic rules indeed there is a sense in which this is true , but within the context of the paper , it is clear that the two new `` bivector bases '' are defined in relation to the initially fixed ( even if arbitrary ) basis , and then just `` happen '' to satisfy the new multiplication tables . of course , the bivector basis is the same as the original bivector basis , the bivector basis is just times the first . at the start of the paper christian defines `` measurement functions '' and , where is a binary hidden variable ( a fair coin toss , outcomes coded and ) , and and are two unit vectors in ordinary real three dimensional space . the measurement functions are defined as two bivectors and the definition appears complicated but christian claims , and that claim can easily be checked , that and , which of course are reals . i will give the definition and verify this claim in a moment , using a notation which will make life more easy . but first i want to point out an important consequence . since , it follows that independently of , and ( each within their respective domain ) . this striking relation is however not observed by christian . for completeness , here are christian s definitions of and , expressed in a convenient notation which i will return to again later . use the symbols and also to denote the pure bivectors and . the reader may check that they are both square roots of . define in the same way and . clearly , then we have because and , it follows that in view of this algebraic fact , it is curious that computation of the correlation between many independent and measurements should proceed so laboriously , and indeed , according to a curious definition , but let us accept the definition ( his equation ( 5 ) , and coincidentally mine too ) which christian elsewhere gives reasons for : there is slight ambiguity in this expression concerning the division by two terms : is this division by a product , or is this two successive divisions ? 
division stands for multiplication by the inverse , but is this supposed to take place on the left or on the right ? the answer is given by inspecting the calculations in christian s equations ( 5 ) to ( 7 ) : the two terms in the denominator of this fraction are supposed to divide left and right hand side of the numerator respectively , corresponding to their order in the denominator . those are the crucial calculations which , end of his ( 7 ) , lead to his desired result the fact we have previously observed concerning the product of the and measurements enables us to make a grand short cut through christian s derivation of the right hand side of his ( 7 ) from the left hand side of his ( 5 ) . with again the convention that and also denote the pure bivectors and , on substituting the value of all the products , we obtain . both of and are square roots of . from we find , hence and . thus . this bivector product must be evaluated as . the second term does not vanish , unless . it is conventionally denoted and it is a pure bivector whose coefficients are the three coefficients of the usual euclidean vector cross product . using the notational identification between pure bivectors and real vectors , we can even define the pure bivector wedge product by writing . we also define the pure bivector dot product ( left hand side a bivector , right hand side a real number ) . conclusion : christian s correlation is not but however christian prefers a more complicated derivation . naturally , if he too follows accurately his own algebraic rules , he can not obtain a different answer . however his derivation appears to make use of the law of large numbers . in the last complicated term before the end of his ( 7 ) , there appears a sum over terms each involving its own , and the argument is clearly that this cancels in the limit as , by the law of large numbers . indeed , if this expression is correct , and unless , it is _ only _ in the limit that it vanishes , when we obtain . something has gone wrong here . the mistake is in the transition from his formulas ( 6 ) to ( 7 ) where christian is using the -multiplication tables to simplify linear combinations of products of the . we have already written down the correct multiplication table , which expresses these products in terms of the same basis vectors . the last appearing in ( 7 ) should actually be ! the other in the same expression is the appearing on the right hand side of the -multiplication table : altogether , . by definition . making this correct substitution gives us a factor . finally we obtain the same result as i earlier got from a shorter route . sanity has been restored .
in wavelet analysis , we often use translation , dilation , and modulation of functions . for a function , throughout the paper we shall use the following notation where denotes the imaginary unit . in this paper we shall use as a dilation factor . in applications , is often taken to be a positive integer greater than one , in particular , the simplest case is often used .classical wavelets are often defined and studied in the time / space domain with the generating wavelet functions belonging to the square integrable function space . for and for a subset of square integrable functions in , linked to discretization of a continuous wavelet transform ( see ) , the following homogeneous wavelet system is generated by the translation and dilation of the wavelet functions in and has been extensively studied in the function space in the literature of wavelet analysis . to mention only a few references here ,see . in this paper , however , we shall see that it is more natural to study a nonhomogeneous wavelet system in the frequency domain .it is important to point out here that the elements in a set of this paper are not necessarily distinct and in a summation means that visits every element ( with multiplicity ) in once and only once .for example , for , all the functions are not necessarily distinct and in means .most known classical homogeneous wavelet systems in the literature are often derived from scalar refinable functions or from refinable function vectors in ( ) .let us recall the definition of a scalar refinable function here .a function or distribution on is said to be _ refinable _ ( or -refinable ) if there exists a sequence of complex numbers , called the _ refinement mask _ or the _ low - pass filter _ for the scalar refinable function , such that with the above series converging in a proper sense , e.g. , in .wavelet functions in the generating set of a homogeneous wavelet system are often derived from the refinable function by where are sequences on , called _ wavelet masks _ or _ high - pass filters_. for the infinite series in and to make sense , one often imposes some decay condition on the refinable function and wavelet filters so that all the infinite series in and are well - defined in a proper sense .nevertheless , even for the simplest case of a compactly supported scalar refinable function ( or distribution ) with a finitely supported mask , the associated refinable function with mask does not always belong to .in fact , it is far from trivial to check whether in terms of its mask , see and references therein for detail .one of the motivations of this paper is to study wavelets and framelets without such stringent conditions on either the generating wavelet functions or their wavelet filters for . for , the fourier transform used in this paper is defined to be , and can be naturally extended to square integrable functions and tempered distributions . under certain assumptions , taking fouriertransform on both sides of and , one can easily rewrite and in the frequency domain as follows : and provided that all the -periodic ( lebesgue ) measurable functions and similarly are properly defined . 
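a concrete instance may help fix the frequency - domain form of the refinement relation . the sketch below uses dilation factor 2 and the haar refinable function phi = 1_[0,1 ) , whose mask is ahat ( xi ) = ( 1 + exp ( - i xi ) ) / 2 , and checks numerically that phihat ( 2 xi ) = ahat ( xi ) phihat ( xi ) with the fourier convention stated above ; the example is standard and is not taken from the text .

```python
# Numerical check of the frequency-domain refinement equation for the Haar
# refinable function, phihat(2*xi) = ahat(xi) * phihat(xi), dilation factor 2.
import numpy as np

def phihat(xi):
    """Fourier transform of 1_[0,1), with fhat(xi) = integral f(x) exp(-i x xi) dx."""
    xi = np.asarray(xi, dtype=complex)
    safe_den = 1j * xi + (xi == 0)                 # avoid 0/0 at xi = 0
    return np.where(xi == 0, 1.0, (1.0 - np.exp(-1j * xi)) / safe_den)

def ahat(xi):
    return (1.0 + np.exp(-1j * xi)) / 2.0          # Haar refinement mask

xi = np.linspace(-20.0, 20.0, 4001)
lhs = phihat(2.0 * xi)
rhs = ahat(xi) * phihat(xi)
print("max |phihat(2 xi) - ahat(xi) phihat(xi)| =", np.max(np.abs(lhs - rhs)))  # ~1e-16
```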
in the following , we shall see that it is often more convenient to work with and in the frequency domain rather than and in the time / space domain .if there exist positive real numbers and such that the -periodic measurable function satisfies for almost every ] , 3 .the following identity holds in the sense of distributions : more precisely , for all .in the following , we make some remarks about theorem [ thm : main:1 ] .we assumed in theorem [ thm : main:1 ] that all the generating functions in are from the space .note that includes the fourier transforms of all compactly supported distributions and of all elements in all sobolev spaces .this assumption on membership in can be weakened and is only used to guarantee the absolute convergence of the infinite series in . see the remark after lemma [ lem : converg ] in section 2 for more detail on this natural assumption . if we assume additionally that for all and if holds for almost every , by lebesgue dominated convergence theorem , then holds in the sense of distributions . if all elements in are essentially nonnegative measurable functions , then it is not difficult to verify that the conditions in items ( i ) and ( ii ) of theorem [ thm : main:1 ] are equivalent to the following simple conditions : and as we shall see in section 2 , items ( i ) and ( ii ) of theorem [ thm : main:1 ] correspond to a natural multiresolution - like structure , which is closely linked to a fast wavelet transform .the condition in item ( iii ) of theorem [ thm : main:1 ] is a natural normalization condition which is related to . comparing with the characterization of a pair of homogeneous dual wavelet frames in the space or a homogeneous orthonormal wavelet basis in ( e.g. , see ) , theorem [ thm : main:1 ] has several interesting features .firstly , the characterization in items ( i)(iii ) of theorem [ thm : main:1 ] does not involve any infinite series or infinite sums ; this is in sharp contrast to the homogeneous setting in .secondly , as we shall see in section 2 , all the involved infinite sums in the proof of theorem [ thm : main:1 ] are in fact finite sums .this allows us to easily generalize theorem [ thm : main:1 ] to any real dilation factors and to nonstationary wavelets , see sections 2 and 4 for detail .thirdly , we do not require any stability ( bessel ) property of the wavelet systems , while the homogeneous setting in needs the stability property to guarantee the convergence of the involved infinite series .fourthly , we do not require in theorem [ thm : main:1 ] that the generating wavelet functions possess any order of vanishing moments or smoothness , while all the generating wavelet functions in the homogeneous setting require at least one vanishing moment .lastly , from a pair of nonhomogeneous dual wavelet frames in , we shall see in section 3 that one can always derive an associated pair of homogeneous dual wavelet frames in .in fact , most homogeneous wavelet systems in the literature are derived in such a way .we mention that weak convergence of wavelet expansions has been characterized in for homogeneous wavelet systems .similar weak convergence of wavelet expansions that are related to also appeared in the study of homogeneous dual wavelet frames in and their frame approximation properties , for example , see .we also point out that the approach in this paper can be extended to frequency - based homogeneous wavelet systems in the distribution space .the following result generalizes the oblique extension principle ( oep ) and 
naturally connects a wavelet filter bank with a pair of frequency - based nonhomogeneous dual wavelet frames in .[ thm : main:2 ] let be an integer such that .let and , be -periodic measurable functions on .suppose that there are measurable functions satisfying define as in with and assume that all the elements in belong to .then forms a pair of frequency - based nonhomogeneous dual wavelet frames in the distribution space for some integer ( or for all integers ) , if and only if , with and the following fundamental identities are satisfied : and for all , where and in particular , if all are -periodic measurable functions in and if there exist positive real numbers and such that latexmath:[\[\label{lip : ata } then the frequency - based standard refinable measurable functions with masks and the dilation factor , which are defined by are well - defined for almost every and in fact .then all elements in belong to .moreover , forms a pair of frequency - based nonhomogeneous dual wavelet frames in the distribution space for some integer ( or for all integers ) , if and only if , the identities and are satisfied for all , and in the sense of distributions .note that is automatically satisfied with if and are -periodic trigonometric polynomials with . a similar result to theorem [ thm : main:2 ] also holds when and are refinable measurable function vectors .the identities in and with and are called the oblique extension principle in , provided that all elements in belong to and satisfy some technical conditions to guarantee the bessel ( stability ) property of the homogeneous wavelet systems and in the space ( see ) .in contrast , our results here generally do not require any a priori condition on the generating wavelet functions and provide a natural explanation for the connection between the perfect reconstruction property induced by oep in and in the discrete filter bank setting to wavelets and framelets in the function setting .the structure of the paper is as follows . in order to prove theorems [ thm : main:1 ] and [ thm : main:2 ], we shall introduce some auxiliary results in section 2 .in particular , we shall provide sufficient conditions in section 2 for the absolute convergence of the infinite series in. then we shall prove theorems [ thm : main:1 ] and [ thm : main:2 ] in section 2 .to explain in more detail about our motivation and importance for studying frequency - based nonhomogeneous wavelet systems , we shall discuss in section 3 nonhomogeneous wavelet systems in various function spaces such as and sobolev spaces , as initiated in .we shall see in section 3 that under the stability property , a pair of frequency - based nonhomogeneous dual wavelet frames can be naturally extended from the distribution space to a pair of dual function spaces . in section 3, we shall also explore the connections between nonhomogeneous and homogeneous wavelet systems in the space . 
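the identities in the theorem are not legible in this extraction , but in the familiar stationary dyadic special case ( dilation 2 , a single wavelet mask , and theta identically 1 ) they reduce to the two filter - bank conditions checked below . the haar pair is used purely as an example of a filter bank satisfying them ; it is an assumption of this sketch , not a filter bank taken from the text .

```python
# Check the two dyadic perfect-reconstruction identities for the Haar filter bank:
#   ahat(xi)*conj(atil(xi))      + bhat(xi)*conj(btil(xi))      = 1
#   ahat(xi)*conj(atil(xi + pi)) + bhat(xi)*conj(btil(xi + pi)) = 0
# with atil = ahat and btil = bhat (the orthonormal case).
import numpy as np

ahat = lambda xi: (1.0 + np.exp(-1j * xi)) / 2.0   # low-pass (refinement) mask
bhat = lambda xi: (1.0 - np.exp(-1j * xi)) / 2.0   # high-pass (wavelet) mask

xi = np.linspace(-np.pi, np.pi, 2001)
identity1 = ahat(xi) * np.conj(ahat(xi)) + bhat(xi) * np.conj(bhat(xi))
identity2 = ahat(xi) * np.conj(ahat(xi + np.pi)) + bhat(xi) * np.conj(bhat(xi + np.pi))

print("max |identity1 - 1| =", np.max(np.abs(identity1 - 1.0)))   # ~1e-16
print("max |identity2|     =", np.max(np.abs(identity2)))         # ~1e-16
```

for a dual ( rather than orthonormal ) pair one would use distinct analysis and synthesis masks atil , btil and check the same two identities with ahat conjugate - paired against atil and bhat against btil .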
to illustrate the flexibility and generality of the approach in this paper , we further study nonstationary wavelets which are useful in many applications , since nonstationary wavelet filter banks can be implemented in almost the same way and efficiency as a traditional fast wavelet transform .however , except a few special cases as discussed in , only few theoretical results on nonstationary wavelets are available in the literature , probably partially due to the difficulty in guaranteeing the membership of the associated refinable functions in and in establishing the stability property of the nonstationary wavelet systems in . in section 4, we present a complete characterization of a pair of frequency - based nonstationary dual wavelet frames in the distribution space . though the statements and notation in section 4 on nonstationary wavelets seem a little bit more complicated comparing with the stationary case in sections 13 , it is worth our effort to provide a better picture to understand nonstationary wavelets , since there are few theoretical results on this topic in the literature . to understand and study wavelet systems in various function spaces , it is our opinion that there are two key fundamental ingredients to be considered .one ingredient is the notion investigated in this paper of a pair of frequency - based nonhomogeneous dual wavelet frames in the distribution space which enables us to completely separate its perfect reconstruction property from its stability property in function spaces .the other ingredient is the stability issue of nonhomogeneous wavelet systems in function spaces which we did nt discuss in this paper but shall be addressed elsewhere .in this section , we study pairs of frequency - based nonhomogeneous dual wavelet frames in the distribution space . to prove theorems [ thm : main:1 ] and [ thm : main:2 ] , we first present some sufficient conditions for the absolute convergence of the infinite series in . for , by denote the set of all -periodic measurable functions such that ( with the usual modification for ) . by the following result, we always have the absolute convergence of the infinite series in provided that all the frequency - based wavelet functions are from the space .[ lem : converg ] let be a nonzero real number and let .then for all , with the series on the left - hand side converging absolutely . note that the infinite sum on the right - hand side of is in fact finite .by we denote the linear space of all compactly supported measurable functions in .note that .more generally , we prove for .denote now we show that are well - defined functions in . in fact , since , has compact support and therefore , is essentially supported inside ] , then it is not difficult to check by that with is a -periodic lipschitz function with some lipschitz exponent . by bernstein theorem , has an absolutely convergent fourier series .now for any , it is easy to prove that indeed holds for all .other assumptions could be used to guarantee .but is a large space containing the fourier transforms of all compactly supported distributions and of all elements in all sobolev spaces . for simplicity of presentation , we shall stick to the space for our discussion of frequency - based wavelets and framelets .[ lem : to1 ] let be a sequence of nonzero real numbers such that .let and be elements in with and . then if and only if , by lemma [ lem : converg ] , we have by , and are compactly supported . 
since , there exists an integer such that for all , , and .that is , for , becomes if holds in the sense of distributions , then it follows directly from that holds .conversely , we can take such that takes value one on the support of , now it follows from and that hence , holds in the sense of distributions . in the next auxiliary result, we shall study a multiresolution - like structure .more precisely , we have the following result . [lem : twolevel ] let be a nonzero real number .let , , and , be elements in . then if and only if , \label{i : eq1}\\ & i_\fphi^{k}(\xi)+i^{k}_{\fpsi}(\xi)=0 , \qquad a.e.\ , \xi\in \r,\ ; \forall\ ; k\in \z \bs [ \gl^{-1 } \z ] , \label{i : eq2}\\ & i^{\gl k}_\feta(\xi)=0 , \qquad a.e.\ , \xi\in \r,\ ; \forall \ ; k\in [ \gl^{-1 } \z ] \bs \z , \label{i : eq3}\end{aligned}\ ] ] where and , , and by lemma [ lem : converg ] , all the infinite series in converge absolutely and is equivalent to } \ff(\xi ) \ol{\fg(\xi+2\pi k ) } i^{\gl k}_\feta(\gl \xi ) d\xi,\ ] ] which can be easily rewritten as } \ff(\xi ) \ol{\fg(\xi+2\pi k ) } \big(i^{k}_\fphi(\xi ) + i^{k}_\fpsi(\xi)-i^{\gl k}_\feta(\gl \xi ) \big ) d\xi\\ & + \int_\r \sum_{k\in \z \bs [ \gl^{-1 } \z ] } \ff(\xi ) \ol{\fg(\xi+2\pi k ) } \big(i^{k}_\fphi(\xi ) + i^{k}_\fpsi(\xi)\big ) d\xi = \int_\r \sum_{k\in [ \gl^{-1 } \z]\bs \z } \ff(\xi ) \ol{\fg(\xi+2\pi k ) } i^{\gl k}_\feta(\gl \xi ) d\xi . \end{split}\ ] ] sufficiency . if , , and are satisfied , then it is obvious that is true and therefore , holds . necessity. denote ] and be temporarily fixed .then it is easy to check that .consider all such that the support of is contained inside and the support of is contained inside .then it is not difficult to verify that from which we see that becomes for all such that and . from, we must have for almost every .thus , must be true . andcan be proved by the same argument .now we have the generalized version of theorem [ thm : main:1 ] with a general real dilation factor .[ thm : main:1:general ] let be a real number such that .let in be subsets of .then , where is defined in , forms a pair of frequency - based nonhomogeneous dual wavelet frames in the distribution space for some integer ( or for all integers ) , if and only if , and , \end{split}\ ] ] , \label{i : eq2:sp}\\ & \sum_{\ell=1}^\mphi \ol{\fphi^\ell(\xi ) } \tilde \fphi^\ell(\xi+2\pi \df^{-1 } k)=0 , \qquad a.e.\ , \xi\in \r,\ ; \forall \ ; k\in [ \df \z ] \bs \z .\label{i : eq3:sp}\end{aligned}\ ] ] by the following simple observation , we have now it is straightforward to see that for all , if and only if , for all , sufficiency . by lemma [ lem : twolevel ] with , we see that holds and therefore , holds for all and . for , we define now by , we can easily deduce that by lemma [ lem : to1 ] , it follows from that for all .hence , forms a pair of frequency - based nonhomogeneous dual wavelet frames in the distribution space .necessity . by, we can easily deduce that forms a pair of frequency - based nonhomogeneous dual wavelet frames in the distribution space for some integer if and only if it is true for all integers . considering the difference between two consecutive integers and , we see that must hold and therefore , holds . now by lemma [ lem : twolevel ] , , , and hold with and , or equivalently , , , and hold . since holds , we deduce that holds . 
by lemma [ lem : to1 ], it follows from our assumption that must hold .we point out that theorem [ thm : main:1:general ] and the approach in this paper can be extended to frequency - based homogeneous wavelet systems in the distribution space , which is the dual space of the test function space consisting of all compactly supported functions whose supports are contained inside .we shall address this issue elsewhere .now we are ready to prove theorems [ thm : main:1 ] and [ thm : main:2 ] . by theorem [ thm : main:1:general ] with being an integer , it suffices to show that and are equivalent to the three conditions , , and .note that \bs \z ] . now it is also easy to see that is equivalent to .this completes the proof .we use theorem [ thm : main:1 ] to prove theorem [ thm : main:2 ] as follows : we prove the first part of theorem [ thm : main:2 ] first . by and, we have and similarly by , for all integers .now is equivalent to for all .now it is not difficult to deduce that is equivalent to .note that any ] .it is a standard argument to show that both and in are well - defined measurable functions in and in fact , by the same argument as in ( * ? ? ?* page 93 ) or ( * ? ? ?* page 932 ) , also implies that for any , there exists such that .\ ] ] see the proof of theorem [ thm : mra : ndwf ] in section 4 for more detail on proving and .take in . by the definition of in, we conclude that for almost every ] denotes the -entry of the matrix .since , all {\ell',\ell} ] are well defined elements in .all the results in the previous sections have been mainly built on the multiresolution - like structure in for stationary nonhomogeneous wavelet systems .here stationary means that at scale level the dilation is and the generating wavelet functions are independent of the scale level .the result in lemma [ lem : twolevel ] characterizing the multiresolution - like structure in makes most proofs in the previous sections relatively simple .nonhomogeneous wavelet systems are closely related to nonstationary wavelets , which are useful in many applications since the nonstationary wavelet filter banks can be implemented in almost the same way and efficiency as a traditional fast wavelet transform .however , except a few special cases as discussed in and some references therein , only few theoretical results on nonstationary wavelets are available in the literature . in this section, we shall see that the notion of a pair of frequency - based nonhomogeneous dual wavelet frames in the distribution space is very flexible and similar results hold in the most general setting of fully nonstationary wavelets . since there are few theoretical results on nonstationary wavelets in the literature , it is worth our effort to provide a better picture to understand them in this section. let us first introduce the notion of a pair of frequency - based nonstationary dual wavelet frames in the distribution space .let and be a sequence of nonzero real numbers .let and be subsets of distributions in with and .we say that the pair forms _ a pair of frequency - based nonstationary dual wavelet frames in the distribution space _ if the following identity holds : where the infinite series in converge in the following sense : 1 . for every , all the series and converge absolutely for every integer , , and ; 2 . 
for every , the following limit exists and the stationary nonhomogeneous wavelet systems considered in previous sections correspond to the case that , , and for all ; that is , the generating wavelet functions remain stationary ( unchanged ) at all the scale levels .[ thm : ndwf ] let be an integer and be a sequence of nonzero real numbers such that .let in and in be subsets of for all integers .then the pair in forms a pair of frequency - based nonstationary dual wavelet frames in the distribution space ,if and only if , ( all the above infinite sums are in fact finite , since for all . ) and where ] and . therefore , sufficiency . for , define therefore , by , for , we have \ , d\xi.\ ] ] now by , for all , we deduce that \ , d\xi.\ ] ] now by and , we conclude that . necessity .the proof of the necessity part is essentially the same as that of lemma [ lem : twolevel ] .since , the set is discrete and closed . for any temporarily fixed and , it is important to notice that .now the same argument as in the proof of lemma [ lem : twolevel ] leads to .similarly , for any temporarily fixed , since , by , the same argument as in the proof of lemma [ lem : twolevel ] leads to .[ cor : ndwf ] let be an integer with and be an integer .let in and in be subsets of for all integers .then the pair in , with for all , forms a pair of frequency - based nonstationary dual wavelet frames in the distribution space , if and only if , for all and ] by . for any \bs \{0\} ] . now it is easy to check that is equivalent to . as another application of theorem [ thm : ndwf ] , using a similar argument as in the proof of corollary [ cor : twf ] , we have the following result on frequency - based nonstationary tight wavelet frames in .[ cor : ntwf ] let be an integer and be a sequence of nonzero real numbers such that .let in and in be subsets of distributions in for all integers . then the following statements are equivalent 1 . is a frequency - based nonstationary tight wavelet frame in , that is , for all and 2 .the pair forms a pair of frequency - based nonstationary dual wavelet frames in the distribution space ; 3 . and , hold with and for all .4 . there exist for all such that for all , and [ prop : ndwf ] let be an integer and be a sequence of nonzero real numbers such that . for integers , let in and be subsets of .then forms a pair of frequency - based nonstationary dual wavelet frames in the distribution space for every integer , if and only if , \cup [ \gl_{j+1}^{-1}\z],\ ; j \ge j_0\ ] ] and where , , are defined in and by the same argument as in theorem [ thm : main:1:general ] , we see that the pair in forms a pair of frequency - based nonstationary dual wavelet frames in for all integers , if and only if , and by lemma [ lem : twolevel ] , is equivalent to . by lemma [ lem : to1 ] ,is equivalent to . for with , the degree of defined to be .we finish this paper by the following result which connects a nonstationary wavelet filter bank obtained via a generalized nonstationary oblique extension principle with a pair of frequency - based nonstationary dual wavelet frames in the distribution space .[ thm : mra : ndwf ] let be a sequence of nonzero integers such that . 
define and for all .let and , , be -periodic trigonometric polynomials such that for all and define then all , are elements in satisfying let , and , with and be -periodic measurable functions in .define then in and in are subsets of for all .moreover , the pair in forms a pair of frequency - based nonstationary dual wavelet frames in for every integer , if and only if , for all , ,\label{mra : filter : eq2}\end{aligned}\ ] ] and where we first establish an inequality on the decay of , which plays a critical role in this proof to show . by and bernstein inequality '\|_{\tlp{\infty } } \le \deg(\fa^j ) \| [ \fa^j]\|_{\tlp{\infty}} ] denotes the derivative of . observe a simple inequality for and .it is easy to prove that for .consequently , , which leads to .now for any with , by and , we deduce that hence , by , we have on the other hand , by a similar idea as in ( * ? ? ?* page 93 ) and ( * ? ? ?* page 932 ) , we have where . by and , we deduce from the above identity that that is , for all with , we have the above inequality implies the uniform convergence of for on any bounded set . since , we conclude from that . since , it is evident that all are elements in .similarly , we can prove and all are elements in .note that , that is , .so , holds .also note that and for at most countably many .note that with being replaced by is equivalent to for all and , where and for all .now it is easy to directly verify that and are equivalent to .we now show that is equivalent to . by and, we have now by and , using lebesgue dominated convergence theorem , by the same argument as in theorem [ thm : main:2 ] , we conclude that holds if and only if holds . by proposition [ prop : ndwf ], the proof is completed .
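The results above are stated in the frequency domain, but the practical motivation cited at the beginning of this section — that nonstationary wavelet filter banks can be implemented with essentially the same structure and efficiency as a traditional fast wavelet transform — can be illustrated with a small time-domain sketch. The Python/NumPy snippet below is only a generic illustration of that point and is not the specific dual-frame construction of Theorem [thm:mra:ndwf]: it cascades two-channel analysis/synthesis steps in which the (orthogonal, two-tap) filter pair is allowed to change from one scale level to the next, and verifies perfect reconstruction numerically. The level-dependent angles are arbitrary choices.

```python
# A minimal "nonstationary" two-channel filter bank sketch: the 2-tap orthogonal
# filter pair changes with the scale level, yet the analysis/synthesis pipeline
# has exactly the structure and cost of a standard fast wavelet transform and
# reconstructs the input exactly.  Illustrative only; not the construction above.
import numpy as np

def analyze(x, theta):
    """One analysis level with the 2-tap orthogonal pair parameterized by theta."""
    c, s = np.cos(theta), np.sin(theta)
    even, odd = x[0::2], x[1::2]
    low = c * even + s * odd            # lowpass / "refinable" part
    high = -s * even + c * odd          # highpass / wavelet part
    return low, high

def synthesize(low, high, theta):
    """Inverse of analyze() for the same theta (transpose of the rotation)."""
    c, s = np.cos(theta), np.sin(theta)
    x = np.empty(2 * low.size)
    x[0::2] = c * low - s * high
    x[1::2] = s * low + c * high
    return x

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
thetas = [np.pi / 4, 0.6, 1.1]          # level-dependent filters: "nonstationary"

coeffs, low = [], x
for th in thetas:                        # analysis: filters vary with the level
    low, high = analyze(low, th)
    coeffs.append(high)

rec = low                                # synthesis in reverse order
for th, high in zip(reversed(thetas), reversed(coeffs)):
    rec = synthesize(rec, high, th)
print(np.allclose(rec, x))               # True: perfect reconstruction
```

The point of the sketch is purely structural: letting the filters depend on the scale level changes neither the cost nor the shape of the transform, which is what makes nonstationary systems attractive in applications.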
In this paper, we study nonhomogeneous wavelet systems, which have close relations to the fast wavelet transform and to homogeneous wavelet systems. We introduce and characterize a pair of frequency-based nonhomogeneous dual wavelet frames in the distribution space; the proposed notion enables us to completely separate the perfect reconstruction property of a wavelet system from its stability property in function spaces. The results in this paper lead to a natural explanation for the oblique extension principle, which has been widely used to construct dual wavelet frames from refinable functions, without any a priori condition on the generating wavelet functions and refinable functions. A nonhomogeneous wavelet system, which is not necessarily derived from refinable functions via a multiresolution analysis, not only has a natural multiresolution-like structure that is closely linked to the fast wavelet transform, but also plays a basic role in understanding many aspects of wavelet theory. To illustrate the flexibility and generality of the approach in this paper, we further extend our results to nonstationary wavelets with real dilation factors and to nonstationary wavelet filter banks having the perfect reconstruction property.
given a set of skill requirements ( called task ) , a set of experts who have expertise in one or more skill , along with a social or professional network of the experts , the team formation problem is to identify a competent and highly collaborative team .this problem in the context of a social network was first introduced by and has attracted recent interest in the data mining community .a closely related and well - studied problem in operations research is the assignment problem . here ,given a set of agents and a set of tasks , the goal is to find an agent - task assignment minimizing the cost of the assignment such that exactly one agent is assigned to a task and every task is assigned to some agent .this problem can be modeled as a maximum weight matching problem in a weighted bipartite graph .in contrast to the assignment problem , the team formation problem considers the underlying social network , which for example models the previous collaborations among the experts , while forming teams .the advantage of using such a social network is that the teams that have worked together previously are expected to have less communication overhead and work more effectively as a team .the criteria explored in the literature so far for measuring the effectiveness of teams are based on the shortest path distances , density , and the cost of the minimum spanning tree of the subgraph induced by the team . herethe density of a subgraph is defined as the ratio of the total weight of the edges within the subgraph over the size of the subgraph .teams that are well connected have high density values .methods based on minimizing diameter ( largest shortest path between any two vertices ) or cost of the spanning tree have the main advantage that the teams they yield are always connected ( provided the underlying social network is connected ). however , diameter or spanning tree based objectives are not robust to the changes ( addition / deletion of edges ) in the social network .as demonstrated in using various performance measures , the density based objective performs better in identifying well connected teams . on the other hand, maximizing density may give a team whose subgraph is disconnected .this happens especially when there are small groups of people who are highly connected with each other but are sparsely connected to the rest of the graph .existing methods make either strong assumptions on the problem that do not hold in practice or are not capable of incorporating more intuitive constraints such as bounding the total size of the team .the goal of this paper is to consider the team formation problem in a more realistic setting and present a novel formulation based on a generalization of the densest subgraph problem .our formulation allows modeling of many realistic requirements such as ( i ) inclusion of a designated team leader and/or a group of given experts , ( ii ) restriction on the size or more generally cost of the team ( iii ) enforcing _ locality _ of the team , e.g. , in a geographical sense or social sense , etc .in fact most of the future directions pointed out by are covered in our formulation .the first work in the team formation problem in the presence of a social network presents greedy algorithms for minimizing the diameter and the cost of the minimum spanning tree ( mst ) induced by the team .while the greedy algorithm for minimizing the diameter has an approximation guarantee of two , no guarantee is proven for the mst algorithm . 
however , impose the strong assumption that a skill requirement of a task can be fulfilled by a single person ; thus a more natural requirement such as `` at least experts of skill are needed for the task '' can not be handled by their method .this shortcoming has been addressed in , which presents a 2-approximation algorithm for a slightly more general problem that can accommodate the above requirement .however , both algorithms can not handle an upper bound constraint on the team size . on the other hand ,the solutions obtained by all these algorithms ( including the mst algorithm ) can be shown to be connected subgraphs if the underlying social graph is connected .two new formulations are proposed in based on the shortest path distances between the nodes of the graph .the first formulation assumes that experts from each skill have to communicate with every expert from the other skill and thus minimizes the sum of the pairwise shortest path distances between experts belonging to different skills .they prove that this problem is np - hard and provide a greedy algorithm with an approximation guarantee of two .the second formulation , solvable optimally in polynomial time , assumes that there is a designated team leader who has to communicate with every expert in the team and minimizes the sum of the distances only to the leader .the main shortcoming of this work is its restrictive assumption that _ exactly _ one expert is sufficient for each skill , which implies that the size of the found teams is always upper bounded by the number of skills in the given task , noting that an expert is allowed to have multiple skills .they exploit this assumption and ( are the first to ) produce top- teams that can perform the given task .however , although based on the shortest path distances , neither of the two formulations does guarantee that the solution obtained is connected .in contrast to the distance or diameter based cost functions , explore the usefulness of the density based objective in finding strongly connected teams . using various performance measures ,the superiority of the density based objective function over the diameter objective is demonstrated .the setting considered in is the most general one until now but the resulting problem is shown to be np hard . the greedy algorithms that they propose have approximation guarantees ( of factor 3 ) for two special cases .the teams found by their algorithms are often quite large and it is not straightforward to modify their algorithms to integrate an additional upper bound constraint on the team size .another disadvantage is that subgraphs that maximize the density under the given constraints need not necessarily be connected .recently considered an _team formation problem where tasks arrive in a sequential manner and teams have to be formed minimizing the ( maximum ) load on any expert across the tasks while bounding the coordination cost ( a free parameter ) within a team for any given task .approximation algorithms are provided for two variants of coordinate costs : diameter cost and steiner cost ( cost of the minimum steiner tree where the team members are the terminal nodes ) .while this work focusses more on the load balancing aspect , it also makes the strong assumption that a skill is covered by the team if there exists at least one expert having that skill .all of the above methods allow only binary skill level , i.e. 
, an expert has a skill level of either one or zero .we point out that many methods have been developed in the operations research community for the team formation problem , , but none of them explicitly considers the underlying social or professional connections among the experts .there is also literature discussing the social aspects of the team formation and their influence on the evolution of communities , e.g. , .now we formally define the _ team formation _ problem that we address in this paper .let be the set of experts and be the weighted , undirected graph reflecting the relationship or previous collaboration of the experts .then non - negative , symmetric weight connecting two experts and reflects the level of compatibility between them .the set of skills is given by .each expert is assumed to possess one or more skills .the non - negative matrix specifies the skill levels of all experts in each skill .note that we define the skill level on a continuous scale . if an expert does not have skill , then .moreover , we use the notation for the column of , i.e. the vector of skill levels corresponding to skill . a task given by the set of triples , where , specifying that at least and at most of skill is required to finish the given task .+ * generalized team formation problem . * given a task , the generalized team formation problem is defined as finding a team of experts maximizing the _ collaborative compatibility _ and satisfying the following constraints : * * inclusion of a specified group : * a predetermined group of experts should be in . ** skill requirement : * at least and at most of skill is required to finish the task . * * bound on the team size : * the size of the team should be smaller than or equal to , i.e. , . ** budget constraint : * total budget for finishing the task is bounded by , i.e. , , where is the cost incurred on expert . ** distance based constraint : * the distance ( measured according to some non - negative , symmetric function , ) between any pair of experts in should not be larger than , i.e. , .* discussion of our generalized constraints . * in contrast to existing methods , we also allow an upper bound on each skill and on the total team size . if the skill matrix is only allowed to be binary as in previous work , this translates into upper and lower bounds on the number of experts required for each skill . using vertex weights, we can in fact encode more generic constraints , e.g. , having a limit on the total budget of the team .it is not straightforward to extend existing methods to include any upper bound constraints .up to our knowledge we are the first to integrate upper bound constraints , in particular on the size of the team , into the team formation problem .we think that the latter constraint is essential for realistic team formation .our general setting also allows a group of experts around whom the team has to be formed .this constraint often applies as the team leader is usually fixed before forming the team .another important generalization is the inclusion of _ distance _constraints for any general distance function .such a constraint can be used to enforce locality of the team e.g. in a geographical sense ( the distance could be travel time ) or social sense ( distance in the network ) . 
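To make the constraint set above concrete, the following small Python sketch checks whether a candidate team satisfies the subset, per-skill, size, budget, and pairwise-distance requirements. The data layout (dictionaries of skill levels and costs, a distance callable) and all names are hypothetical illustrations rather than part of the formulation in this paper; the sketch only verifies feasibility and says nothing about the density objective introduced below.

```python
# Hedged feasibility check for the generalized team formation constraints.
# All names and the data layout are illustrative assumptions, not from the paper.
def is_feasible(team, required, skills, lower, upper, max_size,
                cost, budget, dist, max_dist):
    """team, required: sets of experts; skills[v][a]: level of expert v in skill a;
    lower/upper: per-skill bounds; cost[v]: cost of v; dist(u, v): distance."""
    if not required <= team or len(team) > max_size:      # subset and size bounds
        return False
    if sum(cost[v] for v in team) > budget:               # budget constraint
        return False
    for a in lower:                                        # skill requirements
        level = sum(skills[v].get(a, 0.0) for v in team)
        if not (lower[a] <= level <= upper[a]):
            return False
    # distance-based (locality / incompatibility) constraint
    return all(dist(u, v) <= max_dist for u in team for v in team if u != v)
```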
another potential applicationare mutual incompatibilities of team members e.g.on a personal level , which can be addressed by assigning a high distance to experts who are mutually incompatible and thus should not be put together in the same team .we emphasize that all constraints considered in the literature are special instances of the above constraint set . + * measure of collaborative compatiblity . * in this paper we use as a measure of collaborative compatibility a generalized form of the density of subgraphs , defined as where is the non - negative weight of the edge between and and is defined as , with being the positive weight of the vertex .we recover the original density formulation , via .we use the relation , , where is the degree of vertex and . + * discussion of density based objective . * as pointed out in , the density based objective possesses useful properties like strict monotonicity and robustness . in case of the density based objective , if an edge gets added ( because of a new collaboration ) or deleted ( because of newly found incompatibility ) the density of the subgraphs involving this edge necessarily increases resp.decreases , which is not true for the diameter based objective .in contrast to density based objective , the impact of small changes in graph structure is more severe in the case of diameter objective .the generalized density that we use here leads to further modeling freedom as it enables to give weights to the experts according to their expertise . by giving smaller weight to those with high expertise, one can obtain solutions that not only satisfy the given skill requirements but also give preference to the more competent team members ( i.e.the ones having smaller weights ). + * problem formulation . * using the notation introduced above , an instance of the team formation problem based on the generalized density can be formulated as note that the upper bound constraints on the team size and the budget can be rewritten as skill constraints and can be incorporated into the skill matrix accordingly .thus , without loss of generality , we omit the budget and size constraints from now on , for the sake of brevity .moreover , since is required to be part of the solution , we can assume that , otherwise the above problem is infeasible .the distance constraint also implies that any for which , for some , can not be a part of the solution .thus , we again assume wlog that there is no such ; otherwise such vertices can be eliminated without changing the solution of problem .+ our formulation is a generalized version of the classical densest subgraph problem ( dsp ) , which has many applications in graph analysis , e.g. 
, see .the simplest version of dsp is the problem of finding a densest subgraph ( without any constraints on the solution ) , which can be solved optimally in polynomial time .the densest--subgraph problem , which requires the solution to contain exactly vertices , is a notoriously hard problem in this class and has been shown not to admit a polynomial time approximation scheme .recently , it has been shown that the densest subgraph problem with an upper bound on the size is as hard as the densest--subgraph problem .however , the densest subgraph problem with a lower bound constraint has a 2-approximation algorithm .it is based on solving a sequence of unconstrained densest subgraph problems .they also show that there exists a linear programming relaxation for this problem achieving the same approximation guarantee .recently considered the following generalized version of the densest subgraph problem with lower bound constraints in the context of team formation problem : where is the _ binary _ skill matrix .they extend the greedy method of and show that it achieves a 3-approximation guarantee for some special cases of this problem . recently improved the approximation guarantee of the greedy algorithm of for problem ( [ eq : teamproblb ] ) to a factor 2 .the time complexity of this greedy algorithm is , where is the number of experts and is the minimum number of experts required . + * direct integration of subset constraint .* the subset constraint can be integrated into the objective by directly working on the subgraph induced by the vertex set .note that any that contains can be written as , for .we now reformulate the team formation problem on the subgraph .we introduce the notation , and we assume wlog that the first entries of are the ones in . the terms in problem ([ eq : teamprob ] ) can be rewritten as moreover , note that we can write : , where denotes the degree of vertex restricted to the subset in the original graph . using the abbreviations , , , , we rewrite the team formation problem as where for all , the bounds were updated as .note that here we already used the assumption : .the constraint , , has been introduced for technical reasons required for the formulation of the continuous problem in section [ sec : equiv_cont_prob ] .the equivalence of problem to follows by considering either ( if feasible ) or the set , where is an optimal solution of , depending on whichever has higher density . to the best of our knowledge there is no greedy algorithm with an approximation guarantee to solve problem ( [ eq : teamprobsim ] ) . 
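For orientation, the unconstrained densest subgraph problem mentioned above admits a well-known greedy peeling procedure (repeatedly delete a minimum-degree vertex and keep the densest intermediate subgraph), which is a 2-approximation for the unweighted, unconstrained objective. The sketch below is standard background rather than the method proposed in this paper, and it handles none of the skill, size, subset, or distance constraints of the team formation problem; vertex and edge weights are also ignored.

```python
# Hedged sketch of greedy peeling for the *unconstrained, unweighted* densest
# subgraph: remove a minimum-degree vertex at each step and remember the densest
# intermediate subgraph.  Background only; not the constrained formulation here.
import heapq

def greedy_peel(adj):
    """adj: dict mapping vertex -> set of neighbours (undirected, unweighted)."""
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    alive = set(adj)
    edges = sum(deg.values()) // 2
    heap = [(d, v) for v, d in deg.items()]
    heapq.heapify(heap)

    best_density, best_size, removal_order = 0.0, len(alive), []
    while alive:
        density = edges / len(alive)
        if density > best_density:
            best_density, best_size = density, len(alive)
        while True:                              # pop a live, up-to-date entry
            d, v = heapq.heappop(heap)
            if v in alive and d == deg[v]:
                break
        alive.remove(v)
        removal_order.append(v)
        edges -= deg[v]
        for u in adj[v]:
            if u in alive:
                deg[u] -= 1
                heapq.heappush(heap, (deg[u], u))

    kept = set(adj) - set(removal_order[: len(adj) - best_size])
    return kept, best_density

# Toy example: a 4-clique with a pendant path; peeling recovers the clique.
adj = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3, 5}, 5: {4, 6}, 6: {5}}
print(greedy_peel(adj))    # ({1, 2, 3, 4}, 1.5)
```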
instead of designing a greedy approximation algorithm for this discrete optimization problem, we derive an _equivalent _ continuous optimization problem in section [ sec : forte ] .that is , we reformulate the discrete problem in continuous space while preserving the optimality of the solutions of the discrete problem .the rationale behind this approach is that the continuous formulation is more flexible and allows us to choose from a larger set of methods for its solution than for the discrete one .although the resulting continuous problem is as hard as the original discrete problem , recent progress in continuous optimization allow us to find a locally optimal solution very efficiently .in this section we present our method , _ formation of realistic teams _ ( , for short ) to solve the team formation problem , which is rewritten as , using the continuous relaxation .we derive in three steps : 1 .derive an equivalent unconstrained discrete problem ( [ eq : teamprobuncstr ] ) of the team formation problem ( [ eq : teamprobsim ] ) via an _ exact penalty _ approach .2 . derive an equivalent continuous relaxation ( [ eq : teamprobcont ] ) of the unconstrained problem by using the concept of _ lovasz extensions_. 3 .compute the solution of the continuous problem ( [ eq : teamprobcont ] ) using the recent method ratiodca from _fractional programming_. a general technique in constrained optimization is to transform the constrained problem into an equivalent unconstrained problem by adding to the objective a penalty term , which is controlled by a parameter .the penalty term is zero if the constraints are satisfied at the given input and strictly positive otherwise .the choice of the regularization parameter influences the tradeoff between satisfying the constraints and having a low objective value .large values of tend to enforce the satisfaction of constraints . in the followingwe show that for the team formation problem ( [ eq : teamprobsim ] ) there exists a value of that guarantees the satisfaction of all constraints .+ let us define the penalty term for constraints of the team formation problem ( [ eq : teamprobsim ] ) as note that the above penalty function is zero only when satisfies the constraints ; otherwise it is strictly positive and increases with increasing infeasibility .the special treatment of the empty set is again a technicality required later for the lovasz extensions , see section [ sec : equiv_cont_prob ] .for the same reason , we also replace the constant terms and in ( [ eq : teamprobsim ] ) by and respectively , where and .+ the following theorem shows that there exists an unconstrained problem equivalent to the constrained optimization problem ( [ eq : teamprobsim ] ) .[ thm : teamprobuncstr ] the constrained problem ( [ eq : teamprobsim ] ) is equivalent to the unconstrained problem for , where is any feasible set of problem ( [ eq : teamprobsim ] ) such that and is the minimum value of infeasibility , i.e. , , if is infeasible .we define .note that maximizing ( [ eq : teamprobsim ] ) is the same as minimizing subject to the constraints of ( [ eq : teamprobsim ] ) . 
for any feasible subset ,the objective of ( [ eq : teamprobuncstr ] ) is equal to , since the penalty term is zero .thus , if we show that _ all _ minimizers of ( [ eq : teamprobuncstr ] ) satisfy the constraints then the equivalence follows .suppose , for the sake of contradiction , that , if ) is a minimizer of ( [ eq : teamprobuncstr ] ) and that is infeasible for problem ( [ eq : teamprobsim ] ) .since and , we have under the given condition on , which leads to a contradiction because the last term is the objective value of ( [ eq : teamprobuncstr ] ) at . we will now derive a tight continuous relaxation of problem ( [ eq : teamprobuncstr ] ) .this will lead us to a minimization problem over , which then can be handled more easily than the original discrete problem .the connection between the discrete and the continuous space is achieved via thresholding . given a vector , one can define the sets by thresholding at the value . in order to go from functions on sets to functions on continuous space , we make use of the concept of lovasz extensions .( lovasz extension)[def : lovasz ] let be a set function with , and let be ordered in ascending order .the lovasz extension of is defined by note that for all , i.e. is indeed an extension of from to ( ) . in the following , given a set function , we will denote its lovasz extension by .the explicit forms of the lovasz extensions used in the derivation will be dealt with in section [ sec : algo ] . in the following theoremwe show the equivalence for [ eq : teamprobsim ] .a more general result showing equivalence for fractional set programs can be found in .[ thm : teamprobcont ] the unconstrained discrete problem ( [ eq : teamprobuncstr ] ) is equivalent to the continuous problem for any .moreover , optimal thresholding of a minimizer , yields a set that is optimal for problem ( [ eq : teamprobuncstr ] ) .let .then we have where in the first step we used the fact that and are extensions of and , respectively .below we first show that the above inequality also holds in the other direction , which then establishes that the optimum values of both problems are the same .the proof of the reverse direction will also imply that a set minimizer of the problem ( [ eq : teamprobuncstr ] ) can be obtained from any minimizer of ( [ eq : teamprobcont ] ) via optimal thresholding .+ we first show that the optimal thresholding of any yields a set such that has an objective value at least as good as the one of .this holds because the third step follows from the fact that is non - negative ( ) and ordered in ascending order , i.e. 
, .since is non - negative , the final step implies that thus we have from inequality ( [ eq : optthres ] ) , it follows that optimal thresholding of yields a set that is a minimizer of problem ( [ eq : teamprobuncstr ] ) .the team formation problem ( [ eq : teamprobsim ] ) is equivalent to the problem ( [ eq : teamprobcont ] ) if is chosen according to the condition given in theorem [ thm : teamprobuncstr ] .this directly follows from theorems [ thm : teamprobuncstr ] and [ thm : teamprobcont ] .while the continuous problem is as hard as the original discrete problem , recent ideas from continuous optimization allow us to derive in the next section an algorithm for obtaining locally optimal solutions very efficiently .we now describe an algorithm for ( approximately ) solving the continuous optimization problem ( [ eq : teamprobcont ] ) .the idea is to make use of the fact that the fractional optimization problem ( [ eq : teamprobcont ] ) has a special structure : as we will show in this section , it can be written as a special ratio of difference of convex ( d.c . ) functions , i.e. it has the form where the functions and are positively one - homogeneous convex functions is said to be positively one - homogeneous if . ] and numerator and denominator are nonnegative .this reformulation then allows us to use a recent first order method called ratiodca . in order to find the explicit form of the convex functions , we first need to rewrite the penalty term as , where using this decomposition of , we can now write down the functions and as where , , denotes the lovasz extension of , and using the functions and defined above , the problem ( [ eq : teamprobcont ] ) can be rewritten in the form .the functions and are convex and positively one - homogeneous , and and are nonnegative .the denominator of is given as and the numerator is given as using prop.2.1 in and the decomposition of introduced earlier in this section , we can decompose .the lovasz extension of is given as and let denote the lovasz extension of ( an explicit form is not necessary , as shown later in this section ) .the equality between and then follows by simple rearranging of the terms .the nonnegativity of the functions and follows from the nonnegativity of denominator and numerator of and the definition of the lovasz extension .moreover , the lovasz extensions of any set function is positively one - homogeneous .finally , the convexity of and follows as they are a non - negative combination of the convex functions and for some .the function is well - known to be convex . to show the convexity of , we will show that the function is submodular is submodular if for all , .the convexity then follows from the fact that a set function is submodular if and only if its lovasz extension is convex . for the proof of the submodularity of the first two sums one uses the fact that the pointwise minimum of a constant and a increasing submodular function is again submodular .writing , the last sum can be written as . using ,we can write its lovasz extension as which is a sum of a linear term and a convex term . the reformulation of the problem in the form enables us to apply a modification of the recently proposed ratiodca , a method for the _ local _ minimization of objectives of the form on the whole . , + , given an initialization , the above algorithm solves a sequence of convex optimization problems ( line 3 ) .note that we do not need an explicit description of the terms and , but only elements of their sudifferential resp . 
.the explicit forms of the subgradients are given in the appendix .the convex problem ( line 3 ) then has the form where .note that is a _ non - smooth _ problem .however , there exists an equivalent smooth dual problem , which we give below .[ lem : inner_problem ] the problem is equivalent to where with , denotes the projection on the positive orthant and is the simplex .first we use the homogenity of the objective in the inner problem to eliminate the norm constraint .this yields the equivalent problem we derive the dual problem as follows : where .the optimization over has the solution plugging into the objective and using that , we obtain the result. the smooth dual problem can be solved very efficiently using recent scalable first order methods like fista , which has a guaranteed convergence rate of , where is the number of steps done in fista .the main part in the calculation of fista consists of a matrix - vector multiplication . as the social network is typically sparse , this operation costs , where is the number of non - zeros of .ratiodca , produces a strictly decreasing sequence , i.e. , , or terminates .this is a typical property of fast local methods in non - convex optimization .moreover , the convex problem need not be solved to full accuracy ; we can terminate the convex problem early , if the current produces already sufficent descent in .as the number of required steps in the ratiodca typically ranges between 5 - 20 , the full method scales to large networks .note that convergence to the global optimum of ( [ eq : fracset ] ) can not be guaranteed due to the non - convex nature of the problem . however , we have the following quality guarantee for the team formation problem .[ th : quality - guarantee ] let be a feasible set for the problem and is chosen as in theorem [ thm : teamprobuncstr ] .let denote the result of ratiodca after initializing with the vector , and let denote the set found by optimal thresholding of .either ratiodca terminates after one iteration , or produces which satisfies all the constraints of the team formation problem ( [ eq : teamprobsim ] ) and ratiodca generates a decreasing sequence such that until it terminates .we have , if the algorithm does not stop in one step . as shown in theorem ( [ thm : teamprobcont ] ) optimal thresholding of yields a set that achieves smaller objective on the corresponding set function .since the chosen value of guarantees the satisfaction of the constraints , has to be feasible .recall that our team formation problem based on the density objective is rewritten as the following gdsp after integrating the subset constraint : note that here we do not require the additional constraint , , that we added to . in this sectionwe show that there exists a linear programming ( lp ) relaxation for this problem .the lp relaxation can be solved optimally in polynomial time and provides an upper bound on the optimum value of gdsp .in practice such an upper bound is useful to check the quality of the solutions found by approximation algorithms . + the following lp is a relaxation of the generalized densest subgraph problem ( [ eq : teamprobsim2 ] ) . where , is the set of edges induced by .the following problem is equivalent to ( [ eq : teamprobsim2 ] ) , because ( i ) for every feasible set of , there exist corresponding feasible given by , , with the same objective value and ( ii ) an optimal solution of the following problem always satisfies . 
relaxing the integrality constraints and using the substitution , and , we obtain the relaxation : since this problem is invariant under scaling , we can fix the scale by setting the denominator to 1 , which yields the equivalent lp stated in the theorem .note that the solution of the lp ( [ eq : lp ] ) is , in general , not integral , i.e. , .one can use standard techniques of randomized rounding or optimal thresholding to derive an integral solution from .however , the resulting integral solution may not necessarily give a subset that satisfies the constraints of .in the special case when there are only lower bound constraints , i.e. , problem ( [ eq : teamproblb ] ) , one can obtain a feasible set for problem ( [ eq : teamproblb ] ) by thresholding ( see ) according to the objective of .this is possible in this special case because there is always a threshold which yields a non - empty subset ( in the worst case the full set ) satisfying all the lower bound constraints . in our experiments on problem , we derived a feasible set from the solution of lp in this fashion by choosing the threshold that yields a subset that satisfies the constraints and has the highest objective value .note that the lp relaxation ( [ eq : lp ] ) is vacuous with respect to upper bound constraints in the sense that given that does not satisfy the upper bound constraints of the lp ( [ eq : lp ] ) one can construct , feasible for the lp by rescaling without changing the objective of the lp .this implies that one can always transform the solution of the unconstrained problem into a feasible solution when there are upper bound constraints .however , in the presence of lower bound or subset constraints , such a rescaling does not yield a feasible solution and hence the lp relaxation is useful on the instances of with at least one lower bound or a subset constraint ( i.e. , ) .we now empirically show that consistently produces high quality compact teams .we also show that the quality guarantee given by theorem [ th : quality - guarantee ] is useful in practice as our method often improves a given sub - optimal solution .since we are not aware of any publicly available real world datasets for the team formation problem , we use , as in , a scientific collaboration network extracted from the dblp database .similar to , we restrict ourselves to four fields of computer science : databases ( ) , theory ( ) , data mining ( ) , artificial intelligence ( ) . conferences that we consider for each field are given as follows : db = \{sigmod , vldb , icde , icdt , pods } , t = \{soda , focs , stoc , stacs , icalp , esa } , dm = \{www , kdd , sdm , pkdd , icdm , wsdm } , ai = \{ijcai , nips , icml , colt , uai , cvpr}. for our team formation problem , the skill set is given by \ { , , , } .any author who has at least three publications in any of the above 23 conferences is considered to be an expert . in our dblp co -author graph , a vertex corresponds to an expert and an edge between two experts indicates prior collaboration between them .the weight of the edge is the number of shared publications .since the resulting co - author graph is disconnected , we take its largest connected component ( of size 9264 ) for our experiments . directly solvingthe non - convex problem ( [ eq : teamprobcont ] ) for the value of given in theorem [ thm : teamprobuncstr ] often yields poor results .hence in our implementation of we adopt the following strategy .we first solve the unconstrained version of problem ( [ eq : teamprobcont ] ) ( i.e. 
, ) and then iteratively solve ( [ eq : teamprobcont ] ) for increasing values of until all constraints are satisfied . in each iteration ,we increase only for those constraints which were infeasible in the previous iteration ; in this way , each penalty term is regulated by different value of . moreover , the solution obtained in the previous iteration of is used as the starting point for the current iteration . [fig : times ] -0.2 cm in this section we perform a quantitative evaluation of our method in the special case of the team formation problem with lower bound constraints and ( problem ) .we evaluate the performance of our method against the greedy method proposed in , refered to as .similar to the experiments of , an expert is defined to have a skill level of 1 in skill , if he / she has a publication in any of the conferences corresponding to the skill .as done in , we create random tasks for different values of skill size , . for each value of sample skills with replacement from the skill set = \ { , , , } .for example if , a sample might contain \ { , , } , which means that the random task requires at least two experts from the skill and one expert from the skill . in figure 1 , we show for each method the densities , sizes and runtimes for the different skill sizes , averaged over 10 random runs . in the first plot , we also show the optimal values of the lp relaxation in .note that this provides an upper bound on the optimal value of .we can obtain feasible solutions from the lp relaxation of via thresholding ( see section [ sec : lprel ] ) , which are shown in the plot as .furthermore , the plots contain the results obtained when the solutions of and are used as the initializations for ( in each of the iteration ) .the plots show that always produces teams of higher densities and smaller sizes compared to and .furthermore , produces better results than the greedy method in several cases in terms of densities and sizes of the obtained teams .the results of + and + further show that our method is able improve the sub - optimal solutions of and significantly and achieves almost similar results as that of which was started with the unconstrained solution of . under the worst - case assumption that the upper bound on computed using the lp is the optimal value , the solution of is optimal ( depending on ) .[ tb : qualteams ] [ cols= " < , < ,< , < " , ] [ table : teams ] -0.2 cm in this experiment , we assess the quality of the teams obtained for several tasks with different skill requirements .here we consider the team formation problem ( [ eq : teamprobsim ] ) in its more general setting .we use the generalized density objective of where each vertex is given a rank , which we define based on the number of publications of the corresponding expert . for each skill, we rank the experts according to the number of his / her publications in the conferences corresponding to the skill . 
in this wayeach expert gets four different rankings ; the total rank of an expert is then the minimum of these four ranks .the main advantage of such a ranking is that the experts that have higher skill are given preference , thus producing more competent teams .note that we choose a relative measure like rank as the vertex weights instead of an absolute quantity like number of publications , since the distribution of the number of publications varies between different fields .in practice such a ranking is always available and hence , in our opinion , should be incorporated .furthermore , in order to identify the main area of expertise of each expert , we consider his / her relative number of publications . each expert is defined to have a skill level of 1 in skill if he has more than 25% of his / her publications in the conferences corresponding to skill . as a distance function between authors ,we use the shortest path on the _ unweighted version _ of the dblp graph , i.e. two experts are at a distance of two , if the shortest path between the corresponding vertices in the unweighted dblp graph contains two edges . note that in general the distance function can come from other general sources beyond the input graph , but here we had to rely on the graph distance because of lack of other information . in order to assess the _ competence _ of the found teams , we use the list of the 10000 most cited authors of citeseer . note that in contrast to the skill - based ranking discussed above , this list is only used in the evaluation and _ not _ in the construction of the graph .we compute the average inverse rank as in as , where is the size of the team and is the rank of expert on the citeseer list of 10000 most cited authors. for authors not contained on the list we set .we also report the densities of the teams found in order to assess their _compatibility_. we create several tasks with various constraints and compare the teams produced by , and ( feasible solution derived from the lp relaxation ) .note that in our implementation we extended the algorithm of to incorporate general vertex weights , using dinkelbach s method from fractional programming .the results for these tasks are shown in table 1 .we report the upper bound given by the lp relaxation , density value , as well as number and sizes of the connected components .furthermore , we give the names and the citeseer ranks of the team members who have rank at most 1000 .note that could only be applied to some of the tasks and failed to find a feasible team in several cases . as a first taskwe show the unconstrained solution where we maximize density without any constraints .note that this problem is optimally solvable in polynomial time and all methods find the optimal solution .the second task asks for at least three experts with skill . 
hereagain all methods return the same team , which is indeed optimal since the lp bound agrees with the density of the obtained team .next we illustrate the usefulness of the additional modeling freedom of our formulation by giving an example task where obtaining meaningful , connected teams is not possible with the lower bound constraints alone .consider a task where we need at least four experts having skill ( task 3 ) .for this , all methods return the same disconnected team of size seven where only four members have the skill .the other three experts possess skills and and are densely connected among themselves .one can see from the lp bound that this team is again optimal .this example illustrates the major drawback of the density based objective which while preferring higher density subgraphs compromises on the connectivity of the solution .our further experiments revealed that the subgraph corresponding to the skill is less densely connected ( relative to the other skills ) and forming coherent teams in this case is difficult without specifying additional requirements . with the help of subset and distance based constraints supported by, we can now impose the team requirements more precisely and obtain meaningful teams . in task 4, we require that andrew y. ng is the team leader and that all experts of the team should be within a distance of two from each other in terms of the underlying co - author graph .the result of our method is a densely connected and highly ranked team of size four with a density of 3.89 .note that this is very close to the lp bound of 3.91 .the feasible solution obtained by is worse than our result both in terms of density and .the greedy method can not be applied to this task because of the distance constraint . in task 5we choose bernhard schoelkopf as the team leader while keeping the constraints from the previous task .out of the three methods , only can solve this problem .it produces a large disconnected team , many members of which are highly skilled experts from the skill and have strong connections among themselves . to filter these densely connected members of high expertise , we introduce a budget constraint in task 6 , where we define the cost of the team as the total number of publications of its members .again this task can be solved only by which produces a compact team of four well - known experts .a slightly better solution is obtained when is initialized with the infeasible solution of the lp relaxation as shown ( only in this task ) .this is an indication that on more difficult instances of , it pays off to run with more than one starting point to get the best results .the solution of the lp , possibly infeasible , is a good starting point apart from the unconstrained solution of .tasks 7 , 8 and 9 provide some additional teams found by for other tasks involving upper and lower bound constraints on different skills . as noted in section [ sec : lprel ] the lp bound is loose in the presence of upper bound constraints and this is also the reason why it was not possible to derive a feasible solution from the lp relaxation in these cases . 
in fact the lp bounds for these tasks remain the same even if the upper bound constraints are dropped from these tasks .by incorporating various realistic constraints we have made a step forward towards a realistic formulation of the team formation problem .our method finds qualitatively better teams that are more compact and have higher densities than those found by the greedy method .our linear programming relaxation not only allows us to check the solution quality but also provides a good starting point for our non - convex method .however , arguably , a potential downside of a density - based approach is that it does not guarantee connected components .a further extension of our approach could aim at incorporating `` connectedness '' or a relaxed version of it as an additional constraint .we gratefully acknowledge support from the excellence cluster mmci at saarland university funded by the german research foundation ( dfg ) and the project nolepro funded by the european research council ( erc ) .the subgradient of is given by , where is the indicator function of the largest entry of .for the subgradient of , using prop . 2.2 . in , we obtain for the subgradient of the terms of the form , defining , an element of the subgradient of the second term of is given as , where and where , if ; -1 if ; $ ] , if . in total, we obtain for the subgradient of ,
Given a task, a set of experts with multiple skills, and a social network reflecting the compatibility among the experts, _team formation_ is the problem of identifying a team that is both competent in performing the task and compatible in working together. Existing methods for this problem make too restrictive assumptions and thus cannot model practical scenarios. The goal of this paper is to consider the team formation problem in a realistic setting and present a novel formulation based on densest subgraphs. Our formulation allows modeling of many natural requirements such as (i) inclusion of a designated team leader and/or a group of given experts, (ii) restriction of the size or, more generally, the cost of the team, and (iii) enforcement of _locality_ of the team, e.g., in a geographical or social sense. The proposed formulation leads to a generalized version of the classical densest subgraph problem with cardinality constraints (DSP), which is an NP-hard problem with many applications in social network analysis. In this paper, we present a new method for (approximately) solving the generalized DSP (GDSP). Our method, FORTE, is based on solving an _equivalent_ continuous relaxation of GDSP. The solution found by our method has a quality guarantee and always satisfies the constraints of GDSP. Experiments show that the proposed formulation (GDSP) is useful in modeling a broader range of team formation problems and that our method produces more coherent and compact teams of high quality. We also show, with the help of an LP relaxation of GDSP, that our method gives close to optimal solutions to GDSP. [data mining] [graph algorithms]
let be a ( possibly infinite ) network rooted at node .assume that independent and identically distributed noisy observations of an hidden random variable are available at a subset of the vertices .explicitly , each has access to a private signal where are independent and identically distributed , conditional on .the ` state of the world ' is drawn from a prior probability distribution .the objective is to aggregate information about at the root node under communication constraints encoded by the network structure , while minimizing the error probability at .we ask the following question : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ how much does the error probability at the root node increase due to these communication constraints ? _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ in order to address this question , consider a sequence of information aggregation problems indexed by .information is revealed in a subset of the vertices .there are rounds in which information aggregation occurs . in each round , a subset of the nodes in make ` decisions ' that are broadcasted to their neighbors . in the initial round , nodes with distance ( with being the graph distance ) broadcast a decision to their neighbors , with a finite alphabet . in the next round ,nodes with distance broadcast a decision to their neighbors . andso on , until the neighbors of announce their decisions in round .finally , the root makes its decision .the decision of any node is a function of decisions of s neighbors in earlier rounds , and , if , on the private signal received by .clearly , the root can possibly access only the private information available at nodes with ( with the graph distance ) .we can therefore assume , without loss of generality , that .it is convenient to think of as the _ information horizon _ at time .consider first the case in which communication is unconstrained .this can be modeled by considering the graph with vertices and edges .in other words , this is a star network , with the root at the center . without loss of generality , we take , with as . a simple procedure for information aggregation would work as follows .each node computes the log - likelihood ratio ( llr ) corresponding to the observed signal , and quantizes it to a value .the root adds up the quantized llrs and decides on the basis of this sum .it follows from basic large deviation theory that , under mild regularity assumptions , the error probability decreases exponentially in the number of observations this result is extremely robust : * * it holds for any non - trivial alphabet ; * * using concentration - of - measure arguments it is easy to generalize it to families of weakly dependent observations ; * * it can be generalized to network structures with weak communications constrains .for instance , proved that the error probability decays exponentially in the number of observations for trees of bounded depth .the crucial observation here is that such networks have large degree diverging with the number of vertices .in particular , for a tree of depth , the maximum degree is at least . 
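The quantize-and-sum scheme just described is easy to probe numerically. The short Python/NumPy sketch below simulates a star network in which every leaf quantizes its log-likelihood ratio to one of two values and the root thresholds the sum; the binary-symmetric signal model, the flip probability, and the uniform prior are arbitrary illustrative choices rather than assumptions of the paper, and the empirical error indeed drops roughly exponentially in the number of observations.

```python
# Monte-Carlo sketch of the unconstrained (star-network) scheme: each leaf
# quantizes its private LLR to one of two values, the root adds and thresholds.
# Signal model and parameters are illustrative assumptions, not from the paper.
import numpy as np

rng = np.random.default_rng(1)
delta, prior1 = 0.3, 0.5                 # P(signal wrong | state), P(state = 1)
llr_one_obs = np.log((1 - delta) / delta)

def root_error(n_leaves, n_trials=20000):
    theta = rng.random(n_trials) < prior1                     # hidden state per trial
    flips = rng.random((n_trials, n_leaves)) < delta
    signals = np.where(flips, ~theta[:, None], theta[:, None])
    # quantized per-leaf LLR: +llr_one_obs if the signal says "1", else -llr_one_obs
    llr_sum = llr_one_obs * (2 * signals.astype(int) - 1).sum(axis=1)
    decision = llr_sum > 0                                     # MAP rule for a flat prior
    return np.mean(decision != theta)

for n in [5, 10, 20, 40, 80]:
    print(n, root_error(n))       # empirical error drops roughly like exp(-c * n)
```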
at the other extreme ,hellmann and cover considered the case of a line network . in our notations , we have , , and .in they proved that , as long as the llrs are bounded ( namely almost surely for some constant ) , and the decision rule is independent of the node , the error probability remains bounded away from as . if the decision rule is allowed to depend on the node , the error probability can vanish as provided . despite this , even if the probability of error decays to , it does so much more slowly than for highly connected networks .namely , tay , tsitsiklis and win proved that for some .in other words , the communication constraint is so severe that , after steps , the amount of information effectively used by the root is equivalent to a vanishingly small fraction of the one within the ` information horizon ' .these limit cases naturally lead to the general question : given a rooted network , a sequence of information horizons and a finite alphabet , can information be aggregated at the root in such a way that the error probability decays exponentially in ? the question is wide open , in particular for networks of with average degree bounded or increasing slowly ( e.g. logarithmically ) with the system size .networks with moderate degree arise in a number of practical situations . within decentralized detection applications ,moderate degree is a natural assumption for interference - limited wireless networks . in particular , systems in which a single root node communicates with a significant fraction of the sensorsare likely to scale poorly because of interference at the root .standard models for wireless ad hoc networks are indeed based on random geometric graphs whereby each node is connected to a logarithmic number of neighbors .a different domain of applications for models of decentralized decision making is social learning . in this case, each node corresponds to an agent , and the underlying graph is the social network across which information is exchanged .also in this case , it is reasonable to assume that each agent has a number of neighbors which is bounded , or diverges slowly as the total number of agents grows . in many graph - theoretic models of social networks , although a small number of nodes can have large degree , the average degree is bounded or grows logarithmically with the network size .given the slow progress with extreme network structures ( line networks and highly - connected networks ) , the study of general moderate degree networks appears extremely challenging . in this paperwe focus on regular trees .more precisely , we let be the ( infinite ) regular tree with branching factor , rooted at ( each node has descendants and , with the exception of the root , one parent ) . the information horizon is formed by all the nodes at distance from the root , hence . under a broad set of assumptions , we prove that the probability of error decays subexponentially in the size of the information set , cf .( [ eq : subexp ] ) , where depends on the size of the alphabet . more precisely , we establish subexponential convergence in the following cases : 1 . for binary messages and any choice of the decision rule .in fact , we obtain a precise characterization of the smallest possible error probability in this case .2 . for general message alphabet the decision rule does not depend on the node , and satisfies a mild ` irreducibility ' condition ( see section [ subsec : subexp_decay_general ] for a definition ) . 
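to make the regular-tree setting concrete, the sketch below propagates the per-node error probability exactly, level by level, for binary messages when every internal node applies the simple majority rule; the branching factor, leaf noise level and depth are assumptions of the sketch. since, for small error, the error probability roughly squares at each level while the number of leaves is only multiplied by the branching factor, the resulting decay has the subexponential form of eq . ( [ eq : subexp ] ) rather than being exponential in the number of observations.

```python
# A sketch of exact error propagation for binary messages on a regular k-ary
# tree when every internal node applies the majority rule (ties broken at
# random).  Branching factor K, leaf error DELTA and the depth range are
# illustrative assumptions; by symmetry a single per-level error probability
# suffices.
from math import comb

K = 3          # branching factor (assumed)
DELTA = 0.3    # probability that a leaf's message is wrong (assumed)

def majority_error(p, k=K):
    """Error probability of a node whose k children are each wrong w.p. p."""
    err = 0.0
    for wrong in range(k + 1):
        prob = comb(k, wrong) * p**wrong * (1 - p)**(k - wrong)
        if 2 * wrong > k:
            err += prob
        elif 2 * wrong == k:          # tie: wrong with probability 1/2
            err += 0.5 * prob
    return err

if __name__ == "__main__":
    p = DELTA
    for level in range(1, 11):
        p = majority_error(p)
        print(f"level {level:2d}  n = {K**level:8d} leaves  P(err) ~ {p:.3e}")
```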
in the latter case, one expects that exponential convergence is recovered as the message set gets large .indeed we prove that the optimal exponent in eq .( [ eq : subexp ] ) obeys the upper bound follows from our general proof for irreducible decision rules , while the lower bound is obtained by constructing an explicit decision rule that achieves it .our investigation leaves several interesting open problems .first , it would be interesting to compute the optimal exponent for given degree of the tree and size of the alphabet .even the behavior of the exponent for large alphabet sizes is unknown at the moment ( cf .second , the question of characterizing the performance limits of general , node - dependent decision rules remains open for .third , it would be interesting to understand the case where non - leaf nodes also get private signals , e.g. , .finally , this paper focuses on tree of bounded degree .it would be important to explore generalization to other graph structures , namely trees with slowly diverging degrees ( which could be natural models for the local structure of preferential attachment graphs ) , and loopy graphs .our current results can be extended to trees of diverging degree only in the case of binary signals . in this casewe obtain that the probability of error is subexponential as soon as the degree is sub - polynomial , i.e. for all .the rest of the paper is organized as follows : section [ sec : model ] defines formally the model for information aggregation .section [ sec : binary ] presents our results for binary messages .section [ sec : nodeoblivious ] treats the case of decision rules that do not depend on the node , with general .as mentioned in the introduction , we assume the network to be an ( infinite ) rooted -ary tree , i.e. a tree whereby each node has descendants and one parent ( with the exception of the root , that has no parent ) .independent noisy observations ( ` private signals ' ) of the state of the world are provided to the nodes at all the nodes at -th generation .these will be also referred to as the ` leaves ' .define .formally , the state of the world is drawn according to the prior and for each an independent observation is drawn with probability distribution ( if ) or ( if ) . for notational simplicitywe assume that is finite , and that , for all . also , we exclude degenerate cases by taking .we refer to the refer to the two events and as the hypotheses and .in round 0 , each leaf sends a message to its parent at level 1 . in round 1, the each node at level 1 sends a message to its parent at level 2 . similarly up to round .finally , the root node makes a decision based on the messages it receives .the objective is to minimize .we call a set of decision rules _ optimal _ if it minimizes . we will denote by the set of children of node .we denote the probability of events under by , and the probability of events under by .finally , we denote by the decision rule at node in the tree . if is not a leaf node and , then .the root makes a binary decision .if is a leaf node , it maps its private signal to a message , . in general , s can be randomized .in this section , we consider the case , i.e. 
, the case of binary messages .consider the case , and for ; where .define the majority decision rule at non - leaf node as follows : takes the value of the majority of ( ties are broken uniformly at random ) .it is not hard to see that if we implement majority updates at all non - leaf nodes , we achieve note that this is an upper bound on error probability under majority updates .our main result shows that , in fact , this is essentially the best that can be achieved .[ thm : binary_lower_bound ] fix the private signal distribution , i.e. , fix and .there exists such that for all and , for any combination of decision rules at the nodes , we have in particular , the error probability decays subexponentially in the number of private signals , even with the optimal protocol .we prove the theorem for the case , and for ; where .the proof easily generalizes to arbitrary and .also , without loss of generality we can assume that , for every node , ( otherwise simply exchange the symbols and modify the decision rules accordingly ) .denote by the ( negative ) logarithm of the ` type i error ' in , i.e. . denote by the ( negative ) logarithm of the ` type ii error ' in , i.e. . the following is the key lemma in our proof of theorem [ thm : binary_lower_bound ] .[ lemma : binary_key_lemma ] given , there exists such that for any we have the following : there exists an optimal set of decision rules such that for any node at level , applying lemma [ lemma : binary_key_lemma ] to the root , we see that .the result follows immediately .lemma [ lemma : binary_key_lemma ] is proved using the fact that there is an optimal set of decision rules that correspond to deterministic likelihood ratio tests ( lrts ) at the non - leaf nodes .choose a node .fix the decision functions of all descendants of .define . + a ) the decision function is a _ monotone deterministic likelihood ratio test _ if : + ( i ) it is deterministic . + ( ii ) there is a threshold such that \b ) the decision function is a_ deterministic likelihood ratio test _ if either or is a monotone deterministic likelihood ratio test .here is the boolean complement of .the next lemma is an easy consequence of a beautiful result of tsitsiklis . though we state it here only for binary message alphabet , it easily generalizes to arbitrary finite .[ lemma : lrtoptimal ] there is a set of monotone deterministic likelihood ratio tests at the nodes that achieve the minimum possible .consider a set of decision rules that minimize .fix the rule at every node except node to the optimal one .now , the distributions and are fixed .moreover , is a linear function of , where denotes the distribution of under hypothesis .the set of achievable s is clearly convex , since randomized is allowed . from (* proposition 3.1 ) , we also know that is compact. thus , there exists an extreme point of that minimizes .now ( * ? ? ?* proposition 3.2 ) tells us that any extreme point of can be achieved by a deterministic lrt .thus , we can change to a deterministic lrt without increasing .if is not monotone ( we know that in this case ) , then we do and .clearly , is unaffected by this transformation , and is now a monotone rule .we do this at each of the nodes sequentially , starting at level , then covering level and so on until the root .thus , we change ( if required ) each decision rule to a monotone deterministic lrt without increasing .the result follows . clearly , if is a monotone lrt , eq. 
holds .in fact , we argue that there is a set of deterministic monotone lrts with strict inequality in eq . , i.e. , such that holds for all , that are optimal .eq . can only be written when and .consider a leaf node . without loss of generality we can take for each leaf node ( since any other rule can be ` simulated ' by the concerned level 1 node ) .so we have and , eq. holds and is a deterministic lrt .we can ensure these properties inductively at all levels of the tree by moving from the leaves towards the root .consider any node .if , then ( else ) and the parent of is ignoring the constant message received from .we can do at least as well by using any non - trivial monotone deterministic lrt at .similarly , we can eliminate . if and , then eq . must hold for any monotone deterministic lrt , using the inductive hypothesis .let and be binary vectors of the same length .we say if for all .we now prove lemma [ lemma : binary_key_lemma ] . from lemma [ lemma : lrtoptimal ] and eq ., we can restrict attention to monotone deterministic lrts satisfying eq . .we proceed via induction on level . for any leaf node ,we know that . choosing , eq. clearly holds for all nodes at level .. holds for all nodes at level .let be a node at level .let its children be .without loss of generality , assume * claim : * we can also assume proof of claim : suppose , instead , ( so is doing better than on both types of error ) . we can use the protocol on the subtree of also on the subtree of .call the message of under this modified protocol . since , and ( both types of errorhave only become less frequent ) , there exists a randomized function , such that for .thus , node can use to achieve the original values of and , where is decision rule being used at before . clearly , the error probabilities at , and hence at the root , stay unchanged with this .thus , we can safely assume .similarly , we can assume for .clearly , our transformations retained the property that nodes at levels and below use deterministic lrts satisfying eq . .similar to our argument for eq .above , we can make appropriate changes in the decision rules at levels above so that they also use deterministic lrts satisfying eq ., without increasing error probability .this proves the claim .recall that is the decision rule at node .assume the first bit in the input corresponds to , the second corresponds to , and so on . using lemma [ lemma : lrtoptimal ] , we can assume that implements a deterministic likelihood ratio test .define the -bit binary vectors , , , . from lemma [ lemma : lrtoptimal ] and, it follows that for some .* claim : * without loss of generality , we can assume that and .proof of claim : suppose .it follows from lemma [ lemma : lrtoptimal ] and eq . that for every possible . if then we have .suppose .then is a constant and is ignored by the parent of .we can not do worse by using an arbitrary non - trivial decision rule at instead .( the parent can always continue to ignore . )the case can be similarly eliminated .this proves the claim .thus , we can assume without loss of generality .now contribute to type i error and contribute to type ii error .it follows that where we have used the ordering on the error exponents ( eqs . and ) .eqs . and lead immediately to now , for any , we have . plugging and , we obtain from eq . 
by our induction hypothesis .thus , as required .induction completes the proof .in this section we allow a general finite message alphabet that need not be binary .however , we restrict attention to the case of _ node - oblivious _ rules : the decision rules at all nodes in the tree , except the leafs and the root , must be the same .we denote this ` internal node ' decision rule by .also , the decision rules used at each of the leaf nodes should be same .we denote the leaf decision rule by .the decision rule at the root is denoted by .we call such a node - oblivious decision rule vector . define . in section [ subsec : node - oblivious_efficient_scheme ], we present a scheme that achieves when the error probability in the private signals is sufficiently small . next , under appropriate assumptions , we show that the decay of error probability must be sub - exponential in the number of private signals . for convenience ,we label the messages as the labels have been chosen so as to be suggestive ( in a quantitative sense , see below ) of the inferred log - likelihood ratio .further , we allow the messages to be treated as real numbers ( corresponding to their respective labels ) that can be operated on . in particular , the quantity is well defined for a non - leaf node .the node - oblivious decision rule we employ at a non - leaf node is \left \lfloor \frac{s_i / k - ( m-1)/2}{1 - 1/m}\right \rfloor + \frac{m-1}{2}\ , , & \textup{if } s_i > 0 \end{array } \right .\label{eq : good_decision_rule}\end{aligned}\ ] ] note that the rule is symmetric with respect to a inversion of sign , except that is mapped to the message when is even .the rule used at the leafs is simply and .the decision rule at the root is if we associate with negative quantities , and with positive quantities , then again , the rule at the leafs is symmetric , and the rule at the root is essentially symmetric ( except for the case ) .assume .define and .we show that , in fact , for suitable choice of the following holds : if , then for any node at any level , \geq \nonumber\\ ( l / m ) \gamma^\tau + c \label{eq : letter_prob_decay}\end{aligned}\ ] ] we proceed by induction on .consider at level .we have = 0 ] . choosing , we can ensure that eq. holds at level .note that for , we have .now suppose eq. holds at level .consider node at level . from eq ., for we need \label{eq : si_lb}\end{aligned}\ ] ] for every such that eq . holds , we have .thus , obviously , there are at most such .thus , \\ \leq \ ; & m^k \exp\left(- kc - ( 1/m)l \gamma^{\tau+1 } \right ) \\= \ ; & \exp\left(- c - ( 1/m)l \gamma^{\tau+1 } \right ) \end{aligned}\ ] ] thus , eq . holds at level .induction completes the proof . for and , there exists , and a node - oblivious decision rule vector , such that the following is true : for any , we have & \leq \exp \left \ { - \frac{m-1}{2 m } \big \{k \left ( 1 - 1/m\right ) \big \}^t \right \ } \nonumber\\ & = \exp \left \{- \frac{m-1}{2m}\ , n^\rho \right \}\end{aligned}\ ] ] with .[ thm : good_decision_rule_exists ] assume .for every such that , we have . from lemma [ lemma : good_decision_decay](i ) , where and .obviously , there are at most such .it follows that define , i.e. , is the number of private signals received , one at each leaf .the scheme presented in the previous section allows us to achieve error probability that decays like , where for . in this sectionwe show that under appropriate assumptions , error probability that decays exponentially in , i.e. 
, , is not achievable with node - oblivious rules .in this section we call the letters of the message alphabet . for simplicity , we consider only deterministic node - oblivious rules , though our results and proofs extend easily to randomized rules .we define here a directed graph with vertex set and edge set that we define below .we emphasize that is distinct from the tree on which information aggregation is occurring .there is a directed edge from node to node in if there exists such that appears at least once in and .informally , if can be ` caused ' by a message vector received from children that includes .we call the _ dependence graph_. [ ass : strongly_connected ] the dependence graph is strongly connected . in other words , for any and such that , there is a directed path from to in . there exists , , and such that , for all the following holds : for node at level , we have and .[ ass : one_letter_dominant ] it is not hard to verify that for , and ( where is same as in lemma [ lemma : good_decision_decay ] and theorem [ thm : good_decision_rule_exists ] ) , the scheme presented in the previous section satisfies all four of our assumptions .in other words , the assumptions are all satisfied in the regime where our scheme has provably good performance .consider a directed graph that is strongly connected .for , let be the length of the shortest path from to .then the _ diameter _ of is defined as -15pt [ thm : subexp_decay ] fix and .consider any node - oblivious decision rule vector such that assumptions [ ass : strongly_connected ] , [ ass : all_letters_+ve_prob ] and [ ass : one_letter_dominant ] are satisfied .let be the diameter of the dependence graph .then , there exists such that we have \geq \exp \left \{-c n^{\brho } \right \}\ , , \end{aligned}\ ] ] where .[ coro : subexp_decay ] fix and .consider any node - oblivious decision rule vector such that assumptions [ ass : strongly_connected ] , [ ass : all_letters_+ve_prob ] and [ ass : one_letter_dominant ] are satisfied .then , there exists such that we have \geq \exp \left \{-c n^{\rho } \right \}\ , , \end{aligned}\ ] ] where .[ rem : mus_vec_maps_to_itself ] we have .it follows that we must have ( else the probability of error is bounded below by for any ) .similarly , we must have .in particular , .it follows from assumption [ ass : all_letters_+ve_prob ] that for any , there is some such that .we prove the lemma by induction on the level .let by assumption , holds .suppose holds .consider node at level .consider any . by inductive hypothesis, we have .it follows that .thus , holds .lemma [ lemma : subexp_decay ] can be thought of as a quantitative version of lemma [ lemma : all_letters_+ve_prob_after_tau0 ] , showing that the probability of the least frequent message decays subexponentially .suppose assumptions [ ass : strongly_connected ] , [ ass : all_letters_+ve_prob ] and [ ass : one_letter_dominant ] are satisfied .fix . consider a node at level .define .let ( cf .assumptions [ ass : all_letters_+ve_prob ] , [ ass : one_letter_dominant ] ) .let .there exists such that for any and , we have , [ lemma : subexp_decay ] we proceed via induction on .first consider .consider a node at level for .consider the descendants of node at level .for any , we know from lemma [ lemma : all_letters_+ve_prob_after_tau0 ] that there must be _ some _ assignment of messages to the descendants , such that .it follows that thus , choosing , we can ensure that eq .holds for and all .now suppose eq. 
holds for some .consider a node at level .let be the set of descendants of node at level .note that .consider any . by assumption [ ass : strongly_connected ] , there is a directed path in of length at most going from to . by remark [ rem : mus_vec_maps_to_itself ] , we know that .it follows that there is a directed path in of length _ exactly _ going from to .thus , there must be an assignment of messages to nodes in , including at least one occurrence of , such that .using assumption [ ass : one_letter_dominant ] , we deduce that rewriting as and combining with eq ., we obtain induction completes the proof . for the scheme presented in section [ subsec : node - oblivious_efficient_scheme ] , we have , where . for any , theorem [ thm : subexp_decay ]provides a lower bound on error probability with for some .this closely matches the dependence of the upper bound on error probability we proved in theorem [ thm : good_decision_rule_exists ] .we already mentioned that the efficient node - oblivious rule presented in section [ subsec : node - oblivious_efficient_scheme ] satisfies all of assumptions [ ass : strongly_connected ] , [ ass : all_letters_+ve_prob ] and [ ass : one_letter_dominant ] .moreover , it is natural to expect that similar schemes based on propagation of quantized likelihood ratio estimates should also satisfy our assumptions . in this section, we further discuss our assumptions taking the cases of binary and ternary messages as examples .binary messages are not the focus of section [ subsec : subexp_decay_general ] .however , we present here a short discussion of assumptions 1 , 2 and 3 in the context of binary messages for illustrative purposes .proof of claim : call the messages .consider a node - oblivious decision rule vector such that error probability decays to with .then can not be a constant function ( e.g. , identically ) , since this leads to .suppose assumption [ ass : strongly_connected ] is violated .without loss of generality , suppose .then for all .it follows that for node at level , we have for both and .in particular , is bounded away from .this is a contradiction .suppose assumption [ ass : all_letters_+ve_prob ] is violated .then , wlog , all nodes at level transmit the message almost surely , under either hypothesis .thus , all useful information is lost and .this is a contradiction .finally , we show that assumption [ ass : one_letter_dominant ] must hold as well .define for node at level .wlog , suppose occurs infinitely often. then we have , else for infinitely many .define for node at level . if occurs infinitely often, then it follows that and hence occur for infinitely many .so we can have only finitely many times .also , must hold .it follows that occurs only finitely many times .thus , assumption [ ass : one_letter_dominant ] holds with .we first show that if assumption [ ass : all_letters_+ve_prob ] is violated , then . if assumption [ ass : all_letters_+ve_prob ] does not hold , then only at most two letters are used at each level .it follows that we can have a ( possibly node - dependent ) scheme with binary messages that is equivalent to the original scheme at levels and higher .our lower bound on then follows from theorem [ thm : binary_lower_bound ] .thus , even in the best case , performance is significantly worse than the scheme presented in section [ subsec : node - oblivious_efficient_scheme ] .so a good scheme for ternary messages must satisfy assumption [ ass : all_letters_+ve_prob ]. 
now consider assumption [ ass : strongly_connected ] . let . suppose assumption [ ass : strongly_connected ] is violated . then, without loss of generality, there is no path from letter to one of the other letters . it follows that under either hypothesis we have for node at level . thus, the letter occurs with exponentially small probability, irrespective of . this should then essentially reduce to the case of binary messages, and we expect performance to be constrained as above . finally, consider assumption [ ass : one_letter_dominant ] . we cannot have for all , since that would lead to for all . similarly, we can exclude the possibility for all . without loss of generality, suppose and . now consider the problem of designing a good aggregation protocol . by the above, we need and , for node at level , to each converge to 0 with increasing . further, it appears natural to use the message with the interpretation of `not sure' in such a situation . we would then like the probability of this intermediate symbol to decay with , or at least to be bounded in the limit, i.e., for each possible . if this holds, we immediately have assumption [ ass : one_letter_dominant ] (with and ) . we argued above that our irreducibility assumptions are quite reasonable in various circumstances . in fact, we expect the assumptions to be a proof artifact, and conjecture that a subexponential convergence bound holds for general node-oblivious rules . a possible approach to eliminating our assumptions would be to prune the message alphabet, discarding letters that never appear, or that appear with probability bounded by (because they require descendants from a strict subset of ) .
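to make the pruning remark and the irreducibility assumptions more concrete, the sketch below builds the dependence graph of a given node-oblivious rule and checks assumption [ ass : strongly_connected ] mechanically. the concrete rule used here (average the children's labels and round to the nearest letter) is an illustrative assumption only; it is not claimed to be the rule of section [ subsec : node - oblivious_efficient_scheme ].

```python
# A sketch of the dependence graph of a node-oblivious decision rule: there is
# an edge i -> j whenever some received message vector containing the letter i
# is mapped to j.  The concrete rule below (average the children's labels and
# round to the nearest letter) is an illustrative assumption, not the rule
# proposed in the paper.
from itertools import product

LETTERS = (-1, 0, 1)     # ternary alphabet, labels suggestive of LLR sign
K = 3                    # number of children per internal node (assumed)

def rule(messages):
    """Node-oblivious internal rule: round the average label to a letter."""
    avg = sum(messages) / len(messages)
    return min(LETTERS, key=lambda m: abs(m - avg))

def dependence_graph(rule, letters=LETTERS, k=K):
    edges = {i: set() for i in letters}
    for vec in product(letters, repeat=k):
        out = rule(vec)
        for i in set(vec):
            edges[i].add(out)
    return edges

def strongly_connected(edges):
    """Check assumption 1 by running a reachability search from every letter."""
    def reachable(src):
        seen, stack = set(), [src]
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack.extend(edges[v])
        return seen
    return all(reachable(v) == set(edges) for v in edges)

if __name__ == "__main__":
    g = dependence_graph(rule)
    print(g, "strongly connected:", strongly_connected(g))
```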
we consider the decentralized binary hypothesis testing problem on trees of bounded degree and increasing depth . for a regular tree of depth and branching factor , we assume that the leaves have access to independent and identically distributed noisy observations of the `state of the world' . starting with the leaves, each node makes a decision in a finite alphabet, which it sends to its parent in the tree . finally, the root decides between the two possible states of the world, based on the information it receives . we prove that the error probability vanishes only subexponentially in the number of available observations, under quite general hypotheses . more precisely, in the case of binary messages, the decay is subexponential for any decision rule . for a general (finite) message alphabet, the decay is subexponential for `node-oblivious' decision rules that satisfy a mild irreducibility condition . in the latter case, we propose a family of decision rules with close-to-optimal asymptotic behavior .
geometric integrators are numerical methods that preserve geometric structures and properties of the flow of a differential equation . structure-preserving integrators have attracted considerable interest due to their excellent numerical behavior, especially for long-time integration of equations possessing geometric properties (see , , ) . an important class of structure-preserving integrators are _variational integrators_ (see ) . this type of numerical scheme is based on discrete variational principles and provides a natural framework for the discretization of lagrangian systems, including forced, dissipative, or constrained ones . variational integrators were introduced in the context of finite-dimensional mechanical systems, but were later generalized to lagrangian field theories (see ) and applied in many computations, for example in elasticity ( ), electrodynamics ( ), or fluid dynamics ( ) . theoretical aspects of variational integration are well understood in the case when the lagrangian describing the considered system is regular, that is, when the corresponding legendre transform is (at least locally) invertible . however, the corresponding theory for degenerate lagrangian systems is less developed . the analysis of degenerate systems is more cumbersome, because the euler-lagrange equations may cease to be second order, or may not even make sense at all . in the latter case one needs to determine whether there exists a submanifold of the configuration bundle on which consistent equations of motion can be derived . this can be accomplished by applying the dirac theory of constraints or the pre-symplectic constraint algorithm (see , ) . a particularly simple case of degeneracy occurs when the lagrangian is linear in velocities . in that case, the dynamics of the system is defined on the configuration manifold itself, rather than its tangent bundle, provided that some regularity conditions are satisfied . such systems arise in many physical applications, including interacting point vortices in the plane (see , , ), and partial differential equations such as the nonlinear schrödinger ( ), kdv ( , ) or camassa-holm equations ( , ) . in section [ sec : numerical experiments ] we show how certain poisson systems can be recast as lagrangian systems whose lagrangians are linear in velocities . our approach therefore offers a new perspective on geometric integration of poisson systems, which often arise as semi-discretizations of integrable nonlinear partial differential equations, e.g., the toda or volterra lattice equations, and play an important role in the modeling of many physical phenomena (see , , ) . this paper is organized as follows . in section [ sec : geometric setup ] we introduce a proper geometric setup and discuss the properties of systems that are linear in velocities . in section [ sec : veselov discretization and discrete mechanics ] we analyze the general properties of variational integrators and point out how the relevant theory differs from the non-degenerate case . in section [ sec : variational partitioned runge - kutta methods ] we introduce variational partitioned runge-kutta methods and discuss their relation to numerical integration of differential-algebraic systems .
in section [ sec : numerical experiments ] we present the results of our numerical experiments for kepler s problem , a system of two interacting vortices , and the lotka - volterra model .we summarize our work and discuss possible extensions in section [ sec : summary ] .let be the configuration manifold and its tangent bundle . throughout this workwe will assume that the dimension of the configuration manifold is even .we will further assume is a vector space and by a slight abuse of notation we will denote by both an element of and the vector of its coordinates in a local chart on .it will be clear from the context which definition is invoked .consider the lagrangian given by where is a smooth one - form , is the hamiltonian , and .let denote canonical coordinates on , where . in these coordinates we can consider where summation over repeated greek indices is implied .the lagrangian is degenerate , since the associated legendre transform is not invertible .the local representation of the legendre transform is that is , where denote canonical coordinates on .the dynamics is defined by the action functional = \int_{a}^{b } l\big(q(t),\dot q(t)\big)\,dt\ ] ] and hamilton s principle , which seeks the curves such that the functional ] satisfying the initial conditions , and , where . then , and , uniformly on ] satisfying the initial condition and let be the unique smooth solution of on the interval ] as .with these assumption one can easily see that where is the exact discrete lagrangian for . since is regular , is properly defined on the whole space ( or at least in a neighborhood of ) and the associated exact discrete legendre transforms satisfy the properties ( see ) where is the solution to the regularized euler - lagrange equations satisfying the boundary conditions and , and we denoted , . in the spirit of, we can assume the following definitions of the exact discrete legendre transforms where and by uniform convergence of .note that , where and are projections ( both are diffeomorphisms ) .this is a close analogy to ( see section [ sec : geometric setup ] ) .we also note the property where is the solution of satisfying the initial condition .this further indicates that our definition of the exact discrete legendre transforms is sensible .note that .it is convenient to redefine , that is , so that both transforms are diffeomorphisms between and .the discrete euler - lagrange equations for can be obtained as the limit of the discrete euler - lagrange equations for , that is , one can substitute in and take the limit on both sides to obtain this equation implicitly defines the exact discrete lagrangian map , which , given our definitions , necessarily takes the form .using the discrete legendre transforms we can define the corresponding exact discrete hamiltonian map as .the simple calculation shows that the discrete hamiltonian map associated with the exact discrete lagrangian is equal to the hamiltonian flow for , i.e. , the evolution of the discrete systems described by coincides with the evolution of the continuous system described by at times , let us illustrate these ideas with a very simple example for which analytic solutions are known .let and let denote local coordinates on .the tangent bundle is , and the induced local coordinates are .consider the lagrangian the corresponding euler - lagrange equations are simply so the flow is the identity , i.e. 
, .let denote canonical coordinates on the cotangent bundle .the legendre transform is let be a timestep .note .the exact discrete lagrangian is therefore let us now consider the -regularized lagrangian the corresponding euler - lagrange equations take the form one can easily verify analytically that & + \frac{1}{2}\bigg [ ( y_f - y_i)+(x_f - x_i ) \cot \frac{t}{2 \epsilon}\bigg ] \sin \frac{t}{\epsilon } \nonumber \\ & - \frac{1}{2}\bigg [ ( x_f - x_i)-(y_f - y_i ) \cot \frac{t}{2 \epsilon}\bigg ] \cos \frac{t}{\epsilon } , \nonumber \\y^\epsilon(t ) = \frac{1}{2}\bigg [ ( y_i+y_f)+(x_f - x_i ) \cot \frac{t}{2 \epsilon}\bigg ] & -\frac{1}{2}\bigg [ ( x_f - x_i)-(y_f - y_i ) \cot \frac{t}{2 \epsilon}\bigg ] \sin \frac{t}{\epsilon } \nonumber \\ & - \frac{1}{2}\bigg [ ( y_f - y_i)+(x_f - x_i ) \cot \frac{t}{2 \epsilon}\bigg ] \cos \frac{t}{\epsilon},\end{aligned}\ ] ] is the solution to satisfying the boundary conditions and .note that if or , then as this solution is rapidly oscillatory and not convergent .however , if ( cf .assumption [ ass : boundary value problem assumption ] ) then we have and this solution converges uniformly ( in this simple example it is in fact equal ) to the solution of with the same initial condition .we can also find an analytic expression for the exact discrete lagrangian associated with as restricting the domain to we get , and comparing to we verify that indeed holds .the discrete legendre transforms associated with take the form restricting the domain to and taking the limit as in , we can define the exact discrete legendre transforms associated with comparing with , we see that the property is satisfied , which replicates the analogous property for regular lagrangians . for a given continuous system described by the lagrangian ,a variational integrator is constructed by choosing a discrete lagrangian which approximates the exact discrete lagrangian .we can define the order of accuracy of the discrete lagrangian in a way similar to that for discrete lagrangians resulting from regular continuous lagrangians ( see ) .a discrete lagrangian is of order if there exists an open subset with compact closure and constants and such that for all solutions of the euler - lagrange equations with initial conditions and for all .we will always assume that the discrete lagrangian is non - degenerate , so that the discrete euler - lagrange equations can be solved for .this defines the discrete lagrangian map and the associated discrete hamiltonian map , as in section [ sec : discrete mechanics ] .of particular interest is the rate of convergence of to .one usually considers a _ local error _( error made after one step ) and a _ global error _ ( error made after many steps ). we will assume the following definitions , which are appropriate for differential - algebraic systems ( see , , , ) .a discrete hamiltonian map is of order if there exists an open set and constants and such that for all and .[ thm : definition of the order of convergence ] a discrete hamiltonian map is convergent of order if there exists an open set and constants , and such that where , for all , , and . if the lagrangian is regular , then one can show that a discrete lagrangian is of order if and only if the corresponding hamiltonian map is of order ( see ) .also , the associated hamiltonian equations are a set of ordinary differential equations , and under some smoothness assumptions one can show that if is of order , then it is also convergent of order ( see ) . 
however ,in the case of the lagrangian it is not true in general both the order of the discrete lagrangian and the local order of the discrete hamiltonian map may be different than the actual global order of convergence ( see , ) , as will be demonstrated in section [ sec : variational partitioned runge - kutta methods ] .[ [ example - midpoint - rule . ] ] example : midpoint rule .+ + + + + + + + + + + + + + + + + + + + + + + in a simple example we will demonstrate that the variational order of accuracy of a discretization method is unaffected by a degeneracy of a lagrangian . in order to calculate the order of a discrete lagrangian , we can expand in a taylor series in and compare it to the analogous expansion for . if the two expansions agree up to terms , then is of order . expanding in a taylor series about and substituting it in , we get the expression where we denoted , , etc . , and the lagrangian l and its derivatives are computed at . for the lagrangian the values of , , determined by differentiating sufficiently many times and substituting the initial condition .note that in case of regular lagrangians the value of is determined by the boundary conditions , , and the higher - order derivatives by differentiating the corresponding euler - lagrange equations , but apart from that the expression remains qualitatively unaffected .the _ midpoint rule _ is an integrator obtained by defining the discrete lagrangian calculating the expansion in yields comparing this to shows that the discrete lagrangian defined by the midpoint rule is second order regardless of the degeneracy of .however , as mentioned before , if is degenerate we can not conclude about the global order of convergence of the corresponding discrete hamiltonian map .the midpoint rule can be formulated as a runge - kutta method , namely the 1-stage gauss method .we discuss gauss and other runge - kutta methods and their convergence properties in more detail in section [ sec : variational partitioned runge - kutta methods ] .note that low - order variational integrators for lagrangians based on the midpoint rule have been studied in and in the context of the dynamics of point vortices .to construct higher - order variational integrators one may consider a class of partitioned runge - kutta ( prk ) methods .variational partitioned runge - kutta ( vprk ) methods for regular lagrangians are described in and . in this sectionwe show how vprk methods can be applied to systems described by lagrangians such as . 
as in the case of regular lagrangians, we will construct an -stage variational partitioned runge - kutta integrator for the lagrangian by considering the discrete lagrangian where the internal stages , , , satisfy the relation and are chosen so that the right - hand side of is extremized under the constraint a variational integrator is then obtained by forming the corresponding discrete euler - lagrange equations .the -stage variational partitioned runge - kutta method based on the discrete lagrangian with the coefficients and is equivalent to the following partitioned runge - kutta method applied to the hamiltonian dae : [ eq : prk for dae ] ^t \dot q_i - dh(q_i ) , } \qquad \qquad \qquad i=1,\ldots , s,\\ \label{eq : prk for dae 2 } \dot p^i & = [ d\alpha(q_i)]^t \dot q_i - dh(q_i ) , \qquad \qquad \qquad \ !i=1,\ldots , s , \\\label{eq : prk for dae 3 } q_i & = q + h \sum_{j=1}^s a_{ij } \dot q_j , \phantom{-dh(q_i ) , } \qquad \qquad \qquad i=1,\ldots , s , \\\label{eq : prk for dae 4 } p^i & = p + h \sum_{j=1}^s \bar a_{ij } \dot p_j , \phantom{-dh(q_i ) , } \qquad \qquad \qquad \ , i=1,\ldots , s,\\ \label{eq : prk for dae 5 } \bar q & = q + h \sum_{j=1}^s b_j \dot q_j,\\ \label{eq : prk for dae 6 } \bar p & = p + h \sum_{j=1}^s b_j \dot p_j,\end{aligned}\ ] ] where the coefficients satisfy the condition and denote the current values of position and momentum , denote the respective values at the next time step , , , and , , , are the internal stages , with , and similarly for the others .see theorem vi.6.4 in or theorem 2.6.1 in .the proof is essentially identical .the only qualitative difference is the fact that in our case the lagrangian is degenerate , so the corresponding hamiltonian system is in fact the index-1 differential - algebraic system rather than a typical system of ordinary differential equations .+ [ [ existence - and - uniqueness - of - the - numerical - solution . ] ] existence and uniqueness of the numerical solution .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + given and , one can use equations to compute the new position and momentum .first , one needs to solve - for the internal stages , , , and .this is a system of equations for variables , but one has to make sure these equations are independent , so that a unique solution exists .one may be tempted to calculate the jacobian of this system for , and then use the implicit function theorem . however , even if we start with consistent initial values , the numerical solution for will only approximately satisfy the algebraic constraint ; so and can not be assumed to be the solution of - for , and consequently , the implicit function theorem will not yield a useful result .let us therefore regard and as -dependent , as they result from the previous iterations of the method with the timestep .if the method is convergent , it is reasonable to expect that is small and converges to zero as is refined .the following approach was inspired by theorem 4.1 in .[ thm : existence of the numerical solution for prk ] let and be smooth in an -independent neighborhood of and let the matrix be invertible with the inverse bounded in , i.e. 
, there exists such that where , , is the identity matrix , and denotes the block diagonal matrix suppose also that satisfy then there exists such that the nonlinear system - has a solution for .the solution is locally unique and satisfies substitute and in and to obtain for , where for notational convenience we left the s as arguments of , and , but we keep in mind they are defined by , so that is a nonlinear system for and .let us consider the homotopy for .it is easy to see that for the system has the solution and , and for it is equivalent to .let us treat and as functions of , and differentiate with respect to this parameter .the resulting ode system can be written as [ eq : ode system for dot qi and dot pi ] where for compactness we introduced the following notations : , similarly for ; is the -dimensional vector of ones ; , and similarly , denotes the block diagonal matrix with , where denotes the hessian matrix of the respective function , and summation over is implied .the system is further simplified if we substitute in . this way we obtain an ode system for the variables of the form \frac{d \dot q}{d\tau } = \nonumber \\ & ( \mathcal{\bar{a } } \mathbbm{1}_s)\otimes dh(q ) - \frac{1}{h}\mathbbm{1}_s\otimes\big(p-\alpha(q)\big).\end{aligned}\ ] ] since is smooth , we have {ij } = a_{ij } d\alpha(q_i ) = a_{ij } d\alpha(q_j ) + o(\delta ) = \big [ ( \mathcal{a}\otimes i_n ) \{d\alpha\ } \big]_{ij } + o(\delta),\ ] ] where for assumed small , but independent of . moreover , since and are smooth , the term , as a function of , is bounded in a neighborhood of 0 .therefore , we can write as \frac{d \dot q}{d\tau } = ( \mathcal{\bar{a } } \mathbbm{1}_s)\otimes dh(q ) - \frac{1}{h}\mathbbm{1}_s\otimes\big(p-\alpha(q)\big).\end{aligned}\ ] ] by , for sufficiently small and , the matrix has a bounded inverse , provided that remain in .therefore , the ode with the initial condition has a unique solution on a non - empty interval , which can be extended until any of the corresponding leaves .let us argue that for a sufficiently small we have . given and, the ode implies that therefore , we have and further for .this implies that all remain in for if is sufficiently small .consequently , the ode has a solution on the interval ] has a bounded inverse , therefore implies , that is , for some constant .note that for we have , and therefore , which completes the proof of the local uniqueness of a numerical solution to - . .[ [ remarks . ] ] remarks .+ + + + + + + + the condition may be tedious to verify , especially if one uses a runge - kutta method with many stages . however , this condition is significantly simplified in the following special cases : 1 . for a non - partitioned runge - kutta methodwe have , and the condition is satisfied if is invertible , and the mass matrix , as defined in section [ sec : equations of motion ] , is invertible in and its inverse is bounded .if is antisymmetric , then the condition is satisfied if is invertible , and the matrix is invertible in and its inverse is bounded .an interesting special case is obtained if we have , in some local chart on , for some constant matrix . without loss of generalityassume that is invertible and antisymmetric .the lagrangian then takes the form the euler - lagrange equations become and the hamiltonian dae system is let us consider a special case of the method with , i.e. , a non - partitioned runge - kutta method . 
applying it to we get [ eq : prk for dae for linear alpha ] is antisymmetric and invertible , then by theorem [ thm : existence of the numerical solution for prk ] the scheme yields a unique numerical solution to if the runge - kutta matrix is invertible .[ thm : equivalence with a rk for the ode ] suppose is invertible and . then the method is equivalent to the same runge - kutta method applied to .substitute and in , and use the fact to obtain since is invertible , this implies substituting this in yields together with and , this gives a runge - kutta method for . moreover , substituting and in , and using , one has that is , satisfy the algebraic constraint .+ [ thm : invariance of the primary constraint ] the numerical flow on defined by leaves the primary constraint invariant , i.e. , if , then .if the coefficients of the method satisfy the condition , then is a variational integrator and the associated discrete hamiltonian map is symplectic on , as explained in section [ sec : discrete mechanics ] . given corollary [ thm : invariance of the primary constraint ] , we further have : [ thm : symplecticity on the primary constraint ] if the coefficients and in satisfy the condition , then the discrete hamiltonian map associated with is symplectic on the primary constraint , that is , .[ [ convergence . ] ] convergence .+ + + + + + + + + + + + various runge - kutta methods and their classical orders of convergence , that is , orders of convergence when applied to ( non - stiff ) ordinary differential equations , are discussed in many textbooks on numerical analysis , for instance and . when applied to differential - algebraic equations , the order of convergence of a runge - kutta method may be reduced ( see , , ) . however , in the case of theorem [ thm : equivalence with a rk for the ode ] implies that the classical order of convergence of non - partitioned runge - kutta methods is retained .[ thm : retention of the classical order of convergence ] a runge - kutta method with the coefficients and applied to the dae system retains its classical order of convergence .let be the classical order of the considered runge - kutta method , an initial condition , the exact solution to such that , and the numerical solution obtained by applying the method iteratively times with .theorem [ thm : equivalence with a rk for the ode ] states that the method is equivalent to applying the same runge - kutta method to the ode system .hence , we obtain convergence of order in the variable , that is , for a fixed time and an integer such that , we have the estimate for some constant ( cf .definition [ thm : definition of the order of convergence ] ) . by corollary [ thm : invariance of the primary constraint ] we know that , so we have the estimate which completes the proof , since . + of particular interest to us are runge - kutta methods that satisfy the condition , for instance symplectic diagonally - implicit runge - kutta methods ( dirk ) or gauss collocation methods ( see ) .the -stage gauss method is of classical order , therefore we have : [ thm : order of convergence of gauss methods ] the -stage gauss collocation method applied to the dae system is convergent of order .as mentioned in section [ sec : variational error analysis ] , the midpoint rule is a 1-stage gauss method , therefore it retains its classical second order of convergence .[ [ backward - error - analysis . 
] ] backward error analysis .+ + + + + + + + + + + + + + + + + + + + + + + + the system can be rewritten as the poisson system with the structure matrix ( see , ) .the flow for this equation is a _ poisson map _ , that is , it satisfies the property ^{t } = \lambda^{-1},\ ] ] which is in fact equivalent to the symplecticity property or written in local coordinates on or , respectively .let represent the numerical flow defined by some numerical algorithm applied to .we say this flow is a _ poisson integrator _ if ^{t } = \lambda^{-1}.\ ] ] the left - hand side of can be regarded as a quadratic invariant of . by theorem[ thm : equivalence with a rk for the ode ] the method is equivalent to applying the same runge - kutta method to . if its coefficients also satisfy the condition , then it can be shown that the method preserves quadratic invariants ( see theorem iv.2.2 in ) .therefore , we have : [ thm : r - k methods as poisson integrators ] if is invertible , the coefficients and satisfy the condition , and , then the method is a poisson integrator for. the true power of symplectic integrators for hamiltonian equations is revealed through their backward error analysis : a symplectic integrator for a hamiltonian system with the hamiltonian defines the _ exact _ flow for a nearby hamiltonian system , whose hamiltonian can be expressed as the asymptotic series owing to this fact , under some additional assumptions , symplectic numerical schemes nearly conserve the original hamiltonian over exponentially long time intervals ( see for details ) .a similar result holds for poisson integrators for poisson systems : a poisson integrator defines the exact flow for a nearby poisson system , whose structure matrix is the same and whose hamiltonian has the asymptotic expansion ( see theorem ix.3.6 in ) .therefore , we expect the non - partitioned runge - kutta schemes satisfying the condition to demonstrate good preservation of the original hamiltonian .see section [ sec : numerical experiments ] for numerical examples .partitioned runge - kutta methods do not seem to have special properties when applied to systems with linear , therefore we describe them in the general case in section [ sec : nonlinear alpha ] .when the coordinates are nonlinear functions of , then the runge - kutta methods discussed in section [ sec : linear alpha ] lose some of their properties : a theorem similar to theorem [ thm : equivalence with a rk for the ode ] can not be proved , most of the runge - kutta methods ( whether non - partitioned or partitioned ) do not preserve the algebraic constraint , i.e. , the numerical solution does not stay on the primary constraint , and therefore their order of convergence is reduced , unless they are _stiffly accurate_. 
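before treating the nonlinear case in detail, here is a minimal sketch, under assumed sign conventions and constants, of the 1-stage gauss method (implicit midpoint) applied to the poisson form of the two-vortex equations, whose structure matrix is constant and antisymmetric as in the linear-alpha setting above. in line with corollary [ thm : r - k methods as poisson integrators ] and the backward error analysis remarks, the energy error is expected to stay bounded over long runs; the circulations, time step and fixed-point tolerance used below are assumptions of the sketch.

```python
# A sketch of the 1-stage Gauss method (implicit midpoint) applied to the
# Poisson form of the equations for two planar point vortices, whose structure
# matrix is constant and antisymmetric (the "linear alpha" case above).  The
# circulations, time step and iteration counts are assumptions; the run only
# illustrates the bounded energy error expected of a Poisson integrator.
import numpy as np

GAMMA = np.array([2.0, 1.0])              # circulations (assumed)

def hamiltonian(z):
    x1, x2 = z[:2], z[2:]
    return -GAMMA[0] * GAMMA[1] / (4 * np.pi) * np.log(np.sum((x1 - x2) ** 2))

def grad_h(z, eps=1e-7):
    g = np.zeros(4)
    for i in range(4):                     # simple central differences
        e = np.zeros(4); e[i] = eps
        g[i] = (hamiltonian(z + e) - hamiltonian(z - e)) / (2 * eps)
    return g

# Constant antisymmetric structure: Gamma_i * J acting on each vortex position.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
OMEGA = np.block([[GAMMA[0] * J, np.zeros((2, 2))],
                  [np.zeros((2, 2)), GAMMA[1] * J]])
OMEGA_INV = np.linalg.inv(OMEGA)

def vector_field(z):
    return OMEGA_INV @ grad_h(z)

def implicit_midpoint_step(z, h, iters=50):
    z_new = z + h * vector_field(z)        # explicit Euler predictor
    for _ in range(iters):                 # fixed-point iteration
        z_new = z + h * vector_field(0.5 * (z + z_new))
    return z_new

if __name__ == "__main__":
    z = np.array([0.0, 0.0, 1.0, 0.0])     # two vortices one unit apart
    h0 = hamiltonian(z)
    for step in range(2000):
        z = implicit_midpoint_step(z, 0.05)
    print("energy drift:", hamiltonian(z) - h0)
```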
let us again consider non - partitioned methods with .convergence results for some classical runge - kutta schemes of interest can be obtained by transforming into a semi - explicit index-2 dae system .let us briefly review this approach .more details can be found in and .the system can be written as the quasi - linear dae where and ^t & -i_n \\ 0 & 0 \\ \end{matrix } \right ) , \qquad\qquad\qquad f(y ) = \left ( \begin{matrix } dh(q ) \\p-\alpha(q ) \\ \end{matrix } \right),\ ] ] where denotes the identity matrix .let us introduce a slack variable and rewrite as the index-2 dae system [ eq : index 2 dae form ] this system is of index 2 , because it has dependent variables , but only differential equations , and some components of the algebraic equations have to be differentiated twice with respect to time in order to derive the missing differential equations for .note that is a singular matrix of constant rank , therefore it can be decomposed ( using gauss elimination or the singular value decomposition ) as for some non - singular matrices and . since is assumed to be smooth , one can choose and so that they are also smooth ( at least in a neighborhood of ) .premultiplying both sides of by turns the dae into [ eq : index 2 dae block form ] where we introduced the block structure , , and since is invertible , we can assume without loss of generality that the block is invertible , too ( one can always permute the columns of otherwise ) .let us compute from and substitute it in .the resulting system , [ eq : index 2 dae final block form ] has the form of a semi - explicit index-2 dae provided that has a bounded inverse .it is an elementary exercise to show that the partitioned runge - kutta method is invariant under the presented transformation , that is , it defines a numerically equivalent partitioned runge - kutta method for .runge - kutta methods for semi - explicit index-2 daes have been studied and some convergence results are available .convergence estimates for the component of can be readily applied to the solution of . as in section[ sec : linear alpha ] , of particular interest to us are variational runge - kutta methods , i.e. , methods satisfying the condition , for example gauss collocation methods ( see , ) .however , in the case when is a nonlinear function , the solution generated by the gauss methods does not stay on the primary constraint and this affects their rate of convergence , as will be shown below . for comparison, we will also consider the radau iia methods ( see ) , which , although not variational / symplectic , are _ stiffly accurate _ , that is , their coefficients satisfy for , so the numerical value of the solution at the new time step is equal to the value of the last internal stage , and therefore the numerical solution stays on the submanifold .we cite the following convergence rates for the component of after and : * -stage gauss method convergent of order , * -stage radau iia method convergent of order . with the exception of the midpoint rule ( ), we see that the order of convergence of the gauss methods is reduced . on the other hand ,the radau iia methods retain their classical order .[ [ symplecticity . ] ] symplecticity .+ + + + + + + + + + + + + + since the gauss methods satisfy the condition , they generate a flow which preserves the canonical symplectic form on , as explained in section [ sec : discrete mechanics ]. 
however , since the primary constraint is not invariant under this flow , a result analogous to corollary [ thm : symplecticity on the primary constraint ] does not hold , i.e. , the flow is not symplectic on . in section [ sec : numerical experiments ]we present numerical results for the lobatto iiia - iiib methods ( see ) .their numerical performance appears rather unattractive , therefore our theoretical results regarding partitioned runge - kutta methods are less complete .below we summarize the experimental orders of convergence of the lobatto iiia - iiib schemes that we observed in our numerical computations ( see figure [ fig : convergence plots for kepler s problem ] , figure [ fig : convergence plot for point vortices ] , and figure [ fig : convergence plot for the lotka - volterra model ] ) : * -stage lobatto iiia - iiib inconsistent , * -stage lobatto iiia - iiib convergent of order 2 , * -stage lobatto iiia - iiib convergent of order 2. comments regarding the symplecticity of these schemes are the same as for the gauss methods mentioned above in section [ sec : runge - kutta methods ] .in this section we present the results of the numerical experiments we performed to test the methods discussed in section [ sec : variational partitioned runge - kutta methods ] .we consider kepler s problem , the dynamics of planar point vortices , and the lotka - volterra model , and we show how each of these models can be formulated as a lagrangian system linear in velocities . a particle or a planet moving in a central potential in two dimensions can be described by the hamiltonian where denotes the position of the planet and its momentum ; is an arbitrary constant .the corresponding lagrangian can be obtained in the usual way as if one performs the standard legendre transform , , then will take the usual nondegenerate form , quadratic in velocities .however , one can also introduce the variable and view as , that is , a lagrangian linear in velocities ( see ) . comparing andwe see that the corresponding is singular . without loss of generalitywe replace with its antisymmetric part , which is invertible , and consider the lagrangian as a test problem we considered an elliptic orbit with eccentricity and semi - major axis .we took the initial condition at the pericenter , i.e. , , , , .this is a periodic orbit with period .a reference solution was computed by integrating until the time using verner s method ( a 6-th order explicit runge - kutta method ; see ) with the small time step .the reference solution is depicted in figure [ fig : reference solution for kepler s problem ] . using verner s method with the time step .,scaledwidth=80.0% ]we solved the same problem using several of the methods discussed in section [ sec : variational partitioned runge - kutta methods ] for a number of time steps ranging from to .the value of the solutions at was then compared against the reference solution .the max norm errors are depicted in figure [ fig : convergence plots for kepler s problem ] .we see that the rates of convergence of the gauss and the 3-stage radau iia methods are consistent with theorem [ thm : retention of the classical order of convergence ] and corollary [ thm : order of convergence of gauss methods ] . 
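the convergence test just described can be reproduced in a few lines; the sketch below does so for the 1-stage gauss method (implicit midpoint), which by theorem [ thm : equivalence with a rk for the ode ] reduces, for this linear-alpha formulation, to the same method applied to the canonical equations, and that reduced form is what is integrated here. the eccentricity, time span, step sizes and the fine-step reference solution are assumptions of the sketch; the observed order should be close to 2.

```python
# A sketch of a Kepler convergence test for the 1-stage Gauss method
# (implicit midpoint).  Because alpha is linear in this formulation, the
# variational scheme reduces to the same Runge-Kutta method applied to the
# canonical equations, which is what is integrated below.  Eccentricity,
# time span, step sizes and the reference solution are assumptions.
import numpy as np

def rhs(z):
    q, p = z[:2], z[2:]
    r3 = np.sum(q ** 2) ** 1.5
    return np.concatenate([p, -q / r3])           # gravitational parameter 1 assumed

def midpoint_step(z, h, iters=30):
    z_new = z + h * rhs(z)
    for _ in range(iters):                        # fixed-point iteration
        z_new = z + h * rhs(0.5 * (z + z_new))
    return z_new

def integrate(z0, h, t_end):
    z = z0.copy()
    for _ in range(int(round(t_end / h))):
        z = midpoint_step(z, h)
    return z

if __name__ == "__main__":
    ecc = 0.5
    z0 = np.array([1.0 - ecc, 0.0, 0.0, np.sqrt((1 + ecc) / (1 - ecc))])
    t_end = 2.0
    ref = integrate(z0, 2e-4, t_end)              # fine-step reference solution
    steps = (0.02, 0.01, 0.005, 0.0025)
    errs = [np.max(np.abs(integrate(z0, h, t_end) - ref)) for h in steps]
    for (h, e), e_prev in zip(zip(steps, errs), [None] + errs):
        rate = "" if e_prev is None else f"  observed order ~ {np.log2(e_prev / e):.2f}"
        print(f"h = {h:7.4f}  max error = {e:.3e}{rate}")
```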
for the lobatto iiia - iiib methods we observe a reduction of order .the 2-stage lobatto iiia - iiib method turns out to be inconsistent and is not depicted in figure [ fig : convergence plots for kepler s problem ] .both the 3- and 4-stage methods converge only quadratically , while their classical orders of convergence are 4 and 6 , respectively .we also investigated the long - time behavior of our integrators and conservation of the hamiltonian . for convenience, we set in , so that on the considered orbit .we applied the gauss methods with the relatively large time step and computed the numerical solution until the time .figure [ fig : energy plots for gauss methods for kepler s problem ] shows that the gauss integrators preserve the hamiltonian very well , which is consistent with corollary [ thm : r - k methods as poisson integrators ] .we performed similar computations for the lobatto iiia - iiib and radau iia methods , also with .the results are depicted in figure [ fig : energy plots for lobatto and radau methods for kepler s problem ] .the 3- and 4-stage lobatto iiia - iiib schemes result in instabilities , the planet s trajectory spirals down on the center of gravity , and the computations can not be continued too far in time . the hamiltonian shows major variations whose amplitude grows in time .the non - variational radau iia scheme yields an accurate solution , but it demonstrates a gradual energy dissipation . over the time interval ] shown in the _ left column_.,scaledwidth=90.0% ] point vortices in the plane are another interesting example of a system with linear ( see , , ) .a system of interacting point vortices in two dimensions can be described by the lagrangian with the hamiltonian where denotes the location of the -th vortex , is its circulation , and is an arbitrary constant . as a test problem we considered the system of vortices with circulations and , respectively , and distance between them .the vortices rotate on concentric circles about their center of vorticity at and .we took the initial condition at , , and .the analytic solution can be found ( see ) as where .this is a periodic solution with period .see figure [ fig : reference solution for point vortices ] . and.,scaledwidth=80.0% ] we performed similar convergence tests as in section [ sec : kepler s problem ] .the value of the numerical solutions at time t=7 were compared against the exact solution .the max norm errors are depicted in figure [ fig : convergence plot for point vortices ] .the results are qualitatively the same as for kepler s problem . over the time interval ] ( _ right column _ ) , with a close - up on the initial interval ] ( _ right column _ ) , with a close - up on the initial interval $ ] shown in the _left column_.,scaledwidth=90.0% ]we analyzed a class of degenerate systems described by lagrangians that are linear in velocities , and presented a way to construct appropriate higher - order variational integrators .we pointed out how the theory underlying variational integration is different from the non - degenerate case and we made a connection with numerical integration of differential - algebraic equations .we also performed numerical experiments for several example models .our work can be extended in several ways . 
In section [sec: lotka-volterra model] we presented our numerical results for the Lotka-Volterra model, which is an example of a system for which the coordinate functions are nonlinear. The 1- and 3-stage Gauss methods performed exceptionally well and preserved the Hamiltonian over a very long integration time. It would be interesting to perform a backward error (or similar) analysis to check whether this behavior is generic. If confirmed, our variational approach could provide a new way to construct geometric integrators for a broader class of Poisson systems. It would also be interesting to further consider _constrained_ systems with Lagrangians that are linear in velocities and to construct associated higher-order variational integrators. This would allow us to generalize the space-adaptive methods presented in , to degenerate field theories such as the nonlinear Schrödinger, KdV or Camassa-Holm equations. We would like to thank Prof. Ernst Hairer and Dr. Joris Vankerschaver for useful comments and references. Partial funding was provided by NSF grant CCF-1011944.
in this paper we construct higher - order variational integrators for a class of degenerate systems described by lagrangians that are linear in velocities . we analyze the geometry underlying such systems and develop the appropriate theory for variational integration . our main observation is that the evolution takes place on the primary constraint and the hamiltonian equations of motion can be formulated as an index-1 differential - algebraic system . we also construct variational runge - kutta methods and analyze their properties . the general properties of runge - kutta methods depend on the velocity part of the lagrangian . if the velocity part is also linear in the position coordinate , then we show that non - partitioned variational runge - kutta methods are equivalent to integration of the corresponding first - order euler - lagrange equations , which have the form of a poisson system with a constant structure matrix , and the classical properties of the runge - kutta method are retained . if the velocity part is nonlinear in the position coordinate , we observe a reduction of the order of convergence , which is typical of numerical integration of daes . we verify our results through numerical experiments for various dynamical systems .
let us assume in the following that the underlying state variables whose values determine the value of an option at exercise dates follow a known stochastic process .generally , a stochastic process is specified by defining the state space , an index parameter ( usually the time ) and the dependence relation between the random variables . the latter is usually given in terms of a stochastic integro - differential equation containing some deterministic drift terms and random changes in addition , notably diffusion or jump processes .memory or so - called non - markov effects may readily be included into the transition probabilities .non - markov models have found recently considerable interest in the finance community , because the heath , jarrow and morton ( 1992 ) approach to interest rate evolution can be shown to possess non - markovian features , in general .we assume ` smoothness ' for the transition probability density function which determines the probabilities based on a path s evolution history that the path will comes close to any element of the state space at some time in the future .these are supposed to be continuous functions of the path variables . a discrete set of sequentially selected random points in timeis often called a chain " .a path is the corresponding generalization for a continuous time evolution . at first, we will restrict ourselves to options and stochastic processes which result in the following simple property of the yes - no boundary : * the _ boundary _ between yes and no regions at any given decision time is a simply connected hypersurface in the state space with dimension of one unit less than the dimension of the state space . " as a consequence of the assumption , there are no disconnected pieces of yes or no regions at a given time either . with the exception of some pathological cases , assumption a1will be compatible only with option payoffs contingent on the values of the state variables at exercise date . instead of specifying these pathological stochastic processeswe simply require here : * the payoff from exercising an option is only a function of the state variables at the exercise date but not at earlier times , i.e. . " in general , the topology of exercise and hold regions of american options may be more complicated than allowed by our assumptions .two examples are some types of barrier options and path - dependent options .suppose the owner of a barrier option receives only some payoff if a security stays within some specified range of values at exercise date .therefore the option will not be exercised for ` too small ' and ` too large ' values of the underlying which leads to two disconnected no regions .another perhaps more interesting example of options not covered by the assumption stated above are american path - dependent options whose valuation we will return to later in this section . here the payoff depends on the values which the underlying state variables have taken in the past , e.g. on the arithmetic mean within some time interval . 
projected into the space of state variables at any given time yes and no region are completely scrambled .it may be optimal to exercise or hold an option for the _ same _ present value of the state variables depending on the different histories of each path .lateron , we will show that a redefinition of the state space enables us to cover the case of path - dependent options by the same algorithm .we should also mention that the assumption a2 restricts the memory properties of the underlying stochastic process .they should not spoil the assumed property of the yes - no boundary in the state space .this simply means that although the future evolution of each path may depend on the past , the ` average outcome ' determining the decision whether to exercise the option at present time is not influenced by the past but of markovian nature .assumption a1 allows us to define coordinates in the dimensional state space such that any element of the state space can be uniquely characterized by some point of the yes - no boundary and an additional coordinate in a peculiar way . for any path passing through at present timethe following inequalities hold between the payoff from exercising the option and the option value if the option is being held at least until the next exercise date : the content of eq .( [ eq_coord ] ) is to provide convenient coordinates from which one can easily read off whether point is a boundary point ( ) , in the yes ( ) or in the no ( ) region .the knowledge of all or just their signs is tantamount to finding the yes - no boundary itself .unfortunately , a straightforward application of the inequalities as expressed in eq .( [ eq_coord ] ) to determine the boundary is prohibitive . estimating the expected payoff from holding the option and comparing it to the payoff from exercising would require a ` new simulation within the simulation ' .such a procedure becomes numerically awkward and would severely restrict the complexity of the problem to be tackled , concerning e.g. the nature of the stochastic process and the number of exercise dates .we will proceed now to estimate the location of the yes - no boundary based on a set of sampled paths . for this purposeit will be necessary to evolve the state variables backward in time .this feature of the novel monte carlo approach which it shares with all other numerical approaches to option pricing like the binomial tree method is caused by the boundary conditions of the problem .the option price is a specified function of the state variables only on the expiration date .of course , the location of the yes - no boundary is also trivially determined at expiration time . for a simple call or put option on a single asset it is the point .the continuous time process will be presented as a sequence of time steps which add up to the lifetime of the contract .the time steps are supposed to match the distance between decision times exactly if a finite number of exercise opportunities has been specified in the option contract . for american options which give the owner the right of exercise at any time this is an approximation . 
however , choosing ` small ' time steps the results will be close to the continuum limit .the current time for which the location of the yes - no boundary is being sought can be represented by the integer index ( with for american options ) .since the location of the yes - no boundary will be determined step by step moving backward in time , it is implicitly assumed from now on that the yes - no boundary has already been determined for later times + 1, + 2, , ( within the usual uncertainties due to the numerical imprecision ) .therefore we are able to evaluate for each path its payoff from optimal exercise at _ later _ times : the path - dependent payoff has been discounted to time in eq .( [ calg_i+1 ] ) . represents the earliest time ( under the constraint ) at which path crosses the boundary from the no region into the yes region or equals if the path stays within the no region for all times between and .( for the latter case =0 , of course . )we have used the subscript for the argument of to emphasize that the payoff defined above depends only on the values of the path variables for times but not on the past evolution of the state .so far , we are not able to calculate the discounted pathwise payoff , i.e. for time .this would require the knowledge of the yes - no boundary at time . however , the path - dependent payoffs given in eq .( [ calg_i+1 ] ) enter into the expression for the value of the option at time subject to the condition that the option is _ not _ immediately exercised ( and , of course , that the state variables have evolved along path so far ) : the integration over all future path segments in eq . ( [ eq_opt_price_ih ] ) is an ordinary multiple integral ( but a path integral in the continuous time limit ) . denotes the transition probability density function and describes the probability ( density ) to evolve along the path segment in the future up to time depending on the present and previous ( for non - markov processes ) states of path .next we define a continuous path in the state space by letting be a continuous parameter within some range such that all are elements of the state space and vice versa .let us assume for a moment that we know already that this line intersects with the yes - no boundary but we do not know which point it is .( somebody else has constructed the path . )can we estimate the value of the yes - no boundary point using the information provided by the path sample and obtain its correct value in the limit ?the answer is positive .we are able to construct a function of the states whose only maximum is at , i.e. the boundary point , for .it is clear that our ability to define such a function has important repercussions for the question how to _ find _ the yes - no boundary .the virtue of path is to provide us with the knowledge that it has exactly one intersection point with the yes - no boundary .we do not necessarily need this information .instead , we may search for the global maximum on continuous paths connecting points in the yes region with points in the no region . according to statement a1 they will cut through the yes - no boundary at least once .( it will not hurt the search procedure , if we accidentally hit the boundary several times reflected by several local maxima . )it is also rather straightforward to determine selected points which are in either the yes or the no region for sure .for instance , points lie in the no region if the payoff from exercising is zero . 
or the point with zero value of the underlying is certainly an element of the yes region for a plain vanilla put , because the payoff can not get any better . in order to proceed with the proof of our conjecture we define a generally suboptimal exercise policy for all paths intersecting with at time which depends on an arbitrary point .we define a preliminary `` yes '' region ( with the quotation marks reminding us that the true yes boundary may be different ) as to cover all paths with state variable which fulfill . for instance , in case of a plain vanilla put ( call ) option on a single underlying asset we may declare that lies in the `` yes '' region if it fulfills the condition ( .the pathwise payoff from the -dependent policy is it is clear that the pathwise payoffs will sometimes be larger for ` wrong ' boundary points ( ) than for the correct one .this reflects the additional information about the future which is encoded in the path segment .the essential point is now to sum over all paths which removes this bias : the function depends on time , the sampled paths intersecting with ( of number ) and on .our central proposition can be stated now as follows : * taking the limit of infinitely many paths , has one extremum which is a maximum within the one - dimensional domain .its position is the intersection point of with the yes - no boundary = . " in order to prove proposition p1 we are going to rearrange the sum in eq .( [ eq_optim_mc ] ) .we put all paths which have essentially the same transition probability density function into the same ( albeit infinitesimally small ) ` bin ' . herewe need the assumption stated above that the transition probability density function is a continuous function of the path variables . after ` binning ' and taking the limit eq . ( [ eq_optim_mc ] )turns into the transition probability density function appears in eq .( [ eq_optim_cont ] ) , because relative frequencies approach the probabilities which are governing the random selection of paths according to bernoulli s law of large numbers .the probability density to have a path segment close to needs to be defined in accordance with the integration measure and is normalized to one . 
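As a concrete aside before the argument continues: the finite-sample estimator of eq. ([eq_optim_mc]) and its maximization over candidate boundary points might be coded as follows for a put-like contract. The convention "exercise now if the current value lies at or below the candidate point b", and all variable names, are illustrative assumptions rather than the paper's notation.

```python
import numpy as np

def policy_payoff_sum(b, s_now, exercise_payoff_now, discounted_hold_payoff):
    """Finite-sample analogue of eq. (eq_optim_mc): under the trial policy
    'exercise now if s_now <= b, otherwise follow the already known boundary
    at later times', sum the pathwise payoffs over all sampled paths."""
    exercise = s_now <= b
    return np.where(exercise, exercise_payoff_now, discounted_hold_payoff).sum()

def boundary_point(candidates, s_now, exercise_payoff_now, discounted_hold_payoff):
    """Estimate the yes-no boundary point at this decision time as the maximizer
    of the summed pathwise payoff over the candidate points (proposition P1)."""
    values = [policy_payoff_sum(b, s_now, exercise_payoff_now, discounted_hold_payoff)
              for b in candidates]
    return candidates[int(np.argmax(values))]
```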
the quantity in brackets on the right hand side of eq .( [ eq_optim_cont ] ) can be readily evaluated using eqs .( [ eq_opt_price_ih ] ) and ( [ eq_pisss ] ) to be coincides with the option price , except for values in a certain range .furthermore , on virtue of eq .( [ eq_coord ] ) is always _ smaller _ than for such chosen values : the conditions of eq .( [ eq_di ] ) together with the positiveness of the weight factors in eq .( [ eq_optim_cont ] ) guarantee that this completes the proof of proposition p1 .as stated above we may turn things around and construct the yes - no boundary from a set of lines in state space which connect elements of the yes and no region .the global maximum of function along each line is the supposed intersection point with the yes - no boundary .this process may be iterated until the desired precision is reached .this standard optimization procedure becomes particularly simple in case of plain vanilla options .since the yes - no boundary is just a point at any exercise date , its location is already completely specified with one iteration step .the determination of the yes - no boundary at time ( index ) enables us to calculate the pathwise payoff at this time .going step - by - step backward in time we may finally calculate the pathwise payoffs at initial time whose average determines the option value at contract time .the presented optimization technique may be used in the problem of valuation of american _ path - dependent _ options as well . here the payoff from exercising the option depends on some function of the state variables in the past which we will denote as in the following .for instance , may be the arithmetic or geometric average of any state variable over some time window or the value at some specified time(s ) of the past ( look - back options ) . for simplicity, we restrict ourselves here to stochastic processes of markovian nature whose parameters have been specified . in this casethe exercise decision at any time before expiry is a function of just two each possibly vector - valued variables , the values of the state variables and of the path - dependent variable , both taken at the decision time .the difference between standard path - dependent and plain vanilla options is just that the role of the current value of the underlying variable in plain - vanilla contracts is played by in path - dependent contracts , e.g. max for a put .the topology of the exercise region for this kind of path - dependent options is very simple but only in the space of state variables _ enlarged by the additional dimension(s ) of _ .the yes - no boundary in the - space is again simply connected .only its projection into the space leads to a scrambling of yes and no regions .this suggests immediately how the suggested optimization technique may be applied for the case of these path - dependent options .we equip the space of states with additional dimensions by declaring to be one of the stochastic variables .of course , the stochastic character of this variable is rather funny .its change with time is completely deterministic and of non - markovian type .however , these features are in accordance with the basic assumptions entering into the proof of proposition p1 . 
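The state-space enlargement just described can be sketched as follows for a contract on the running arithmetic average; the choice of arithmetic averaging and the array layout are illustrative assumptions.

```python
import numpy as np

def augment_with_running_average(paths):
    """Enlarge the state space for path-dependent contracts: alongside the
    underlying value S at each time step, carry the running arithmetic average
    of the path up to that step, so that the exercise decision depends only on
    the pair (S, average) at the decision time."""
    n_paths, n_steps = paths.shape
    running_avg = np.cumsum(paths, axis=1) / np.arange(1, n_steps + 1)
    return np.stack([paths, running_avg], axis=-1)   # shape (n_paths, n_steps, 2)
```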
the complications of such path - dependent option contracts just result in a higher dimension of the space in which we have to track the yes - no boundary .we are now in a position to compare our novel strategy for an extraction of american - style option prices from monte carlo simulation to previously suggested ones .boyle , broadie , and glassermann discuss mainly three strategies in their review paper to overcome the inherent limitations of the naive monte carlo procedure : the bundling algorithm invented by tilley ( 1993 ) , the stratified state aggregation algorithm suggested by barraquand and martineau ( 1995 ) and broadie and glassermann s ( 1995 ) algorithm based on simulated trees . in tilley sapproach bundles of paths which are ` close ' in state space at given time are considered to emanate from one single point for the purpose of option valuation .this introduces some error whose magnitude can not be easily controled .the bias of the exercise decision will be very strong if a bundle contains only few paths .such a situation is likely to emerge in situations in which there are many exercise dates or several state variables .barraquand and martineau introduce another type of ` bundling ' , however , in the payoff space .our approach shares similarities with tilley s algorithm in the sense that it contains an implicit ` bundling ' of paths ( called ` binning ' ) in the proof of the proposition p1 .however , we have actually never to carry out this bundling in a numerical calculation , because we stick to an optimization of the ` inclusive ' function which sums up all bins .broadie and glassermann proposed an algorithm based on a ` bushy tree ' structure in state space which avoids partitioning of the state or payoff space . in their modelmany branches emanate from each node .this process is replicated as often as there are decision times in the problem .the replication process effectively limits to small numbers ( on the order of 4 ) in practical computations .they also discuss the problem that the suggested estimators of the option prices tend to be biased in ` up ' or ` down ' direction . and converge only to the correct value in the limit .this problem is of great relevance for practical calculations ( which are always at finite ) .they use the idea to calculate two estimators one biased low and the other high to obtain confidence limits from the simulations .lateron , we are going to apply this idea in the framework of our approach .in this paper we will only consider stochastic processes of rather simple nature , so - called ideal wiener or diffusion processes ( for the of any state variables ) . in physics these processesare usually called brownian motion .such processes seem to be most commonly used in valuation problems .furthermore , numerical techniques have been developed to calculate prices for plain vanilla and other simple types of american options .in particular , the binomial tree method may provide very accurate values and will be used as a benchmark here against which the results from the monte carlo simulations are being tested . in this sectionwe will restrict ourselves to _ puts _ on the current value and on the geometric or arithmetic average of one single underlying security .an ideal wiener process can be characterized by the differential transition law governing the change of the state variable within a time interval . is a random number drawn from a normal distribution with mean 0 and variance 1 . 
is in general the drift velocity which may be replaced by the risk - free instantaneous interest rate using arbitrage arguments .sometimes one needs to follow an ideal wiener process only _ backward _ in time , e.g. in case of monte carlo estimation of plain vanilla options . using the _ bridge _ construction start and final value are known a random value at intermediate time may be chosen according to after having fixed final and start values , iterative use of eq .( [ eq_wiener_bw ] ) may be employed to generate a time reverted copy of the standard diffusive wiener - type motion . for the simulation results presented belowwe have used a simple importance sampling scheme .the exercise value of strongly out - of - the money options at expiry will be zero on most paths .therefore the weight of the rare paths resulting in nonzero option value has been artificially enhanced in the path generation process . of course , the enhancement factors are taken out again in the calculation of the path - averaged prices . in its most simple formthe monte carlo algorithm which is suitable for valuation of american - style options consists of the following steps : ( 1 ) : : generate path sample .( 2 ) : : go step - by - step backward in time starting at expiry . ( 3 ) : : track the yes - no boundary by employing the optimization of function along arbitrary paths connecting yes and no region ( cf .( [ eq_optim_mc ] ) ) .( 4 ) : : evaluate the pathwise payoffs at the earlier time analogous to eq .( [ calg_i+1 ] ) .( 5 ) : : repeat the process until the contract time is reached . ( 6 ) : : average the pathwise payoffs discounted to initial time over the whole path sample ( cf .( [ eq_amopt_mc ] ) ) .let us discuss now some aspects related to the _ finite _ size of the path sample in practical calculations .it is clear that step * ( 6 ) * does not provide an unbiased estimator of the option price at any finite but is biased upward .the reason is that an error is made extracting the yes - no boundary from the finite size path sample .a deviation of the extracted from the true boundary has its root in the information about the mismatch between frequency distribution of the paths and underlying probabilities which enters into the optimization process .the estimator becomes only asymptotically unbiased .since we are not able to construct an unbiased estimator , we may get an estimate of the error introduced by the bias in step * ( 6 ) * by constructing an estimator which is downward biased at finite but also asymptotically unbiased .it is very simple to construct such an estimator : ( 6 ) : : generate a path sample independent from the first one and evaluate the american option prices using the previously calculated yes - no boundary .the reason that the prices calculated from this path sample are biased downward mirrors the one for the upward bias in the former case . hereany deviation from the true boundary implies a suboptimal exercise policy .we hasten to add that the algorithm presented so far has one shortcoming related to the combined effects of finite sample size and well - determined values of the underlying variables at start time . 
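Before turning to the finite-sample issue raised in the last sentence, here is a minimal sketch of the path-generation machinery described above: forward simulation of the ideal Wiener process for the logarithm of the underlying under the risk-neutral drift, and the bridge rule of eq. ([eq_wiener_bw]) for filling in a value at an intermediate time once start and final values are fixed. Parameter names are illustrative and the simple importance-sampling enhancement mentioned in the text is omitted.

```python
import numpy as np

def simulate_paths(s0, r, sigma, T, n_steps, n_paths, rng):
    """Forward simulation: log S follows an ideal Wiener process with the
    risk-neutral drift, d(log S) = (r - sigma^2 / 2) dt + sigma dW."""
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    log_s = np.log(s0) + np.concatenate(
        [np.zeros((n_paths, 1)), np.cumsum(increments, axis=1)], axis=1)
    return np.exp(log_s)                              # shape (n_paths, n_steps + 1)

def bridge_value(w_start, w_end, t_start, t_end, t_mid, rng):
    """Brownian-bridge construction: sample W(t_mid) conditional on the fixed
    start value W(t_start) and final value W(t_end)."""
    frac = (t_mid - t_start) / (t_end - t_start)
    mean = w_start + frac * (w_end - w_start)
    var = (t_mid - t_start) * (t_end - t_mid) / (t_end - t_start)
    return mean + np.sqrt(var) * rng.standard_normal()

rng = np.random.default_rng(0)
paths = simulate_paths(s0=50.0, r=0.05, sigma=0.3, T=0.5, n_steps=50, n_paths=10_000, rng=rng)
```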
only with paths crossing the yes - no boundaryits optimal location can be determined , of course .formally , the proof of proposition p1 requires that the probability densities appearing in eq .( [ eq_optim_cont ] ) be positive - definite in a region covering the yes - no boundary .however , this condition will be violated in practice at early times , since all sampled paths emanate from a common starting point . on the other side , in such a situationthe knowledge of the precise location of the boundary becomes irrelevant for the valuation problem .it suffices to know at this stage on which side of the boundary the vast majority of paths can be found .two variants with differing degree of sophistication have been developed to determine the earliest time for which the algorithm above is to be used : ( 7a ) : : if the yes - no boundary gets closest to either one of the endpoints of , e.g. the paths with smallest or largest value for plain vanilla options , it is assumed that the boundary can be found outside the range of the path sample for all earlier times .( 7b ) : : `` flashlight mechanism '' : when the yes - no boundary moves towards a region only poorly covered by the original path sample , additional path segments are created which evolve in the region of the yes - no boundary at this time .as far as the proof of proposition p1 is concerned the probability densities in eq .( [ eq_optim_cont ] ) do _ not _ need to be consistent with the initial value specification for the pricing problem . herewe make use of this freedom to generate a set of paths whose initial values are tuned to ` shed light ' on the location of the yes - no boundary .concerning accuracy simulation mode * ( 7b ) * is only barely noticably superior over mode * ( 7a ) * for the tested sample of plain vanilla options .all simulation results for more complicated options reported on in this paper have been achieved employing mode * ( 7b ) * . atfinite the values and therefore also the maximum of the payoff function do not change between values of its argument which are taken by two neighboring paths . 
furthermore, the precision with which the true maximum of function can be estimated from a finite size sample is a multiple ( ) of the typical distance between paths in the region along the yes - no boundary .[ fig_1 ] shows the monte carlo estimation of the optimization function for a plain vanilla put option on a security of ( time - dependent ) value in order to illustrate the finite effect .selected values are displayed at a fixed time but varying the size of the path sample .the true minimum is quite shallow for the selected time as can be seen from the figure .the shallowness is related to the strong time dependence of the location of the yes - no boundary , because we consider a time shortly before expiry .the location of the maximum can be reliably extracted from the simulations only for rather large samples on the order of paths .one may ask whether in view of the imprecisions associated with any finite sample size it is worthwile to search for the exact maximum of .in fact , a well - chosen ` smoothing ' procedure which subtracts the white random noise may give more accurate results than taking the real maximum of as a boundary point .for this paper we compared two variants how to determine the yes - no boundary points : ( 3a ) : : determine the yes - no boundary point as the element of the set of all intersection points of the sampled paths with path for which restricted to these points attains a maximum value .( 3b ) : : determine the yes - no boundary point as the element of the set of all grid points for which restricted to these points attains a maximum value .the distance between sites on the grid is lowered stepwise until some pre - defined precision is reached or more than one local extremum appears for the set of values of restricted to the grid sites .ultimately , in the limit of infinitely many paths both variants may give the same correct result for the optimization of .however , computation employing variant * ( 3b ) * is much quicker .it requires a number of operations growing only linearly with while variant * ( 3a ) * needs at least on the order of operations .we have generated monte carlo estimates for prices of randomly sampled standard american - style put options ( see table for details ) .we have employed the two simulation modes * ( 3a ) * and * ( 3b ) * to compare their effectiveness .the frequency distribution of the relative errors in the simulation results based on monte carlo mode * ( 3a ) * are shown in fig .[ fig_2 ] , for mode * ( 3b ) * in fig .[ fig_3 ] .these errors are determined from the normalized difference to the prices extracted from binomial - tree calculations : the monte carlo estimates have been calculated for the path sample which has been used to determine the yes - no boundary and for an independently generated path sample .the comparison between the results in the two modes reveals that both lead to similar accuracy .in addition , the figures display the distribution of the differences between monte carlo values based on the original and the independent path sample .upward and downward bias are clearly revealed for the simulation results in mode * ( 3a ) * ( lower left panel of fig .[ fig_2 ] ) . 
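A sketch of the grid look-up of variant (3b): the optimization function is evaluated only on grid points and the grid is refined around the current maximizer until a pre-defined precision is reached. The bracketing logic below is an illustrative choice, and the additional stopping rule on the appearance of multiple local extrema mentioned in the text is omitted for brevity.

```python
import numpy as np

def grid_maximize(func, lo, hi, n_grid=21, tol=1e-3, max_iter=30):
    """Variant (3b): evaluate func on an evenly spaced grid, keep the grid point
    with the largest value, shrink the search interval to the neighboring grid
    cells, and repeat until the spacing falls below tol.  The cost per decision
    time grows only linearly with the number of sampled paths."""
    best = lo
    for _ in range(max_iter):
        grid = np.linspace(lo, hi, n_grid)
        values = np.array([func(b) for b in grid])
        k = int(np.argmax(values))
        best = grid[k]
        if grid[1] - grid[0] < tol:
            break
        lo, hi = grid[max(k - 1, 0)], grid[min(k + 1, n_grid - 1)]
    return best
```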
on the other side ,the fact that mode * ( 3b ) * employs only an approximate optimization of the payoff function tends to wash out most of the up- and downward biases .[ fig_3 ] shows also the error distribution of the corresponding european option estimations .we note that the typical size of the errors is comparable to the results for american puts .the error in both cases is thus dominated by the finite size of the path sample ( in figs .[ fig_2 ] and [ fig_3 ] ) . the error induced by the bias in the construction of the yes - no boundary at finite seems to be of less importance .let us turn now to the case of put options contingent on the geometric or arithmetic mean value of the underlying variable . as discussed in the preceeding section exercise and hold regions and therefore the yes - no boundary are simply connected in the space .the boundary is therefore one - dimensional in this space .we have chosen a very simple procedure in order to track the boundary .these options are not directly contingent on the current value at any of the decision times which makes the more relevant variable. therefore we sort the sampled paths into bins covering the relevant region . ignoring the differences between values _ inside _ each binwe have reduced the optimization problem again to finding the value at which the payoff function attains maximum value .this optimization proceeds as described already for plain vanilla options . for the calculations described here we have taken the number of bins in to be 20 .monte carlo simulation results for a randomly generated sample of put options contingent on geometric averages are presented in fig .[ fig_4 ] . as before ,errors in the simulation have been calculated in case of geometric averaging by comparing the monte carlo estimations for the option prices to binomial tree results .here we consider only averages over the whole lifetime of the option .this is an problem for the binomial - tree method , being the number of time steps . allowing the time window for the averaging to slide would increase the storage and cpu time requirements by one power of .in contrast , the monte carlo approach is not impacted severely by such a generalization .a comparison with the corresponding errors of the simulation results for european options under the same conditions ( not shown ) reveals again that the errors are dominated by the finite size of the path sample and not by the uncertainty in determining the yes - no boundary . the effect of upward and downward bias in the estimations due to choosing either the original or the independently generated path sample are very weak as can be seen from the lower left panel of fig . 
[ fig_4 ] .as explained before this is a direct result of the approximations in the optimization procedure * ( 3b ) * .it is noteworthy that the bias from parametrizing the yes - no boundary on a grid ( in direction ) is stronger than either bias from taking one of the two path samples .the representation of the yes - no boundary on a grid tends to _ underestimate _ the option prices because of the coarse graining procedure involved .the small nonstatistical downward net bias can be read off from the average deviation between monte carlo estimation and binomial tree results for the options contingent on geometric averages .it amounts to - 0.24 % for the results based on the original path sample and - 0.1 % for the independent path sample .results for prices of options contingent on arithmetic averages over the whole lifetime of the option are also included in fig . [ fig_4 ] ( lower right panel ) . since no other accurate method is available , we present the distribution of relative deviations between the values based on an approximate formula suggested by ritchken , sankarasubramanian , and vijh ( 1989 ) and the simulation results .the approximate formula relates the price for a claim contingent on the arithmetic average to the corresponding option price using geometric averaging via here ( ) denotes `` value of american ( european ) option '' .the subscript ( ) refers to arithmetic ( geometric ) averaging .the superscripts and characterize the method of calculation , binomial tree and monte carlo respectively .[ fig_4 ] shows that the approximation works reasonably well .we have shown in this paper how to construct monte carlo algorithms for american option valuation .we are in the lucky situation that the suggested algorithm turns out to be conceptually simpler than the other algorithms suggested so far .it depends only linearly on the number of sampled paths and exercise dates which makes it computationally feasable to get close to the continuum limit for these two variables .thus no price is to be paid for the accuracy of this novel monte carlo algorithm in terms of more cpu time or exceedingly large storage requirements .the crucial element , the optimization of a certain payoff function of the path sample , turns out to be numerically rather stable .the tracking of the yes - no boundary is achieved most precisely in those regions of the state space covered densely by sampled paths , i.e. where it is needed for the option price estimation .there is a further reason for the algorithm s stability . in casea maximum turns out to be shallow and is therefore not easily found by the algorithm , the option price does not depend much on the exact location of the boundary between early - exercise and hold region .indeed , we have demonstrated that the errors in the simulation results are typically dominated by the statistical errors if the size of the path sample is on the order of .this points to one important direction of future research .paskov and traub ( 1995 ) have shown that the replacement of pseudo - random numbers in the traditional monte carlo approach by a series of so - called quasi - random numbers may lead to a spectacular gain in the precision of the achievable results , at least for some problems .the reason is that the quasi - random numbers are distributed more evenly . 
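As an illustration of that idea (not something used in the paper itself), low-discrepancy normal increments can be generated, for example, from a scrambled Sobol sequence mapped through the inverse normal CDF; `scipy.stats.qmc` is one readily available implementation.

```python
import numpy as np
from scipy.stats import norm, qmc

def sobol_normal_increments(n_paths, n_steps, seed=0):
    """Quasi-random alternative to pseudo-random normals: draw a scrambled
    Sobol sequence in [0, 1)^n_steps and map it through the inverse normal CDF.
    (Sobol points are best drawn in powers of two, e.g. n_paths = 2**14.)"""
    sampler = qmc.Sobol(d=n_steps, scramble=True, seed=seed)
    u = sampler.random(n_paths)
    return norm.ppf(u)

z = sobol_normal_increments(n_paths=2**14, n_steps=50)
```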
by using appropriately chosen sequences of quasi - random numbers sample sizes for estimation of european optionscould be reduced by orders of magnitude , without loss of accuracy .it would open new dimensions for the applications of the novel monte carlo algorithm if this would also hold true for american options .moreover , we would like to add that the extraction of sensitivity coefficients usually called the `` greeks '' employing the novel monte carlo algorithm is as straightforward as for previously considered situations , e.g. by fu and hu ( 1995 ) .we have presented some simulations for option types for which other means of calculation are known .however , the monte carlo algorithm presented here provides unique opportunities to evaluate path - dependent option prices for which no other methods are available or computationally feasible .one example , the case of options contingent on arithmetic averages has been discussed already in this paper .another candidate are `` look - back '' options of american style .usually , look - back options allow the owner to buy or sell an asset for a price dependent on the values of the asset during the whole lifetime until expiry , in the simplest case the minimum or maximum .restricting the time window for the look back but allowing for early exercise adds american features to these rather common options .another virtue of the presented monte carlo algorithm is to allow option valuation for more complicated stochastic processes than wiener - type diffusion .p. boyle , m. broadie , and p. glassermann , 1997 , monte carlo methods for security pricing , in risk publication group , ed .: pre - course material for risk training course ( january 1997 ) ` monte carlo techniques for effective risk management and option pricing ' , ( risk publication group , london ) .caption : mean values of riskless interest rate , volatility , start price , strike price and time until expiry are given in first row , their standard deviations respectively in second row .the random values are chosen according to independent normal distributions . in a second step, some parameter values may be rejected and replaced by other random values if they are nonpositive .furthermore , strike price is required to be within a 2 ( 1 ) range around the mean value at maturity for options contingent on the current ( average ) value .this constraint could be relaxed using a more sophisticated importance sampling scheme .the number of evenly spaced time steps for the monte carlo simulation and the binomial - tree calculations has been fixed arbitrarily to be equal 100 .monte carlo estimation of the optimization function for a plain vanilla put option . option andstochastic process parameters are chosen as in hull s book ( 1993 ) , example 14.1 . the monte carlo simulation with number of time steps equal50 has been repeated varying the total number of sampled paths : , , and . 
only the function values at time step 45 are displayed for 100 values of the argument .the location of the yes - no boundary according to a binomial - tree calculation is pointed at by an arrow .frequency distribution of the relative errors in simulation estimates of a sample of plain put option prices .the error is determined from the normalized difference to the prices extracted from binomial - tree calculations .the monte carlo estimates are calculated for the path sample which has been used to determine the yes - no boundary ( biased up ) as well as for an independently generated path sample ( biased down ) and the average of the two samples ( mc average ) .in addition , the figure displays the distribution of the difference between upward and downward biased monte carlo values ( lower left panel ) .each path sample encompasses generated paths per option .the monte carlo simulation employed the mode in which the ` global maximum ' of the optimization function was searched .the content of this figure is the same as in the previous figure , with the exception of two features .the mode of the monte carlo simulation has been to look up the maximum of the optimization function on a grid ( see text ) .furthermore , the distribution of errors for the averaged monte carlo values which looks very similar to the corresponding distributions of upward and downward biased estimates is not displayed . instead , the distribution of the errors for the corresponding _european _ option values is displayed ( in the lower right panel ) .frequency distribution of deviations between simulation estimates of put option prices on geometric ( left side ) and arithmetic average values ( right side ) and benchmark calculations .the benchmark for the prices on geometric averages is provided by the rather accurate binomial - tree method , and for the arithmetic averages by the approximate formula eq .( [ eq_approx ] ) .
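Since the binomial-tree calculations serve as the benchmark throughout, here is a compact sketch of a standard Cox-Ross-Rubinstein tree for an American put. This is the textbook construction under the usual lognormal assumptions and is not claimed to reproduce the exact benchmark implementation used for the figures; the parameter values in the final line are arbitrary.

```python
import numpy as np

def american_put_crr(s0, strike, r, sigma, T, n_steps):
    """Cox-Ross-Rubinstein binomial tree for an American put with an
    early-exercise check at every node (backward induction from expiry)."""
    dt = T / n_steps
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)              # risk-neutral up probability
    disc = np.exp(-r * dt)
    s = s0 * u ** np.arange(n_steps, -1, -1) * d ** np.arange(0, n_steps + 1)
    v = np.maximum(strike - s, 0.0)                 # payoff at expiry
    for step in range(n_steps - 1, -1, -1):
        s = s0 * u ** np.arange(step, -1, -1) * d ** np.arange(0, step + 1)
        continuation = disc * (p * v[:-1] + (1.0 - p) * v[1:])
        v = np.maximum(strike - s, continuation)    # exercise now vs. hold
    return v[0]

print(american_put_crr(s0=50.0, strike=50.0, r=0.05, sigma=0.3, T=0.5, n_steps=500))
```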
it is shown how to obtain accurate values for american options using monte carlo simulation . the main feature of the novel algorithm consists of tracking the boundary between exercise and hold regions via optimization of a certain payoff function . we compare estimates from simulation for some types of claims with results from binomial tree calculations and find very good agreement . the novel method allows to calculate so far untractable path - dependent option values . 21.5 cm -1.5 cm * valuation of path - dependent american options + using a monte carlo approach + * + h. sorge + department of physics and astronomy , + state university of new york at stony brook , ny 11794 - 3800 pricing of options is an important area of research in the finance community . the field has been pioneered by black and scholes ( 1973 ) . they made a major breakthrough by deriving a formula for the price of any contingent claim which depends on a non - dividend - paying stock . using the assumption of no arbitrage , they were able to show that the price of a derivative security can be expressed as the expected value of its discounted payoffs . the expectation is taken under the assumption of a _ risk - neutral _ evolution of the value of the underlying security . merton ( 1973 ) generalized these ideas to situations in which the interest rates themselves fluctuate in time . due to the complexity of the underlying dynamics , numerical methods have become increasingly popular in modern finance . they are used for a variety of purposes , for instance valuation of securities and stress testing of portfolios . analytical solutions for problems in finance have been found only for rather special cases . for instance , in order to gain the solution for prices of european options black and scholes had to assume that the evolution of the underlying asset price ( more precisely its natural logarithm ) follows a so - called wiener process with time - independent volatility . european options can be exercised only at expiration date . in contrast , american - style options which can be exercised during the whole lifetime of the option can be valued only numerically . furthermore , numerical methods have to be employed if the dimension of the problem increases ( more than one state variable ) or in case of more realistic approaches to the stochastic process like for some no - arbitrage models of interest rate evolution . the monte carlo approach lends itself very naturally to the evaluation of security prices and interest rates . schematically , it consists of the following steps . first , sample paths of the state variables ( asset prices and so on ) over the relevant time horizon are generated . the cash flows of the securities on each sample path are evaluated , as determined by the structure of the security in question . the discounted cash flows are averaged over the sample paths . the monte carlo method is very flexible , since it does not depend much on the specific nature of the underlying stochastic process . its accuracy is also independent on the dimensionality of the problem which is its dominant advantage over more traditional numerical integration methods . it is outside the scope of this paper to discuss the various facets of recent research on the use of monte carlo in the finance area . the reader may consult the concise recent reviews of boyle , broadie , and glassermann ( 1997 ) on these topics . in particular , they report on recent progress in developing more _ efficient _ monte carlo algorithms . 
standard monte carlo methods converge notoriously slow ( with 1/ , being the number of sampled paths ) . a central goal of recent research activity has therefore been the refinement of variance reduction techniques ( antithetic and control variates ) , importance sampling and low - discrepancy random number sequences . if the development in sciences like physics is any guide , monte carlo applications in finance will become even more important . with computational prowess further increasing the monte carlo method may even shed its ` brute force ' image from the past which was founded on its rather slow convergence property . boyle ( 1977 ) was the first to propose the use of monte carlo methods for option pricing in the literature . since the exercise date of european contingent claims is fixed , the mechanics of a price evaluation of european options employing the monte carlo method is rather straightforward . let us begin by collecting the notation for the most important variables which we are going to use throughout the rest of the paper . in general , we follow the notation in hull s book ( 1993 ) , except that the present time for which the option price is calculated is designated to be `` 0 '' ( without loss of generality ) . = the value of the state ( asset price , ) at time + underlying the derivative security + ( which may be a single variable or a vector ) , + = the initial value of the state variables , + + x = strike price of the option , + p = path on which the underlying state evolves in time , + = the value of the option _ if _ exercised at time + ( for a simple call and + for a put ) , + = instantaneous risk - less interest rate at time , + = volatility ( standard deviation ) of state variable + or their components respectively , + t = the lifetime of the option contract ( expiration date ) , + = the option value at time . we may easily estimate the european option value from a monte carlo path sample for the variables related to the derivative security . these paths need to be generated with probabilities determined by the underlying stochastic process . the option value at the expiration date has been specified in the option contract . in general , it may depend on the path of the relevant state variables between closing the contract and expiration date . each pathwise final option value from the simulation is discounted backward in time to determine its value at time 0 . since we do not know at time =0 yet on which path the underlying variables will evolve , the average over all paths is taken to estimate the present option value : the discount factor includes the risk - free interest rate , possibly averaged over the lifetime of the option . for simplicity , it is assumed in eq . ( [ eq_europt_mc ] ) that all sampled paths have equal probability . generalization to a situation in which paths are characterized by different probabilities which may arise e.g. in the context of monte carlo optimization by importance sampling is straightforward . the pricing of options which may be exercised prior to the expiration date by monte carlo simulation is more involved . here , the owner holds the right to exercise the option at several ( bermudan ) or possibly infinitely many ( american ) ` decision dates ' . many types of american contingent claims trade on exchanges and in the over - the - counter market . examples include options , swaptions , binary options and asian options . 
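For reference, a minimal sketch of the plain European estimate of eq. ([eq_europt_mc]) under a lognormal diffusion; the constant rate and volatility, the choice of a put payoff, and the standard-error report are illustrative assumptions. Only the terminal value is needed here, so it is sampled directly.

```python
import numpy as np

def european_put_mc(s0, strike, r, sigma, T, n_paths, seed=0):
    """Estimate a European put price as the average of discounted payoffs at
    expiry over simulated risk-neutral paths (eq. (eq_europt_mc))."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    s_T = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    payoff = np.maximum(strike - s_T, 0.0)
    price = np.exp(-r * T) * payoff.mean()
    std_error = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)
    return price, std_error

print(european_put_mc(s0=50.0, strike=50.0, r=0.05, sigma=0.3, T=0.5, n_paths=100_000))
```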
it has been shown by roll ( 1977 ) , geske ( 1979 ) , and whaley ( 1981 ) that exercise of _ call _ options which give the owner the right to purchase some underlying asset at the agreed - on strike price is usually unfavorable before expiration date , except close to ex - dividend dates . the situation is much less clear - cut in case of _ put _ options which give the owner the right to sell the asset at the strike price . in these situations the ` naive ' monte carlo is encountering unsurmountable difficulties . in order to decide whether it is more favorable at some intermediate time to hold or exercise the option the owner needs to compare the expected payoff in the two cases . the maximum of the two forms the option value at that time which after discounting and taking the expectation value leads to the following expression for the option at closing date : the angular brackets denote the expectation value with respect to the ( risk - free ) probability measure . the path - dependent times are the so - called optimum stopping times which may be any of the decision dates . a single path provides clearly insufficient information to evaluate the option value in case of non - exercise at any of the decision dates . the insufficiency of the naive monte carlo method to deal with the optimum stopping problem has lead some authors like hull ( 1993 ) to the claim in the literature that monte carlo simulation can only be used for european - style options " . on the other side , evaluation of american - style options via monte carlo simulation has found already some consideration in the literature as reviewed boyle , broadie , and glassermann ( 1997 ) . their common denominator is a ` clever ' estimate of the option prices on the decision dates by bundling subsets of paths . these authors noted that the suggested algorithms can not be considered satisfactory yet . either some approximations may lead to uncontroled errors affecting the simulation results or the required large computational effort limits their applicability , e.g. to a small number of exercise dates . in this paper we are suggesting a completely different strategy to calculate american - style put and call option prices via monte carlo simulation . during the sampling of the paths we do _ not _ attempt to estimate the expectation of the payoff if the owner continues to hold the option . instead , we use the sampled paths to evaluate the boundary between the early - exercise region and the hold region in the space of variables entering into the option contract . the location of this boundary is the crucial piece of information whose knowledge allows the straightforward use of the monte carlo procedure for option price estimation . we treat the valuation of american options as an _ optimization problem _ of a certain payoff function which depends on the set of sampled paths . this function depends also on the exercise policy , i.e. the boundary between the early - exercise region and the complementary hold region ( yes - no boundary in the following ) . maximation of the payoff function provides an estimator of the yes - no boundary which gets arbitrarily close to the true location of the boundary with increasing number of sampled paths . for the plain vanilla put and call options , the boundary is rather simple at any exercise date , a point for one state variable , a line for a two - dimensional space of state variables and so on . 
after the boundary has been estimated the price estimation proceeds as for european options , because the option prices for points on the boundary are known . the only difference to price estimation of european options as in eq . ( [ eq_europt_mc ] ) is that concerning paths crossing the yes - no boundary each pathwise option value is discounted starting from the point at which the path crosses the boundary for the first time : time denotes the earliest time at which the -th path crosses the yes - no boundary ( if at all ) . otherwise equals the time until expiration of the option ( ) . is the agreed - on payoff from exercising the option at time under the assumption that path represents the realized evolution of the underlying state variables . here we have tacitly assumed that the starting point lies in the no region . of course , the algorithm is able to handle the other possibility immediate exercise as well . in this case calculation of the option price is trivial , however . in a later section of this paper we will demonstrate that the novel monte carlo algorithm achieves an accuracy in the determination of american option values which compares well with corresponding calculations for european options . we compare the results for a spectrum of standard american puts with results from binomial tree calculations . this method originally introduced by cox , ross , rubinstein ( 1979 ) is widely used for option valuation . here we assume an ideal wiener - type stochastic process for a single state variable , because a binomial tree is well suited to provide very accurate results in this situation . of course , this comparison serves mostly illustrative purposes to show that the method works at finite . lateron , we are going to present numerical results for _ path - dependent _ american options . we consider two cases , puts on either the geometric or the arithmetic average over the lifetime of the option . assuming again an underlying ideal wiener process we may compare the monte carlo estimations to results from binomial tree calculations . while options on the geometric average can be priced by the tree method to arbitrary accuracy no such tree method is available for options contingent on the arithmetic average value . some approximative method has been suggested to express the latter option prices by the corresponding prices in case of geometric averaging . therefore we will compare the approximative formula to our more accurate procedure . in general , it is envisioned that the novel monte carlo approach and some variants will be applied for valuation problems which can not or not easily be tackled by other methods . such problems encompass stochastic processes with non - markov properties , stochastic volatilities , multi - dimensional state variables and other path - dependent american options .
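Once the yes-no boundary has been estimated, the price estimate of eq. ([eq_amopt_mc]) reduces to discounting each path from its first crossing into the exercise region. A sketch for a plain vanilla put follows; the convention "exercise when the path value falls to or below the boundary", and the zero contribution of paths that never enter the exercise region, are spelled out explicitly as assumptions.

```python
import numpy as np

def american_put_mc_price(paths, boundary, strike, r, dt):
    """Average of pathwise payoffs discounted from the first crossing of the
    yes-no boundary (eq. (eq_amopt_mc)).  paths has shape (n_paths, n_steps + 1),
    boundary[j] is the estimated exercise point at time step j (equal to the
    strike at expiry), and a path that never enters the exercise region
    contributes zero."""
    n_paths, n_times = paths.shape
    total = 0.0
    for p in range(n_paths):
        for j in range(n_times):
            if paths[p, j] <= boundary[j]:            # first entry into the yes region
                total += np.exp(-r * j * dt) * max(strike - paths[p, j], 0.0)
                break
    return total / n_paths
```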
influenza epidemics are observed around the world during the wintertime and with a strong seasonal component in temperate regions .influenza is a disease caused by the influenza virus , an rna virus belonging to the orthomyxoviridae .many features are common with those of the paramyxovirus infections of the acute upper respiratory tract. typical symptoms of the disease are characterized by fever , myalgia , severe malaise , non - productive cough , and sore throats .the disease spreads when an infected individual coughs or sneezes and sends the virus into the air , and other susceptible individuals inhale the virus .the virus is also believed to be transmitted when a person touches a surface that is contaminated with the virus ( _ e.g. _ door knob , etc . ) and then touches the nose or eyes .infected individuals can transmit the virus almost within a day following infection ( _ i.e. _ latent period ) .although it is generally believed that infected individuals can pass the virus for 3 - 7 days following symptom onset , there is some uncertainty on the duration of the infectious period .the generation time ( _ i.e. _ sum of latent and infectious periods ) for influenza , reported and assumed in the literature , ranges from 3 days to about 6 days .individuals that are infected with influenza are believed to become permanently immune against the specific virus strain .hence , the virus is able to persist in the human population through relatively minor ( single point ) mutations in the virus composition known as drifts .influenza ( sub)types a / h3n2 , a / h1n1 and b are currently co - circulating in the human population .major changes in the virus composition via recombination or gene reassortment processes ( known as genetic shifts ) can lead to the emergence of novel influenza viruses with the potential of generating dramatic morbidity and mortality levels around the world .the 1918 - 19 influenza pandemic known as the * spanish influenza * caused by the influenza virus a ( h1n1 ) has been the most devastating in recent history with estimated worldwide mortality ranging from 20 to 100 million deaths with a case fatality of - percent .the worldwide 1918 influenza pandemic spread in three waves starting from midwestern united states in the spring of 1918 .the deadly second wave began in late august probably in france while the third wave is generally considered as part of normal more scattered winter outbreaks similar to those observed after the 1889/90 pandemic .subsequent pandemics during the 20th century are attributed to subtyes a ( h2n2 ) from 1957 - 58 ( asian influenza ) and a ( h3n2 ) in 1968 ( hong kong influenza ) .the ability to quickly detect and institute control efforts at the early stage of an influenza pandemic is directly linked to the final levels of morbidity and mortality in the population . to appropriately assess the disaster size of a probable future pandemic, we have to quantify the transmission potential ( and its associated uncertainty ) .although it is difficult to directly measure the transmissibility of a future pandemic , historical epidemiologic data is readily available from previous pandemics , and as a reference quantity for future pandemic planning , mathematical and statistical analyses of the historical data can offer various insights .in particular , because many historical records tend to document only the temporal distribution of cases or deaths ( _ i.e. 
_ epidemic curve ) , we modelers have faced with a difficult need to clarify the mechansms of the spread of influenza using such time - evolution data alone . in this paper, we review a number of mathematical and statistical methods for the estimation of the transmission potential of pandemic influenza , focusing on theoretical techniques to maximize the utility of the temporal distribution of influenza cases .the methods that have been incorporated in this review include the applications of epidemiologically structured epidemic models , explicitly duration - structured epidemic system , and stochastic processes ( _ i.e. _ branching and counting processes ) .whereas this review does not cover the spread of influenza in space , spatial heterogeneity in transmission and the growing interest in the role of contact network are briefly discussed as the future challenge .the basic reproduction number , ( pronounced as _ r nought _ ) , is a key quantity used to estimate transmissibility of infectious diseases .theoretically , is defined as the average number of secondary cases generated by a single primary case during its entire period of infectiousness in a completely susceptible population .as the epidemic progresses , the number of susceptible individuals is decreased due to infection , and the reproduction number decays following where and are , respectively , the number of susceptible individuals at time and before the epidemic starts ; the latter is equivalent to the total population size given that all individuals are susceptible before the beginning of an epidemic .clearly , this definition only applies to ( novel ) emerging infectious diseases ( _ e.g. _ the epidemic of severe acute respiratory syndrome ( sars ) from 2002 - 3 ) or re - emerging infectious diseases that had not circulated in the population in question for long enough to allow for residual immunity in the population to disappear due to births and deaths .+ the reproduction number is directly related to the type and intensity of interventions necessary to control an epidemic since the objective is to make as soon as possible . to achieve ,one or a combination of control strategies may be implemented .for example , one of the best known uses of is in determining the critical coverage of immunization required to eradicate a disease in a randomly mixing population .that is , when vaccine is available against a disease in question , it is of interest to estimate the critical proportion of the population that needs to be vaccinated ( _ i.e. _ vaccination coverage ) in order to attain .for example , in the u.s prior to 1963 , a vaccine against measles was not available and hence recurrent epidemics of measles were observed with approximately million cases per year and a mean of deaths .the introduction of the vaccine in the u.s . reduced the incidence by 98 percent .+ the critical vaccination coverage , ( in a randomly mixing population ) can be estimated from the of the disease in question as follows : + where is the efficacy ( _ i.e. _ direct effectiveness ) of vaccination . given in ( [ eqn_intro1 ] ) suggests that the disease could be eradicated even when all susceptible individuals are not vaccinated .the protection conferred to the population by achieving a critical vaccination coverage is known as * herd immunity * . 
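As a worked illustration of the critical coverage formula referred to in ( [ eqn_intro1 ] ), which in its standard form reads p_c = (1 - 1/R_0)/epsilon with epsilon the vaccine efficacy, the short snippet below evaluates it for two hypothetical parameter combinations (the numbers are illustrative, not estimates taken from the text).

```python
def critical_vaccination_coverage(R0: float, efficacy: float = 1.0) -> float:
    """Critical proportion of the population to vaccinate,
    p_c = (1 - 1/R0) / efficacy, above which the reproduction number falls
    below one in a randomly mixing population. A value above 1 means that
    vaccination alone cannot eliminate transmission."""
    if R0 <= 1.0:
        return 0.0            # the infection cannot invade even without vaccination
    return (1.0 - 1.0 / R0) / efficacy

# hypothetical examples: a measles-like R0 with a good vaccine, and a
# moderately transmissible infection with an imperfect vaccine
print(critical_vaccination_coverage(R0=15.0, efficacy=0.95))   # ~0.98
print(critical_vaccination_coverage(R0=2.0,  efficacy=0.80))   # ~0.63
```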
+ a brief history of the theoretical developments on the basic reproduction number and its analytical computation via epidemic modeling is given elsewhere .the mathematical definition and calculation of using next - generation arguments is described initially by odo diekmann and his colleagues , where is the dominant eigenvalue of the resulting next generation matrix .further elaborations and reviews can be found elsewhere .classically , rather than the threshold phenomena , was used to suggest the _ severity _ of an epidemic , because the proportion of those experiencing infection at the end of an epidemic depends only on ( see section 3 ) .+ statistical methods to quantitatively estimate have been reviewed by klaus dietz .depending on the characteristics of data and underlying assumptions of the models , can be estimated using various different approaches .in addition to the final size equation , of an epidemic of newly emerging disease can be estimated from the * intrinsic growth rate * , which is also referred to as the _ rate of natural increase _ , suggesting the natural growth rate of infected individuals in a fully susceptible population ( discussed in section 4 ) . moreover , for simple epidemic models with relatively few parameters , can be estimated with other unobservable quantities by rigorous curve fitting of model equations to the observed epidemic data ( discussed in section 3) . not only but also can be estimated from the temporal distribution of infectious diseases , reconstructing the transmission network or inferring the time - inhomogeneous number of secondary transmissions .+ to estimate the basic reproduction number of endemic diseases , different approaches are taken .one would need first to carry out serological surveys to quantify the fraction of the population that is effectively protected against infection ( _ i.e. _ age- and/or time - specific proportion of those possessing acquired immunity needs to be estimated ) . through this effort ,the * force of infection * , the rate at which susceptible individuals get infected , is estimated .for example , this is the case for rubella , mumps and measles ( that are still circulating in some regions of the world even when high effective vaccination coverage is achieved ) .although the estimation of for endemic diseases is out of the scope of this review , methodological details and the applications to estimate the force of infection and can be found elsewhere .+ in practice , the reproduction number denoted simply by and defined as the number of secondary cases generated by a primary infectious cases in a partially protected population might be useful . can also be estimated from the initial growth phase of an epidemic in such a partially immunized population . in a randomly mixing population ,the relationship between the basic reproduction number ( ) and the reproduction number ( ) is given by where is the proportion of the population that is effectively protected against infection ( in the beginning of an epidemic ) .besides , for many recurrent infectious diseases including seasonal influenza , estimating the background immunity in the population is extremely difficult due to cross - immunity of antigenically - related influenza strains and vaccination campaigns . + with reagard to seasonal influenza , the reproduction number ( ) over the last 3 decades has been estimated at ( se ) in the united states , france , and australia with an overall range of . 
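The next-generation definition mentioned above, with R_0 the dominant eigenvalue of the next generation matrix, is straightforward to evaluate numerically once the matrix has been specified. The two-group matrix in the sketch below is entirely hypothetical and serves only to illustrate the computation.

```python
import numpy as np

# hypothetical next generation matrix for two sub-populations:
# entry K[i, j] = mean number of secondary cases arising in group i from
# one typical infected individual of group j, in a fully susceptible population
K = np.array([[1.2, 0.4],
              [0.3, 0.8]])

# R0 is the dominant eigenvalue (spectral radius) of the next generation matrix
R0 = max(abs(np.linalg.eigvals(K)))
print(f"R0 = {R0:.3f}")    # 1.4 for this illustrative matrix
```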
an estimate of has been reported for a single a / h3n2 season in france , and some estimates have been reported in the range - for several consecutive influenza seasons in england and wales .a particularly high estimate of has been suggested for the 1951 influenza epidemic in england and canada .+ because influenza pandemics such as the spanish flu from 1918 - 19 are associated to the emergence of novel influenza strains to which most of the population is susceptible , it might be reasonable to assume that the reproduction number .previous studies have estimated that of the 1918 - 19 influenza pandemic ranged between and depending on the specific location and pandemic wave considered , type of data , estimation method , and level of spatial aggregation , which has ranged from small towns to entire nations with several million inhabitants .table [ tablewaves ] lists estimates of in recent studies .the variability of estimates suggests that local factors , including geographic and demographic conditions , could play an important role in disease spread . in the following sections , we review how these estimates are obtained and how we shall interpret the estimates , starting from a simple structured epidemic model proposed in 1927mathematical models provide a unique way to analyze the transmission dynamics and study various different scenarios associated to the spread of communicable diseases in population(s ) .the history of the mathematical modeling of infectious diseases greatly remounts to the study of sir ronald ross in 1911 who invented a classic malaria model and also discovered the mosquito - borne transmission mechanisms of malaria . employing a mass action principle for the spread of malaria ,ross explored the effects of controling the mosquito population using simple mathematical models .following his effort , kermack and mckendrick introduced a classical sir ( susceptible - infectious - removed ) epidemic model in 1927 , which is most frequently utilized in the present day , given by the following system of nonlinear ordinary differential equations ( odes ) : where denotes susceptible individuals at time ; , infected ( assumed infectious ) individuals at time ; and , recovered ( assumed permanently immune ) individuals at time ; is the transmission rate ; the recovery rate ; and is the total population size which is assumed constant for a closed population ( _ i.e. _ a population without immigration and emmigration ) .susceptible individuals in contact with the virus enter the infectious class ( ) at the rate .that is , homogeneous mixing between individuals is assumed . + the basic reproduction number , , for the epidemic system ( [ eq1 ] ) is given by the product of the transmission rate and the mean infectious period .that is : classically , has been known as a quantity to suggest * severity * of an epidemic .indeed , analytical expression of in ( [ eq_r0 ] ) is derived simply by solving the above system ( [ eq1 ] ) .replacing in the right hand side of by ( ) , we get integrating both sizes of ( [ fsize1 ] ) from 0 to infinity , since , and because we assume and , equation ( [ fsize2 ] ) can be rewritten as in the above equation ( [ fsize3 ] ) , * final size * , _ i.e. 
_ , the proportion of those experiencing infection among a total number of individuals in a community following a large scale epidemic , is defined as .that is , therefore , the following * final size equation * of an autonomous sir ( or seir ) model is obtained : equation ( [ fsize5 ] ) can be analytically derived using both deterministic ( models governed by odes or partial differential equations ( pdes)) and stochastic models . despite the usefulness of ( [ eq1 ] ) , sir assumptions given by odes are not always directly applicable to real data .one of the reasons include that there is no disease where an infected individual can cause secondary transmission immediately after his / her infection. + accordingly , we have used slightly extended compartmental models in the previous studies to describe the transmission dynamics of the 1918 - 19 influenza pandemic and estimate the reproduction number .we now describe two different seir ( susceptible - exposed - infectious - removed ) models that have been used to estimate the reproduction number .the first model is the simple seir model , and the second model accounts for asymptomatic and hospitalized individuals .+ the simple seir model classifies individuals as susceptible ( s ) , exposed ( e ) , infectious ( i ) , recovered ( r ) , and dead ( d ) .susceptible individuals in contact with the virus enter the exposed class at the rate , where is the transmission rate , is the number of infectious individuals at time and is the total population for any .the entire population is assumed to be susceptible at the beginning of the epidemic .individuals in latent period ( e ) progress to the infectious class at the rate ( where suggests the mean latent period ) .we assume homogeneous mixing ( _ i.e. _ random mixing ) between individuals and , therefore , the fraction is the probability of a random contact with an infectious individual in a population of size .since we assume that the time - scale of the epidemic is much faster than characteristic times for demographic processes ( natural birth and death ) , background demographic processes are not included in the model .infectious individuals either recover or die from influenza at the mean rates and , respectively .recovered individuals are assumed protected for the duration of the outbreak .the mortality rate is given by [ cfp/(1-cfp ) ] , where cfp is the mean case fatality proportion .the transmission process can be modeled using the system of nonlinear differential equations : where is the cumulative number of infectious individuals .the basic reproduction number of the above system ( [ eqn12 ] ) is given by the product of the mean transmission rate and the mean infectious period , .+ a more complex seir model ( figure [ figdiagram ] ) classifies individuals as susceptible ( ) , exposed ( ) , clinically ill and infectious ( ) , asymptomatic and partially infectious ( ) , diagnosed and reported ( ) , recovered ( ) , and death ( ) .the birth and natural death rates are assumed to have a common rate ( 60-year life expectancy as in ) .the entire population is assumed susceptible at the beginning of the pandemic wave .susceptible individuals in contact with the virus progress to the latent class at the rate where is the transmission rate , and is a reduction factor in the transmissibility of the asymptomatic class ( ) . 
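The final size relation derived above, z = 1 - exp(-R_0 z), can be checked against a direct integration of the SIR system ( [ eq1 ] ). The sketch below uses illustrative parameter values (beta = 0.5 and gamma = 0.25 per day, so R_0 = 2); it also shows the converse use of the relation, recovering R_0 = -ln(1 - z)/z from an observed final size, which is the final-size-based estimation referred to later in this review.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

beta, gamma, N = 0.5, 0.25, 1.0e5        # illustrative rates (per day) and population
R0 = beta / gamma                         # basic reproduction number = 2

def sir(t, y):
    S, I, R = y
    new_inf = beta * S * I / N
    return [-new_inf, new_inf - gamma * I, gamma * I]

sol = solve_ivp(sir, (0, 400), [N - 1.0, 1.0, 0.0], rtol=1e-8, atol=1e-8)
z_sim = sol.y[2, -1] / N                  # simulated attack rate

# final size equation z = 1 - exp(-R0 * z), solved for the non-trivial root
z = brentq(lambda x: x - (1.0 - np.exp(-R0 * x)), 1e-9, 1.0 - 1e-12)
print(f"simulated attack rate     : {z_sim:.4f}")
print(f"root of final size eq.    : {z:.4f}")        # ~0.7968 for R0 = 2

# conversely, an observed final size gives the estimate R0 = -ln(1 - z)/z
print(f"R0 recovered from final z : {-np.log(1 - z_sim) / z_sim:.3f}")
```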
since there is no explicit evidence with which to estimate the effectiveness of public health interventions , and because a high burden was placed upon the sanitary and medical sectors , diagnosed / hospitalized individuals ( ) are assumed to be equally infectious . although it is difficult to evaluate explicitly the difference in infectiousness between the general community and the hospital , we make this rough assumption because 78 percent of the nurses of the san francisco hospital contracted influenza . a more rigorous assumption would require either statistical analysis of more detailed time - series data or an epidemiological comparison of specific groups by contact frequency . the total population size at time is given by . we assume homogeneous mixing of the population and , therefore , the fraction is the probability of a random contact with an infectious individual . a proportion of latent individuals progress to the clinically infectious class ( ) at the rate , while the remaining ( ) progress to the asymptomatic , partially infectious class ( ) at the same rate ( fixed to 1/1.9 days ) . asymptomatic cases progress to the recovered class at the rate . clinically infectious individuals ( class ) are diagnosed ( reported ) at the rate or recover without being diagnosed ( e.g. , mild infections , hospital refusals ) at the rate . diagnosed individuals recover at the rate or die at the rate . the mortality rate is adjusted according to the case fatality proportion ( cfp ) , such that . + the transmission process can be modeled using the following system of nonlinear differential equations : we assume that the cumulative number of influenza notifications , our observed epidemic data , is given by . seven model parameters ( , , , , , , ) are estimated from the epidemic curve by least squares fitting , as explained below .
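Before turning to the details of the fitting procedure in the next passage, a deliberately reduced sketch may help fix ideas: instead of the seven-parameter model above, a two-parameter SEIR curve is fitted by least squares to a synthetic cumulative-notification series, and the parametric bootstrap described below (Poisson resampling of the daily increments followed by refitting) is used to attach uncertainty to R_0, here equal to beta/gamma. All numerical values are illustrative assumptions; with 100 bootstrap replicates the script takes on the order of a minute.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
N, kappa = 1.0e5, 1.0 / 1.9            # population size, 1/(mean latent period)
days = np.arange(0.0, 101.0)

def cumulative_cases(params):
    """Cumulative incidence C(t) of a simple SEIR model at the observation days."""
    beta, gamma = params
    def rhs(t, y):
        S, E, I, R, C = y
        inf = beta * S * I / N
        return [-inf, inf - kappa * E, kappa * E - gamma * I, gamma * I, kappa * E]
    sol = solve_ivp(rhs, (days[0], days[-1]), [N - 10, 0, 10, 0, 10],
                    t_eval=days, rtol=1e-7)
    return sol.y[4]

# synthetic "observed" curve: daily increments drawn from a Poisson distribution
# around a model with true beta = 0.5, gamma = 0.25 (so true R0 = 2)
true_inc = np.diff(cumulative_cases([0.5, 0.25]), prepend=0.0)
obs_inc = rng.poisson(np.clip(true_inc, 0.0, None))
obs = np.cumsum(obs_inc)

def fit_R0(curve):
    res = least_squares(lambda p: cumulative_cases(p) - curve,
                        x0=[0.4, 0.2], bounds=([0.01, 0.05], [5.0, 2.0]))
    beta_hat, gamma_hat = res.x
    return beta_hat / gamma_hat        # R0 = transmission rate x mean infectious period

print("point estimate of R0:", round(fit_R0(obs), 2))

# parametric bootstrap: resample each daily increment from a Poisson distribution
# centred on the observed increment, rebuild the cumulative curve, and refit
boot = [fit_R0(np.cumsum(rng.poisson(obs_inc))) for _ in range(100)]
print("approximate 95% CI :", np.round(np.percentile(boot, [2.5, 97.5]), 2))
```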
the reproduction number for model ( [ eqn2 ] )is given by ( see ) : and the clinical reporting proportion is given by : in the simplest manner , model parameters can be estimated via least - square fitting of the model solution to the observed data .that is , one looks for the set of parameters whose model solution best fits the epidemic data by minimizing the sum of the squared differences between the observed data and the model solution .that is , we minimize : the standard deviation of the parameters can be estimated by computing the asymptotic variance - covariance matrix of the least - squares estimate by : which can be estimated by where is the total number of observations , is the estimated variance , and are numerical derivatives of .estimates of can be obtained by substituting the corresponding individual parameter estimates into an analytical formula of .further , using the delta method , we can derive an expression for the variance of the estimated basic reproduction number .an expression for the variance of for the simple seir model ( equations [ eqn12 ] ) is given by : this expression depends on the variance ( denoted by ) of the individual parameter estimates as well as their covariance ( denoted by ) .another method to generate uncertainty bounds on the reproduction number is generating bootstrap confidence intervals by generating sets of realizations of the best - fit curve .each realization of the cumulative number of case notifications ( , , , ) is generated as follows : for each observation for , , , days generate a new observation for ( ) that is sampled from a _ poisson _ distribution with mean : ( the daily increment in from day to day ) .the corresponding realization of the cumulative number of influenza notifications is given by where , , , , .the reproduction number was then estimated from each of simulated epidemic curves to generate a distribution of estimates from which simple statistics can be computed including confidence intervals .these statistics need to be interpreted with caution .for example , confidence intervals for derived from our bootstrap sample of should be interpreted as containing of future estimates when the same assumptions are made and the only noise source is observation error .it is tempting but incorrect to interpret these confidence intervals as containing the _ true _ parameters with probability .figure [ figr - seijr ] shows the temporal distributions of the reproduction number and the proportion of the clinical reporting obtained by the bootstrap method after fitting the complex seir epidemic model to the initial phase of the fall influenza wave using 17 epidemic days of the spanish flu pandemic in san francisco , california .in addition to the estimation of , it is of practical importance to evaluate time - dependent variations in the transmission potential .explanation of the time course of an epidemic can be partly achieved by estimating the effective reproduction number , , defined as the actual average number of secondary cases per primary case at time ( for ) should not be confused with the number of removed individuals using the same notation . in the following arguments of this paper , denotes the effective reproduction number . ] .although effective interventions against spanish influenza may have been limited in the early 20th century , it is plausible that the contact frequency leading to infection varied with time owing to the huge number of deaths and dissemination of information through local media ( _ e.g. 
_ newspapers ). shows time - dependent variation with a decline in susceptible individuals ( intrinsic factors ) and with the implementation of control measures ( extrinsic factors ) .if , it suggests that the epidemic is in decline and may be regarded as being _ under control _ at time ( vice versa , if ) . to appropriately understand the theoretical concept of ,let us firstly consider an explicitly infection - age structured epidemic model .whereas kermack - mckendrick model governed by odes ( i.e. sir and seir models as discussed above ) has been well - known , kermack and mckendrick had actually proposed an infection - age structured model in their initial publication in 1927 , the mathematical importance of which was recognized only after 1970s .let us denote the numbers of susceptible and recovered individuals by and .further , let be the density of infectious individuals at time and * infection - age * ( _ i.e. _ time since infection ) .the model is given by where is referred to as the force of infection ( foi ) ( _ i.e. _ as discussed in section 2 , foi is defined as the rate at which susceptible individuals get infected ) which is given by : and is the rate of recovery at infection - age . it should be noted that the above model has not taken into account the background host demography ( _ i.e. _ birth and death ) . in a closed population ,the total population size is thus given by : it should also be noted that , although is referred to as _ density _ , it is not meant to be a normalized density ( _ i.e. _ integral of over and does not sum up to 1 ) .rather , we use density to mathematically refer to the absolute frequency in the infection - age space .+ the system ( [ eqn_hn1 ] ) can be reasonably integrated where and suggests the density of initially infected individuals at the beginning of an epidemic . in the following arguments , we call as * incidence of infection* ( _ i.e. _ new infections at a given point of time ) .it is not difficult to derive from ( [ eqn_hn1])- ( [ eqn_hn5 ] ) .thus , the subequation of in system ( [ eqn_hn1 ] ) is rewritten as \quad\ ] ] taking into account the initial condition in ( [ eqn_hn4 ] ) , equation ( [ eqn_hn6 ] ) is rewritten as \left [ g(t)+\int_{0}^{t } \psi(\tau)j(t-\tau)\,d\tau \right ] \quad\ ] ] where considering the initial invasion phase ( _ i.e. _ initial growth phase of an epidemic ) , we get a linearized equation the equation ( [ eqn_hn10 ] ) represents lotka s integral equation , where the basic reproduction number , , is given by thus , the epidemic will grow if and decline to extinction if .the above model can yield the same final size equation as seen in models governed by odes . +assuming that the infection - age distribution is stable , we get a simplified renewal equation where is the product of and .moreover , assuming that we observe an exponential growth of incidence during the initial phase ( i.e. where and are , respectively , a constant ( ) and the intrinsic growth rate ) , the following relationship must be met replacing in the right hand side of ( [ eqn_hn12 ] ) by ( [ eqn_hn13 ] ) , we get removing from both sides of ( [ eqn_hn14 ] ) , we get the lotka - euler characteristic equation: further , we consider a probability density of the * generation time * ( _ i.e. 
_ the time from infection of an individual to the infection of a secondary case by that individual ) , denoted by : using ( [ eqn_hn16 ] ) , the equation ( [ eqn_hn15 ] ) can be replaced by the equations ( [ eqn_hn13])-([eqn_hn17 ] ) are what wallinga and lipsitch discussed in a recent study , reasonably suggesting the relationship between the generation time and .accordingly , the estimator of using the intrinsic growth rate is given by : where is the moment generating function of the generation time distribution , given the intrinsic growth rate .is not used for equation ( [ eqn_hn18 ] ) and rather document ( [ eqn_hn18 ] ) as the estimator of .most likely , there are two reasons for this .first , we can not assure if all individuals are susceptible to pandemic influenza before the start of epidemic ( as discussed in section 2 ) .second , we can not assume that infection - age distribution is stable during the initial growth phase , which is highlighted in ( [ eqn_hn4 ] ) .thus , it should be remembered that the above discussion is mathematically tight in theory , but there are certain number of assumptions to apply the concept to observed data . since writing alone is always confusing ( as it is unclear if is concerned with time or immunity status ) , here we use instead . ]equation ( [ eqn_hn18 ] ) significantly improved the issue of estimating using the intrinsic growth rate alone , because ( [ eqn_hn18 ] ) permits validating estimates of by various different distributional assumptions for .the issue of assuming explicit distributions for latent and infectious periods has been highlighted in recent studies , and indeed , this point is in part addressed by ( [ eqn_hn18 ] ) , because the convolution of latent and infectious periods yields .moreover , since the assumed lengths of generation time most likely yielded different estimates of for spanish influenza by different studies , equation ( [ eqn_hn18 ] ) highlights a critical need to clarify the generation time distribution using observed data .+ here we briefly show a numerical example .figure [ hn_fig1 ] shows the daily number of influenza deaths during spanish influenza pandemic in a suburb of zurich , 1918 . since the non - linear phase is difficult to analyze , our interest to estimate with this dataset is limited to the initial growth phase only ( right panel in fig [ hn_fig1 ] ) .even though the data represent deaths over time ( _ i.e. _ not infection events with time ) , we can directly extract the same intrinsic growth rate as practised with onset data , assuming that death data are a good proxy for morbidity data ( see our discussions in section 6 ) .assuming exponential growth in deaths as shown in ( [ eqn_hn13 ] ) , the intrinsic growth rate is estimated to be 0.16 per day .supposing that is arbitrarily assumed to follow a gamma distribution with mean and coefficient of variation , , is given by although there is no concensus regarding the generation time of spanish influenza , we assume it ranges from 2 - 5 days . assuming further that , is estimated to range from 1.36 ( for day ) to 2.07 ( for days ) . 
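The estimator R = 1/M(-r) is easy to apply once a distribution is assumed for the generation time. For a gamma distribution with mean T_g and shape k (coefficient of variation 1/sqrt(k)), the moment generating function gives R = (1 + r T_g / k)^k. The snippet below reproduces the order of magnitude of the Zurich example (r = 0.16 per day, T_g between 2 and 5 days); the coefficient of variation of 0.5 is an assumption made here for illustration, since its value is not stated in the passage above.

```python
def R_from_growth_rate(r: float, mean_gt: float, cv: float) -> float:
    """Estimator R = 1 / M(-r) for a gamma-distributed generation time with
    mean `mean_gt` and coefficient of variation `cv`:
    shape k = 1/cv**2, scale theta = mean_gt/k, M(-r) = (1 + r*theta)**(-k),
    hence R = (1 + r*mean_gt/k)**k."""
    k = 1.0 / cv**2
    return (1.0 + r * mean_gt / k) ** k

r = 0.16                               # intrinsic growth rate per day (Zurich example)
for Tg in (2.0, 3.0, 4.0, 5.0):
    print(Tg, round(R_from_growth_rate(r, Tg, cv=0.5), 2))
# roughly 1.36 at Tg = 2 days up to about 2.07 at Tg = 5 days
```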
in the following ,let us consider the non - linear phase of an epidemic .derivation of given by ( [ eqn_hn18 ] ) assumes an exponential growth which is applicable only during the very initial phase of an epidemic ( or , when the transmission is stationary over time ) , and thus , it is of practical importance to widen the utility of above - described renewal equations in order to appropriately interpret the time - course of an influenza pandemic .let us explicitly account for the depletion of susceptible individuals , as we deal with an estimation issue with time - inhomogeneous assumptions ( i.e. non - linear phase ) . adopting the * mass action * assumption , we get : where should be interpreted as the reproductive power at time and infection - age at which an infected individual generates secondary cases .we refer to the latter part of equation ( [ eqn_hn182 ] ) as a non - autonomous renewal equation , where the number of new infection at time is proportional to the number of infectious individuals ( as assumed in the renewal equation in the initial phase ) .+ using equation ( [ eqn_hn182 ] ) , the effective reproduction number , ( _ i.e. _ instantaneous reproduction number at calendar time ) is defined as : following ( [ eqn_hn19 ] ) , we can immediately see that with an autonomous assumption ( _ i.e. _ where contact and recovery rates do not vary with time ) is given by : which is shown in . in practical terms ,equation ( [ eqn_hn20 ] ) suggests that time - varying decrease in transmission potential as well as decline in the epidemic reflects only depletion of susceptible individuals .this corresponds to a classic assumption of the kermack and mckendrick model .+ however , as we discussed in the beginning of this section , we postulate that human contact behaviors ( and other extrinsic factors ) modifies the dynamics of pandemic influenza , assuming that the decline in incidence does reflect not only depletion of susceptibles but also various extrinsic dynamics ( _ e.g. _ isolation , quarantine and closure of public buildings ) .thus , instead of the assumption in ( [ eqn_hn182 ] ) , we shall assume time - inhomogeneous ; _ i.e. _ to describe .+ to derive simple estimator of , it is convenient to assume separation of variables for ( implicitly assuming that the relative infectiousness to infection - age is independent of calendar time ) . under this assumption , is rewritten as the product of two functions and : arbitrarily assuming a normalized density for , _i.e. _ , then , it is easy to find that suggesting that the function is equivalent to the effective reproduction number .another function represents the density of infection events as a function of infection - age .accordingly , we can immediately see that is exactly the same as , the generation time distribution .that is , the above arguments suggest that ( _ i.e. _ the rate at which an infectious individual at calendar time and infection - age produces secondary transmission ) is decomposed as : inserting ( [ eqn_hn25 ] ) into ( [ eqn_hn21 ] ) yields an estimator of : the above equation ( [ eqn_hn26 ] ) is exactly what was proposed in applications to sars and foot and mouth disease ; _ i.e. _ discretizing ( [ eqn_hn26 ] ) to apply it to the daily incidence data ( _ i.e. 
_ using incident cases infected between time and time and descretized generation time distribution ) , was used as the estimator .however , it should be noted that the study in sars implicitly assumed that onset data at time reflects the above discussed infection event .that is , supposing that we observed onset cases reported between and , was calculated as where is the discretized * serial interval * which is defined as the time from onset of a primary case to onset of the secondary cases .the method permits reasonable transformation of an epidemic curve ( _ i.e. _ temporal distribution of case onset ) to the estimates of time - inhomogeneous reproduction number . employing the relative likelihood of case infected by case using the density function of serial interval ; _i.e. _ , using ( [ eqn_hn29 ] ) , expected value and variance of are given by the following where is the total number of reported case onsets at time . + in the present day , only by using the above described methods ( or similar concepts with similar assumptions ) , we can transform epidemic curves into and roughly assess the impact of control measures on an epidemic .however , whereas the equations ( [ eqn_hn27 ] ) and ( [ eqn_hn28 ] ) are similar in theory , we need to explicitly account for the difference between onset and infection event .in fact , when there are many asymptomatic infections and asymptomatic secondary transmissions , serial interval is not equivalent to the generation time , and thus , directly adopting the above methods would be inappropriate . since this point is particularly important in analyzing influenza data, we discuss this issue in section 6 .in the previous sections , we discussed several different methods to estimate either by ( i ) employing detailed curve fitting method assuming a structured epidemic model or ( ii ) using the intrinsic growth rate ( or doubling time ) . summarizing the above discussions, we believe that the readers should benefit from memorizing for the use of the intrinsic growth rate in estimating and remembering the final size equation suggesting the severity of an epidemic as the theoretical concept .indeed , estimator using either the intrinsic growth rate or final size has still continued to play an important role in discussing of pandemic influenza .+ however , it should be noted that deterministic models do not permit incorporating stochasticity explicitly ( _ e.g. _ standard error for is determined by measurement of errors alone ) , as the models argue only _ average number of secondary transmissions _ within the assumed transmission dynamics . that is , our arguments given above explore only the time - evolution of influenza spread inthe * mean field*. to address the issue of variation in secondary transmissions , full stochastic models are called for . 
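A minimal implementation of the onset-based estimator just described (attributing each case to earlier onset days in proportion to the discretized serial interval, as in the SARS application) is sketched below. The daily onset counts and the serial-interval weights are hypothetical; note also that estimates near the end of the series are biased downwards because later secondary cases have not yet been observed (right truncation).

```python
import numpy as np

def wallinga_teunis(onsets, serial_interval):
    """Case reproduction number R_t from daily onset counts: each case with
    onset on day t' is attributed to earlier days t in proportion to
    onsets[t] * w[t' - t], where w is the discretized serial interval
    (w[0] is treated as zero, i.e. no same-day transmission pairs)."""
    n = len(onsets)
    w = np.asarray(serial_interval, dtype=float)
    R = np.zeros(n)
    for t in range(n):
        total = 0.0
        for tp in range(t + 1, min(n, t + len(w))):
            # normalising constant: all possible infector days of cases on day tp
            denom = sum(onsets[tpp] * w[tp - tpp]
                        for tpp in range(max(0, tp - len(w) + 1), tp))
            if denom > 0:
                total += onsets[tp] * w[tp - t] / denom
        R[t] = total
    return R

# illustrative daily onset counts and a discretized serial interval (mean ~3 days)
onsets = np.array([2, 3, 5, 8, 13, 20, 28, 35, 38, 35, 28, 20, 12, 7, 4])
w = np.array([0.0, 0.2, 0.4, 0.25, 0.1, 0.05])    # hypothetical weights, sum to 1
print(np.round(wallinga_teunis(onsets, w), 2))
```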
+ from a viewpoint of data science , the discrete - time branching process , which is also referred to as galton - watson process , can reasonably assess individual heterogeneity in secondary transmissions .as we discussed the initial growth phase of the deterministic model , let us consider the same epidemic phase where we observe a geometric increase in the number of cases by generation .we denote the initial number of infected individuals by in generation 0 .then , during the first generation , cases are produced by secondary transmissions of .similarly , let be the number of infections in generation .the branching process of this type assumes that every infected individual has an independently and identically distributed stochastic random variable representing the number of secondary cases produced by case in generation ( ) , and that environmental stochasticity and immigration / emigration can be ignored .supposing that the pattern of secondary transmission follows a discrete probability distribution with secondary transmission(s ) ; _ i.e. _ , then , the expected number of secondary transmissions and the variance are given by in other words , the concept of probability distribution reflects * offspring distribution * in population ecology , and this permits explicit modeling of variations in secondary transmissions in infectious diseases .this approach is particularly important during the initial phase of an epidemic , because the number of infectious individuals is small in this stage , and thus , it is deemed essential to take into account demographic stochasticity , _i.e. _ , variation in the numbers of secondary transmissions by chance .indeed , the model has been applied to observed outbreak data where we observed the extinction before growing to a major epidemic .+ let us briefly discuss the variation in secondary transmissions and an estimation method of using the discrete - time branching process , deriving analytical expressions of the expected number of infected individuals in generation , and the variance .it is impossible to avoid using the probability generating function ( pgf ) to discuss the branching process .the above described characterize _ positive _ and _ discrete _ number of secondary transmissions , and thus , is a non - zero discrete random variable .the pgf of , is given by there are two basic properties concerning in relation to the epidemic process . first , is by definition the mean value of secondary transmissions ( equation ( [ eqn_hn203 ] ) ) and , thus given by .second , the probability that an infected individual does not cause any secondary transmissions pr(=0 ) is given by , which is useful for discussing threshold phenomena and extinction . if we note that ( i.e. only one index case ) , the galton - watson process has the following pgf identity : even when there are cases in generation 0 ( where ) , we just have to assume that there are different independent infection - trees and thus from the above discussions , the expected number of cases in generation , , and the variance is the process grows geometrically if , stays constant if , and decays geometrically if .these three cases are referred to as * supercritical * , * critical * , and * subcritical * , respectively . however , unlike the deterministic model , it should be remembered that critical process does not permit continued transmissions , and rather , the process becomes extinct almost surely ( i.e. 
probability of extinction given is one ) .+ mathematically , demographic stochasticity in transmission is represented by a poisson process , which has been practiced in the application of branching processes to epidemics . assuming that mean value of secondary transmissions is a constant , the conditional distribution of observing cases , given cases , follows a poisson distribution : \ ] ] supposing that we analyze influenza data documenting the generations of cases from 0 to in which we observed geometric growth , the likelihood of estimating is proportional to here we apply the above model to the spanish influenza data in zurich ( figure [ hn_fig1 ] ) .assuming that the generation time of length , , is given by the following delta function with the mean length 3 days , then the observed series of data can be grouped by generation ( , , , ... ): since we assumed exponential growth during the initial 16 days in the previous section , here we similarly assume a geometric increase up to the 6th generation . applying ( [ eqn_hn210 ] ) to the above data , maximum likelihood estimate of ( and the corresponding 95 percent confidence intervals ) is 1.51 ( 1.24 , 1.81 ) .the model is simple enough to estimate , and indeed , a slight extension of the discrete - time branching process has been employed to estimate as well as the proportion of undiagnosed cases in the analysis of sars outbreak data .+ it should be noted that the discrete - time branching process assumes homogeneous pattern of spread .a technical issue has arisen on this subject during the sars outbreak . usually , we observe some cases who produce an extraordinary number of secondary cases compared with other infected individuals , which are referred to as * superspreaders*. because of this , observed offspring distributions for directly transmitted diseases tend to be extremely skewed to the right .empirically , it has been suggested that poisson offspring distribution is sometimes insufficient to highlight the presence of superspreaders in epidemic modeling .for example , if non - zero discrete distribution of secondary cases follows a geometric distribution with mean , the pgf is given by a geometric distribution with mean : moreover , if the offspring distribution follows gamma distribution with mean and dispersion parameter , the pgf follows negative binomial distribution : we still do not know if pandemic influenza is also the case to warrant the skewed offspring distributions . to explicitly testif superspreading events frequently exist in influenza transmission , it is necessary to accumulate contact tracing data for this difficult disease , the cases of which often show flu - like symptoms only ( as discussed in section 1 ) . in addition, it should be noted that we can not attribute the skewed offspring distribution to the underlying contact network only . to date , there are two major reasons which can generate superspreaders : ( i ) those who experience very frequent contacts ( social superspreader ) or ( ii ) those who are suffering from high pathogen loads or those who can scatter the pathogen through the air such as the use of nebuliser in hospitals ( biological superspreader ) . from the offspring distributiononly , we can not distinguish these two mechanisms . with regard to the estimation of using final size, we briefly discuss another method based on a stochastic process . 
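A short sketch of the maximum likelihood estimation under the Poisson offspring assumption: for a chain N_0, N_1, ..., N_T with N_{n+1} given N_n distributed as Poisson(R N_n), the likelihood in ( [ eqn_hn210 ] ) is maximized by the ratio of the total offspring to the total parents, and an exact Poisson interval conditional on the parent total gives uncertainty bounds. The generation counts below are hypothetical (the Zurich counts themselves are not reproduced here); the interval construction is one standard choice, not necessarily the one used in the original analysis.

```python
import numpy as np
from scipy import stats

def estimate_R_poisson(gen_counts):
    """ML estimate of R for a Galton-Watson chain with Poisson offspring,
    where counts[n+1] | counts[n] ~ Poisson(R * counts[n]).
    MLE: sum(counts[1:]) / sum(counts[:-1]); 95% bounds from an exact
    Poisson interval for the offspring total, conditional on the parents."""
    counts = np.asarray(gen_counts, dtype=float)
    offspring, parents = counts[1:].sum(), counts[:-1].sum()
    R_hat = offspring / parents
    lo, hi = stats.chi2.ppf([0.025, 0.975],
                            [2 * offspring, 2 * (offspring + 1)]) / 2
    return R_hat, lo / parents, hi / parents

# hypothetical case counts by 3-day generation during the initial growth phase
generations = [3, 5, 8, 11, 17, 26, 39]
print(estimate_R_poisson(generations))
```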
as we discussed above , let , and be the numbers of susceptible , infectious and recovered individuals at time , respectively .further , let and denote the transmission rate and the mean duration of the infectious period , respectively .supposing that , the number of individuals who experienced infection between time 0 and time , is given by , the two processes and are increasing counting processes where the general epidemic is explained by : where denotes the -algebra generated by the history of the epidemic and ( where is the size of the susceptible population at time 0 ) .the latter is equivalent to assuming density - independent transmission ( _ i.e. _ also referred to as _ true mass action _ or frequency dependent assumption ) .based on equation ( [ eqn_hn215 ] ) , two zero - mean martingales are defined by : from the martingale theory , a zero - mean martingale is given by thus , the estimator is given by }{u(t ) } \\ & = & \dfrac{-ln(1-\tilde{p})}{u(t ) } \end{array}\ ] ] where is the observed final size ( ) at the end of the epidemic at time .furthermore , the variance of the zero - martingale is given by from the martingale central limit theorem , the estimator is approximately normally distributed in a major outbreak in a large community .the standard error is then consistently estimated by : ^{\dfrac{1}{2}}}{u(t ) } \\ & = & \dfrac{\left[\dfrac{n}{s(0)+\dfrac{1}{2 } } + \dfrac{n}{s(0)+\dfrac{1}{2}}-\hat{\theta}^2r(t ) \right]^{\dfrac{1}{2}}}{u(t ) } \end{array}\ ] ] consequently , the estimator and standard error of are given by : more detailed mathematical descriptions can be found elsewhere .+ here we show a numerical example .let us consider a large epidemic of equine influenza ( _ i.e. _ influenza in horses ) as our case study .in 1971 , a nationwide epidemic of equine-2 influenza a ( h3n8 ) was observed in japan .for example , in niigata racecourse , 580 influenza cases were diagnosed with influenza among a total of 640 susceptible horses .the final size is thus 90.6 percent ( 95 percent ci : 88.4 , 92.9 ) . from this data ,we calculate and its uncertainty bounds .+ using and total number of infected in equation ( [ eqn_hn218 ] ) , is estimated as 0.00408 .therefore , the estimate of is given by equation ( [ eqn_hn221 ] ) .moreover , from equation ( [ eqn_hn220 ] ) where and ( we assume one case was already infected at time ) , we obtain . here , is assumed to follow normal distribution .therefore , the 95 percent confidence interval for is given as =[2.44,2.76]$ ] .+ when applying the final size equation , it should be remembered that ( i ) we assume all individuals are initially susceptible ( in the above described model ) and ( ii ) we assume and are independent of time ( _ i.e. _ constant ) , and thus , that any extrinsic factors should not have influenced the course of the observed epidemic . in the above described models , we always assumed that the pattern of influenza transmission is homogeneous , which is clearly unrealistic to capture the transmission dynamics of influenza . since the last century, it has already been understood that the transmission dynamics are not sufficiently modeled by assuming homogeneous mixing . however , because more detailed data are lacking ( _ e.g. 
_ epidemic records of pandemic influenza with time , age and space ) , what we could offer has been mainly to extract the intrinsic growth rate from the initial exponential growth , and estimate using the estimator based on a model with the homogeneous mixing assumption .+ one line of addressing heterogeneous patterns of transmission using the observed data is separating household transmission from community transmission . in other words, it is of practical importance to distinguish between individual and group .from the beginning of explicit modeling of influenza , a method to separately estimate the transmission parameters has been proposed , which has been partly extended in a recent study or applied to further old data of pandemic influenza .indeed , an important aspect of this issue was highlighted in a recent study which compared estimates of between those having casual and close contacts . to estimate key parameters of household and community transmissions of influenza , or to simulate realistic patterns of influenza spread , such a consideration is fruitful .+ mathematically elaborating this concept , there are several publications which proposed the basis of analyzing household transmission data employing stochastic models .moreover , a rigorous study has been made to estimate parameters determining the intrinsic dynamics ( _ e.g. _ infectious period ) using household transmission data with time .+ future challenges on the estimation of include the application of such theories to the observed data with some extension .for example , as we discussed above , knowing the generation time would be crucial to elucidate a robust estimate of . however , we do not know if the generation time varies between close and casual contacts ; this should be the case , because , as long as the generation time is given by covolution of latent and infectious periods , close contact should lead to shorter generation time than casual contact . in future studies , influenza models may better to highlight the increasing importance of considering household transmission to estimate the transmission potential using the temporal distribution of infection events .except for our approach in section 3 , mathematical arguments given in this paper are not particularly special for influenza . in other words ,we modelers have employed similarly structured models which describe the population dynamics of other directly transmitted diseases , and such models are applicable not only for influenza but also for many viral diseases including measles , smallpox , chickenpox , rubella and so on . however , influenza has many different epidemiologic characteristics compared to other childhood viral diseases .for instance , following the previous efforts in influenza epidemiology and modeling , we should at least note the following : 1 .detailed mechanisms of immunity have yet to be clarified .since influenza virus has an wide antigenic diversity ( _ i.e. _ unlike other childhood viral diseases , antigenic stimulation is not monoclonal ) , this complicates our understanding in the fraction of immune individuals , cross - protection mechanisms and evolutionary dynamics .flu - like symptoms are too common , and thus , we cannnot explicitly distinguish influenza from other common viral infections without expensive laboratory tests for each case .because of this character , it is difficult to effectively implement usual public health measures ( _ e.g. _ contact tracing and isolation ) .3 . 
although explicit estimates are limited , a certain fraction of infected individuals does not exhibit any symptoms ( following infection ) .this complicates not only the eradication but also epidemiologic evaluations of vaccines and therapeutics .4 . looking into the details of the intrinsic dynamics , it appears recently that the generation time and infectious period are much shorter than what were believed previously .therefore , despite the relatively small estimate , the turn - over of a transmission cycle ( _ i.e. _ speed of growth ) is rather quick .the incubation period of spanish influenza is as short as 1.5 days , complicating the implementation of quarantine measures .thus , depending on the characteristics of observed data ( and the specific purpose of modeling ) , we have to highlight these factors referring to the best available evidence .this is one of the most challenging issues in designing public health interventions against a potential future pandemic .in addition to the above described issue , we , of course , must remember what the reported data is .in many studies , the compartment or relevant class of infectious individuals of the sir ( or seir ) model was fitted to the observed data .indeed , in the majority of previous classic studies , ( _ i.e. _ removed class ; denoted by in our discussion ) of kermack and mckendrick model was fitted to the data , assuming that the removed class highlights observed data as the reported cases no longer produce secondary cases .however , the observed epidemiologic data is actually neither nor . always , what we get as the temporal distribution reflects _ case onset _ or _ deaths _ with time which is mostly accompanied by some reporting delay . + we believe this is one of the most challenging issues in epidemic modeling . except for rare examples in sexually transmitted diseases ,infection event is not directly observable , and thus , we have to maximize the utility of reported ( observed ) data , explicitly understanding what the data represents .+ in this case , back - calculation of the infection events is called for .let denote the number of onsets at time , this should be modeled by using incidence and the density of the incubation period of length , : further , supposing that is the number of reported cases at time and the density of reporting delay of length is , observed data is modeled as : that is , only by using the observed data and known information of the reporting delay and incubation period distributions , we can translate the observed data into infection process . + in some cases , only death data with time , , is available .similarly , this can be modeled using the backcalculation .let denote the case fatality of influenza which is reasonably assumed time - independent , and further let be the relative frequency of time from onset to death , is given by : even when using _ onset data with delay _ or _ death _ data , it should be noted that the intrinsic growth rate is identical to that estimated from the infection event distribution . assuming that the incidence exhibits exponential growth during the initial phase of an epidemic , _i.e. _ , , equations ( [ eqn_hn301 ] ) and ( [ eqn_hn303 ] ) can be rewritten as and thus , the growth terms ( _ i.e. 
_ which depends on time ) of and are still identical to that of incidence .in other words , mathematically the equations ( [ eqn_hn304 ] ) and ( [ eqn_hn305 ] ) could be a justification to extract an estimate of the intrinsic growth rate from cases with reporting delay or deaths with time .however , we should always remember that the infection - age distribution is not stable during the initial phase , and moreover , this method can not address individual variation in the secondary transmissions ( _ e.g. _ superspreaders , as we discussed in section 5 ) . in this way , it s not an easy task to clarify the infection events with time .a similar application of the convolution equation has been intensively studied in modeling hiv / aids .since aids has a long incubation period , and because aids diagnosis is certainly reported in the surveillance data ( at least , in industrialized countries ) , backcalculation of the number of hiv infections with time using the nubmer of aids diagnoses and the incubation period distribution has been an issue to capture the whole epidemiologic picture of hiv / aids . in the current modeling practiceusing the temporal distribution of onset events , we are now faced with a need to apply this technique to diseases with much shorter incubation periods .+ now , let s look back at a method to estimate , which was proposed by wallinga and teunis . whereas the method has a background of mathematical reasoning ( as shown in ( [ eqn_hn28 ] ) , section 4.2 ) , the estimator was derived implicitly assuming that _ observed data exactly reflects infection events_. if asymptomatic infection and transmission are rare , this assumption might be justifiable as the lengths of the serial interval and generation time become almost identical .however , as long as we can not ignore asymptomatic transmissions , which is particularly the case for influenza , the assumption might be problematic .+ since of this method was given by summing up the probability of causing secondary transmissions by an onset case _ at the onset time _ of this case , we should rewrite the assumption using a modified _ onset - based _ renewal equation as follows : for simplicity , we ignore reporting delay in the observed data , roughly assuming that the observed data reflects . translating equation ( [ eqn_hn306 ] ) in words , it is implicitly assumed that _secondary transmission happens exactly at the time of onset _ , and based on this assumption , in the right hand side of ( [ eqn_hn306 ] ) can be backcalculated .+ to understand the assumptions behind the above equation , let us assume that incidence is given by where is the transmission rate at * disease - age * ( _ i.e. _ the time since onset of infection ) and is the survivorship of cases following onset .it should be noted that equation ( [ eqn_hn307 ] ) ignores secondary transmissions before onset of illness .as we discussed above , is given by and the incubation period distribution , replacing in the right hand side of ( [ eqn_hn307 ] ) by ( [ eqn_hn308 ] ) , we get where represents infection - age ( _ i.e. _ time since infection ) , and is given by which represents generation time distribution . 
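The convolution structure in ( [ eqn_hn301 ] ) and the growth-rate argument of ( [ eqn_hn304 ] )-( [ eqn_hn305 ] ) can be illustrated numerically: convolving an exponentially growing incidence curve with an incubation-period density shifts and smooths the curve but leaves the exponential growth rate unchanged. The discretized incubation-period distribution used below is hypothetical.

```python
import numpy as np

days = np.arange(60)
r = 0.16                                   # intrinsic growth rate per day
incidence = 1.0 * np.exp(r * days)         # exponentially growing infections

# hypothetical discretized incubation-period distribution (mean ~1.4 days)
incubation = np.array([0.1, 0.5, 0.3, 0.1])

# onset curve = convolution of incidence with the incubation-period density
onsets = np.convolve(incidence, incubation)[:len(days)]

# the exponential growth rate is unchanged by the convolution
slope_inf = np.polyfit(days[10:50], np.log(incidence[10:50]), 1)[0]
slope_ons = np.polyfit(days[10:50], np.log(onsets[10:50]), 1)[0]
print(round(slope_inf, 3), round(slope_ons, 3))    # both ~0.16
```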
from equations ( [ eqn_hn309 ] ) and ( [ eqn_hn310 ] ) , we can find that is given by equation ( [ eqn_hn311 ] ) can be further reduced to which represents kermack and mckendrick s assumption .replacing in the right hand side of ( [ eqn_hn308 ] ) by ( [ eqn_hn307 ] ) , we get where denotes the serial interval distribution of calender time and disease - age : equation ( [ eqn_hn313 ] ) is difficult to solve as it includes in the right hand side .however , in the special case , _e.g. _ , let s say when we can assume ( where is constant and is delta function ) , inserting ( [ eqn_hn314 ] ) back to ( [ eqn_hn312 ] ) , which is _ onset - based _ renewal equation which was presented in ( [ eqn_hn306 ] ) .what to be learnt from ( [ eqn_hn315 ] ) is , the assumption that _ secondary transmission happens immediately after onset _ suggests that the _ incubation period distribution is identical to the serial interval distribution _ as shown above , which is a bit funny conclusion . maximizing the utility of observed data has still remained an open question .+ in addition to modeling the temporal distribution , explicit modeling of asymptomatic infection is also called for .provided that there are so many asymptomatic transmissions which are not in the negligible order , we need to shift our concept of transmissibility ; _e.g. _ , rather than , a threshold quantity of symptomatic infection is required .in such a case , application of type - reproduction number might be useful , and it has already been put into practice .in this review , we focused on the use of the temporal distribution of influenza to estimate ( or ) and the relevant key parameters .it must be remembered that our arguments , almost necessarily , employed homogeneous mixing assumption , as we can not extract information on heterogeneous patterns of infection from a single stream of temporal data alone .presently , more information ( _ e.g. _ at least , spatio - temporal distribution ) is becoming available for influenza . in this section , we briefly sketch what can be ( and should be ) done in the future to quantify the transmission dynamics of pandemic influenza .it s not a new issue that heterogeneous patterns of transmission could even destroy the mean field theory in infectious diseases .for example , in a pioneering study of gonorrhea transmission dynamics by hethcote and yorke , an importance of contact heterogeneity was sufficiently highlighted . since a simple model assuminghomogeneous mixing did not reflect the patterns of gonorrhea transmission in the united states , hethcote and yorke divided the population in question into two ; those who are sexually very active and not , the former of which was referred to as * core * group .compared with the temporal distribution of infection given by a model with homogeneous mixing assumption , the simple heterogeneous model with a core group revealed much quicker increase in epidemic size , showing rather different trajectory of an epidemic .given that the variance of sexual partnership is extremely large ( _ i.e. _ if the distribution of the frequency of sexual intercourse is extremely skewed to the right with a very long right tail ) , the estimate of is shown to become considerably high .the finding supports a vulnerability of our society to the invasion of sexually transmitted diseases . following this pioneering study, considerable efforts have been made to approximately model the heterogeneous patterns of transmission using extended mean field equations . 
+ in addition to such an approximation of heterogeneous transmission , recent progress in epidemic modeling with explicit contact network structures suggests that the variance of the contact frequency plays a key role in determining the threshold quantity , and in some special cases , the concept of threshold phenomena could be confused . in section 4 , we defined the force of infection as . in deterministic models given by simple odes ( which ignore infection - age ) , is equivalent to . these are what classical mean field models suggest . + let us consider an epidemic on networks , whose node - connectivity distribution ( _ i.e. _ the distribution of probabilities that nodes have exactly neighbors ) follows some explicit distribution . the force of infection , which yields , in a static contact network is given by here denotes the average connectivity of the nodes . assuming that follows a power law of the form ( where is constant ) , given that , , and in such a case , even becomes infinite . this implies that the disease spread will continue for any mean estimate of . such a network structure is referred to as * scale free * , complicating disease control efforts in public health . the importance of the network structure would also be highlighted for . + for sexually transmitted infections , contact frequency is countable ( unlike airborne infection or transmission through droplets ) , and is estimated to be around 3 or a little larger . following this finding , many non - sexually transmitted , directly transmitted diseases are nowadays also modeled assuming a scale - free network . however , it should be noted that the pattern of contact does not necessarily follow a scale - free distribution for all directly transmitted diseases . indeed , there is no empirical evidence which suggests that the contact structure of any droplet infection follows the power law ( _ i.e. _ we do not know if the above described contact heterogeneity is the case for diseases other than sexually transmitted diseases ) . a typical example of this confusion is seen in the superspreading events during the 2002 - 03 sars epidemics , where we can not explicitly attribute the phenomena either to the contact network or to biological factors ( as long as _ contact _ and infection events are not directly observable ) . we still do not know how we should account for the distribution for influenza and other viral respiratory diseases ( _ i.e. _ power law or not ) , which remains to be clarified for each disease in future research . methodological developments have been made to account for the network heterogeneity with data . an approximate approach to address this issue is highlighted particularly in spatio - temporal modeling , an excellent account of which is reviewed by matt keeling . + even though it is difficult to quantify the transmission dynamics with an explicit contact network with time , there are useful analytical approximations to capture the dynamics of influenza ( and other respiratory transmitted viral diseases ) and estimate the transmission potential . for example , the force of infection with a power law approximation is reasonably given by : in ( [ eqn_hn404 ] ) , and characterize the epidemic dynamics ; _ e.g. _ the initial growth ( _ i.e. _ if is less than 0 , the modified form acts to dampen the exponential growth of incidence ) and the endemic equilibrium ( _ i.e. _ when is greater than 0 , density - dependent damping is increased ) .
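to make the damping effect of such a power - law transmission term concrete , the following rough sketch ( the exponent is applied directly to prevalence , which is one common reading of the approximation above , and all parameter values are invented ) integrates a simple sir model and compares the fitted early growth rate in two time windows : with the mass - action exponent the rate stays roughly constant , whereas an exponent slightly below one makes the growth visibly sub - exponential .

```python
import numpy as np

def sir_power_law(alpha, beta=0.5, gamma=0.25, N=1e6, I0=10.0, days=40, dt=0.05):
    """Euler integration of an SIR model with transmission beta*S*I**alpha/N.

    alpha = 1 is ordinary mass action; alpha < 1 damps the exponential growth of
    incidence, mimicking heterogeneity ignored by homogeneous mixing.
    Parameter values are illustrative only.
    """
    S, I = N - I0, I0
    t, inc = [], []
    for k in range(int(days / dt)):
        new_inf = beta * S * I**alpha / N * dt
        S, I = S - new_inf, I + new_inf - gamma * I * dt
        t.append(k * dt)
        inc.append(new_inf)
    return np.array(t), np.array(inc)

for alpha in (1.0, 0.95):
    t, inc = sir_power_law(alpha)
    for lo, hi in ((2, 10), (25, 33)):               # two early-phase windows (days)
        m = (t >= lo) & (t < hi)
        r = np.polyfit(t[m], np.log(inc[m]), 1)[0]
        print(f"alpha = {alpha}: fitted growth rate over days {lo}-{hi}: {r:+.3f} per day")
```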
a model of this type was actually validated with measles data in england and wales , comparing the prediction with that obtained by employing the mass action principle . + another approximation might be a pair - wise model , which can explicitly account for the correlation between connected pairs . the model reasonably permits deriving the force of infection using the number of various connected pairs , which implies wide applicability to the epidemiologic data of sexually transmitted infections . incorporating spatial heterogeneity in an approximate manner would shed light on further quantifications , and thus , simple and reasonably tractable models which permit spatio - temporal modeling of influenza are expected ( _ e.g. _ ) . summarizing the above discussions , we have presented modeling approaches that can quantify the transmission potential of pandemic influenza . as we have shown , temporal case distributions have been analyzed in many instances , and previous efforts have come close to maximizing the utility of temporal distributions ( _ i.e. _ the epidemic curve ) . however , at the same time , we have also learned that we can extract little more than the intrinsic growth rate from a single stream of time - evolution data . accordingly , we are now faced with a need to clarify heterogeneous patterns of transmission and the more detailed intrinsic dynamics of influenza . with regard to the latter , primitive epidemiologic questions ( _ e.g. _ the probability of clinical attack given infection ) remain to be answered for spanish , asian and hong kong influenza . let's summarize , as a list , what we need to clarify theoretically about pandemic influenza :
1 . acquired immunity
2 . evolutionary dynamics
3 . multi - host species transmission
4 . asymptomatic transmission
5 . attack rate ( _ i.e. _ pr(onset ) )
6 . case fatality ( _ i.e. _ pr(death ) )
7 . generation time and serial interval
8 . latent , incubation , infectious and symptomatic periods with further data
9 . transmission potential with time , space and antigenic types
10 . transmission potential with time and age
these issues highlight the importance of quantifying the transmission of influenza using not only cases ( _ i.e. _ those observed following symptom onset ) but also some information suggesting the infection event . for example , the majority of the above - listed issues could be reasonably addressed by implementing serological surveys ( _ e.g. _ antibody titers of individuals and , preferably , the time - delay distribution from infection to seroconversion ) . since the proportion of those who do not experience symptomatic infection ( _ i.e. _ the probability of asymptomatic infection ) is not small for influenza , case records can tell us little to address the above mentioned issues , and thus , historical data on spanish influenza can hardly offer further information . by maximizing the utility of observed data , we have to clarify the dynamics of influenza further , and identify the key information which characterizes the specific mechanisms of spread . ferguson , d.a.t. cummings , s. cauchemez , c. fraser , s. riley , a. meeyai , s. iamsirithaworn and d.s. burke , strategies for containing an emerging influenza pandemic in southeast asia . _ nature _ * 437 * ( 2005 ) 209 - 214 . murray , a.d. lopez , b. chin , d. feehan and k.h. hill , estimation of potential global pandemic influenza mortality on the basis of vital registry data from the 1918 - 20 pandemic : a quantitative analysis . _ lancet _ * 368 * ( 2006 ) 2211 - 2218 . h. markel , h.b. lipman , j.a. navarro , a. sloan , j.r. michalsen , a.m.
stern and m.s .cetron , nonpharmaceutical interventions implemented by us cities during the 1918 - 1919 influenza pandemic ._ jama _ * 298 * ( 2007 ) 644 - 654 .h. nishiura , k. dietz and m. eichner , the earliest notes on the reproduction number in relation to herd immunity : theophil lotz and smallpox vaccination ._ journal of theoretical biology _* 241 * ( 2006 ) 964 - 967 .o. diekmann , j.a.p .heesterbeek and j.a.j .metz , on the definition and the computation of the basic reproductive ratio in models for infectious diseases ._ journal of mathematical biology _ * 35 * ( 1990 ) 503 - 522 . c. castillo - chavez , z. feng and w. huang , on the computation of and its role on global stability , in : mathematical approaches for emerging and reemerging infectious diseases : an introduction , i m a volume 125 ( springer - veralg , berlin , 2002 ) pp .229 - 250 .kendall , deterministic and stochastic epidemics in closed populations . in : third berkeley symposium on mathematical statistics and probability 4 ,ed p. newman ( university of california press , new york , 1956 ) pp.149 - 165 .m. lipsitch , t. cohen , b. cooper , j.m .robins , s. ma , l. james , g. gopalakrishna , s.k .chew , c.c .tan , m.h .samore , d. fisman and m. murray , transmission dynamics and control of severe acute respiratory syndrome ._ science _ * 300 * ( 2003 ) 1966 - 1970 .g. chowell , c.e .ammon , n.w .hengartner and j.m .hyman , transmission dynamics of the great influenza pandemic of 1918 in geneva , switzerland : assessing the effects of hypothetical interventions. _ journal of theoretical biology _* 241 * ( 2006 ) 193 - 204 .g. chowell , h. nishiura and l.m .bettencourt , comparative estimation of the reproduction number for pandemic influenza from daily case notification data ._ journal of the royal society interface _ * 4 * ( 2007 ) 155 - 166 .g. chowell , c.e .ammon , n.w .hengartner and j.m .hyman , estimating the reproduction number from the initial phase of the spanish flu pandemic waves in geneva , switzerland ._ mathematical biosciences and engineering _ * 4 * ( 2007 ) 457 - 470 .s. cauchemez , p.y .boelle , g. thomas and a.j .valleron , estimating in real time the efficacy of measures to control emerging communicable diseases . _ american journal of epidemiology _ * 164 * ( 2006 ) 591 - 597 .bettencourt , r.m .ribeiro , g. chowell , t. lant and c. castillo - chavez , towards real time epidemiology : data assimilation , modeling and anomaly detection of health surveillance data streams . in :intelligence and security informatics : biosurveillance .proceedings of the 2nd nsf workshop , biosurveillance , 2007 .lecture notes in computer science .eds f. zeng et al .( springer - verlag , berlin , 2007 ) pp .79 - 90 .h. nishiura , m. schwehm , m. kakehashi and m. eichner , transmission potential of primary pneumonic plague : time inhomogeneous evaluation based on historical documents of the transmission network ._ journal of epidemiology and community health _ * 60 * ( 2006 ) 640 - 645 .d. schenzle , k. dietz and g.g .frosner , antibody against hepatitis a in seven european countries .ii . statistical analysis of cross - sectional surveys ._ american journal of epidemiology _ * 110 * ( 1979 ) 70 - 76 . c. viboud , t. tam , d. fleming , a. handel , m.a . miller and l. simonsen , transmissibility and mortality impact of epidemic and pandemic influenza , with emphasis on the unusually deadly 1951 epidemic ._ vaccine _ * 24 * ( 2006 ) 6701 - 6707 .g. sertsou , n. wilson , m. baker , p. 
nelson and m.g .roberts , key transmission parameters of an institutional outbreak during the 1918 influenza pandemic estimated by mathematical modelling ._ theoretical biology and medical modelling _ * 3 * ( 2006 ) 38 .v. andreasen , c. viboud and l. simonsen , epidemiologic characterization of the summer wave of the 1918 influenza pandemic in copenhagen : implications for pandemic control strategies ._ journal of infectious diseases _ * * ( 2008 ) in press .mathews , c.t .mccaw , j. mcvernon , e.s .mcbryde and j.m .mccaw , a biological model for influenza transmission : pandemic planning implications of asymptomatic infection and immunity ._ plos one _ * 2 * ( 2007 ) e1220 .g. chowell , l.m.a .bettencourt , n. johnson , w.j .alonso and c. viboud , the 1918 - 1919 influenza pandemic in england and wales : spatial patterns in transmissibility and mortality impact _ proceedings of the royal society b _ * * ( 2008 ) in press .k. dietz , mathematical models for transmission and control of malaria . in : malaria , principles and practice of malariology ,wernsdorfer and i. mcgregor ( churchill livingstone , edinburgh , 1988 ) pp.1091 - 1133 .kermack and a.g .mckendrick , contributions to the mathematical theory of epidemics - i. _ proceedings of the royal society series a _ * 115 * ( 1927 ) 700 - 721 ( reprinted in _ bulletin of mathematical biology _ * 53 * ( 1991 ) 33 - 55 ) .h. nishiura , t. kuratsuji , t. quy , n.c .phi , v. van ban , l.e .long , h. yanai , n. keicho , t. kirikae , t. sasazuki and r.m .anderson , rapid awareness and transmission of severe acute respiratory syndrome in hanoi french hospital , vietnam ._ american journal of tropical medicine and hygiene _ * 73 * ( 2005 ) 17 - 25 .haydon , m. chase - topping , d.j .shaw , l. matthews , j.k .friar , j. wilesmith and m.e .woolhouse , the construction and analysis of epidemic trees with reference to the 2001 uk foot - and - mouth outbreak ._ proceedings of the royal society of london series b _ * 270 * ( 2003 ) 121 - 127 .s. cauchemez , p.y .boelle , c.a .donnelly , n.m .ferguson , g. thomas , g.m .leung , a.j .hedley , r.m .anderson and a.j .real - time estimates in early detection of sars . _emerging infectious diseases _ * 12 * ( 2006 ) 110 - 113 .metz , the epidemic in a closed population with all susceptibles equally vulnerable ; some results for large susceptible populations and small initial infections ._ acta biotheoretica _ * 27 * ( 1978 ) 75 - 123 .separate roles of the latent and infectious periods in shaping the relation between the basic reproduction number and the intrinsic growth rate of infectious disease outbreaks ._ journal of theoretical biology _ * * ( 2008 ) in press .a. imahorn , epidemiologische beobachtungen ueber die grippe - epidemie 1918 i m oberwallis .inaugural - dissertation zur erlangung der doktorwuerde der medizinischen fakultaet der universitaet zuerich ( universitaet zuerich , zurich , 2000 ) ( in german ) .h. nishiura , epidemiology of a primary pneumonic plague in kantoshu , manchuria , from 1910 to 1911 : statistical analysis of individual records collected by the japanese empire ._ international journal of epidemiology _ * 35 * ( 2006 ) 1059 - 1065 .cowling , l.m . ho and g.m .leung , effectiveness of control measures during the sars epidemic in beijing : a comparison of the rt curve and the epidemic curve ._ epidemiology and infection _ * * , ( 2007 ) in press .marques , o.p .forattini and e. 
massad , the basic reproduction number for dengue fever in sao paulo state , brazil : 1990 - 1991 epidemic ._ transactions of the royal society of tropical medicine and hygiene _ * 88 * ( 1994 ) 58 - 59 .galvani , x. lei and n.p .jewell , severe acute respiratory syndrome : temporal stability and geographic variation in case - fatality rates and doubling times ._ emerging infectious diseases _ * 9 * ( 2003 ) 991 - 994 .de jong , o. diekmann and j.a.p .heesterbeek , how does transmission of infection depend on population size ? in : epidemic models : their structure and relation to data .d. mollison ( cambridge university press , cambridge , 1995 ) pp.84 - 94 .h. nishiura and g. chowell , household and community transmission of the asian influenza a ( h2n2 ) and influenza b viruses in 1957 and 1961 ._ southeast asian journal of tropical medicine and public health _ * 38 * ( 2007 ) in press .s. cauchemez , f. carrat , c. viboud , a.j .valleron and p.y .boelle , a bayesian mcmc approach to study transmission of influenza : application to household longitudinal data . _statistics in medicine _ * 23 * ( 2004 ) 3469 - 3487 .nelson , l. simonsen , c. viboud , m.a .miller , j. taylor , k.s .george , s.b .griesemer , e. ghedin , n.a .sengamalay , d.j .spiro , i. volkov , b.t .grenfell , d.j .lipman , j.k .taubenberger and e.c .holmes , stochastic processes are key determinants of short - term evolution in influenza a virus. _ plos pathogens _ * 2 * ( 2006 ) e125 .t. colton , t. johnson and d. machin d ( eds ) .proceedings of the conference on quantitative methods for studying aids , held in blaubeuren , germany , june 14 - 18 , 1993 ._ statistics in medicine _ * 13 * ( 1994 ) 1899 - 2188 .h. inaba and h. nishiura , on the dynamical system formulation of the type reproduction number for infectious diseases and its application to the asymptomatic transmission model . _ mathematical biosciences _ * * ( 2007 ) submitted .woolhouse , c. dye , j.f .etard , t. smith , j.d .charlwood , g.p .garnett , p. hagan , j.l .hii , p.d .ndhlovu , r.j .quinnell , c.h .watts , s.k .chandiwana and r.m .anderson , heterogeneities in the transmission of infectious agents : implications for the design of control programs ._ proceedings of the natinal academy of science u s a _ * 94 * ( 1997 ) 338 - 342 .duerr , m. schwehm , c.c .leary , s.j. de vlas and m. eichner , epidemic size and probability in populations with heterogeneous infectivity and susceptibility ._ epidemiology and infection _ * 135 * ( 2007 ) 1124 - 1132 .keeling , s.p .brooks and c.a .gilligan , using conservation of pattern to estimate spatial parameters from a single snapshot . _ proceedigns of the national academy of science u s a _ * 101 * ( 2004 ) 9155 - 9160 .duerr , s.o .brockmann , i. piechotowski , m. schwehm and m. eichner , influenza pandemic intervention planning using influsim : pharmaceutical and non- pharmaceutical interventions ._ bmc infectious diseases _ * 7 * ( 2007 ) 76 .+ of the estimates to different assumptions for the serial interval was examined ; pandemic waves were simultaneously fitted ; epidemic was observed in a community with closed contact ( i.e. military camp ) .[ tablewaves ]
this article reviews quantitative methods to estimate the basic reproduction number of pandemic influenza , a key threshold quantity to help determine the intensity of interventions required to control the disease . although it is difficult to assess the transmission potential of a probable future pandemic , historical epidemiologic data is readily available from previous pandemics , and as a reference quantity for future pandemic planning , mathematical and statistical analyses of historical data are crucial . in particular , because many historical records tend to document only the temporal distribution of cases or deaths ( i.e. epidemic curve ) , our review focuses on methods to maximize the utility of time - evolution data and to clarify the detailed mechanisms of the spread of influenza . + first , we highlight structured epidemic models and their parameter estimation method which can quantify the detailed disease dynamics including those we can not observe directly . duration - structured epidemic systems are subsequently presented , offering firm understanding of the definition of the basic and effective reproduction numbers . when the initial growth phase of an epidemic is investigated , the distribution of the generation time is key statistical information to appropriately estimate the transmission potential using the intrinsic growth rate . applications of stochastic processes are also highlighted to estimate the transmission potential using similar data . critically important characteristics of influenza data are subsequently summarized , followed by our conclusions to suggest potential future methodological improvements . * pacs classifications : * viral diseases ( 87.19.xd ) ; population dynamics ( 87.23.cc ) ; stochastic models in biological physics ( 87.10.mn ) * keywords : * influenza ; pandemic ; epidemiology ; basic reproduction number ; model .
after several decades of research , planets in stellar binary systems now constitute a well - established observational result .previous examples include p - type orbits , when the planet is found to orbit both binary components , as well as s - type orbits with the planet orbiting only one of the binary components with the second component acting as a perturbator ; see , e.g. , and for selected observational results and data .another topic of significant importance , especially concerning the astrobiological community , are studies of circumstellar and circumbinary habitability .previous work focused on the traditional concept of and subsequent studies , where habitability is defined based on the principal possibility that liquid water is able to exist on the surface of an earth - type planet possessing a co/h / n atmosphere ; for more sophisticated concepts see , e.g. , , and references therein .other relevant investigations concern studies of orbital stability , especially for ( hypothetical ) earth - type planets in stellar habitable zones .this type of work has been pursued for single as well as multi - planetary and multi - stellar systems ; see , e.g. , , , , for early contributions as well as , e.g. , and for more recent work .some of these efforts resulted in stability catalogs of the habitable zones of the planetary systems known at the time . [ paper i and ii ] forwarded a concise approach for the investigation of habitable regions in stellar binary systems , which forms the basis for binhab ( see sect . 2 ) . in sect . 3, we will give applications to s / st - type habitability for binaries of low mass stars .our conclusions and outlook are presented in sect .the method as conveyed has previously been given in paper i and ii ; thus in the following we will focus on the most decisive concepts , which include : ( 1 ) the consideration of a joint constraint comprising orbital stability and a habitable region for a putative system planet through the stellar radiative energy fluxes ( radiative habitable zone " ; rhz ) needs to be met .( 2 ) the treatment of conservative , general and extended zones of habitability for the various systems , referred to as chz , ghz , and ehz , respectively , following the approach given by and subsequent work .( 3 ) the providing of a combined formalism , based on solutions of a fourth - order polynomial , for the assessment of both s - type and p - type habitability .in particular , mathematical criteria are presented for which kind of system s - type and p - type habitability is realized .following paper i , five different cases of habitability are identified , which are : s - type and p - type habitability provided by the full extent of the rhzs ; habitability , where the rhzs are truncated by the additional constraint of planetary orbital stability ( referred to as st and pt - type , respectively ) ; and cases of no habitability at all . regarding the treatment of planetary orbital stability ,the formulae of are utilized .binhab is suitable for both circular and elliptical stellar binary components , the topic of paper ii .figure 1 conveys the flow diagram .note that for s - type orbits , the orbital stability criterion operates as an upper limit of orbital stability ( see sect .3 ) , whereas for p - type orbits , it operates as a lower limit of planetary orbital stability . 
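to make the joint constraint of the method more tangible , the following python sketch computes an s - type zone around the primary of a binary : the radiative limits come from a simple flux scaling with placeholder effective - flux values ( not the chz / ghz / ehz limits actually adopted by binhab , and the companion's flux is ignored here ) , and the stability cut uses the holman - wiegert ( 1999 ) fitting formula for the critical semi - major axis as it is commonly quoted ; the coefficients should be checked against the original paper before any scientific use .

```python
import math

def s_type_zone(L_primary, m_primary, m_secondary, a_bin, e_bin,
                s_inner=1.1, s_outer=0.53):
    """Illustrative S-type habitable zone around the primary star.

    Radiative limits: d = sqrt(L / S_eff), with L in solar luminosities and d in AU;
    s_inner and s_outer are placeholder effective-flux bounds, not BinHab's values.
    Stability cut: Holman & Wiegert (1999) critical semi-major axis (coefficients
    quoted from memory of the commonly cited fit; verify before use).
    """
    rhz_in = math.sqrt(L_primary / s_inner)
    rhz_out = math.sqrt(L_primary / s_outer)

    mu = m_secondary / (m_primary + m_secondary)
    a_crit = a_bin * (0.464 - 0.380 * mu - 0.631 * e_bin + 0.586 * mu * e_bin
                      + 0.150 * e_bin**2 - 0.198 * mu * e_bin**2)

    outer = min(rhz_out, a_crit)
    if outer <= rhz_in:
        return None                                   # no habitable zone survives the cut
    kind = "S" if a_crit >= rhz_out else "ST"         # ST: truncated by orbital stability
    return kind, round(rhz_in, 2), round(outer, 2)

# toy example: two 0.75 solar-mass stars (L ~ 0.3 L_sun each), 10 AU apart, e = 0.3
print(s_type_zone(L_primary=0.3, m_primary=0.75, m_secondary=0.75, a_bin=10.0, e_bin=0.3))
```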
binhab allows the user to consider general binary systems , including systems containing evolved stars . admittedly , for main - sequence stars , there is an intimate coupling between the input parameters t and r ( ) , the stellar effective temperature and radius , on the one hand , and m , the stellar mass , on the other hand ; however , this coupling is not required for using the code . the simulations of paper i are largely based on data given in . to make binhab publicly accessible , we created a website through which stellar system conditions can be entered , and the equations pertinent to binhab are then used to find whether any habitable zone or zones exist , and if so , of what type . a virtual server was set up and hosted by the ut arlington it department at http://physbinhab.uta.edu , as a dedicated server for the binhab website . it allows the user to enter binary system parameters ( i.e. , semi - major axis and eccentricity ) , separate stellar parameters for the two stars ( i.e. , temperature , luminosity , and mass ) , and the type of habitable zone the user would like to look for ( i.e. , chz , ghz , or ehz ) . background information on the different types of habitable zones is also given . the output states whether any habitable zones were found , and if so , their type ( s , st , p , or pt ) and the inner and outer limit of any zones . the website itself uses html ( hypertext markup language ) to interface with the user , by displaying information , accepting user input parameters , and displaying the results of the binhab calculations . the user's input is passed to php ( php : hypertext preprocessor ) code , which also checks the validity of the input . if any of the parameters are out of range , the php code creates a warning message , which is passed to the html code and displayed to the user , explaining what needs to be adjusted . if all parameters are acceptable , the php code saves them to a new input file , runs the binhab fortran binary on the input file , and collects the output from the binary . the output is then checked , formatted , and passed to the html code for display to the user . the use of php allows for much more processing of the input than is possible with html alone and is necessary for running the fortran binary . fortran is used for implementing the binhab algorithms , rather than php , for a variety of reasons , including that it is much more widely used in the scientific community and that , as a compiled language , it executes much faster than the interpreted php code . additionally , the fortran code and input files are stored in a segment of the server directory structure that is inaccessible to the website , but accessible to the php code . hence , the fortran code is not viewable or downloadable from the web , as it can only be accessed by the php code , which runs on the server , and therefore is not directly viewable from the internet . in order to make the website easier to find , we successfully submitted the url to the two largest search engines : google and bing ( which also drives the yahoo ! search engine ) . in order to illustrate the capacity of the method , we convey studies of s / st - type habitability for binaries of low - mass main - sequence stars . here we focus on stars of masses 0.75 m , 0.65 m , and 0.50 m , corresponding to spectral types of k2v , k6v , and m0v . this type of work is motivated by the observational finding that stars with masses below about 0.8 m constitute nearly 90% of all stars in the milky way .
as an example , we focus on s - type and st - type habitable zones in binaries consisting of these types of stars . in particular , we investigate the role of the eccentricity of the binary system on the width of the s / st - type habitable zones ( if they exist ) for systems with au as examples , while focusing on results for the ghz ( see fig . 2 ) . the following aspects are identified : for all stellar mass combinations , stellar habitable zones are found for eccentricities below 0.20 . for the majority of models pursued , s - type habitability is truncated due to the orbital stability requirement of the putative planet , resulting in an st habitability classification ( see table 1 ) . stellar pairs with masses of 0.75 m exhibit the broadest rhzs due to their relatively high luminosities ; however , in this case the relatively large stellar masses lead to a significant truncation of circumstellar habitability , resulting in relatively narrow habitable zones . on the other hand , stellar pairs with masses of 0.50 m show s / st - type habitable regions up to binary eccentricities of 0.65 ( see table 2 ) , albeit the widths of the habitable zones are relatively small compared to other primary - secondary mass combinations , especially for cases of small binary eccentricities . starting at a well - defined eccentricity , the width of the habitable zones decreases linearly as a function of the binary eccentricity for all mass combinations , encompassing both systems of equal and nonequal stellar masses , owing to the truncation criterion . consistently smaller widths for the habitable regions are identified for models based on chzs relative to ghzs , as expected .

table 1 . habitability classification ( s : full s - type ; st : truncated by orbital stability ; ... : no habitable zone ) as a function of the binary eccentricity ; the stellar mass labels of the six primary - secondary combinations did not survive text extraction .

model | 0.0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6
( m , m ) | st | st | st | ... | ... | ... | ...
( m , m ) | st | st | st | ... | ... | ... | ...
( m , m ) | st | st | st | ... | ... | ... | ...
( m , m ) | s | st | st | st | st | ... | ...
( m , m ) | s | st | st | st | st | st | ...
( m , m ) | s | s | s | s | st | st | st

table 2 . maximum binary eccentricity for which a habitable zone is obtained .

model | chz | ghz
( m , m ) | 0.12 | 0.20
( m , m ) | 0.15 | 0.23
( m , m ) | 0.19 | 0.27
( m , m ) | 0.43 | 0.48
( m , m ) | 0.46 | 0.51
( m , m ) | 0.62 | 0.65

we provided a short description of the features and capacities of the numerical tool binhab hosted at the university of texas at arlington . it considers a joint constraint including orbital stability and a habitable region for a putative system planet through the stellar radiative energy fluxes , among various other desirable features . although the code has previously mostly been used to investigate binary systems consisting of main - sequence stars , it is highly flexible ; it can also be utilized for the calculation of habitable zones for systems containing a subgiant , giant , or supergiant . concerning the latter , binhab has already been used to study the habitability of earth - mass planets and moons in the kepler-16 system , known to host a circumbinary saturn - mass planet . ultimately , it is our goal to expand the developed methods to multiple stellar systems , which are of notable interest to the scientific community .
the aim of this contribution is to introduce the numerical tool binhab , a publicly accessible code , available at the university of texas at arlington , that allows the calculation of s - type and p - type habitable zones of general binary systems .
this work is supported by temasek laboratories at national university of singapore through the dsta project pod0613356 .
we propose a method to investigate modular structure in networks based on a fitted probabilistic model , where the connection probability between nodes is related to a set of introduced local attributes . the attributes , as parameters of the empirical model , can be estimated by maximizing the likelihood function of the observed network . we demonstrate that the distribution of attributes provides an informative visualization of modular networks in a low - dimensional space , and suggest that the attribute space can serve as a better platform for further network analysis . networks are widely used to model complex systems with many interacting units . usually , each node in such a network represents a distinct individual , and a link is established based on a certain measurement of interaction between a particular pair of nodes . on the microscopic level , the underlying system is fully described by the state of each unit , defined by several local properties , which also determines the interactions among the units through complicated coupling . thus , the network would be completely determined if the local properties of the nodes and the interaction functions were known . in many applications , however , the representative network is the only available data . it is the purpose of the researchers to infer this crucial information on the underlying system through network analysis . for instance , if the interaction between two units mainly depends on their local properties , then it is reasonable to expect that units which have similar connection patterns in the network representation will share some common features in their local properties , and therefore may have similar functioning in the underlying system . although very fruitful , the applicability of this type of analysis is limited by the gap existing between the network - level description and the underlying system . this is due to the fact that in most situations , the representative network only models relationships or interactions among units , but not the associated properties or states of the units which determine the interactions . so , for example , a minor change in the observed network is not necessarily caused by small perturbations of the local properties of units in the underlying system , since interactions may depend on local states in a highly nonlinear way . similarly , the evolution of the underlying system due to a continuous change of local states may result in abrupt changes in the network structure , such as group merging or splitting . to better deal with these problems , a necessary step is to bridge these two different description levels . however , it is generally impossible to completely reconstruct the intrinsic properties of nodes based solely on the structure of a given network , since the mechanism determining a link may be very complicated . in this paper , we develop a method to describe the system at a level intermediate between the representative network and the microscopic description through some simplifications . we regard the representative network as one particular realization of an empirical probabilistic model . in this model , each pair of nodes has a certain probability to be connected , and the connection probability depends on the distance between the local attributes of the corresponding nodes through some function . the attributes , acting as model parameters , can be estimated according to some statistical criterion to best interpret the observed connections .
since each data point in the space of attributes is associated with a distinct node , their configuration provides an alternative representation of the observed network . the advantages of introducing this new representation are twofold . on the one hand , the configuration of the attributes provides an informative projection of the network on a low - dimensional space . the established attribute space can be taken as a new platform for further network analysis , where the attribute vectors , which are no different from a conventional data source , allow many well developed clustering techniques to be applied directly . on the other hand , the introduced local attributes are closely related to the unknown properties of the underlying system through the commonly observed network . it may be easier to model the evolution of the underlying system as changes of the attributes , and to study the resulting influence on the network structure through the empirical probability model . in this paper , we mainly focus on studying modular structures in networks . after describing the technical details of the modeling approach , we apply the proposed method to some artificial and real - world networks to demonstrate its usefulness for network visualization and structure analysis . we also discuss a possible way to extend the proposed technique to deal with more complicated multi - layered modular networks . assume there is an imaginary probabilistic system with nodes , which is characterized by a connection probability matrix , where represents the probability of connection between nodes and . the given network , described by the adjacency matrix , is regarded as a realization of this imaginary system . in particular , is treated as a random variable and its observed value is determined by a bernoulli trial according to the probability . for each node in the imaginary system , there is an associated set of local attributes denoted by , where the dimension . the connection probability is then assumed to depend only on the local attributes and , and can be written as . obviously , this is a great simplification , and should be regarded as a first - order approximation . the function tying the connection probability to the local attributes and should be chosen depending on the problem at hand . in our study , we are mainly interested in undirected , unweighted modular networks . considering the symmetry constraint , we choose a function which depends only on the distance . it is close to its maximum when the distance approaches zero , and decays rapidly as the distance increases . a natural choice of is the gaussian form : , where , defined as , is used as a rescaling factor to compensate for the distortion when projecting the high - dimensional network into a low - dimensional euclidean space due to the inhomogeneity of the degree distribution of the network . the connection probability thus becomes . therefore , we have under this framework , the logarithmic likelihood function of observing the particular network is and , naturally , the attributes which maximize the above likelihood function are desired . these maximum likelihood estimates can be obtained by minimizing , following the steps below :
1 . starting from arbitrary initials ,
2 . calculate the derivatives ,
3 . search in this direction to get , so that gives the smallest ,
4 . set , and go back to step 2 .
5 . quit if the stopping conditions are met .
it should be noted that the explicitly cast functional relationship between attributes and connection probabilities may not be the true underlying mechanism .
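as a concrete sketch of the procedure just described , the python code below generates a toy two - block network as a bernoulli realization of a planted probability matrix and then fits two - dimensional attributes by gradient ascent on the bernoulli log - likelihood with a gaussian kernel ; the degree - based rescaling factor is omitted and the per - node step is capped for robustness , so this is a simplified illustration rather than the exact algorithm ( which uses a line search ) .

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_two_block_network(sizes=(20, 20), p_in=0.4, p_out=0.02):
    """Toy network: one Bernoulli realization of a two-block probability matrix."""
    n = sum(sizes)
    block = np.repeat(np.arange(len(sizes)), sizes)
    P = np.where(block[:, None] == block[None, :], p_in, p_out)
    A = np.triu((rng.random((n, n)) < P).astype(float), 1)
    return A + A.T, block

def fit_attributes(A, dim=2, steps=2000, step_size=0.05):
    """Gradient ascent on the Bernoulli log-likelihood with p_ij = exp(-||x_i - x_j||^2)."""
    n = A.shape[0]
    X = 0.1 * rng.standard_normal((n, dim))
    off = ~np.eye(n, dtype=bool)
    for _ in range(steps):
        diff = X[:, None, :] - X[None, :, :]                 # diff[i, j] = x_i - x_j
        P = np.exp(-np.sum(diff**2, axis=-1))
        G = np.zeros_like(P)                                  # G_ij = dL / d(||x_i - x_j||^2)
        ratio = np.minimum(P[off] / (1.0 - P[off] + 1e-12), 10.0)  # clipped to avoid blow-up
        G[off] = -A[off] + (1.0 - A[off]) * ratio
        grad = 2.0 * np.sum(G[:, :, None] * diff, axis=1)     # dL / dx_i
        grad /= np.maximum(np.linalg.norm(grad, axis=1, keepdims=True), 1.0)  # cap step length
        X += step_size * grad
    return X

A, block = sample_two_block_network()
X = fit_attributes(A)
same = block[:, None] == block[None, :]
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
off = ~np.eye(len(block), dtype=bool)
print("mean attribute distance within blocks :", round(D[same & off].mean(), 2))
print("mean attribute distance between blocks:", round(D[~same].mean(), 2))
```

with a well - fitted model the between - block distances should come out clearly larger than the within - block ones , reflecting the planted modular structure .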
notwithstanding this caveat , the approach can be applied as a useful technique to explore the desired structure in the network , provided the fitted model is good . the american college football teams network has many clusters . the fitted model nevertheless successfully captures this complicated structure . the striking representation of the network in attribute space ( shown in figure 5(a ) ) not only manifests the clustering structure , but also suggests that several wandering points may not be clearly classified . it turns out that these nodes belong to a specific group ( ia independents ) and cannot be consistently classified into one cluster . furthermore , the fact that a particular node in the conference usa group is classified as a member of the western athletic group is not due to a flaw of the method but is caused by the network construction . the attribute representation also suggests that some groups may be further divided into smaller subgroups , such as the mid american group . this observation is well supported by the connection patterns of the nodes in this group . such useful information cannot be easily obtained by conventional network clustering algorithms , and it shows the advantages of the proposed modeling approach . the dolphins social network is another widely studied example . it can be divided into two big clusters . one of the clusters may be further divided into three small groups , as studied in . the analysis results for the dolphins social network are shown in figure 6 . the distribution of the attributes correctly reflects the large clusters ( as shown in figure 6(b ) ) , but also indicates that there may exist smaller clusters within the left cluster . the interesting observation is that the attributes of the nodes corresponding to the three small clusters suggested in are indeed well grouped . also , the surrogate network generated by the fitted model ( as shown in figure 6(c ) ) shows a striking similarity to the observed network . the decaying behavior of the fitting errors as the optimization procedure iterates reveals a common feature , which can be seen from the subplots ( upper right ones ) in all examples studied , as shown in figure 4(c ) , figure 5(c ) and figure 6(c ) . it can be divided into two segments , a rapid decrease phase and a gradual change phase . after closely monitoring the movement of the points in attribute space step by step , we find that the points are always first arranged according to the global structure of the network , and then finely adjusted within each cluster to generate a better configuration . this may partially explain why the suboptimal solutions of the gradient - based algorithm generally still give a good configuration . although modular structures are the most commonly observed and studied , there are many other structures . in our study , we choose a gaussian function to relate the local attributes to the connection probability , which is particularly useful for capturing clustering structure . the proposed modeling approach is nevertheless flexible enough to deal with other structures , provided they can be well defined . for example , for a bipartite network , the function , where is similar to what is used in this paper , may be a better choice . the proposed modeling approach can also be easily extended to deal with more complicated grouping structures than the simple modular structure . let us consider a simple extension with two overlapping clustering structures .
for example , consider a group of students who make friends based on different factors such as personality or avocation . now , even if each friendship network based on any single factor were well clustered , the observed overall network may have a structure quite different from a simple modular network . we call this kind of network a multi - layered modular network . one such example of a two - layered network is shown in figure 7(a ) . this particular network is generated in the following way . suppose the nodes are numbered from to . we first generate two modular networks separately . the first modular network has two clusters , one containing node to node and the other containing node to node . the connection probabilities are for nodes in the same cluster and for nodes in different clusters . the second modular network also has two clusters , one containing node to node , and the other containing nodes to and nodes to . the connection probabilities are slightly different , namely for nodes in the same cluster and for nodes in different clusters . the final network is generated by stacking the two modular networks together and removing repeated links . to explore such a multi - layered structure with the proposed modeling approach , the most straightforward way is to extend the dimension of the attribute vector . let the extended attribute vector be the concatenation of the two layer - specific attribute vectors . for each layer , the connection probability can be written as and . the connection probability of the observed network will be . following the same procedure described above , we get the attribute representation in the extended attribute space . the distribution of the attributes is shown in figure 7(b ) . interestingly , the hidden clustering structures are unveiled in different subspaces of the extended attribute space . in summary , we proposed a probabilistic modeling approach to analyze networks . under this framework , the observed network can be regarded as a measurement of a certain probabilistic system , where the connection probability of any pair of nodes depends on the properly rescaled distance between the introduced local attributes of the corresponding nodes . it is remarkable that the configuration of the optimally estimated attributes represents the intrinsic structure of the observed network well , and thus provides a very informative way to visualize networks in a low - dimensional space . further network structure analysis can be carried out more effectively on the attribute vectors than on the observed network directly . the modeling approach can be easily extended to deal with more complicated structures such as multi - layered clustered networks .
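as a brief sketch of the two - layer extension , the layered connection probability can be written in python as below ; the combination rule p = 1 - ( 1 - p1 ) ( 1 - p2 ) is inferred from the union - of - edges construction described above and is not quoted from the paper , and fitting then proceeds exactly as in the single - layer sketch , with the gradient taken with respect to both attribute blocks .

```python
import numpy as np

def layered_connection_prob(X1, X2):
    """Two-layer latent-attribute model: each layer has its own Gaussian kernel.

    Stacking the layers (keeping an edge if it appears in either layer) gives
    p = 1 - (1 - p1)(1 - p2); this combination rule is an assumption inferred
    from the construction described in the text.
    """
    def kernel(X):
        d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2)
    p1, p2 = kernel(X1), kernel(X2)
    return 1.0 - (1.0 - p1) * (1.0 - p2)

# quick check with random one-dimensional attributes for each layer
rng = np.random.default_rng(1)
X1, X2 = rng.standard_normal((5, 1)), rng.standard_normal((5, 1))
print(np.round(layered_connection_prob(X1, X2), 2))
```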
undirected formation control has been one of the most studied subjects in multi - agent systems .the formation control model is described by two characteristics : one is an undirected graph describing the pattern of interaction , and the other is a set of scalar functions each of which describes the interaction law between a pair of adjacent agents .a detailed description is given below let be an undirected connected graph of vertices .let be the set of vertices adjacent to vertex .we then consider the motion of agents in by each with is a scalar function describing how adjacent agents and interact with each other .we require that be identical with for all , in other words , interactions among agents are reciprocal . in its most general form ,each could be a function of , and possibly the time variable as well . in this case , we have proved in that if is connected , then system , treated as a centralized control system , is approximately path - controllable over an open dense subset of the configuration space . yet ,if we regard system as a decentralized control system , i.e , each agent only accesses part of the information , then there is a restriction on what variables each can depend on .for example , it is often assumed that each agent knows agent if and only if is its neighbor , i.e , . then in this caseeach is at most a function of and , and possibly the time . over the last decade ,there have been many solid works about using system to achieve decentralized formation control .questions about the level of interaction laws that are necessary for organizing such systems , questions about system convergence , questions about counting and locating stable equilibria , and questions about the issues of robustness and etc .have all been treated to some degree ( see , for example , ) .for example , a popular decentralized algorithm , known as the krick s law , is that we assume each agent measures the mutual distance between himself and its neighbors .the control law is then given by for all where is the prescribed distance between and in the target formation . by following this decentralized algorithm , systemwill then be a gradient system with respect to the potential function .in fact , it has been shown that if each is a continuous function depending only on the distance , then the resulting system is always a gradient system .however , the associated potential function often has multiple local minima , and in some cases , the number of local minima has an exponential growth with respect to the number of agents ( see , for example , ) .also , we note that if each depends only on the mutual distance as is the case if we adopt the krick s law for , then how a configuration is embedded into the euclidean space is not relevant , only the shape of the formation matters . in other words , if a configuration is an equilibrium associated with system , then any rotation or translation of the configuration will also be an equilibrium . in any of such case, the group of rigid motions is introduced to describe this phenomena .two configurations will be recognized as the same target formation if they are in the same orbit with respect to the group action . in this paper , as in many earlier work on this problem , we will investigate system , treated as a decentralized formation control system , by equipping it with a set of new control laws .what distinguishes this paper from others is that in addition to the shape of the target configuration , we also emphasize its euclidean embedding . 
to be more precise ,we let and be two configurations with the same centroid , i.e , , and we distinguish and in the sense that these two configurations are recognized as the same target formation if and only if .we impose the condition that and have the same centroid because of the fact that the centroid of a configuration in an undirected formation control system is invariant along the evolution regardless of the choice of the control laws .the decentralized control law is then designed for each agent so that the solution of the control system may converge to the target configuration . in particular , we show that there is a quadratic lyapunov function associated with system whose unique local ( global ) minimum point is the target configuration .but we also note ( and we will see later in the paper ) that there may exist a continuum of equilibria for system , thus a solution of system may fail to converge to the global minimum point . to fix this problem , we then modify the formation control laws by adding noise terms .this is an application of simulated annealing to formation control systems .simulation results then show that sample paths of the modified stochastic system approach the global minimum point .the rest of this paper is organized as follows . in section 2, we will first specify what information each agent knows , or in other words what variables can depend on .then we will introduce the decentralized formation control law , and establish the convergence of system . in section 3, we will explore one of the limitations of this formation control model by showing that there may exist a continuum of equilibria of system .for simplicity , we will only focus on trees as a special type of network topologies . in this special case ,we show that there is a simple condition for determining whether a configuration is an equilibrium or not , and thus there is a geometric characterization of the set of equilibria of system .the existence of continuum of equilibria poses a problem about the convergence of system to the target configuration . in section 4, we will focus on fixing this problem by applying the technique of simulated annealing to the algorithm .simulation results then show that a typical sample path will converge to the target configuration .in this section , we will introduce the decentralized formation control law and show that system , when equipped with this control law , converges to the set of equilibria .but before that , we need to be clear about what we mean by a decentralized formation control system .so in the first part of this section , we will specify what information each agent knows , i.e , what variables each can depend on .also we will specify how the information of the target configuration is distributed among agents .we first introduce the underlying space of system . as interactions among agents are reciprocal , the centroid of a configuration is always invariant along the evolution in an undirected formation control system , so we may as well assume that the centroid of a configuration is located at the origin .the configuration space , as the underlying space of system , is then defined by it is clear that is a euclidean space of dimension . in this paper, we assume that is the target configuration , i.e , each is the target position for agent .we will now specify what information each agent can access . 
in this paper ,if is an edge of , we then assume that * agent knows ; * agent is able to measure at any time ; consequently , if is an edge of , then we require that each scalar function depend only on , and possibly the time variable .suppose is an edge of , we then let the control law be defined as where is the standard inner - product of two vectors , and is the standard euclidean norm of a vector .we note that in this definition , if we exchange roles of vertex and vertex , then will be identical with .this is consistent with our assumption that the formation control system is undirected .the main result of this section we will prove is about the convergence of system as stated below .let be the decentralized control law defined by expression .then there is a quadratic lyapunov function associated with system defined as let be a solution of system , then the derivative is zero if and only if is an equilibrium . this proof is done by explicit computation .we check that it is then clear that the time derivative is zero if and only if each is zero which implies that is an equilibrium . .there may exist multiple equilibria of system .in fact , as we will see in the next section that in the case is a tree graph there exists a continuum of equilibria . nevertheless , there is only one local ( and also global ) minimum point of the potential function which is .it thus suggests that we apply simulated annealing to this formation control law as we will discuss in the last section of the paper .+ _ remark ii_. notice that the potential function approaches infinity as goes to infinity . on the other hand ,we have so each solution of system has to remain in a bounded set , and thus converges to the set of equilibria . in other words, no agent escapes to infinity along the evolution .in this section , we will explore the set of equilibria of system .the main purpose of doing this is to illustrate one of the limitations of this formation control law .it is well - known that if is a lyapunov function for a dynamical system and is a unique equilibrium , then is stable and all solutions of the system will converge to .however , this is not the case here , i.e , the target configuration will not be the unique equilibrium of system . as we will see in this section there may exist a continuum of equilibria of system . 
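since the exact expression of the control law is not reproduced in this text , the short python sketch below uses a distance - based gradient law of the krick type ( mentioned in the introduction ) purely as a stand - in , to illustrate numerically the general point used above : with reciprocal pairwise interactions the centroid of the configuration does not move , while the formation error decreases .

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in reciprocal law (Krick-type distance gradient), NOT the target-point
# law proposed in this paper: used only to check centroid invariance numerically.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]       # minimally rigid graph in the plane
target = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
d_star = {e: np.linalg.norm(target[e[0]] - target[e[1]]) for e in edges}

x = target + 0.3 * rng.standard_normal(target.shape)    # perturbed initial configuration
dt, steps = 0.01, 2000
c0 = x.mean(axis=0)                                      # initial centroid

for _ in range(steps):
    u = np.zeros_like(x)
    for (i, j), d in d_star.items():
        rij = x[j] - x[i]
        w = rij @ rij - d**2                 # symmetric weight, hence u_ij = -u_ji
        u[i] += w * rij
        u[j] -= w * rij
    x += dt * u

err = max(abs(np.linalg.norm(x[i] - x[j]) - d) for (i, j), d in d_star.items())
print("max residual distance error:", round(err, 4))                       # small if converged
print("centroid drift             :", np.linalg.norm(x.mean(axis=0) - c0))  # ~ machine precision
```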
for simplicity, we will only focus on the case where the interaction pattern is a tree graph .we focus on this special class of interaction patterns because in this case there is a simple condition telling us whether a configuration is an equilibrium or not .in particular , we will use this condition to characterize the set of equilibria of system in a geometric way .a path in a graph is a finite sequence of edges which connects a sequence of vertices .a simple path then refers to a path which does not have repeated vertices , and a circle refers to a path without repeated vertices or edges , other than the repetition of the starting and ending vertices .an undirected graph is a * tree * if any two vertices of are connected by a unique simple path , i.e , there is no circle in .each tree graph can be inductively built up starting with one vertex , and then at each step , we join a new vertex via one new edge to an existing vertex .this , in particular , implies that each tree graph has a leaf , i.e , a vertex of degree one .an example of a tree graph is given in figure [ tree ] .vertices and edges .vertices are labeled with respect to an inductive construction .for any two vertices in the graph , there is a unique path connecting them .the four vertices , , and are leaves of the tree graph above . ] in this part , we show that if the graph is a tree graph , then the set of equilibria associated with system can be characterized by a simple condition stated below .[ equic ] let be a tree graph , then is an equilibrium associated with system if and only if for all .it is clear that if each is zero , then is an equilibrium .we now prove the other direction , and the proof is done by induction on the number of agents .+ _ base case_. suppose , then .so if is an equilibrium , then either or , but they both imply .+ _ inductive step_. assume the lemma holds for with , and we prove for the case .since each tree graph has at least one leaf , we may assume that vertex is a leaf of and it joins the graph via edge .suppose is an equilibrium , then we must have then by the same arguments we used for proving in the base case , we conclude that .now let and let be the subgraph induced by , i.e , for any two vertices and in , the pair is an edge of if and only if it is an edge of .it is clear that is also a tree graph .let be a sub - configuration of consisting of agents , then is also an equilibrium under with .this holds because is an equilibrium and meanwhile , so the agent does nt attract or repel any agent in . by induction, we have for all .this then , combined with the condition , establishes the proof . in this part, we will use the equilibrium condition to characterize the set of equilibria of system .[ conte ] let be a tree graph .let be the set of equilibria of system with defined by expression .then there is a diffeomorphism of given by as a product of copies of the unit sphere in .the proof of the theorem will again carried out by induction on the number of agents .so we first prove for the case , and the inductive step will be given after that .+ _ base case_. we show that theorem [ conte ] holds in the case .suppose is an equilibrium , then by lemma [ equic ] , we have , this then implies the set of equilibria associated with system is characterized by equation , together with the condition that .let be a subset of defined by then it is clear that is diffeomorphic to . 
to see this, we define a map by it is clear that the map is a diffeomorphism .we will now show that the set is itself a sphere in .let and let be the sphere of radius centered at in , i.e , it is clear by computation that in fact , if lies inside , then and if lies outside , then this then completes the proof of the base case .+ _ inductive step_. we will now use induction to prove theorem [ conte ] .we assume that the theorem holds for with , and we prove for the case .we again assume that vertex is a leaf of and it joins the graph via edge .let be the subgraph of induced by vertices , then is a tree graph .let be a sub - configuration of consisting of agents .let be a subset of defined by the equilibria set is then characterized by the condition that is an equilibrium , together with the condition that .since these two conditions are independent of each other , there is a diffeomorphism of given by we may translate each in so that the centroid of is zero after translation . since is a tree graph , by induction the set is diffeomorphic to , and hence is diffeomorphic to .one may ponder at this point whether the existence of continuum of equilibria is a consequence of the fact that a tree graph is not a rigid graph .however , it is not the case . for example , if we consider three agents evolving on a plane with being the complete graph , one can then show that the set of equilibria is diffeomorphic to a disjoint union of two circles .though at this moment we do not have a statement about the set of equilibria in the most general case , the tree - graph cases , as well as the three - agents example suggest that it may be inevitable for system to possess a continuum of equilibria .in the previous section , we have showed that there may exist a continuum of equilibria of system under the proposed formation control law .this certainly affects the efficacy of the algorithm because there may exist a solution of system which converges to an equilibrium other than the target configuration . in this section, we will focus on fixing the problem . in view of the fact that there is only one local ( global ) minimum of the quadratic lyapunov function which is the target configuration , we attempt to apply simulated annealing , as a heuristic method , to the decentralized formation control algorithm .in particular , if we add an appropriate noise term to each , the resulting stochastic system is then described by where the are independent standard wiener processes , and is a scalar function of time and defined by with and positive constants .as decays along time , the impact of the noise tends to zero as goes to infinity . 
with these noise terms, we expect that the centroid of the stochastic formation control system is still invariant along the evolution because otherwise the entire configuration may drift to some place which is neither predictable nor controllable .fortunately this is the case here as stated in the next theorem .the centroid of the configuration is invariant along the evolution of the stochastic formation control system described by expression .let be a function defined by let be the -th coordinate of agent .we need to show that for all .let be a vector collecting the -th coordinates of all the agents , i.e , by defining vector , we can rewrite the system equation in a matrix form as where is a symmetric matrix of zero - column / row - sum with the -th , , entry defined by and each is also a symmetric matrix of zero - column / row - sum defined by where is the standard basis of .we now apply the it rule , and get the stochastic differential equation for as follows notice that for any , we have where is a vector of all ones , and so then all inner - products in equation vanish .this completes the proof .we now give some examples of this stochastic formation control system , and illustrate how sample paths of this stochastic system evolve over time .+ * examples*. consider five agents , , , and evolving in .let be the target configuration given by we will work with two network topologies , one is a star graph which is a special type of tree graph and the other is a circle .details are described below + _ 1 .star as the network topology_. we assume that is a star graph with defined by we then pick an initial condition given by in figure [ spstar ] , we show how the value of the quadratic lyapunov function evolves over time .the smooth curve ( the green one ) refers to the solution of system where there is no noise term added into the control law . as this solution converges to an equilibrium ,so we see from the figure that the green curve converges to a constant line along the evolution .also it is clear that the solution , with the initial condition given by equation , does not converge to the target configuration . on the other hand, the ragged curve refers to the solution of system where we have added noise terms into it . in the simulation ,we have chosen and .we see from the figure that approaches zero , in a stochastic way , along time which implies that approaches . evolves over time with / without noise term under the condition that is a star graph , with the initial condition given by expression . ]+ _ 2 . circle as the network topology_. we assume that is now a circle with defined by we adopt the same initial condition given by expression . figure [ spcircle ] shows how evolves over time .similarly , we see that if there is no noise term , then the solution of system does not converge to the target configuration . on the other hand , the sample path approaches along the evolution .the two parameters and are again chosen to be and respectively .evolves over time with / without noise term under the condition that is a circle , with the initial condition given by expression . ]these two examples have demonstrated that simulated annealing can be used to modify the formation control law in order to achieve global convergence to the target configuration .more provable facts are needed at this moment for this heuristic algorithm .in this paper , we have proposed a decentralized formation control law for agents to converge to a target configuration in the physical space . 
in particular, there is a quadratic lyapunov function associated with the formation control system whose unique local ( and global ) minimum point is the target configuration . we then focused on one of the limitations of this formation control model , i.e. , the possible existence of a continuum of equilibria of the system , because of which some solutions of the system do not converge to the target configuration . to address this problem , we applied the technique of simulated annealing to the formation control law , and showed that the modified stochastic system preserves one of the basic properties of the undirected formation control system , namely that the centroid of the configuration is invariant along the evolution over time . we then worked out two simple examples of the stochastic system . simulation results showed that sample paths approach the target configuration . the author thanks prof . ali belabbas , prof . tamer başar , as well as the reviewers of an earlier draft of this work , for their comments on this paper . helmke , u. , & anderson , b. d. ( 2013 ) . equivariant morse theory and formation control . in proc . 51st annual allerton conference on communication , control , and computing ( allerton ) , pp . 1576 - 1583 . ieee . sun , z. , mou , s. , anderson , b. d. , & morse , a. s. ( 2014 ) . formation movements in minimally rigid formation control with mismatched mutual distances . in proc . 53rd ieee conference on decision and control ( cdc 2014 ) .
in this paper , we investigate a decentralized formation control algorithm for an undirected formation control model . unlike other formation control problems where only the shape of a configuration counts , we also emphasize its euclidean embedding . by following this decentralized formation control law , the agents converge to certain equilibria of the control system . in particular , we show that there is a quadratic lyapunov function associated with the formation control system whose unique local ( and global ) minimum point is the target configuration . in view of the fact that there exist multiple equilibria ( in fact , a continuum of equilibria ) of the formation control system , and hence solutions of the system that converge to equilibria other than the target configuration , we apply simulated annealing , as a heuristic method , to the formation control law to address this problem . simulation results show that sample paths of the modified stochastic system approach the target configuration .
to demonstrate our method of selecting autonomous nodes we consider two er graphs with average degree and of autonomous nodes ( ) .first we consider a method based on the degree of the node and later we compare with the method based on the betweenness . under a sequence of random failures ,the networks are catastrophically fragmented when close to of the nodes fail , as seen in fig .[ fig::orderparam ] . for a single er , with the same average degree, the global connectivity is only lost after the failure of of the nodes .figure [ fig::orderparam ] also shows ( line ) the results for choosing as autonomous nodes in both networks the fraction of the nodes with the highest degree and coupling the remaining ones at random . with this strategy ,the robustness can be improved and the corresponding increase of is about , from close to to close to .also the order of the transition changes from first to second order .further improvement can be achieved if additionally the coupled nodes are paired according to their position in the ranking of degree , since interconnecting similar nodes increases the global robustness . in the inset of fig .[ fig::orderparam ] we see the dependence on of the relative robustness for the degree strategy compared to the random case . for the entire range of the proposed strategy is more efficient and a relative improvement of more than is observed when still of the nodes are coupled .two types of technological challenges are at hand : either a system has to be designed robust from scratch or it already exists , constrained to a certain topology , but requires improved robustness . in the former case , the best procedure is to choose as autonomous the nodes with highest degree in each network and couple the others based on their rank of degree . for the latter , rewiring is usually a time - consuming and expensive process , and the creation of new autonomous nodes may be economically more feasible .the simplest procedure consists in choosing as autonomous both nodes connected by the same inter - network link .however , in general , the degree of a pair of coupled nodes is not , necessarily , correlated . in fig .[ fig::orderparam ] we compare between choosing the autonomous pairs based on the degree of the node in network or in network . when pairs of nodes are picked based on their rank in the network under the initial failure ( network ) , the robustness almost does not improve compared to choosing randomly . if , on the other hand , network is considered , the robustness is significantly improved , revealing that this scheme is more efficient .this asymmetry between and network is due to the fact that we attack only nodes in network , triggering the cascade , that initially shuts down the corresponding -node .the degree of this -node is related to the number of nodes which become disconnected from the main cluster and consequently affect back the network . therefore , the control of the degree of vulnerable -nodes is a key mechanism to downsize the cascade . on the other hand , when a hub is protected in network it can still be attacked since the initial attack does not distinguish between autonomous and non - autonomous nodes . in fig .[ fig::robustness](a ) we compare four different criteria to select the autonomous nodes : betweenness , degree , k - shell , and random choice , for two coupled er networks . 
in the betweenness strategy , the selected autonomous are the ones with highest betweenness .the betweenness is defined as the number of shortest paths between all pairs of nodes passing through the node . in the k - shell strategy , the autonomous are the ones with highest k - shell in the k - shell decomposition . the remaining nodes , for all cases , have been randomly inter - linked . since ernetworks are characterized by a small number of k - shells , this strategy is even less efficient than the random strategy for some values of , while the improved robustness for degree and betweenness strategies is evident compared with the random selection . while in the random case , for , a significant decrease of the robustness with is observed , in the degree and betweenness cases , the change is smoother and only significantly drops for higher values of .a maximum in the ratio occurs for , where the relative improvement is above . since in random networks ,the degree of a node is strongly correlated with its betweenness , their results are similar .many real - world systems are characterized by a degree distribution which is scale free with a degree exponent . in fig .[ fig::robustness](b ) we plot as a function of for two coupled scale - free networks ( sf ) with nodes each and . similar to the two coupled er , this system is also significantly more resilient when the autonomous nodes are selected according to the highest degree or betweenness . for values of the robustness is similar to that of a single network ( ) since the most relevant nodes are decoupled .a peak in the relative robustness , ( see inset of fig . [ fig::robustness]b ) , occurs for where the improvement , compared to the random case , is almost .betweenness , degree , and -shell , have similar impact on the robustness since these three properties are strongly correlated for sf . from fig .[ fig::robustness ] , we see that , for both sf and er , the robustness is significantly improved by decoupling , based on the betweenness , less than of the nodes . studying the dependence of the robustness on the average degree of the nodes we conclude that for average degree larger than five , even autonomous nodes are enough to achieve more than of the maximum possible improvement . for the cases discussed in fig .[ fig::robustness ] , results obtained by selecting autonomous nodes based on the highest degree do not significantly differ from the ones based on the highest betweenness .this is due to the well known finding that for erds - rnyi and scale - free networks , the degree of a node is strongly correlated with its betweenness . however ,many real networks are modular , i.e. , composed of several different modules interconnected by less links , and then nodes with higher betweenness are not , necessarily , the ones with the largest degree .modularity can be found , for example , in metabolic systems , neural networks , social networks , or infrastructures . in fig .[ fig::fourer ] we plot the robustness for two coupled modular networks .each modular network was generated from a set of four erds - rnyi networks , of nodes each and average degree five , where an additional link was randomly included between each pair of modules . 
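As an illustration of the construction just described, the sketch below builds one such modular network with networkx and ranks candidate autonomous nodes by betweenness. It is not the authors' code: the module size used here is a placeholder (the actual value is not legible in this copy), and the function names and defaults are ours.

```python
import random
import networkx as nx

def modular_er(n_per_module=4000, k_avg=5.0, n_modules=4, seed=0):
    """Build one modular network: several ER modules joined by a single
    random link between every pair of modules, as described above.
    n_per_module is an assumed, illustrative size."""
    random.seed(seed)
    p = k_avg / (n_per_module - 1)
    g = nx.Graph()
    for m in range(n_modules):
        er = nx.gnp_random_graph(n_per_module, p, seed=seed + m)
        g.update(nx.relabel_nodes(er, {v: (m, v) for v in er}))
    for a in range(n_modules):
        for b in range(a + 1, n_modules):
            u = (a, random.randrange(n_per_module))
            v = (b, random.randrange(n_per_module))
            g.add_edge(u, v)          # one inter-module bridge per pair
    return g

def autonomous_by_betweenness(g, q):
    """Return the (1 - q) fraction of nodes to decouple: those with the
    highest betweenness centrality; the remaining nodes stay coupled."""
    n_auto = int(round((1.0 - q) * g.number_of_nodes()))
    bc = nx.betweenness_centrality(g)   # exact; pass k=... to sample on large graphs
    ranked = sorted(bc, key=bc.get, reverse=True)
    return set(ranked[:n_auto])
```

Ranking by degree instead amounts to replacing the betweenness dictionary with dict(g.degree()), which makes the comparison between the two strategies straightforward to reproduce.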
for a modular network , the nodes with higher betweenness are not necessarily the high - degree nodes but the ones bridging the different modules .figure [ fig::fourer ] shows that the strategy based on the betweenness emerges as better compared to the high degree method .another example that shows that betweenness is superior to degree is when we study coupled random regular graphs . in random regular graphs all nodes have the same degree and are connected randomly .figure [ fig::rr ] shows the dependence of the robustness on the degree of coupling , for two interconnected random regular graphs with degree .the autonomous nodes are selected randomly ( since all degrees are the same ) or following the betweenness strategy . though all nodes have the same degree and the betweenness distribution is narrow ,selecting autonomous nodes based on the betweenness is always more efficient than the random selection .thus , the above two examples suggest that betweenness is a superior method to chose the autonomous nodes compared to degree .the vulnerability is strongly related to the degree of coupling ._ have analytically and numerically shown that , for random coupling , at a critical coupling , the transition changes from continuous ( for ) to discontinuous ( for ) . in fig .[ fig::diagram ] we see the two - parameter diagram ( vs ) with the tricritical point and the transition lines ( continuous and discontinuous ) for the random ( inset ) and the degree ( main ) strategies . as seen in fig .[ fig::diagram ] , when autonomous nodes are randomly selected , about autonomous nodes are required to soften the transition and avoid catastrophic cascades , while following the strategy proposed here only a relatively small amount ( ) of autonomous nodes are needed to avoid a discontinuous collapse . above the tricritical point , the jump increases with the degree of coupling , lending arguments to the paramount necessity of an efficient strategy for autonomous selection , given that the fraction of nodes which can be decoupled is typically limited .the dependence of on the average degree is shown in fig .[ fig::qtk ] .the ratio between the tricritical coupling for degree and random strategies increases with decreasing .for example , for the fraction of autonomous nodes needed to soften the transition with the random selection is six times the one for the degree strategy . as in ref . , following the theory of riedel and wegner , we can characterize the tricritical point .two relevant scaling fields are defined : one tangent ( ) and the other perpendicular ( ) to the critical curve at the tricritical point . in thesecoordinate axes the continuous line is described by , where the tricritical crossover exponent for degree and random strategies .the tricritical order parameter exponent , , can be evaluated from , giving for both strategies . 
since these two exponents are strategy independent ( see fig .[ fig::crossover ] ) , we conjecture that the tricritical point for degree and random selection are in the same universality class .here , we propose a method to chose the autonomous nodes in order to optimize the robustness of coupled networks to failures .we find the betweenness and the degree to be the key parameters for the selection of such nodes and we disclose the former as the most effective for modular networks .considering the real case of the italian communication network coupled with the power grid , we show in fig .[ fig::italy ] that protecting only the four communication servers with highest betweenness reduces the chances of catastrophic failures like that witnessed during the blackout in 2003 . when this strategy is implemented the resilience to random failures or attacksis significantly improved and the fraction of autonomous nodes necessary to change the nature of the percolation transition , from discontinuous to continuous , is significantly reduced .we also show that , even for networks with a narrow diversity of nodes like erds - rnyi graphs , the robustness can be significantly improved by properly choosing a small fraction of nodes to be autonomous . as a follow - upit would be interesting to understand how correlation between nodes , as well as dynamic processes on the network , can influence the selection of autonomous nodes . besides , the cascade phenomena and the mitigation of vulnerabilities on regular lattices and geographically embedded networks are still open questions .we consider two coupled networks , and , where a fraction of -nodes fails . the cascade of failures can be described by the iterative equations , \\\alpha_n & = & p \left ( 1 - q_{\beta , n } \left [ 1 - s_{\beta_{n-1}b}(\beta_{n-1 } ) \right ] \right ) , \nonumber\end{aligned}\ ] ] where and are , respectively , the fraction of and surviving nodes at iteration step ( not necessarily in the largest component ) , and ( , ) is the fraction of nodes in the largest component in network given that nodes have failed .this can be calculated using generating functions . ( ) is the fraction of nodes in network ( ) which have fragmented from the giant cluster ; and changes as the iterative procedure advances . as proposed by parshani _ , when autonomous nodes are randomly selected and the degree of coupling is the same in and , the set of eqs .[ iteration ] simplifies to , \\\alpha_n & = & p \left ( 1 - q \left [ 1 - s_b(\beta_{n-1 } ) \right ] \right ) , \nonumber\end{aligned}\ ] ] where is the degree of coupling .the degree distribution of the networks does not change in the case of random failures and , therefore , is simplified to , which can be calculated as where is the generating function of the degree distribution of network , and satisfies the transcendental equation when autonomous nodes are selected following the degree strategy , the fraction of dependent nodes changes with the iteration step and the set of eqs . 
[ iteration ] no longer simplifies .we divide the discussion below into three different parts : the degree distribution , the largest component , and the coupling ( fraction of dependent nodes ) .the networks a and b are characterized by their degree distributions , and , which are not necessarily the same .the developed formalism applies to any arbitrary degree distribution .we start by first splitting the degree distribution into two parts , the component corresponding to the low - degree dependent nodes , , and the component corresponding to the high - degree autonomous ones , . to accomplish this ,one must determine two parameters , and , from the relations and where is the initial degree of coupling and is the fraction of nodes with degree that are dependent , i.e. , coupled with nodes in the other network .one can then write and in the model , a fraction of -nodes are randomly removed . if , at iteration step , nodes survive , nodes are necessarily autonomous and the remaining ones , , are dependent nodes .one can then show that the degree distribution of network , under the failure of nodes , , is given by while the fraction of surviving links is with the degree distribution and the fraction of surviving links one can calculate the size of the largest component as where satisfies the self consistent equation to calculate the fraction ( and ) one must first calculate the degree distribution of the nodes in the largest component .this is given by the fraction of nodes in the largest component that are autonomous is then given by so that the fraction of autonomous nodes from the original network remaining in the largest component is while the total fraction of autonomous nodes is given by the fraction of nodes disconnected from the largest component that are autonomous is then given by so that the fraction of dependent nodes which have fragmented from the largest component is for simplicity , here we consider that and are constant and do not change during the iterative process .in fact , this is an approximation as the degrees of the autonomous nodes are expected to change when their neighbors fail .however , in spite of shifting the transition point , this consideration does not change the global picture described here . where is the number of node failures , the size of the largest connected cluster in a network after failures , and is the total number of nodes in the network .this definition corresponds to the area under the curve of the fraction of nodes , in the largest connected cluster , as a function of the fraction of failed nodes ( shown in fig .we extend this definition to coupled systems by performing the same measurement , given by eq .( [ eq::def.robustness ] ) , only on the network where the random failures occur , namely , network .we acknowledge financial support from the eth risk center , from the swiss national science foundation ( grant no .200021 - 126853 ) , and the grant number fp7 - 319968 of the european research council .we thank the brazilian agencies cnpq , capes and funcap , and the grant cnpq / funcap .sh acknowledges the european epiwork project , the israel science foundation , onr , dfg , and dtra .
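For concreteness, here is a minimal sketch of the robustness measure quoted above, i.e. the area under the largest-cluster curve, evaluated for a single network under one random attack sequence. For the coupled system the same quantity is measured on the network where the failures occur (network A), with the cascade dynamics deciding which nodes remain functional; that part is omitted here. Function names and defaults are ours, not the authors'.

```python
import random
import networkx as nx

def robustness(g, seed=0):
    """Robustness R = (1/N) * sum_{Q=1..N} s(Q), where s(Q) is the fraction
    of the original N nodes that lie in the largest connected cluster after
    Q random node failures (the area-under-the-curve definition above)."""
    random.seed(seed)
    n = g.number_of_nodes()
    h = g.copy()
    order = list(h.nodes())
    random.shuffle(order)              # one random attack sequence
    acc = 0.0
    for node in order:                 # Q = 1, ..., N
        h.remove_node(node)
        if h.number_of_nodes():
            s = max(len(c) for c in nx.connected_components(h)) / n
        else:
            s = 0.0
        acc += s
    return acc / n

# usage sketch: average over several attack sequences on an ER graph
# g = nx.gnp_random_graph(1000, 5.0 / 999)
# r = sum(robustness(g, seed=s) for s in range(10)) / 10
```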
natural and technological interdependent systems are highly vulnerable due to dependencies which pose a huge integrative risk due to the recently discovered abrupt collapse under failure . mitigating the risk by partial disconnection endangers their functionality . here we propose a systematic strategy of establishing a minimum number of autonomous nodes that guarantee a smooth transition in the robustness . our method which is based on betweenness is tested on various examples including the famous electrical blackout of italy . we show that we can reduce the necessary number of autonomous nodes by a factor of five compared to the random choice which for practical purposes means the recovery of functionality . we also find that the transition to abrupt collapse follows tricritical scaling characterized by a set of exponents which is independent on the protection strategy . interconnected complex networks are ubiquitous in today s world . they control infrastructures of modern society ( energy - communication - transportation ) , the financial system or even the human body . unfortunately , they are much more fragile than uncoupled networks as recently recognized through the finding that the robustness changes from a second order transition in uncoupled systems to first order in interdependent systems . the obvious mitigation strategy consists in partially decoupling the networks by the creation of autonomous nodes . too much disconnection however risks endangering the functionality of the system . the question which we will address here is how to reduce fragility without losing functionality and we will in fact answer this question by developing an explicit algorithm based on betweenness that enables to avoid the abrupt collapse with a minimum number of autonomous nodes . communication servers . in a ) all communication servers are coupled while in b ) four servers have been decoupled following the strategy proposed here . the coupling between the networks was established based on the geographical location of the nodes , such that each communication server is coupled with the closest power station . [ fig::italy ] ] fraction of -nodes in the largest connected cluster , , as a function of the fraction of randomly removed nodes from network a , for two coupled er ( average degree ) with of the nodes connected by inter - network links ( ) . it is seen that robustness can significantly be improved by properly selecting the autonomous nodes . we start with two fully interconnected er and decouple of their nodes according to three strategies : randomly ( line ) , the ones with highest degree in network ( line ) and in network ( line ) . we also include the case where autonomous nodes in both networks are chosen as the ones with highest degree and all the others are interconnected randomly ( line ) . the inset shows the dependence of the relative robustness of the degree strategy on the degree of coupling compared with the random case . results for the degree have been obtained with the formalism of generation functions ( see _ methods _ ) . ] dependence of the robustness , , on the degree of coupling , , for two , interconnected , ( a ) er ( average degree ) and ( b ) sf with degree exponent . applying our proposed strategy is applied , the optimal fraction of autonomous nodes is relatively very small . autonomous nodes are chosen in four different ways : randomly ( ) , high degree ( ) , high betweenness ( ) , and high k - shell ( ) . 
the insets show the relative improvement of the robustness , for the different strategies of autonomous selection compared with the random case . results have been averaged over configurations of two networks with nodes each . for each configuration we averaged over sequences of random attacks . , title="fig : " ] + dependence of the robustness , , on the degree of coupling , , for two , randomly interconnected modular networks with nodes each . the modular networks were obtained from four erds - rnyi networks , with nodes each and average degree five , by randomly connecting each pair of modules with an additional link . autonomous nodes are selected in three different ways : randomly ( blue triangles ) , higher degree ( black dots ) , and higher betweenness ( red stars ) . in the inset we see the relative enhancement of the robustness , for the second and third schemes of autonomous selection compared with the random case . results have been averaged over configurations and sequences of random attacks to each one . ] dependence of the robustness , , on the degree of coupling , , for two , randomly interconnected random regular graphs with nodes each , all with degree four . autonomous nodes are selected in two different ways : randomly ( blue triangles ) and higher betweenness ( red stars ) . in the inset the relative enhancement of the robustness is shown for the betweenness compared to the random case . results have been averaged over configurations and sequences of random attacks to each one . ] ) under random attack . the horizontal axis is the degree of coupling and the vertical one is so that is the fraction of initially removed nodes . the size of the jump in the fraction of -nodes in the largest connected cluster is also included ( curve ) . the dashed curve stands for a discontinuous transition while the solid one is a critical line ( continuous transition ) . the two lines meet at a tricritical point ( tp ) . autonomous nodes are selected based on the degree ( main plot ) and randomly ( inset ) . results have been obtained with the formalism of generating functions . [ fig::diagram ] ] dependence on the average degree for two coupled er , showing that the fraction of autonomous nodes to smoothen out the transition is significantly reduced with the proposed strategy when compared with the random case . autonomous nodes are selected following two different strategies : randomly ( red squares ) and high degree ( black circles ) . [ fig::qtk ] ] -nodes in the largest connected cluster on the scaling field along the direction perpendicular to the transition line at the tricritical point . the slope is the tricritical exponent related with the order parameter . autonomous nodes in the two coupled er have been selected randomly ( red line ) and following the ranking of degree ( black line ) . [ fig::crossover ] ] buldyrev _ et al . _ proposed a percolation framework to study two coupled networks , and , where each -node is coupled to a -node , via bi - directional links , such that when one fails the other can not function either . the removal of a fraction of -nodes may trigger a domino effect where , not only their counterparts in fail , but all nodes that become disconnected from the giant cluster of both networks also fail . this causes further cascading of failures , yielding an abrupt collapse of connectivity , characterized by a discontinuous ( first order ) percolation transition . parshani _ et al . 
_ showed that damage can be mitigated by decreasing the degree of coupling , but only if a significant fraction ( ) of nodes is decoupled , the transition changes from discontinuous to continuous . the coupling is reduced by randomly selecting a fraction of nodes to become autonomous and , therefore , independent on the other network . for the coupling between power stations and communication servers , for example , autonomous power stations have alternative communication systems which are used when the server fails and an autonomous server has its own energy power supply . we propose a method , based on degree and centrality , to identify these autonomous nodes that maximize the system robustness . we show that , with this scheme , the critical coupling increases , i.e. , the fraction of nodes that needs to be decoupled to smoothen out the transition is much smaller ( close to compared to ) . significant improvement is observed for different coupled networks including for erds - rnyi graphs ( er ) where such improvement in the robustness was unexpected given their narrow degree distribution . to demonstrate the strength of our approach , in fig . [ fig::italy ] we apply the proposed strategy to the real coupled system in italy and show that by only protecting four servers the robustness is significantly improved ( details in the figure caption ) . we consider a pair of networks , and , where a fraction ( degree of coupling ) of -nodes are coupled with -nodes . to be functional , nodes need to be connected to the giant cluster of their network . when an -node fails , the corresponding one in can not function either . consequently , all nodes bridged to the largest cluster through these nodes , together with their counterpart in the other network , become also deactivated . a cascade of failures occurs with drastic effects on the global connectivity . this process can also be treated as an epidemic spreading . as explained in the section _ methods _ , to study the resilience to failures , we follow the size of the largest connected cluster of active -nodes , under a sequence of random irreversible attacks to network . extending the definition proposed for single networks , we quantify the robustness as the area under the graph of the largest cluster as function of the fraction of node failures ( see _ methods _ ) . to follow the cascade triggered by the failure of a fraction of -nodes , similar to , we solve the iterative equations , , \\ \alpha_n & = & p\left(1-q_{\alpha , n}\left[1-s_{\beta_{n-1}b}\left(\beta_{n-1}\right)\right]\right ) , \end{aligned}\ ] ] with the initial condition , where and are the fraction of and surviving nodes at iteration step and is the fraction of such nodes in the giant cluster . is the fraction of dependent nodes in network fragmented from the largest cluster ( see _ methods _ for further details ) .
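A direct Monte Carlo sketch of this cascade, as opposed to the generating-function iteration, can make the mechanism concrete. The fragment below assumes the two networks share the node labels 0..n-1 and that the dependency coupling is given as a dictionary from dependent A-nodes to their B partners (autonomous nodes simply do not appear in it); these representational choices are ours and are only meant to illustrate the failure propagation described above.

```python
import random
import networkx as nx

def cascade(ga, gb, coupling, p, seed=0):
    """One realisation of the mutual-percolation cascade: a fraction 1-p of
    A-nodes fails at random, then nodes outside the giant component of their
    own network, and dependent nodes whose partner has failed, are removed
    iteratively.  Returns the surviving functional fraction of A-nodes."""
    random.seed(seed)
    inv = {b: a for a, b in coupling.items()}
    alive_a = set(ga)
    alive_b = set(gb)
    kill = random.sample(sorted(alive_a), int(round((1 - p) * len(alive_a))))
    alive_a -= set(kill)
    while True:
        giant_a = max(nx.connected_components(ga.subgraph(alive_a)),
                      default=set(), key=len)
        giant_b = max(nx.connected_components(gb.subgraph(alive_b)),
                      default=set(), key=len)
        # a dependent node survives only if its partner is still functional
        new_a = {a for a in giant_a if a not in coupling or coupling[a] in giant_b}
        new_b = {b for b in giant_b if b not in inv or inv[b] in giant_a}
        if new_a == alive_a and new_b == alive_b:
            return len(new_a) / ga.number_of_nodes()
        alive_a, alive_b = new_a, new_b
```

Selecting autonomous nodes by degree or betweenness, as sketched earlier, simply amounts to removing the corresponding keys from the coupling dictionary before the attack is launched.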
the nervous system is an extremely complex system comprising nerve cells ( or neurons ) and gial cells . by electrical and chemical synapses of different polarity neuronsform a great variety large - scale networks . therefore, modeling of brain s key functional properties is associated with study of collective activity of complex neurobiological networks .dynamical modeling approach is effective tool for the analysis of this kind of networks .first of all this approach takes building dynamical models of single neurons . on the one hand, such models should describe large quantity of various dynamical modes of neural activity ( excitable , oscillatory , spiking , bursting , etc . ) .this complexity is associated with the large number of voltage - gated ion channels of neurons .it takes employment of complex nonlinear dynamical systems given by differential equations .the canonical representative of this type of models is hodgkin - huxley system .it describes dynamics of the transport through membrane of neuron in detail . on the other hand , to model neural network consisting of the large number of interconnected units it is necessary to create simplified models for single neuron to avoid problems that are induced by high dimension and nonlinearity .for example , one which is commonly used in simulations is integrate - and - fire model .it represents one - dimensional nonlinear equation with some threshold rule .that is , if the variable of the model crosses a critical value , then it is reset to new value and the neuron is said to have fired . to solve the contradiction between the requirements of complexity and simplicity of neuron models phenomenological models were introduced .they describe basic properties of neuron dynamics , but these models do not take into account the large number of voltage - gated ion channels of neurons . as a rule they involve generalized variable which mimic the dynamic of some number of ionic currents at the same time .the examples of this type models are fitzhugh - nagumo , hindmarsh - rose , morris - lecar .they have the form of differential equations systems .however , there is another class of phenomenological models of the neural activity .these are discrete - time models in form of point maps . in the last decadethis kind of neural models has attracted much attention .for example using a map - based approach rulkov et . 
have studied dynamics of one- and -two dimensional large - scale cortical networks .it has been found that such map - based models produce spatiotemoral regimes similar to those exhibited by hodgkin - huxley -like models .neuron oscillatory activity can take a variety of forms .one of the most interesting oscillatory regimes is spiking - bursting oscillations regime , which is commonly observed in a wide variety of neurons such as hippocampal pyramidal neurons , thalamic neurons , pyloric dilator neurons etc .a burst is a series of three or more action potential usually riding on a depolarizing wave .it is believed that the bursting oscillations play crucial role in informational transmitting and processing in neurons , facilitate secretion of hormones and drive a muscle contraction .this oscillation can be regular or chaotic depending on the concentration of neuromodulators , currents and other control parameters .another interesting oscillatory regime is an oscillation of membrane potential below the excitation threshold , so - called subthreshold oscillation .for example , these oscillations with close to 10 hz frequency are observed in olivo - cerebellar system providing highly coordinated signals concerned with the temporal organization of movement execution ( see more discussion in the conclusion ) .the best known spiking - bursting activity model is the hindmarsh - rose system .it is three - dimensional ode - based system involving two nonlinear functions .spiking - bursting dynamics of map - based models has recently been investigated by cazelles et.al , rulkov , shilnikov and rulkov , tanaka .a piecewise linear two - dimensional map with a fast - slow dynamics was introduced in .it was shown that depending on the connection ( diffusively or reciprocally synoptically ) , the model demonstrates several modes of cooperative dynamics , among them phase synchronization .two dimensional map is used for modeling of spiking - bursting neural behavior of neuron .this map contains one fast and one slow variable .the map is piecewise nonlinear and has two lines of discontinuity on the phase plane .modification of this model is presented in .the further advancement of rulkov model is presented in .a quadratic function has been introduced in the model .using these modifications authors obtained the dynamical regimes of subthreshold oscillation , corresponding to the periodical oscillation of neuron s transmembrane potential below the excitation threshold .in the dynamics of two coupled piece - wise linear one - dimensional monostable maps is investigated .the single map is associated with poincar section of the fitzhugh - nagumo neuron model .it is found that a diffusive coupling leads to the appearance of chaotic attractor .the attractor exists in an invariant region of phase space bounded by the manifolds of the saddle fixed point and the saddle periodic point .the oscillations from the chaotic attractor have a spike - burst shape with anti - phase synchronized spiking .a map - based neuron model involving quasi - periodic oscillation for generating the bursting activity has been suggested in .izhikevich and hoppenstead have classified map - based one- and two - dimensional models of bursting activity using bifurcation theory .our goal here is to introduce a new map - based model for replication of many basic modes of neuron activity. 
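For readers unfamiliar with map-based neuron models, a commonly quoted form of the Rulkov map referred to above is easy to iterate. The sketch below uses that form with illustrative parameter values that are not taken from this paper; values with alpha slightly above 4 and small mu are typically reported to give chaotic spiking-bursting in the fast variable.

```python
import numpy as np

def rulkov(alpha=4.3, mu=0.001, sigma=0.1, n_steps=20000, x0=-1.0, y0=-3.0):
    """Iterate a commonly quoted form of the Rulkov map:
        x_{n+1} = alpha / (1 + x_n**2) + y_n          (fast, membrane-like variable)
        y_{n+1} = y_n - mu * (x_n + 1) + mu * sigma   (slow, recovery-like variable)
    All parameter values here are illustrative assumptions."""
    x = np.empty(n_steps)
    y = np.empty(n_steps)
    x[0], y[0] = x0, y0
    for n in range(n_steps - 1):
        x[n + 1] = alpha / (1.0 + x[n] ** 2) + y[n]
        y[n + 1] = y[n] - mu * (x[n] + 1.0) + mu * sigma
    return x, y

# x, y = rulkov()   # plot x to inspect the spiking-bursting waveform
```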
the greater part of our paper deals with regimes that mimic chaotic spiking - bursting activity of one real biological neuron .we construct a discontinuous two - dimensional map based on well - known one - dimensional lorenz - type map and a discrete version of the fitzhugh - nagumo model .this is the system of two ode : where is the membrane potential of the neuron and is the recovery variable describing ionic currents , is a cubic function of and is constant stimulus .this model takes into account the excitability and regular oscillations of neuron , but not spiking - bursting behavior .we shall introduce a discontinuity in the discrete version for this purpose .we find conditions under which this two - dimensional map has an invariant region on the phase plane , containing chaotic attractor .in addition we show that depending on the values of the parameters , our model can produce phasic spiking and subthreshold oscillations also .the paper is organized as follows . in sec .[ sec : model ] we describe the map - based model . then in sec .[ sec : oneddyn ] we study one - dimensional dynamics in the case when the recovery variable is fixed . in sec .[ sec : twoddyn ] we analyze the relaxation two - dimensional dynamics of the model . then in sec .[ sec : invr ] we find an invariant region bounding the chaotic attractor in the phase plane of the model . in sec .[ sec : othermodes ] we observe other modes of neural activity which could be simulated by using this model .let be a map of the form where the -variable describes the evolution of the membrane potential of the neuron , the - variable describes the dynamics of the outward ionic currents ( the so - called recovery variable ) .the functions and are of the form where the parameter defines the time scale of recovery variable , the parameter is a constant external stimulus , the parameters and ) control the threshold property of the bursting oscillations . herewe have chosen this linear piece - wise approximation of in order to obtain a simple hyperbolic map for chaotic spiking - bursting activity .however , any cubic function can be also used .the map is discontinuous map and is the discontinuity line of .we consider only those trajectories ( orbits ) which do not fall within a discontinuity set , where is the union of points of discontinuity of and its derivative . besides, we assume that , then for any and the map is one to one . we restrict consideration of the dynamics of the map to the following parameter region note that under such conditions we have .this condition is very important for forming chaotic behavior of the map as we shall see bellow . for convenience ,we rewrite the map in the following form where , are the maps us start with the dynamics of the map when the parameter . in this casethe map is reduced to a one dimensional map : where is a constant and it plays the role of a new parameter .the map ( [ eq : onedmap ] ) can be rewritten as where .let us fix the parameters , , , and consider the dynamics of the map ( [ eq : onedmapfl ] ) in the parameter plane .we restrict our study of the map to the following parameter region where .these conditions allow to obtain interesting properties of the map ( [ eq : stpmp ] ) .let us find the conditions on the parameter values for which the map acts like a lorenz - type map .for that we require that ( see fig . 
[pic : kl ] ) it follows from ( [ eq : cndlrnc0 ] ) the following condition on the parameter : where the inequalities ( [ eq : cnd1dbeta ] ) and ( [ eq : cndlrnc1 ] ) define on the plane the region ( see fig.[pic : bd_dbeta ] ) .let us take the parameters and inside the region , and let us consider the plane . in this plane the inequalities ( [ eq : cnd1dbeta ] ) ,( [ eq : cnd1dy0 ] ) and ( [ eq : cndlrnc1 ] ) are satisfied simultaneously in region . in this plane the boundary of of the three lines ( fig .[ pic : bd_betay0 ] ) consider the dynamics of the map for .this region is separated on four subregions by the bifurcation lines corresponding to different dynamics of the map . the line coincides with appearance of an unstable fixed point through crossing of the discontinuity point .line corresponds to the fold ( tangent ) bifurcation of the fixed point ( see fig . [pic : kl](a , d ) ) .line corresponds to the condition note that for there exists a bifurcation corresponding to appearance of homoclinic orbit to the unstable fixed point .the dynamics of the map corresponding to subregions is shown in fig.[pic : kl ] . if the trajectories of the map tend to stable fixed point for any initial conditions different from an unstable fixed point ( fig.[pic : kl ] ( a ) , ( b ) ) .if the map has invariant interval , where for parameters the map exhibits bistable property , that is there exists two attractors , one is a stable fixed point and the second is an invariant set of the interval whose basins of attraction are separated by an unstable fixed point ( fig.[pic : kl](c ) ) . for thereexists the interval ( fig.[pic : kl ] ( d ) ) which attract all trajectories of the map . check that the map on the acts like a lorenz - type map .the map will be a lorenz - type if 1 .[ lst : cndlrnc1 ] the derivative for any ; 2 .[ lst : cndlrnc2 ] the set of preimages of the point of discontinuity , is dense in ; 3 .[ lst : cndlrnc3 ] .one can see that ( [ lst : cndlrnc1 ] ) and ( [ lst : cndlrnc3 ] ) are satisfied . according to condition ( [ lst : cndlrnc2 ] ) is satisfied if for the map on the interval we have and inequality ( [ eq : cndlrnciii ] ) is obviously satisfied .therefore the map on the interval acts like a lorenz - type map .the possible structure of the invariant set of interval is controlled by value .let us find conditions under which the map is strongly transitive .recall that a lorenz - type map is strongly transitive if for any subinterval there is such that under the condition ( [ eq : cndlrnciii ] ) the sufficient condition for strong transitivity on the interval are ( ) where are such that they satisfy the following conditions \ ] ] , g_3^{n_2 + 1}(c ) \in [ b , d).\ ] ] now let us find condition for the parameter values of the map under which .consider the condition ( [ eq : cndn1 ] ) .let us take , where .it is clear that ( [ eq : cndn1 ] ) holds if the parameter satisfies the following conditions let us require that ( [ eq : cnd : strtransn1rngy0 ] ) for is satisfied for from inequalities ( [ eq : cnd1dbeta ] ) , ( [ eq : cndlrnc1 ] ) and the definitions of the boundaries and , it follows that this requirement holds if similarly , for we get by the same argument as indicated above we obtain that for inequalities ( [ eq : cnd : strtransn2rngy0 ] ) hold if the conditions ( [ eq : cndn2 ] ) are satisfied .for example , let us fix , that is . in this casethe map is strongly transitive and therefore it follows from the theorem 3.1.1 . 
of the periodic points are dense in .we note that all of these periodic points are unstable and is a chaotic attractor .fig.[pic : kl ] ( c),(d ) illustrates the dynamics of the map on the interval for regions and respectively .in this section consider the case and . this case corresponds to instability of the unique fixed point . since parameter is sufficiently small ,the dynamic of the map is a relaxation similarly by to the case of ode ( [ eq : modelfhn ] ) .the distinctive characteristic of these systems is two time and velocity scales , so - called `` fast '' and `` slow '' motions .basically fast motions are provided by `` frozen '' system in which slow variables are regarded as a parameters , and it is assumed that small parameter of the system equals to zero .slow motions with size of order of the small parameter are given by evolution of `` frozen '' variable . in case of the map , is the fast variable and is the slow one .let us study the fast and slow motions in our system .the fast motions of the model ( [ eq : stpmp ] ) is approximately described by the map ( [ eq : onedmap ] ) . as indicated above , the dynamics of the map ( [ eq : onedmap ] )can be both , regular and chaotic according to the parameter value ( fig.[pic : kl ] ) .consider now under conditions ( [ eq : cnd1dbeta ] ) , ( [ eq : cndlrnc1 ] ) slow motions of the map on the phase plane in the region separated by the following inequalities in the case the motions of the map have slow features within thin layer ( thickness is of the order , ) near invariant line where directly from the map it can be obtained that is invariant line not only for but for also .one can see that the dynamics on the line is defined by one - dimensional linear map it is clear that the map ( [ eq : smonedmap ] ) has stable fixed point .therefore for the trajectories on with initial conditions moves to the line .all trajectories from layer behave in the same way .let us now consider the stability of the slow motions from relatively to the fast ones . since in the case each point of the is stable fixed point of the fast map ( [ eq : onedmap ] ) then invariant curve is stable with respect to the fast motions .it is follows from the previous description that when is small enough , the structure of the partition of the phase -plane into trajectories does nt significantly change with respect to case of equations ( [ eq : onedmap ] ) , ( [ eq : smonedmap ] ) .the trajectories of the map are close to the trajectories of ( [ eq : smonedmap ] ) within the layer of the slow motions near and to the trajectories of ( [ eq : onedmap ] ) outside these layer .therefore , the motions of the map are also formed by the slow - fast trajectories .let the initial conditions of belong to neighborhood of the invariant curve .any of these trajectories moves within the layer of the slow motions down to the neighborhood of the critical point : , and continue their motions according to the fast motions ( see fig .[ pic : kl](a ) ) , along . since the trajectory of the map with initial condition tends to invariant interval ( see fig . [pic : kl ] ( d ) ) .therefore the fast motions of the map with initial conditions falls into some region ( see fig [ pic : chatt ] ) , if , where is the parallelogram : in other words the region is one parametrical family - indexed of invariant intervals . 
as is attractor , then is also two - dimensional attracting region .since the map has interval for , then a trajectory involving the map belongs to the region as long as its variable do not culminate approximately to the value corresponding to the line ( fig .[ pic : bd_betay0 ] ) . at the same timevariable is slowly increasing for as .thus , within the region the variable continues to increase and variable evolution is close to chaotic trajectory of the map . over line map has stable fixed point which attracts all trajectories ( see fig.[pic : kl ] ( b ) ) . hence if the magnitude of the variable becomes about then trajectory of the map returns into neighborhood of . then the process is repeated . as a result of these slow - fast motionsthe attractor of the system phase plane appears as in ( fig.[pic : chatt](a ) ) . to characterize the complexity of the attractor a we calculated numerically its fractal dimension . atappears that takes non - integer values between 1.35 and 1.9 ( fig.[pic : chatt](b ) ) .therefore is chaotic attractor .for the parameter values from fig .[ pic : chatt ] ( b ) maximum of the fractal dimension is accomplished then .let us prove that the system ( [ eq : stpmp ] ) has an attractor for different values and let us find conditions under which the map has an invariant region . to do that , we construct some ring - like region .denote by the outer boundary and by the inner boundary of the .the is an invariant region if from the conditions and follows that .its should be fulfilled if 1 .[ lst : cndinv1 ] the vector field of the map at the boundary and is oriented inwards to ; 2 .[ lst : cndinv2 ] the images of the boundaries , and the discontinuity line belong to .we construct boundaries and in the form of some polygons .taking into account the condition ( [ lst : cndinv1 ] ) and analyzing the vector field of the map at the lines with uncertain slope we have found the shape of and ( see fig .[ pic : pm_invrg ] ( a ) ) .the equations of the boundaries of and are presented in the appendix .analysis of the position of the images on the phase plane show that the condition ( [ lst : cndinv2 ] ) holds if ( see section [ sec : oneddyn ] ) and inequalities } { j - j_{min } } \right\}\ ] ] are satisfied ( the parameters , and have been introduced in appendix ) .fig.[pic : pm_invrg](b),(c ) illustrates the transformation of by the action of the map under conditions ( [ eq : cndinvepsj ] ) .the inequality ( [ eq : cndinvepsj ] ) determine the parameter region in the parameter plane ( fig .[ pic : bdinvrg ] ( a ) ) . 
since the trajectories with initial conditions execute rotation motion around the fixed point forming some attractors .we calculated numerically fractal dimensional ( fig.[pic : bdinvrg](b ) ) in terms of .its shows that is chaotic attractor .the possible structure of the attractor in the phase plane is shown in fig.[pic : pm_bursts](a ) .[ pic : pm_bursts](b ) illustrates time evolution of the variable corresponding to chaotic attractor .it shows chaotic spiking - bursting neural activity .[ pic : bdinvrg ] ( b ) shows that fractal dimension , on average , tends to decrease with increasing .there is a critical value , , for which fractal dimension has a minimal value .the mechanism of this decreasing can be accounted for by the different types of the dynamics of the variable for different .as the parameter increases , the velocity of the variable is expected to climb .therefore `` life time '' of the trajectories in the strip corresponding to lorenz - map dynamics is reduced . as a result ,the chaotical motions are reduced .at previous sections it was shown that system ( [ eq : stpmp ] ) allows to simulate spiking - bursting behavior of the neuron . herewe show that other regimes of the neural activity ( phasic spiking and burstings threshold excitation , subthreshold oscillation , tonic spiking and chaotic spike generation ) can be obtained by using the map also . to do that we neglect the first inequality in ( [ eq : condd ] ) , inequality ( [ eq : cnd1dbeta ] ) and condition .studying response of the neurons to the influence of external stimulus is one of important task of neuroscience associated with the problem of information transmission in neural system .usually external stimulus is represented as the injection of electrical current into the neuron .let us suppose that the neuron is not excited initially , that is , it is in steady state ( rest ) . in the model ( [ eq : stpmp ] ) such state of neuron corresponds to stable fixed point .consider the response of the system ( [ eq : stpmp ] ) to pulse type stimulus .we assume that the duration of each pulse is small enough ( see fig . [pic : pm_ex2t](a ) ) and its action is equal to the instantaneous changing of the variable on the pulse amplitude .besides , we suppose here that and therefore the dynamics of the system ( [ eq : stpmp ] ) is a relaxation. for this parameter region the system ( [ eq : stpmp ] ) has two thresholds .the first threshold is determined ( see fig . [pic : pm_ex2t](c ) ) by the thin layer of the slow motions near the following invariant line where analogously , the second threshold is defined ( see fig . [pic : pm_ex2t](c ) ) as the thin layer of the slow motions near invariant line where denote by the and ( ) the values of the variables and after stimulation respectively .let be the trajectory of the system ( [ eq : stpmp ] ) with this initial conditions .in other words is response of the system ( [ eq : stpmp ] ) to pulse input .\(i ) if the amplitude of stimulus is not enough ( fig .[ pic : pm_ex2t](a),(i ) ) for overcoming the first threshold , then the maximum of the response will be about amplitude of the stimulus .therefore , in this case the generation of the actions potential does not take place .\(ii ) let us increase the amplitude of the stimulus as it breaks the first threshold but at the same time it is not enough for overcoming of the second threshold ( fig .[ pic : pm_ex2t](c),(ii ) ) . in this casethe fast motions of the map will be close to the fast motions of the nap on interval for . 
and so , the trajectory of the map perform some number of irregular oscillations around discontinuity line ( fig .[ pic : pm_ex2t](c)(ii ) ) .after that , the trajectory within layer near tends to fixed point ( fig .[ pic : pm_ex2t](c)(ii ) ) .such trajectory forms the region of phasic bursting activity with irregular number of spikes ( fig .[ pic : pm_ex2t](b),(ii ) ) .\(iii ) if the amplitude of the stimulus ( fig .[ pic : pm_ex2t](a),(iii ) ) is enough for overcoming the second threshold , then the point belongs to the region of attractor of the invariant line , where with therefore trajectory tends to thin layer of slow motions near invariant line .it moves within thin layer to the neighborhood of the point ( fig .[ pic : pm_ex2t](c),(iii ) ) and its motions continue along fast motions .these motions are close to the trajectories of the map for ( see fig . [pic : kl](a ) ). therefore the trajectory tends to the layer near stable invariant line .after that the trajectory moves within thin layer near and it tends to the fixed point ( fig . [ pic : pm_ex2t](c),(iii ) ) . in this case trajectory corresponds to phasic spike ( fig .[ pic : pm_ex2t](b),(iii ) ) .let us consider dynamics of the map under following conditions on the parameters . one can see from the jacobian matrix that in this case the fixed point has a complex - conjugate multipliers .this point is stable for and unstable for .therefore the piece - wise map produces neimark - sacker like bifurcation ( in classical case of neimark - sacker bifurcation the map is smooth ) .the fixed point is surrounded for by an isolated stable attracting close curve ( fig .[ pic : pm_so](a ) ) .the oscillations corresponding to the occur under the threshold of excitability of the neuron and therefore it is called in neuroscience , subthreshold oscillations ( fig .[ pic : pm_so](b ) ) .let us consider again the relaxation ( ) dynamics of the map in the case , that is when fixed point is unstable .additionally we assume that the parameters of the map has satisfied the following conditions in this case the invariant line separate the fast motions on two flows ( fig .[ pic : pm_cts](a ) ) in the neighborhood of the discontinuity line .the first flow forms the trajectory performing the chaotic oscillations near the line ( fig . [pic : pm_cts](a ) ) .their dynamics are close to the dynamics of the map on interval for .the second flow consists of the trajectories overcoming the second threshold ( fig .[ pic : pm_cts](a ) ) .it moves to neighborhood of the stable invariant line .after that these trajectory tends to the stable invariant line and the described process is repeated .these trajectories form chaotically switching flow from one to other . as a resultis the appearance of a chaotic attractor on phase plane ( fig .[ pic : pm_cts](a ) ) .the fractal dimension .the attractor determines chaotic regime of spiking activity over the background of the subthreshold oscillations ( fig .[ pic : pm_cts](b ) ) .let the parameters of the map satisfy the same conditions as in the case of previous subsection [ sec : closeinvcurve ] with exception of inequality ( [ eq : cndforkmn ] ) . 
let the parameters of the map satisfy the same conditions as in the previous subsection [sec:closeinvcurve], with the exception of inequality ([eq:cndforkmn]). in this case the parameter is small enough that the trajectories with initial conditions from the corresponding neighborhood do not change their direction of motion when they intersect the discontinuity line. this leads to the appearance in the phase plane of motions between the layers near the two invariant lines, and such dynamics forms a closed invariant curve (fig. [pic:pm_ts](a)). there is thus only one attractor in the phase plane, formed by this closed invariant curve, and it determines the tonic spiking regime of neural activity (fig. [pic:pm_ts](b)).
a new phenomenological model of neural activity has been proposed. the model reproduces basic activity modes of real biological neurons, such as spiking, chaotic spiking-bursting and subthreshold oscillations. it is a discontinuous two-dimensional map based on a discrete version of the fitzhugh - nagumo system and on the dynamical properties of a lorenz-like one-dimensional map. we have shown that the dynamics of the model displays both regular and chaotic behavior, we have studied the mechanism underlying the generation of chaotic spiking-bursting oscillations, and sufficient conditions for the existence of chaotic attractors in the phase plane have been obtained. in spite of its idealizations, the dynamical modes demonstrated by the model are in agreement with the regimes of neural activity found experimentally in real biological systems. for example, subthreshold oscillations (see fig. [pic:pm_so](b)) are a basic regime of inferior olive (i.o.) neurons, which belong to the olivo-cerebellar network playing a key role in the organization of vertebrate motor control; a spiking regime over chaotic subthreshold oscillations (see fig. [pic:pm_cts]) is also typical for i.o. neurons. spiking-bursting activity is significant for many types of neurons, in particular hippocampal pyramidal cells and thalamic cells. the table summarizes the gallery of regimes of neural activity displayed by our model: each row pairs a set of parameter conditions with the corresponding regime, namely phasic spikes and bursts (conditions involving the spike and burst thresholds), subthreshold oscillations, chaotic bursting oscillations (inequalities ([eq:cndinvepsj])), chaotic spiking, and tonic spiking. we hope that our model will be useful for understanding the mechanisms of neural pattern formation in large networks.
this work was supported in part by the university paris 7-denis diderot, in part by the russian foundation for basic research (grant 06-02-16137) and by the leading scientific schools of the russian federation (scientific school 7309.2006.2).
the boundary of the first region is given by a relation whose right-hand side involves the ratio (... + f(d))/m_0; the boundary of the second region is given by an analogous relation.
e.r. kandel, j.h. schwartz, t.m. jessell, principles of neural science, prentice-hall int. inc., 1991.
m.i. rabinovich, p. varona, a.i. selverston, h.d.i. abarbanel, dynamical principles in neuroscience, reviews of modern physics 78(4) (2006) 1213.
h.r. wilson, j.d. cowan, excitatory and inhibitory interactions in localized populations of model neurons, biophys. j. 12 (1972) 1-24.
r. fitzhugh, mathematical models of the threshold phenomena in the nerve membrane, bull. math. biophysics 17 (1955) 257-287.
j.l. hindmarsh, r.m. rose, a model of neuronal bursting using three coupled first order differential equations, philos. trans. r. soc. london ser. b 221 (1984) 87-102.
c. morris, h. lecar, voltage oscillations in the barnacle giant muscle fiber, biophys. j. 25 (1981) 87.
d.r. chialvo, generic excitable dynamics on a two-dimensional map, chaos solitons fractals 5 (1995) 461-480.
o. kinouchi, m. tragtenberg, modeling neurons by simple maps, int. j. bifurcation chaos 6(12a) (1996) 2343-2360.
s. kuva, g. lima, o. kinouchi, m. tragtenberg, a. roque, a minimal model for excitable and bursting elements, neurocomputing 38-40 (2001) 255-261.
g. de vries, bursting as an emergent phenomenon in coupled chaotic maps, phys. rev. e 64 (2001) 051914.
n.f. rulkov, i. timofeev, m. bazhenov, oscillations in large-scale cortical networks: map-based model, j. of computational neuroscience 17 (2004) 203-223.
r.d. traub, j.g.r. jefferys, m.a. whittington, fast oscillations in cortical circuits, the mit press, massachusetts, 1999.
r. llinas and y. yarom, oscillatory properties of guinea-pig inferior olivary neurones and their pharmacological modulation: an in vitro study, j. physiol. (lond.) 315 (1986) 569-584.
r. llinas, i of the vortex: from neurons to self, the mit press, massachusetts, 2002.
b. cazelles, m. courbage, m. rabinovich, anti-phase regularization of coupled chaotic maps modelling bursting neurons, europhysics letters 56(4) (2001) 504-509.
n.f. rulkov, regularization of synchronized chaotic bursts, phys. rev. lett. 86 (2001) 183-186.
n.f. rulkov, modeling of spiking-bursting neural behavior using two-dimensional map, phys. rev. e 65, 041922.
a.l. shilnikov, n.f. rulkov, origin of chaos in a two-dimensional map modeling spiking-bursting neural activity, int. j. bifurc. chaos 13(11) (2003) 3325-3340.
a.l. shilnikov, n.f. rulkov, subthreshold oscillations in a map-based neuron model, physics letters a 328 (2004) 177-184.
h. tanaka, design of bursting in a two-dimensional discrete-time neuron model, physics letters a 350 (2006) 228-231.
m. courbage, v.b. kazantsev, v.i. nekorkin, v. senneret, emergence of chaotic attractor and anti-synchronization for two coupled monostable neurons, chaos 12 (2004) 1148-1156.
e.m. izhikevich, f. hoppensteadt, classification of bursting mappings, int. j. bifurcation and chaos 14(11) (2004) 3847-3854.
v.s. afraimovich, sze-bi hsu, lectures on chaotic dynamical systems, american mathematical society press, 354 p. (2003).
v.s. afraimovich, l.p. shilnikov, strange attractors and quasiattractors, in: nonlinear dynamics and turbulence (eds. barenblatt, g. iooss, d.d. joseph), pitman, boston, 1983, pp. 1-34.
v.i. arnold, v.s. afraimovich, yu.s. ilyashenko, l.p. shilnikov, bifurcation theory, dynamical systems v, encyclopaedia of mathematical sciences, springer, berlin, 1994.
l.s. bernardo, r.p. foster, oscillatory behaviour in the inferior olive neurons: mechanism, modulation, cell aggregates, brain res. bull. 17 (1986) 773.
r.s.k. wang and d.a. prince, afterpotential generation in hippocampal pyramidal cells, j. neurophysiol. 45 (1981) 86.
m. deschenes, j.p. roy and m. steriade, thalamic bursting mechanism: an inward slow current revealed by membrane hyperpolarization, brain res. 239 (1982) 289.
[figure captions: (1) regions of different dynamics on the parameter plane, panels (a)-(d), each defined by the corresponding parameter conditions; (2) attractor on the phase plane and waveform of the relaxation spike-bursting oscillations generated by the map; (3) region of chaotic dynamics on the parameter plane and fractal dimension of the attractor versus the parameter; (4) chaotic attractor on the phase plane and spike-bursting oscillations generated by the map; (5) response of the map to a positive pulse: (a) three different amplitudes of the stimulus, (b) behavior of the membrane-potential variable, (c) the phase plane; (6) closed invariant curve on the phase plane and subthreshold oscillations generated by the map; (7) chaotic attractor and chaotic spiking against the background of subthreshold oscillations.]
we propose a discrete time dynamical system ( a map ) as phenomenological model of excitable and spiking - bursting neurons . the model is a discontinuous two - dimensional map . we find condition under which this map has an invariant region on the phase plane , containing chaotic attractor . this attractor creates chaotic spiking - bursting oscillations of the model . we also show various regimes of other neural activities ( subthreshold oscillations , phasic spiking etc . ) derived from the proposed model . * the observed types of neural activity are extremely various . a single neuron may display different regimes of activity under different neuromodulatory conditions . a neuron is said to produce excitable mode if a `` superthreshold '' synaptic input evokes a post - synaptic potential in form of single spikes , which is an order of magnitude larger than the input amplitude . while a `` subthreshold '' synaptic input evokes post - synaptic potentials of the same order . under some conditions a single spike can be generated with arbitrary low frequency , depending on the strength of the applied current . it is called spiking regime . an important regime of neural activity is bursting oscillations where clusters of spikes occur periodically or chaotically , separated by phases of quiescence . other important observed regimes are phasic spikes and bursts , subtreshold oscillations and tonic spiking . understanding dynamical mechanisms of such activity in biological neurons has stimulated the development of models on several levels of complexity . to explain biophysical membrane processes in a single cell , it is generally used ionic channel - based models . the prototype of those models is the hodgkin - huxley system which was originally introduced in the description of the membrane potential dynamics in the giant squid axon . this is a high dimensional system of nonlinear partial differential equations . another class of neuron models are the phenomenological models which mimic qualitatively some distinctive features of neural activity with few differential equations . for example , the leaky integrate - and - fire model , hindmarsh - rose and fitzhugh - nagumo model etc . a new important subclass of phenomenological models is the map - based systems . basically such models are designed with the aim of simulating collective dynamics of large neuronal networks . the map - based models possess at least the same features of ordinary differential equations ( ode ) models , and have more simple intrinsic structure offering an advantage in describing more complex dynamics . in order to model basic regimes of neural activity we design new family of maps that are two - dimensional and based on discrete fitzhugh - nagumo system in which we introduce heaviside step function . the discontinuity line determines the excitation threshold of chaotic spiking - bursting oscillations . for some domain of the parameters , we found on phase plane an invariant bounded region containing chaotic attractor with spiking - bursting activity . the interesting fact is that the dynamical mechanism , leading to chaotic behavior of our two - dimensional map is induced by one - dimensional lorenz - like map . we demonstrate also that our model can display rich gallery of regimes of neural activity such as chaotic spiking , subthreshold oscillations , tonic spiking etc.all these modes play important role in the information processing in neural systems . *
the dynamo model proposed by roberts 1972 has been chosen as the starting point for an experimental demonstration of homogeneous fluid dynamo at the forschungszentrum karlsruhe . the fluid velocity field considered by roberts which is of particular interest in this contextis given by here a cartesian coordinate system is used .the flow pattern is sketched in fig .[ fig : roberts ] . is the length of the diagonal of a cell in the and the parameter , which is a constant , determines the of the flow and so the helicity of the velocity field .roberts has demonstrated that a flow of this kind is capable of dynamo action .he investigated , however , only magnetic fields which show the same periodicity in and as the flow pattern .these fields , which we call here harmonic fields " , possess parts which do not depend on and but only on or , in other words , they have infinite wave lengths in the and directions .as roberts himself pointed out the considered flow allows also non decaying magnetic fields with finite wave lengths in all directions . for a particular casesuch fields were investigated by tilgner and busse , who called them subharmonic " , and in a more general frame by plunian and rdler . despite the finite dimensions of the karlsruhe experimental device many estimates concerning excitation conditions etc .have been made on the basis of findings about harmonic magnetic fields .it is , however , of high interest to compare these results with such derived from results on subharmonic fields . in this paperwe start with the basic equations of the roberts dynamo problem and some general consequences ( section [ robdyn ] ) , present some findings on its harmonic solutions ( section [ harm ] ) and explain a mean field approach to the dynamo problem on that level ( section [ mfappr ] ) .after that we turn to subharmonic solutions and give some results for them ( section [ sec : sub model ] ) .we then deal with the karlsruhe experiment , derive in the framework of a mean field approach and under simplifying assumptions on the dynamo module an excitation condition and compare it with a corresponding result of the subharmonic analysis ( section [ sec : applkar ] ) .finally summarize the main consequences of our findings ( section [ sec : conclusion ] ) .to discuss the roberts dynamo problem in some detail we consider the induction equation governing the magnetic field , assuming that it applies in all infinite space .we use its dimensionless form with instead of we have introduced here dimensionless coordinates defined by , instead of the dimensionless velocity defined by , and we measure the time in units of . further is the magnetic reynolds number defined by with being the magnetic diffusivity of the fluid . for a steady flowas envisaged here we may expect solutions varying like in time , where the real part of is the dimensionless growth rate . in this case an eigenvalue problem for with the eigenvalue parameter occurs .furthermore , since the flow is -independent , can be assumed to possess the form where is a complex vector field independent of , and a dimensionless wave number with respect to the -direction .when inserting ( [ eq : bb ] ) into ( [ inductiondim ] ) we find the and components of ( [ eq : bharm ] ) are equations for and which do not contain .they constitute the mentioned eigenvalue problem . 
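for orientation, the dimensionless induction equation and the ansatz referred to above have, in one common normalization, the form written out below; the explicit formulas of the paper are not reproduced in this extract, so the choice of the time unit and the placement of the magnetic reynolds number here are assumptions rather than the authors' exact notation.

\begin{aligned}
\frac{\partial \mathbf{B}}{\partial t} &= \nabla\times(\mathbf{u}\times\mathbf{B}) + \frac{1}{R_m}\,\nabla^{2}\mathbf{B},
\qquad \nabla\cdot\mathbf{B}=0,\\
\mathbf{B}(x,y,z,t) &= \operatorname{Re}\left\{\mathbf{b}(x,y)\,e^{\,p t + \mathrm{i} k z}\right\},
\end{aligned}

with b a complex vector field independent of z, p the (complex) dimensionless growth rate and k the dimensionless wave number with respect to the z-direction.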
after solving it, we can calculate the remaining component from ([eq:divbharm]) without any integration. we may easily conclude from ([eq:bharm]) that the results for any value of the wave number can be inferred from those for a reference value with the help of the relation given there. as long as we deal with direct solutions of the roberts dynamo problem (up to section [sec:sub model]) we therefore restrict our attention to solutions with this reference wave number; in the discussion of the karlsruhe dynamo experiment (section [sec:applkar]) we admit other values as well.
as mentioned above, roberts solved the relevant equations only for magnetic fields with the same periods in the two horizontal directions as the flow pattern. we first consider this case only, in which we speak of "harmonic solutions"; then the field must have the same periodicity as the flow pattern. solving the eigenvalue problem defined by ([eq:bharm]), roberts found the growth rates, that is the real parts of the eigenvalue, in their dependence on the wave number for magnetic reynolds numbers up to 64, as shown in fig. [fig:p]. the imaginary parts proved to be equal to zero (numerically always close to zero), attesting that the dynamo instability is an absolute one. this implies that the magnetic field geometry is stationary while the intensity in general varies in time. soward has shown that in the limit of large magnetic reynolds number the maximum of the dimensionless growth rate, and the wave number at which it occurs, obey simple asymptotic laws under which the growth rate tends to zero; thus the roberts dynamo proves to be a slow one. this applies not only in the context of harmonic solutions, for it turns out that other solutions never grow faster than the fastest of the harmonic ones. results for finite magnetic reynolds numbers obtained in our calculations are given in fig. [fig:p] and in table [tab:maxpkrm]. note that the derived quantities given in table [tab:maxpkrm] approach constant values as the magnetic reynolds number grows and thus illustrate the mentioned asymptotic laws. (table [tab:maxpkrm]: maximum growth rates, the corresponding wave numbers and the derived asymptotic quantities for various magnetic reynolds numbers.)
in order to check a simple mean-field theory of the karlsruhe dynamo experiment we are interested in solutions of the induction equation ([inductiondim]) with period lengths exceeding those of the flow pattern, which we call "subharmonic solutions". we consider the case in which the period lengths of the field are larger by an integer factor than those of the flow pattern. as mentioned above, this problem has already been investigated by tilgner & busse for a few special values of this factor and later, in a more general frame, by plunian and rädler. we focus our attention again on the induction equation ([inductiondim]) governing the magnetic field in all space. we use again ([eq:bb]) but no longer consider the field as having the same periodicity in the two horizontal directions as the flow pattern. instead we write it as the product of a periodic vector field and a plane-wave factor with subharmonic wave numbers in the two horizontal directions; in that sense we look for solutions of the induction equation of the form ([eq:bdef]), with a complex vector field having the same period lengths as the flow pattern, a real wave vector and again a complex growth rate; for more details see the references. we restrict our attention here to one particular case, in which the period lengths of the magnetic field are just an integer multiple of those of the flow pattern; the harmonic solutions discussed above correspond to a limiting case. inserting ([eq:bdef]) into ([inductiondim]) we find the system ([eq:b2]), which defines an eigenvalue problem with the growth rate as the eigenvalue parameter.
it has been solved numerically. marginal values of the magnetic reynolds number versus the axial wave number are shown in fig. [fig:marginal] for different values of the subharmonic wave numbers. there are now both a critical magnetic reynolds number and a critical value of the wave number below which dynamo action is not possible. the eigenvalue is in general no longer real, that is, we no longer have stationary but moving field structures. we point out that relation ([ki]) allows us to calculate the growth rate for arbitrary wave numbers from that for a reference value.
the essential piece of the karlsruhe dynamo experiment is the "dynamo module", a cylindrical container with both radius and height somewhat less than 1 m, through which liquid sodium is driven by external pumps. by means of a system of channels with conducting walls, constituting 52 "spin generators", a helical motion is organized. the flow pattern is similar to that defined by ([new1]); the 52 spin generators correspond to 26 periodic units of the flow pattern such as the one shown in fig. [fig:roberts]. the arrangement of the pumps allows the flow parameters to be varied independently of each other. in order to give an estimate of the self-excitation condition of the experimental device, a simple mean-field theory has been developed. the mean magnetic field, defined as above, is assumed to satisfy equations ([inducmean]) and ([eq:electromotive force]) inside the dynamo module and to continue in some way into outer space. of course, in this context the mean field can no longer be independent of the horizontal coordinates, and the mean electromotive force can no longer have the simple forms ([kh09]), ([kh11]) or ([kh17]). relying on a traditional concept, it was assumed that the variations of the mean field in space are sufficiently weak so that the electromotive force at a given point can be represented by the mean field and its first spatial derivatives at that point. together with the symmetry properties of the roberts flow this leads to ([meanemf]), where the coefficients are constants depending on the flow parameters, and the unit vector in the axial direction appears again. as in ([kh17]), the alpha term describes the anisotropic alpha-effect acting in the plane perpendicular to the axis only. two of the remaining terms can be interpreted by introducing an anisotropic mean-field diffusivity different from the molecular magnetic diffusivity. finally, the last term describes a part of the electromotive force depending on derivatives of the mean field which cannot be expressed in that way and can therefore not be interpreted as a contribution to a modified diffusivity. several results have been derived on the dependence of these coefficients on the fluid flow. for a field depending only on the horizontal coordinates and having no axial component, the last three terms on the right-hand side of ([meanemf]) can be combined into a single term; as is to be expected, in this special case the structures of ([kh17]) and ([meanemf]) coincide. our remark above on the last term in ([meanemf]) explains why the interpretation of the corresponding term in ([kh17]) as a contribution to a mean-field diffusivity is not compelling. the assumption of small spatial variations of the mean field means in particular that it does not change markedly across a spin generator; in that sense the usage of ([meanemf]) in a theory of the dynamo module can only be justified for a very large number of spin generators within the module. quite a few solutions of equation ([inducmean]), applied to the dynamo module, with the electromotive force according to ([meanemf]) and various boundary conditions have been calculated. in most cases, however, no contribution other than the alpha-effect, that is, only the first term on the right-hand side of ([meanemf]), was taken into account. contributions with higher than first derivatives of the mean field have never been considered.
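for orientation, when only the alpha term is kept (the reduced form also used below), the mean-field description just discussed has the familiar structure

\begin{aligned}
\frac{\partial \bar{\mathbf{B}}}{\partial t} &= \nabla\times\boldsymbol{\mathcal{E}} + \eta\,\nabla^{2}\bar{\mathbf{B}},\\
\boldsymbol{\mathcal{E}} &= -\,\alpha_{\perp}\left[\bar{\mathbf{B}} - (\bar{\mathbf{B}}\cdot\hat{\mathbf{e}}_{z})\,\hat{\mathbf{e}}_{z}\right],
\end{aligned}

where \bar{\mathbf{B}} is the mean field, \hat{\mathbf{e}}_{z} the unit vector along the spin-generator axes and \alpha_{\perp} the coefficient of the anisotropic alpha-effect; this notation and the sign convention are ours, since the explicit forms of ([inducmean]) and ([meanemf]) are not reproduced in this extract.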
by these and other reasonsa check of the results of the simple mean field theory on a way that avoids the mentioned shortcomings seems very desirable .for this purpose we deal now with a very simple model of the dynamo module .we consider no longer a cylindrical module but instead a rectangular dynamo box " with a quadratic basis area in the and denote the edge lengths of the box in this plane by and its hight by .thinking of the shape of the real dynamo module we put .we will study the excitation condition for a mean magnetic field which satisfies the equation ( [ inducmean ] ) and the relation ( [ eq : electromotive force ] ) in all space and is periodic in and with the period length and in with the period length .this periodicity means that the dynamo box contains just a half wave " of the field . for the sake of simplicity we use ( [ eq : electromotive force ] ) in its reduced form containing no other induction effect than the will then compare this excitation condition with that for a subharmonic whose longest wave lengths show the same periodicity , that is , which fits in the same sense to the dynamo box . in this contextwe put so that an area of in the contains just 100 period units , that is , 200 cells of the flow pattern , consequently the basis area of the dynamo box 50 cells , which have to be compared with the 52 spin generators in the real dynamo module . means , and with we arrive at . instead of a realistic boundary condition for the dynamo module we use here in fact the condition of periodic continuation of the magnetic fields both on the mean field and the subharmonic level. such a condition might be in general problematic but seems acceptable for the comparison which we have in mind . from fig .[ fig : alpha ] we see that the value of for can , except for small , not be inferred from and only .this suggests that there will be discrepancies between the excitation conditions obtained with a mean field theory which ignores contributions to with higher than first - order spatial derivatives and those derived from the subharmonic analysis .as already explained we assume for our mean field consideration that equation ( [ inducmean ] ) and the reduced form of ( [ meanemf ] ) , that is , apply in all space with constant .we may represent as a sum of a poloidal and a toroidal part , with two defining scalars and . 
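written out in a standard convention (the choice of the preferred axis and of signs is an assumption here, since the explicit formula is not reproduced in this extract), the decomposition reads

\bar{\mathbf{B}} = \nabla\times\nabla\times\left(S\,\hat{\mathbf{e}}_{z}\right) + \nabla\times\left(T\,\hat{\mathbf{e}}_{z}\right),

with S the poloidal and T the toroidal defining scalar.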
inserting this in ( [ inducmean2 ] ) and dropping unimportant constants we find the special periodic solution of ( [ inducmean2 ] ) which we are looking foris obtained with the ansatz where and are constants , and the parameters specified above and , which will prove to be real , is again the growth rate .when inserting this in ( [ system ] ) we arrive at two linear homogeneous equations for and .the requirement that they allow non - trivial solutions leads to growing are possible in the case of the upper sign of the last term if is sufficiently large .the excitation condition reads in the representations of results on on which we now rely the latter is given in its original dimension so that it corresponds with our dimensionless .furthermore these results are given in terms of the two magnetic reynolds numbers and for the flow in the -plane and in -direction , respectively .these are connected with our and by the marginal states of the dynamo , in which neither grows nor decays , are given by pairs of and , or by the corresponding neutral curve in the .we may represent the result for arbitrary and by using the modified magnetic reynolds number defined by instead of .note that is no longer determined by alone but also by .[ fig : f / k const ] shows a in which curve ( a ) gives just the result of our mean field calculation .clearly dynamo action requires that exceeds a critical value .it appears , however , to be possible for any if only is sufficiently large .let us now compare this result with that for a corresponding subharmonic solution of the induction equation . in fig .[ fig : f / k const ] the curve ( b ) is the neutral one for the subharmonic solution with the values of and specified above . clearly dynamo action requires now not only that exceeds a critical value but also that lies above such a value .in addition for each given allowing dynamo action the marginal value of derived in the subharmonic approach is higher than that concluded from the mean field approach . in the range of between 1.2 and 2 , which corresponds to the actual situation in the karlsruhe experiment ,the deviation is larger than 20 % . of course it will become smaller in a comparison with a mean field model which involves also the induction effects connected with first derivatives of indicated in ( [ meanemf ] ) ;see . buteven then the mean field approach underestimates the requirements for self excitation .we have dealt with several aspects of the roberts dynamo problem and derived some results which are of interest for the karlsruhe dynamo experiment .although a rectangular dynamo box was considered , there are good reasons to assume that the main conclusions apply as well to the real experimental device with a cylindrical dynamo module . in the framework ofthe simple mean field theory of the experiment self excitation seems possible for arbitrary values of the magnetic reynolds number describing the flow perpendicular to the axes of the spin generators if only the magnetic reynolds number for the axial flow is sufficiently large .an analysis based on subharmonic solutions revealed that a dynamo is only possible if both and exceed critical values .apart from this it was found that the simple mean - field theory underestimates the excitation condition of the dynamo .this discrepancy of the mean - field results with those obtained with subharmonic solutions can not be completely removed by taking into account the effect of the mean field diffusivity .busse , f.h . ,mller , u. , stieglitz , r. & tilgner , a. 
, a two-scale homogeneous dynamo: an extended analytical model and an experimental demonstration under development. magnetohydrodynamics 32, 235-248 (1996).
rädler, k.-h., apstein, e., rheinhardt, m. & schüler, m., contributions to the theory of the planned karlsruhe dynamo experiment - supplements and corrections. report, astrophysical institute potsdam (1997).
rädler, k.-h., apstein, e. & schüler, m., the alpha-effect in the karlsruhe dynamo experiment. 3rd int. conf. on "transfer phenomena in magnetohydrodynamics and electro-conducting flows", aussois, france, 9-14 (1997).
rädler, k.-h., rheinhardt, m., apstein, e. & fuchs, h., on the mean field theory of the karlsruhe dynamo experiment. ii. back reaction of the magnetic field on the fluid flow. magnetohydrodynamics, this volume.
two different approaches to the roberts dynamo problem are considered . firstly , the equations governing the magnetic field are specified to both harmonic and subharmonic solutions and reduced to matrix eigenvalue problems , which are solved numerically . secondly , a mean magnetic field is defined by averaging over proper areas , corresponding equations are derived , in which the induction effect of the flow occurs essentially as an anisotropic alpha - effect , and they are solved analytically . in order to check the reliability of the statements on the karlsruhe experiment which have been made on the basis of a mean - field theory , analogous statements are derived for a rectangular dynamo box containing 50 roberts cells , and they are compared with the direct solutions of the eigenvalue problem mentioned . some shortcomings of the simple mean field theory are revealed .
a recent new scientist article deals with errors in courts due to "bad mathematics", advocating the use of so-called bayesian methods to avoid them. although most examples of the resulting "rough justice" come from real-life cases, the first "probabilistic pitfall" is taken from crime fiction, namely from a "1974 episode of the cult us television series" _columbo_, in which a "society photographer has killed his wife and disguised it as a bungled kidnapping." the alleged mistake happens in the concluding scene, when "the hangdog detective [] induces the murderer to grab from a shelf of 12 cameras the exact one used to snap the victim before she was killed." according to the author of the article (or to the experts on whom scientific journalists often rely), the question is that "killer or not, anyone would have a 1 in 12 chance of picking the same camera at random. that kind of evidence would never stand up in court." then a sad doubt is raised: "or would it? in fact, such probabilistic pitfalls are not limited to crime fiction."
being myself not particularly fond of this kind of entertainment (with perhaps a small exception for the columbo series, which i watch casually), i cannot tell how much crime fiction, in literature and in the movies, is affected by "probabilistic pitfalls". instead, i can bear firm witness that scientific practice is full of mistakes of the kind reported in ref., which happen even in fields the general public would hardly suspect, like frontier physics, whose protagonists are supposed to have a skill in mathematics superior to that of police officers and lawyers. but it is not just a question of math skill (complex calculations are usually done without mistakes), but of _probabilistic reasoning_ (what to calculate!). this is quite an old story. in fact, as david hume complained 260 years ago, "the celebrated monsieur leibniz has observed it to be a defect in the common systems of logic, that they are very copious when they explain the operations of the understanding in the forming of demonstrations, but are too concise when they treat of probabilities, and those other measures of evidence on which life and action entirely depend, and which are our guides even in most of our philosophical speculations.
" it seems to me that the general situation has not improved much. yes, 'statistics' (a name that, meaning too much, risks meaning little) is taught in colleges and universities to students of several fields, but distorted by the 'frequentist approach', according to which one is not allowed to speak of probabilities of causes. this is, in my opinion, the original sin that gives grounds for a large number of probabilistic mistakes even by otherwise very valuable scientists and practitioners (see e.g. chapter 1 of ref.).
coming back to the "shambling sleuth columbo", since my wife and my daughter are his fans, we happen to own the dvd collections of the first seven seasons. it so happened that i watched with them, not long ago (perhaps last winter), the 'incriminated', superb episode _negative reaction_, one of the best performances of peter falk in the role of the famous lieutenant. however, on reading the mentioned new scientist article, i did not remember having had a 'negative reaction' to the final scene, although i use and teach bayesian methods for a large variety of applications. did i overlook something? i watched the episode again and i was again convinced that columbo's last move was a conclusive checkmate. therefore, rather than chess, the name of the game is _poker_, and columbo's bluff is able to induce the murderer to provide a crucial piece of evidence that finally incriminates him. i then invited some friends, all with a physics or mathematics degree and somewhat knowledgeable about the bayesian approach, to enjoy an evening together during the recent end-of-year holidays, in order to let them make up their minds whether columbo had good reasons to take paul galesco, magnificently impersonated by dick van dyke, in front of a court (bayes or not, we had some fun). the verdict was unanimous: columbo was fully absolved or, more precisely, there was nothing to reproach the story writer, peter s. fischer, for. the convivial after-dinner jury also requested me to write a note on the question, possibly with a short, self-contained introduction to the 'required math'; not only to 'defend columbo' or, more properly, his writer, but, more seriously, to defend the bayesian approach, and in particular its applications in forensic science. in fact, we all deemed that the opening paragraphs of the new scientist article could throw a bad light on the rest of its contents. imagine a casual reader of the article, possibly a lawyer, a judge or a student in forensic science, to whom the article was virtually addressed, and who might have seen _negative reaction_.
most likely he / she considered legitimate the charges of the policemen against the photographer .the ` negative reaction ' would be that the reader would consider the rest of the article a support of dubious validity to some ` strange math ' that can never substitute the human intuition in a trial . ]not a good service to the ` bayesian cause ' .( imagine somebody trying to convince you with arguments you hardly understand and who begins asserting something you consider manifestly false . ) in the following section i introduce the basic elements of bayesian reasoning ( subsection [ ss : jl ] can be skipped on first reading ) , using a toy model as guiding example in which the analysis of ref . ( `` 1 in 12 '' , or , more precisely `` 1 in 13 '' ) holds . section [ sec : columbo_priors ]shows how such a kind of evidence would change columbo s and jury s opinion .then i discuss in section [ sec : finale ] why a similar argument does not apply to the clip in which columbo finally frames galesco , and why all witnesses of the crucial actions ( including tv watchers , with the exception of the author of ref . and perhaps a few others ) and an hypothetical court jury ( provided the scene had been properly reported ) had to be absolutely positive the photographer killed his wife ( or at least he knew who did it in his place ) . the rest of the paper might be marginal , if you are just curious to know why i have a different opinion than ref . , although i agree on the validity of bayesian reasoning .in fact , at the end of the work , this paper is not the ` short note ' initially planned .the reason is that the past months i had many discussions on some of the questions treated here with people from several fields .i have realized once more that it is not easy to put the basic principles at work if some important issues are not well understood .people are used to solving their statistical problems with ` ad hoc ' formulae ( see appendix h ) and therefore tend to add some ` bayesian recipes ' in their formularium .it is then too high the risk that one looks at simplified methods bayesian methods require a bit more thinking and computation that others ! that are even advertised as ` objective ' . or one just refuses to use any math , on the defense of pure intuition .( by the way , this is an important point and i will take the opportunity to comment on the apparent contradictions between intuition and formal evaluation of beliefs , defending both , but encouraging the use of the latter , superior to the former in complex situations see in particular appendix c ) .so , to conclude the introduction , this document offers several levels of reading : * if you are only interested to columbo s story , you can just jump straight to section [ sec : finale ] . *if you also ( or regardless of columbo ) want to have an opportunity to learn the basic rules of bayesian inference , subsections [ ss : bayes_theorem ] , [ ss : bayes_theorem_2 ] and [ ss : many_pieces ] , based on a simple master example , have been written on the purpose. then you might appreciate the advantage of logarithmic updating ( section [ ss : jl ] ) and perhaps see how it applies to the aids example of appendix f. + * if you already know the basics of the probabilistic reasoning , but you wonder how it can be applied into real cases , then section [ sec : real_life ] should help , together with some of the appendices . 
*if none of the previous cases is yours ( you might even be an expert of the field ) , you can simply browse the document .perhaps some appendices or subsections might still be of your interest .* finally , there is the question of the many footnotes , which can break the pace of the reading .they are not meant to be necessarily read sequentially along with the main text and could be skipped on a first fast reading ( in fact , this document is closer to an hypertext than to a standard article . ) enjoy !let us leave aside columbo s cameras for a while and begin with a different , simpler , stereotyped situation easier to analyze .imagine there are two types of boxes , , that only contain white balls ( ) , and , that contain one white balls and twelve black ( incidentally , just to be precise , although the detail is absolutely irrelevant , we have to infer from columbo s words , _ `` you did nt touch any of these twelve cameras .you picked up that one '' _ , the cameras were thirteen ) .you take at random a box and extract a ball .the resulting color is _white_. you might be interested to evaluate the probability that the box is of type , in the sense of stating in a quantitative way how much you believe this hypothesis .in formal terms we are interested in , knowing that and , a problem that can be sketched as [ here ` ' stands for ` given ' , or ` conditioned by ' ; is the general ( ` background ' ) status of information under which this probability is assessed ; ` ' or ` ' after ` ' indicates that both conditions are relevant for the evaluation of the probability . ]a typical mistake at this point is to confuse with , or , more often , with , as largely discussed in ref .hence we need to learn how to turn properly into using the rules of probability theory .the ` probabilistic inversion ' can _ only _ and appendix h. ] be performed using the so - called _bayes theorem _ , a simple consequence of the fact that , given the _ effect _ and some _ hypotheses _ concerning its possible cause , the joint probability of and , conditioned by the _ background information _ represents all we know about the hypotheses and the effect considered .writing in all expressions could seem a pedantry , but it is nt .for example , if we would just write in these formulae , instead of , one might be tempted to take this probability equal to one , because the observed event is a well established fact , that has happened and is then certain .but it is not this certainty that enters these formulae , but rather the probability ` that fact could happen ' in the light of ` everything we knew ' about it ( ` ' ) . ] , can be written as where ` ' stands for a logical ` and ' . from the second equality of the last equation we get that is one of the ways to express bayes theorem . ]valid if we deal with a class of incompatible hypotheses [ i.e. and .in fact , in this case a general rule of probability theory [ eq . ( [ eq : rul6 ] ) in appendix a ] allows us to rewrite the denominator of eq .( [ eq : phi|e ] ) as . in this note , dealing only with two hypotheses , we prefer to reason in terms of probability ratios , as shown in eq .( [ eq : bayes_factor ] ) . ] since a similar expression holds for any other hypothesis , dividing member by member the two expressions we can restate the theorem in terms of the relative beliefs , that is the initial ratio of beliefs ( ` odds ' ) is updated by the so - called _ bayes factor _ , that depends on how likely _ each _ hypothesis can produce that effect . 
, the probabilities of the effects and have usually nothing to do with each other . ] introducing and , with obvious meanings , we can rewrite eq .( [ eq : bayes_factor ] ) as note that , if the initial odds are unitary , than the final odds are equal to the updating factor ._ bayes factors can be interpreted as odds due only to an individual piece of evidence , the two hypotheses were considered initially equally likely_. and [ fn : jaynes ] , as well as appendix h. ) ] this allows us to rewrite as , where the tilde is to remind that _ they are not properly odds _ , but rather ` _ pseudo - odds _ ' .we get then an expression in which all terms have _ virtually _ uniform meaning : if we have only two hypotheses , we get simply .if the updating factor is unitary , then the piece of evidence does not modify our opinion on the two hypotheses ( no matter how small can numerator and denominator be , as long as their ratio remains finite and unitary ! see appendix g for an example worked out in details ) ; when vanishes , then hypothesis becomes impossible ( `` _ it is falsified _ '' ) ; if instead it is infinite ( i.e. the denominator vanishes ) , then it is the other hypothesis to be impossible .( the undefined case means that we have to look for other hypotheses to explain the effect . , but an absolute truth , i.e. , depends on which class of hypotheses is considered . stated in other words , in the realm of probabilistic inference_ falsities can be absolute , but truths are always relative_. ] ) applying the updating reasoning to our box game , the bayes factor of interest is as it was remarked , this number would give the required odds _ if _ the hypotheses were initially equally likely . but how strong are the initial relative beliefs on the two hypotheses ? ` unfortunately ' , we can not perform a probabilistic inversion if we are unable to assign somehow _ prior probabilities _ to the hypotheses we are interested in . and appendix h. [ fn : nopriors ] ] indeed , in the formulation of the problem i on purpose passed over the relevant pieces of information to evaluate the prior probabilities ( it was said that `` there are two types of boxes '' , not `` there are two boxes '' ! ) .if we specify that we had boxes of type and of the other kind , then the initial odds are and the final ones will be from which we get ( just requiring that the probability of the two hypotheses have to sum up to one and are generic , complementary hypotheses we get , calling the bayes factor of versus and the initial odds to simplify the notation , the following convenient expressions to evaluate the probability of : ] ) if the two hypotheses were _ initially _ considered equally likely , then the evidence makes 13 times more believable than , i.e. , or approximately 93% . on the other hand , if was _ a priori _ much less credible than , for example by a factor 13 , just to play with round numbers , the same evidence made and equally likely . instead, if we were initially in strong favor of , considering it for instance 13 times more plausible than , that evidence turned this factor into 169 , making us 99.4% confident _ highly confident _ , some would even say ` practically sure ' ! that the box is of type .imagine now the following variant of the previous toy experiment .after the white ball is observed , you put it again in the box , shake well and make a second extraction .you get white the second time too . 
calling and the two observations , we have now : ) , although we are dealing now with more complex events and complex hypotheses , logical and of simpler ones . moreover , eq . ( [ eq : seq:2 ] )is obtained from eq .( [ eq : seq:1 ] ) making use of the formula ( [ eq : joint_prob ] ) of joint probability , that gives and an analogous formula for . note also that , going from eq .( [ eq : seq:2 ] ) to eq .( [ eq : seq:3 ] ) , has been rewritten as to emphasize that the probability of a second white ball , conditioned by the box composition and the result of the first extraction , depends indeed only on the box content and not on the previous outcome ( ` extraction after re - introduction ' ) . ] that , using the compact notation introduced above , we can rewrite in the following enlighting forms .the first is [ eq . ( [ eq : seq:4 ] ) ] that is , _ the final odds after the first inference become the initial odds of the second inference _ ( and so on , if there are several pieces of evidence ) .therefore , beginning from a situation in which was thirteen times more credible than is exactly equivalent to having started from unitary odds updated by a factor 13 due to a piece of evidence .the second form comes from eq .( [ eq : seq:3 ] ) : i.e. ) follows from eq .( [ eq : seq:3a ] ) because a bayes factor can be defined as the ratio of final odds over the initial odds , depending on the evidence .therefore ] _ bayes factors _ due to _ independent _ , that we have used above to turn eq .( [ eq : seq:2 ] ) into eq .( [ eq : seq:3 ] ) and that can be expressed , in general terms as i.e. , _ under the condition of a well precise hypothesis _ ( ) , the probability of the effect does not depend on the knowledge of whether has occurred or not . note that , in general , although and are independent given ( they are said to be _ conditionally independent _ ) , they might be otherwise _ dependent _ , i.e. .( going to the example of the boxes , it is rather easy to grasp , although i can not enter in details here , that , if we do not know the kind of box , the observation of changes our opinion about the box composition and , as a consequence , the probability of see the examples in appendix j ) ] pieces of evidence _ multiply_.that is , two independent pieces of evidence ( and ) are equivalent to a single piece of evidence ( ` ' ) , whose bayes factor is the product of the individual ones . in our case . in general , if we have several hypotheses and several _ independent _ pieces of evidence , , , , , indicated all together as , then eq .( [ eq : bayes_factor ] ) becomes \times o_{i , j}(i)\ , , \label{eq : product_odds}\end{aligned}\ ] ] i.e. where stand for ` product ' ( analogous to for sums ) . the remark that bayes factors due to independent pieces of evidence multiply together and the overall factor finally multiplies the initial odds suggests a change of variables in order to play with additive quantities .this can be done taking the logarithm of both sides of eq .( [ eq : product_odds ] ) , that then become & = & \sum_{k=1}^n \log_{10}[\tilde o_{i , j}(e_k , i ) ] + \log_{10}[o_{i , j}(i)]\ , , \label{eq : sum_bf}\end{aligned}\ ] ] respectively , where the base 10 is chosen for practical convenience because , as we shall discuss later , what substantially matters are powers of ten of the odds . 
introducing the new symbol jl, we can rewrite eq .( [ eq : sum_bf ] ) as or where \label{eq : jl_alle}\\ \mbox{jl}_{i , j}(i ) & = & \log_{10}\left[o_{i , j}(i ) \right]\label{eq : jl_0}\\ \delta\mbox{jl}_{i , j}(e_k , i ) & = & \log_{10}\left[\tilde o_{i , j}(e_k , i)\right ] \label{eq : deltajl_k } \\ \delta\mbox{jl}_{i ,j}(\mvec{e},i ) & = & \sum_{k=1}^n \delta\mbox{jl}_{i , j}(e_k , i)\ , .\label{eq : sum_deltajl_k}\end{aligned}\ ] ] the letter ` l ' in the symbol is to remind _logarithm_. but it has also the mnemonic meaning of _ leaning _ , in the sense of ` inclination ' or ` propension ' . the ` j ' is for _ judgment_. therefore ` jl ' stands for _ judgement leaning _ , that is an inclination of the judgement , an expression i have taken the liberty to introduce , using words not already engaged in probability and statistics , because in these fields many controversies are due to different meanings attributed to the same word , or expression , by different people ( see appendices b and g for further comments ) .jl can then be visualized as the indicator of the ` justice balance ' to which refers stands for ` guilty ' . ]( figure [ fig : jl ] ) , that displays zero if there is no unbalance , but it could move to the positive or the negative side depending on the weight of the several arguments pro and con .the role of the evidence is to vary the jl indicator by quantities s equal to base 10 logarithms of the bayes factors , that have then a meaning of _ weight of evidence _ , an expression due to charles sanders peirce ( see appendix e ) .but the judgement is rarely initially unbalanced .this the role of , that can be considered as a a kind of _ initial weight of evidence _ due to our prior knowledge about the hypotheses and [ and that could even be written as , to stress that it is related to a 0-th piece of evidence ] to understand the rationale behind a possible uniform treatment of the prior as it would be a piece of evidence , let us start from a case in which you now _ absolutely nothing_. for example have to state your beliefs on which of my friends , dino or paolo , will first run next rome marathon .it is absolutely reasonable you assign to the two hypotheses equal probabilities , i.e. , or ( your judgement is perfectly balanced ) .this is because in brain these names are only possibly related to italian males .nothing more .( but nowadays search engines over the web allow to modify your opinion in minutes . ) as soon as you deal with _ real _ hypotheses of your interest , things get quite different .it is in fact very rare the case in which the hypotheses tell you not more than their names .it is enough you think at the hypotheses ` rain ' or ` not rain ' , the day after you read these lines in the place where you live . 
in general, the information you already have in your brain related to the hypotheses of your interest can be considered the initial piece of evidence _you_ have, usually different from that somebody else might have (this is the role of the background information in all our expressions). it follows that prior odds of 10 will influence your leaning towards one hypothesis exactly like unitary odds followed by a bayes factor of 10. this is the reason why they enter on an equal footing when "balancing arguments" (to use an expression à la peirce; see appendix e) pro and against hypotheses. (table: a comparison between probability, odds and judgement leanings.) as we see from this table, and as we understand better from figure [fig:two_normal], values of the weight of evidence that are large in module are in favor of one hypothesis, and very large ones are in its strong favor. instead, the values lying in the interval defined by the two points marked in the figure by a cross provide evidence in favor of the other hypothesis. however, while individual pieces of evidence in favor of the latter can only be weak (the maximum of the weight of evidence is about 0.3; to be precise, it reaches 0.313), those in favor of the alternative hypothesis can sometimes be very large. it follows that one gets convinced of the one more easily than of the other. we can check this with a little simulation. we choose a model, extract 50 random values and analyze the data as if we did not know which generator produced them, considering the two generators initially equally likely. we expect that, as we go on with the extractions, the pieces of evidence accumulate until we _possibly_ reach a level of practical certainty. obviously, the individual pieces of evidence do not all provide the same weight, and even the sign can fluctuate, although we expect more positive contributions if the points are generated by one model and the other way around if they come from the other. therefore, as a function of the number of extractions, the accumulated weight of evidence follows a kind of _asymmetric random walk_ (imagine the jl indicator fluctuating as the simulated experiment goes on, but drifting 'on average' in one direction). figure [fig:simulazioni] shows 200 inferential stories, half per generator. we see that, in general, we become practically sure of the model after a couple of dozen extractions, but there are also cases in which we need to wait longer before we can feel sufficiently sure about one hypothesis. it is interesting to remark that the leaning in favor of each hypothesis grows, _on average_, linearly with the number of extractions; that is, a little piece of evidence, on average positive for one model and negative for the other, is added after each extraction. however, around the average trend there is a large variety of individual inferential histories. they all start at jl = 0, but in practice no two 'trajectories' are identical. all together they form a kind of 'fuzzy band', whose 'effective width' also grows with the number of extractions, _but not linearly_: the width grows as the square root of the number of extractions. we can also evaluate the _uncertainty of prevision_, quantified by the standard deviation. for the two hypotheses we get
\begin{aligned}
\mbox{e}[\delta\mbox{jl}_{1,2}(h_1)] &= 0.15, & \sigma[\delta\mbox{jl}_{1,2}(h_1)] &= 0.24, & u_r[\delta\mbox{jl}_{1,2}(h_1)] &= 1.6,\\
\mbox{e}[\delta\mbox{jl}_{1,2}(h_2)] &= -0.38, & \sigma[\delta\mbox{jl}_{1,2}(h_2)] &= 0.97, & u_r[\delta\mbox{jl}_{1,2}(h_2)] &= 2.6,
\end{aligned}
where the _relative uncertainty_ u_r, defined as the uncertainty divided by the absolute value of the prevision, has also been reported.
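a minimal sketch of the kind of simulation just described: two assumed gaussian models (the specific means and widths below are illustrative choices of ours, not the ones behind fig. [fig:simulazioni]), 50 extractions per 'inferential story', and the accumulated weight of evidence computed after each extraction.

import numpy as np

rng = np.random.default_rng(1)

# two illustrative gaussian models
MU1, SIG1 = 0.0, 1.0
MU2, SIG2 = 0.0, 2.5

def delta_jl(x):
    """weight of evidence of a single observation x, in favor of h1 against h2
    (base-10 log of the bayes factor for the two gaussians)."""
    logpdf1 = -0.5 * ((x - MU1) / SIG1) ** 2 - np.log(SIG1)
    logpdf2 = -0.5 * ((x - MU2) / SIG2) ** 2 - np.log(SIG2)
    return (logpdf1 - logpdf2) / np.log(10.0)

def story(true_model, n=50):
    """one 'inferential story': cumulative jl after each of n extractions."""
    mu, sig = (MU1, SIG1) if true_model == 1 else (MU2, SIG2)
    x = rng.normal(mu, sig, size=n)
    return np.cumsum(delta_jl(x))

# average behaviour over many stories, as in the 'fuzzy bands' of the figure
for model in (1, 2):
    finals = np.array([story(model)[-1] for _ in range(100)])
    print(f"h{model} true: jl after 50 extractions = "
          f"{finals.mean():+.1f} +/- {finals.std():.1f}")

with these assumed parameters one again finds the asymmetry discussed above: when the second model is true the evidence against the first accumulates much faster than the other way around.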
the fact that the uncertainties are relatively large tells us clearly that we _do not expect_ a single extraction to be sufficient to convince us of either model. but this does not mean we cannot take a decision just because the number of extractions _has been_ small: if a very large fluctuation provides a large weight of evidence (the table in this section shows that this is not very rare), we have already got very strong evidence in favor of one model. repeating what has been said several times, what matters is the cumulated judgement leaning; it is irrelevant whether a given jl comes from ten individual pieces of evidence, from a single one, or partially from evidence and partially from prior judgement.
when we plan to make n extractions from a generator, probability theory allows us to calculate the expected value and the uncertainty of the accumulated weight of evidence:
\begin{aligned}
\mbox{e}[\delta\mbox{jl}_{1,2}(n,h_i)] &= n \times \mbox{e}[\delta\mbox{jl}_{1,2}(h_i)], \\
\sigma[\delta\mbox{jl}_{1,2}(n,h_i)] &= \sqrt{n} \times \sigma[\delta\mbox{jl}_{1,2}(h_i)], \\
u_r[\delta\mbox{jl}_{1,2}(n,h_i)] &= \frac{1}{\sqrt{n}} \times u_r[\delta\mbox{jl}_{1,2}(h_i)]\,.
\end{aligned}
these relations explain the gross features of the bands in figure [fig:simulazioni]: this is the reason why, as n increases, the bands tend to move away from the jl = 0 line. nevertheless, individual trajectories can exhibit very 'irregular' behaviors, as we can also see in figure [fig:simulazioni]; trajectories that do not follow the general trend _are not exceptions_, being generated by the same rules that produce all of them.
some comments on _likelihood_ are also in order, because the reader might have heard this term and might wonder whether and how it fits into the scheme of reasoning expounded here. one of the problems with this term is that it tends to have several meanings, and hence to create misunderstandings. in plain english, 'likelihood' is "1. the condition of being likely or probable; probability", or "2. something that is probable"; but also "3. (mathematics & measurements / statistics) the probability of a given sample being randomly drawn regarded as a function of the parameters of the population". technically, with reference to the example of the previous appendix, the likelihood is simply the probability (density) of the observation regarded as a function of the model, where the observation is fixed and the model plays the role of the 'parameter'; in our case it can take only two values, one for each gaussian. if, instead of only two models, we had a continuity of models, for example the family of all gaussian distributions characterized by central value and 'effective width' (standard deviation), our likelihood would be written as a function of those parameters, to remind us that: 1) a likelihood is a function of the model parameters and not of the data; 2) it _is not_ a probability (or a probability density function) of the parameters. anyway, for the rest of the discussion we stick to the very simple likelihood based on the two gaussians; that is, instead of a double infinity of possibilities, our space of parameters is made of only two points. thus the situation gets simpler, although the main conceptual issues remain substantially the same.
in principlethere is nothing bad to give a special name to this function of the parameters .but , frankly , i had preferred statistics gurus named it after their dog or their lover , rather than call it ` likelihood . 'was introduced by r. a. fisher with the object of _ avoiding _ the use of bayes theorem '' . ]the problem is that it is very frequent to hear students , teachers and researcher explaining that the ` likelihood ' tells `` how likely the parameters are '' ( this is _ the probability of the parameters ! not the ` likelihood ' _ ) . orthey would say , with reference to our example , `` it is the probability that comes from '' ( again , this expression would be the probability of given , and not the probability of given the models ! ) imagine if we have only in the game : comes with certainty from , although does not yield with certainty .is '' ( note the quote mark of ` likely ' , as in the example of footnote [ fn : james ] ) .but , fortunately we find in http://en.wikipedia.org/wiki/likelihood_function that `` this is not the same as the probability that those parameters are the right ones , given the observed sample .attempting to interpret the likelihood of a hypothesis given observed evidence as the probability of the hypothesis is a common error , with potentially disastrous real - world consequences in medicine , engineering or jurisprudence .see prosecutor s fallacy [ * ] for an example of this . ''( [ * ] see http://en.wikipedia.org/wiki/prosecutor%27s_fallacy . )+ now you might understand why i am particular upset with the name likelihood .[ fn_wiki_likelihood ] ] several methods in ` conventional statistics ' use somehow the likelihood to decide which model or which set of parameters describes at best the data .some even use the likelihood ratio ( our bayes factor ) , or even the logarithm of it ( something equal or proportional , depending on the base , to the weight of evidence we have indicated here by jl ). the most famous method of the series is the _ maximum likelihood principle_. as it is easy to guess from its name , it states that the _ best estimates _ of the parameters are those which maximize the likelihood .all that _ seems _ reasonable and in agreement with what it has been expounded here , but it is not quite so .first , for those who support this approach , likelihoods are not just a part of the inferential tool , they are everything .priors are completely neglected , more or less because of the objections in footnote [ fn : nopriors ] .this can be acceptable , if the evidence is overwhelming , but this is not always the case .unfortunately , as it is now easy to understand , neglecting priors is mathematically equivalent to consider the alternative hypotheses equally likely ! as a consequence of this statistics miseducation ( most statistics courses in the universities all around the world only teach` conventional statistics ' and never , little , or badly probabilistic inference ) is that too many unsuspectable people fail in solving the aids problem of appendix b , or confuse the likelihood with the probability of the hypothesis , resulting in misleading scientific claims ( see also footnote [ fn_wiki_likelihood ] and ref . 
) .the second difference is that , since `` there are no priors '' , the result can not have a probabilistic meaning , as it is openly recognized by the promoters of this method , who , in fact , do not admit we can talk about probabilities of causes ( but most practitioners seem not to be aware of this ` little philosophical detail ' , also because frequentistic gurus , having difficulties to explain what is the meaning of their methods , they say they are ` probabilities ' , but in quote marks![multiblock footnote omitted ] ) . as a consequence , the resulting ` error analysis ' , that in human terms means to assign different beliefs to different values of the parameters , is cumbersome . in practicethe results are reasonable only if the possible values of the parameters are initially equally likely and the ` likelihood function ' has a ` kind shape ' ( for more details see chapters 1 and 12 of ref .in most cases ( and practically always in courts ) pieces of evidence are not acquired directly by the person who has to form his mind about the plausibility of a hypothesis .they are usually accounted by an intermediate person , or by a chain of individuals .let us call the report of the evidence provided in a _testimony_. the inference becomes now , generally different from . in order to apply bayes theorem in one of its formwe need first to evaluate .probability theory teaches us how to get it [ see eq .( [ eq : rul4 ] ) in appendix a ] : ( could be due to a true evidence or to a fake one ) .three new ingredients enter the game : * , that is the probability of the evidence to be correctly reported as such . * but the testimony could also be incorrect the other way around ( it could be incorrectly reported , simply by mistake , but also it could be a ` fabricated evidence ' ) , and therefore also is needed . note that the probabilities to lie could be in general asymmetric , i.e. , as we have seen in the aids problem of appendix f , in which the response of the analysis models false witness well . * finally , since enters now directly , the ` nave ' bayes factor , only depending on , is not longer enough . taking our usual two hypotheses , and , we get the following bayes factor based on the _ testified evidence _ ( hereafter , in order to simplify the notation , we use the subscript ` ' in odds and bayes factors , instead of ` ' , to indicate that they are in favor of and against , as we already did in the aids example of appendix f ) : as expected , this formula is a bit more complicate that the bayes factor calculated taking for granted , which is recovered if the lie probabilities vanish { } { \ \ \\tilde o_{h}(e , i)}\,,\end{aligned}\ ] ] i.e. only when we are absolutely sure the witness does not err or lie reporting ( but peirce reminds us that `` absolute certainty , or an infinite chance , can never be attained by mortals '' ) .in order to single out the effects of the new ingredients , eq .( [ eq : bf_lie ] ) can be rewritten as and respectively in the numerator and in the denominator , eq .( [ eq : bf_lie ] ) becomes then can be indicated as , is equal to and , finally , can be written as . ] } { 1 + \lambda(i ) \cdot \left [ \frac { \tilde o_{h}(e , i)}{p(e\,|\,h , i ) } -1 \right ] } \ , , \label{eq : weight_et}\end{aligned}\ ] ] where under the _ condition _ can not be factorized .the effective odds can however be written in the following convenient forms although less interesting than eq .( [ eq : weight_et ] ) .[ fb : peh=0 ] ] and , i.e. _ positive and finite_. 
the parameter , ratio of the _ probability of fake evidence _ and the _ probability that the evidence is correctly accounted _ , can be interpreted as a kind of _ lie factor_. given the human roughly logarithmic sensibility to probability ratios , it might be useful to define , in analogy to the jl , \,.\end{aligned}\ ] ] let us make some instructive limits of eq .( [ eq : weight_et ] ) .{}{}&\tilde o_{h}(e , i)\\ \tilde o_{h}(e_t , i ) & \xrightarrow [ \mbox{{\footnotesize } } ] { } { } & 1 \\ \tilde o_{h}(e_t , i ) & \xrightarrow [ \mbox{{\footnotesize } } ] { } { } & 1\\ \tilde o_{h}(e_t , i ) & \xrightarrow [ \mbox{{\footnotesize } } ] { } { } & \frac{p(e\,|\,h , i)}{\lambda(i ) } + 1 - p(e\,|\,h , i)\end{aligned}\ ] ] as we have seen , the ideal case is recovered if the lie factor vanishes . instead ,if it is equal to 1 , i.e. , the reported evidence becomes useless .the same happens if vanishes [ this implies that vanishes too , being . however , the most remarkable limit is the last one .it states that , even if is very high , the effective bayes factor can not exceed the inverse of the lie factor : }\,,\end{aligned}\ ] ] or , using logarithmic quantities } \,.\end{aligned}\ ] ] at this point some numerical examples are in order ( and those who claim they can form their mind on pure intuition get all my admiration _if _ they really can ) .let us imagine that would ideally provide a weight of evidence of 6 [ i.e. .we can study , with the help of table [ tab : jl_et ] , + & & + & : & 10 & & & & & & & + & & 6.00 & 6.00 & 6.00 & 6.00 & 6.00 & 6.00 & 6.00 & 6.00 + & & 6.00 & 6.00 & 6.00 & 6.00 & 5.99 & 5.95 & 4.96 & + & & 5.96 & 5.96 & 5.96 & 5.95 & 5.92 & 5.68 & 4.00 & + & & 5.70 & 5.70 & 5.70 & 5.68 & 5.52 & 4.92 & 3.00 & + & & 4.96 & 4.96 & 4.95 & 4.92 & 4.68 & 3.95 & 2.00 & + & & 4.00 & 4.00 & 3.99 & 3.95 & 3.70 & 2.96 & 1.04 & + & & 3.00 & 3.00 & 3.00 & 2.96 & 2.70 & 1.96 & 0.30 & + & & 2.00 & 2.00 & 2.00 & 1.95 & 1.70 & 1.00 & 0.04 & + & & 1.00 & 1.00 & 1.00 & 0.96 & 0.74 & 0.26 & 0.004 & + 0 & & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + + + & & + & : & 10 & & & & & & & + & & 3.00 & 3.00 & 3.00 & 3.00 & 3.00 & 3.00 & 3.00 & 3.00 + & & 3.00 & 3.00 & 3.00 & 3.00 & 2.99 & 2.95 & 1.96 & + & & 2.96 & 2.96 & 2.96 & 2.95 & 2.92 & 2.68 & 1.04 & + & & 2.70 & 2.70 & 2.70 & 2.68 & 2.52 & 1.93 & 0.30 & + & & 1.96 & 1.96 & 1.96 & 1.92 & 1.68 & 1.00 & 0.04 & + & & 1.00 & 1.00 & 0.99 & 0.96 & 0.74 & 0.26 & 0.004 & + 0 & & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + + + & & + & : & 10 & & & & & & & + & & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 + & & 1.00 & 1.00 & 1.00 & 1.00 & 0.99 & 0.96 & 0.26 & + & & 0.96 & 0.96 & 0.96 & 0.96 & 0.93 & 0.72 & 0.04 & + & & 0.72 & 0.72 & 0.72 & 0.70 & 0.58 & 0.23 & 0.003 & + & & 0.41 & 0.41 & 0.41 & 0.39 & 0.27 & 0.07 & & + 0 & & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + how the _ weight of the reported evidence _ depends on the other beliefs [ in this table logarithmic quantities have been used throughout , therefore is the base ten logarithm of the odds in favor of given the hypothesis ; the table provides , for comparisons , also from equal to 3 and 1 ] .the table exhibits the limit behaviors we have seen analytically . in particular , if we fully trust the report , i.e. 
, then is exactly equal to , as we already know .but as soon as the absolute value of the lie factor is close to , there is a sizeable effect .the upper bound can be the be rewritten as \,,\\ \mbox{or}\hspace{3cm}\mbox { } & & \mbox { } \nonumber\\ \delta\mbox{jl}_h(e_t , i ) & \le & \mbox{min}\,[\delta\mbox{jl}_h(e , i),\,-\mbox{j}\lambda(i ) ] \,,\end{aligned}\ ] ] a relation valid in the region of interest when thinking about an evidence in favor of , i.e. and .this upper bound is very interesting .since minimum conceivable values of for human beings can be of the order of ( to perhaps or , but in many practical applications or can already be very generous ! ) , in practice the effective weights of evidence can not exceed values of about ( i have no strong opinion on the exact value of this limit , my main point is that _ you consider there might be such a practical human limit_. ) this observation has an important consequence in the combination of evidences , as anticipated at the end of section [ ss : agatha ]. should we give more consideration to a single strong piece of evidence , virtually weighing , or 10 independent weaker evidences , each having a of 1 ?as it was said , in the ideal case they yield the same global leaning factor .but as soon as human fallacy ( or conspiracy ) is taken into account , and we remember that our belief is based on and not on , then we realize that is well above the range of jl that we can reasonably conceive .instead the weaker pieces of evidence are little affected by this doubt and when they sum up together , they really can provide a of about 10 .let us go back to our toy model of section [ sec:1in13 ] and let us complicate it just a little bit , adding the possibility of incorrect testimony ( but we also simplify it using uniform priors , so that we can focus on the effect of the _ uncertain evidence _ ) .for example , imagine you do not see directly the color of the ball , but this is reported to you by a collaborator , who , however , might not tell you always the truth .we can model the possibility of a lie in following way : after each extraction he tosses a die and reports the true color only if the die gives a number smaller than 6 . using the formalism of appendix i, we have the resulting _ belief network _ , .since the connections between the _ nodes _ of the resulting _ network _ have usually the meaning of probabilistic links ( but also deterministic relations can be included ) , this graph is called a _belief network_. moreover , since bayes theorem is used to update the probabilities of the possible _ states _ of the nodes ( the node ` box ' , with reference to our toy model , has states and ; the node ` ball ' has states and ) , they are also called _ bayesian networks_. for more info , as well as tutorials and demos of powerful packages having also a friendly graphical user interface , i recommend visiting hugin and netica web sites .( my preference for hugin is mainly due to the fact that it is multi - platform and runs nicely under linux . ) for a book introducing bayesian networks in forensics , ref . is recommended . for a monumental probabilistic network on the ` * case that will never end * ' , see ref . ( if you like classic thrillers , the recent paper of the same author might be of your interest ) . ] relative to five extractions and to the corresponding five reports is shown in figure [ fig : bn ] , redrawn in a different way in figure [ fig : bn_mon_0 ] . 
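Before turning to the belief network of the figures just mentioned, the behaviour of the table above and the bound on the effective weight of evidence can be reproduced with a few lines. The mixture form of P(E_T|H) used below is a reconstruction of the formula discussed in the text, with the lie factor defined as lambda = P(E_T|not E)/P(E_T|E); it should be read as a sketch, not as the paper's exact notation.

```python
import numpy as np

def effective_jl(jl_e, p_e_h1, j_lambda):
    """Weight of evidence carried by the *testified* evidence E_T.

    jl_e     : ideal weight of evidence, log10 of P(E|H1)/P(E|H2)
    p_e_h1   : probability of the evidence under H1
    j_lambda : log10 of the lie factor lambda = P(E_T|not E)/P(E_T|E)

    Reconstructed mixture: P(E_T|H) = P(E_T|E) P(E|H) + P(E_T|not E) P(not E|H).
    """
    lam = 10.0 ** j_lambda
    p_e_h2 = p_e_h1 / 10.0 ** jl_e                 # implied by the ideal Bayes factor
    num = p_e_h1 + lam * (1.0 - p_e_h1)            # proportional to P(E_T|H1)
    den = p_e_h2 + lam * (1.0 - p_e_h2)            # proportional to P(E_T|H2)
    return np.log10(num / den)

# An ideally very strong piece of evidence is capped by the lie factor ...
print(effective_jl(6.0, p_e_h1=1.0, j_lambda=-3.0))   # ~3.0, i.e. min(6, 3)
print(effective_jl(6.0, p_e_h1=1.0, j_lambda=-9.0))   # ~6.0, an essentially trusted report
# ... while a weak piece of evidence is essentially unaffected.
print(effective_jl(1.0, p_e_h1=1.0, j_lambda=-3.0))   # ~1.0
```

This is the quantitative content of the remark above: ten independent reports each worth one unit of JL survive realistic doubts about the reporters, whereas a single report nominally worth ten units does not.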
in this diagramthe _ nodes _ are represented by ` monitors ' that provide the probability of each _ state _ of the _ variable_. the green bars mean that we are in condition of uncertainty with respect to all states of all variable .let us describe the several nodes : * initial box compositions have probability 50% each , that was our assumption . *the probability of white and black are the same for all extractions , with white a bit more probable than black ( 14/26 versus 12/26 , that is 53.85% versus 46.15% ) .* there is also higher probability that the ` witness ' reports white , rather than black , but the difference is attenuated by the ` lie factors . ' and one for . for simplicitywe assume here they have the same value .] in fact , calling and the reported colors we have all probabilities of the network have been updated ( hugin has nicely done the job for us ) .we recognize the 93% of box , that we already know .we also see that the increased belief on this box makes us more confident to observe white balls in the following extractions ( after re - introduction ) .the fact that the witness could lie reduces , with respect to the previous case , our confidence on and on white balls in future extractions . as an exercise on what we have learned in appendix h, we can evaluate the ` effective ' bayes factor that takes into account the testimony .applying eq .( [ eq : weight_et ] ) we get } { 1 + \lambda(i ) \cdot \left [ \frac { \tilde o_{h}(w , i)}{p(w\,|\,h , i ) } -1 \right ] } \\ & = & 13\times \frac{5}{17 } = 3.82\,,\end{aligned}\ ] ] or , about a factor of two smaller than , that was 1.1 ( this mean we need two pieces of evidence of this kind to recover the loss of information due to the testimony ) .the network gives us also the probability that the witness has really told us the truth , i.e. , that is _ different _ from , the reason being that white was initially a bit more probable than black .the most interesting thing that comes from the result of the network is how the probabilities that the two witness lie change .first we see that they are the same , about 95% , as expected for symmetry . butthe surprise is that the probability the the first witness said the truth has increased , passing from 85% to 95% .we can justify the variation because , in qualitative agreement with intuition , if we have concordant witnesses , _ we tend to believe to each of them more than what we believed individually_. 
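The effective Bayes factor of the reported colour quoted above (13 x 5/17 = 3.82) and the corresponding reduction of our confidence in the box can be checked directly. The box compositions below are reconstructed from the numbers quoted in the text (one box containing only white balls, the other one white and twelve black) and should be read as an assumption of this sketch.

```python
from fractions import Fraction as F

p_truth = F(5, 6)                      # the die gives a number below six
p_w = {"H1": F(1), "H2": F(1, 13)}     # reconstructed box compositions (assumption)

def p_report_white(h):
    """P(the witness reports 'white' | box h), mixing truth and lie."""
    return p_truth * p_w[h] + (1 - p_truth) * (1 - p_w[h])

bf_direct = p_w["H1"] / p_w["H2"]                                  # 13
bf_reported = p_report_white("H1") / p_report_white("H2")          # 65/17 ~ 3.82

print(float(bf_direct), float(bf_direct / (bf_direct + 1)))        # 13.0  0.929
print(float(bf_reported), float(bf_reported / (bf_reported + 1)))  # 3.82  0.79
```

With equal priors, a directly observed white ball raises the probability of the first box to about 93%, whereas the same ball reported by the unreliable witness raises it only to about 79%; the mutual reinforcement between concordant witnesses commented on above comes out of the same joint model.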
once again , the result is , perhaps after an initial surprise , in qualitative agreement with intuition .the important point is that intuition is unable to get quantitative estimates .again , the message is that , once we agree on the basic assumption and we check , whenever it is possible , that the results are reasonable , it is better to rely on automatic computation of beliefs .this last information reduces the probability of , but does not falsify this hypothesis , as if , instead , we had _ observed _ black .obviously , it does also reduce the probability of white balls in the following extractions .the other interesting feature concerns the probability that each witness has reported the truth .our belief that the previous two witnesses really saw what they said is reduced to 83% .but , nevertheless we are more confident on the first two witnesses than on the third one , that we trust only at 76% , although the lie factor is the same for all of them .the result is again in agreement with intuition : if many witnesses state something and fewer say the opposite , _ we tend to believe the majority _, if we initially consider all witnesses equally reliable .but a bayesian network tells us also how much we have to believe the many more then the fewer .let us do , also in this case the exercise of calculating the effective bayes factor , using however the first formula in footnote [ fb : peh=0 ] : the effective odds can be written as i.e. } = { 13}/{61 } = 0.213 $ ] , smaller then 1 because they provide an evidence against box ( ) .it is also easy to check that the resulting probability of 75.7% of can be obtained summing up the three weights of evidence , two in favor of and two against it : , i.e. , that gives a probability of of 3.1/(1 + 3.1)=76% . only in this casewe become certain that the box is of the kind , and the game is , to say , finished .but , nevertheless , we still remain in a state on uncertainty with respect to several things .the first one is the probability of a white ball in future extractions , that , from now becomes 1/13 , i.e. 7.7% , and does not change any longer .but we also remain uncertain on whether the witnesses told us the truth , because what they said is not incompatible with the box composition .but , and again in qualitative agreement with the intuition , we trust much more whom told black ( 1.6% he lied ) than the two who told white ( 70.6% they lied ) . another interesting way of analyzing the final network is to consider the probability of a black ball in the five extractions considered .the fourth is one , because we have seen it .the fifth is 92.3% ( ) because we know the box composition . but in the first two extractions the probability is smaller than it ( 70.6% ) , while in the third is higher ( 98.4% ) .that is because in the two different cases we had an evidence respectively against and in favor of them .
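The combination of testimonies discussed above can be verified with the same ingredients: each reported colour contributes its own Bayes factor, and the factors (equivalently, the judgement leanings) multiply (add) up. The box compositions and the 5/6 truth-telling probability are again the reconstructed assumptions of the previous sketch.

```python
from fractions import Fraction as F

p_truth = F(5, 6)
p_w = {"H1": F(1), "H2": F(1, 13)}     # reconstructed box compositions (assumption)

def bf_report(colour):
    """Bayes factor H1 vs H2 carried by a *reported* colour."""
    def p_rep(h):
        p_col = p_w[h] if colour == "white" else 1 - p_w[h]
        return p_truth * p_col + (1 - p_truth) * (1 - p_col)
    return p_rep("H1") / p_rep("H2")

print(float(bf_report("black")))                 # ~0.213: evidence against H1

odds = F(1)                                      # equal priors on the two boxes
for report in ("white", "white", "black"):       # the three testimonies in the text
    odds *= bf_report(report)

print(float(odds), float(odds / (odds + 1)))     # ~3.1 and ~0.76, as quoted above
```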
Triggered by a recent, interesting New Scientist article on the too frequent incorrect use of probabilistic evidence in courts, I introduce the basic concepts of probabilistic inference with a toy model and discuss several important issues that need to be understood in order to extend the basic reasoning to real-life cases. In particular, I emphasize the often neglected point that degrees of belief are updated not by 'bare facts' alone, but by all available information pertaining to them, including how that information was acquired. In this light I show that, contrary to what is claimed in that article, there was no "probabilistic pitfall" in the Columbo episode pointed to as an example of "bad mathematics" yielding "rough justice". Instead, such a criticism could produce a negative reaction to the article itself and to the use of Bayesian reasoning in courts, as well as in all other places in which probabilities need to be assessed and decisions need to be made. Beyond its introductory and recreational aspects, the paper touches on important questions, such as: the role and evaluation of priors; the subjective evaluation of Bayes factors; the role and limits of intuition; 'weights of evidence' and 'intensities of belief' (following Peirce) and 'judgement leanings' (introduced here), including their uncertainties and combinations; the role of relative frequencies in assessing and expressing beliefs; pitfalls due to 'standard' statistical education; and weights of evidence mediated by testimonies. A short introduction to Bayesian networks, based on the same toy model (complicated by the possibility of incorrect testimonies) and implemented using the Hugin software, is also provided, to stress the importance of formal, computer-aided probabilistic reasoning.
energy markets have been liberalised worldwide in the last two decades .since then we have witnessed the increasing importance of such commodity markets which organise the trade and supply of energy such as electricity , oil , gas and coal .closely related markets include also temperature and carbon markets .there is no doubt that such markets will play a vital role in the future given that the global demand for energy is constantly increasing .the main products traded on energy markets are spot prices , futures and forward contracts and options written on them .recently , there has been an increasing research interest in the question of how such energy prices can be modelled mathematically . in this paper , we will focus on modelling energy _prices , which include day - ahead as well as real - time prices .traditional spot price models typically allow for _ mean - reversion _ to reflect the fact that spot prices are determined as equilibrium prices between supply and demand .in particular , they are commonly based on a gaussian ornstein uhlenbeck ( ou ) process , see schwartz , or more generally , on weighted sums of ou processes with different levels of mean - reversion , see , for example , benth , kallsen and meyer - brandis and klppelberg , meyer - brandis and schmidt . in such a modelling framework , the mean - reversion is modelled directly or physically , by claiming that the price change is ( negatively ) proportional to the current price . in this paper, we interpret the mean - reversion often found in commodity markets in a _weak sense _ meaning that prices typically concentrate around a mean - level for demand and supply reasons . in order to account for such a weak form mean - reversion , we suggest to use a modelling framework which allows to model spot prices ( after seasonal adjustment ) directly in _stationarity_. this paper proposes to use the class of volatility modulated lvy - driven volterra ( ) processes as the building block for energy spot price models . in particular , the subclass of so - called lvy semistationary ( ) processes turns out to be of high practical relevance .our main innovation lies in the fact that we propose a modelling framework for energy spot prices which ( 1 ) allows to model deseasonalised energy spot prices directly in _ stationarity _ , ( 2 ) comprises _ stochastic volatility _ , ( 3 ) accounts for the possibility of _ jumps _ and _ spikes _ , ( 4 ) features great flexibility in terms of modelling the _ autocorrelation structure _ of spot prices and of describing the so - called _ samuelson effect _ , which refers to the finding that the volatility of a forward contract typically increases towards maturity .we show that the new class of processes is analytically tractable , and we will give a detailed account of the theoretical properties of such processes .furthermore , we derive explicit expressions for the forward prices implied by our new spot price model .in addition , we will see that our new modelling framework encompasses many classical models such as those based on the schwartz one - factor mean - reversion model , see schwartz , and the wider class of continuous - time autoregressive moving - average ( carma ) processes . 
in that sense , it can also be regarded as a unifying modelling approach for the most commonly used models for energy spot prices .however , the class of processes is much wider and directly allows to model the key special features of energy spot prices and , in particular , the stochastic volatility component .the remaining part of the paper is structured as follows .we start by introducing the class of processes in section [ sectionlss ] .next , we formulate both a geometric and an arithmetic spot price model class in section [ sectionmodel ] and describe how our new models embed many of the traditional models used in the recent literature . in section [ sectionforward ] , we derive the forward price dynamics of the models and consider questions like affinity of the forward price with respect to the underlying spot .section [ emp ] contains an empirical example , where we study electricity spot prices from the european energy exchange ( eex ) .finally , section [ sectionconclusion ] concludes , and the contains the proofs of the main results .throughout this paper , we suppose that we have given a probability space with a filtration satisfying the ` usual conditions , ' see karatzas and shreve , definition i.2.25 .let denote a cdlg lvy process with lvy khinchine representation for , and for , and the lvy measure satisfying and . we denote the corresponding characteristic triplet by . in a next step ,we extend the definition of the lvy process to a process defined on the entire real line , by taking an independent copy of , which we denote by and we define for . throughout the paper denotes such a two - sided lvy process .the class of volatility modulated lvy - driven volterra ( ) processes , introduced by barndorff - nielsen and schmiegel , has the form where is a constant , is the two - sided lvy process defined above , are measurable deterministic functions with for , and and are cdlg stochastic processes which are ( throughout the paper ) assumed to be _ independent _ of . in addition , we assume that is positive .note that such a process generalises the class of convoluted subordinators defined in bender and marquardt to allow for stochastic volatility .a very important subclass of processes is the new class of lvy semistationary ( ) processes : we choose two functions such that and with whenever , then an process is given by note that the name lvy semistationary processes has been derived from the fact that the process is stationary as soon as and are stationary . in the case that is a two - sided brownian motion , we call such processes brownian semistationary ( ) processes , which have recently been introduced by barndorff - nielsen and schmiegel in the context of modelling turbulence in physics .the class of processes can be considered as the natural analogue for ( semi- ) stationary processes of lvy semimartingales ( ) , given by the class of processes can be embedded into the class of ambit fields , see barndorf - nilsen and schmiegel , barndorff - nielsen , benth and veraart . also , it is possible to define and processes for _ singular _ kernel functions and , respectively ; a function ( or ) defined as above is said to be singular if ( or ) does not exist or is not finite . 
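A minimal simulation sketch helps to fix ideas about what an LSS (here, for simplicity, a Brownian semistationary) process looks like. The gamma kernel, the volatility dynamics and all parameter values below are illustrative assumptions, not the specification fitted later in the paper; the infinite history is truncated at a finite window.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n_obs, n_hist = 0.05, 1000, 2000       # grid step, output length, truncated history

def g(u, nu=1.5, alpha=1.0):
    """Gamma kernel g(u) = u**(nu-1) * exp(-alpha*u), u > 0 (illustrative choice)."""
    return u ** (nu - 1.0) * np.exp(-alpha * u)

n_tot = n_obs + n_hist

# simple stochastic volatility: exponential of a discretised OU process
logvol = np.zeros(n_tot)
for i in range(1, n_tot):
    logvol[i] = logvol[i - 1] * (1.0 - 0.05 * dt) + 0.2 * np.sqrt(dt) * rng.standard_normal()
sigma = np.exp(logvol)

dW = np.sqrt(dt) * rng.standard_normal(n_tot)
kernel = g(np.arange(1, n_hist + 1) * dt)  # kernel evaluated at the lags t - s

# Y_t ~ sum over the truncated history of g(t - s) * sigma_{s-} * dW_s
Y = np.empty(n_obs)
for k in range(n_obs):
    window = slice(k, k + n_hist)
    Y[k] = np.sum(kernel[::-1] * sigma[window] * dW[window])
```

The simulated path is stationary by construction, since the kernel depends on t - s only and the volatility process is stationary, which is exactly the weak form of mean-reversion that motivates the modelling framework.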
in order to simplify the exposition ,we will focus on the stochastic integral in the definition of an ( and of an ) process only .that is , throughout the rest of the paper , let in this paper , we use the stochastic integration concept described in basse - oconnor , graversen and pedersen where a stochastic integration theory on , rather than on compact intervals as in the classical framework , is presented . throughout the paper , we assume that the filtration is such that is a lvy process with respect to , see basse - oconnor , graversen and pedersen , section 4 , for details .let denote the lvy triplet of associated with a truncation function .according to basse - oconnor , graversen and pedersen , corollary 4.1 , for the process with is integrable with respect to if and only if is -predictable and the following conditions hold almost surely : when we plug in , we immediately obtain the corresponding integrability conditions for the process .[ ex1 ] in the case of a gaussian ornstein uhlenbeck process , that is , when for and , then the integrability conditions above are clearly satisfied , since we have for many financial applications , it is natural to restrict the attention to models where the variance is finite , and we focus therefore on lvy processes with finite second moment . note that the integrability conditions above do not ensure square - integrability of even if has finite second moment .but substitute the first condition in ( [ mart - int - cond ] ) with the stronger condition \,\mathrm{d}s<\infty,\ ] ] then is square integrable . clearly , ] for all .thus , both ( [ mart - int - l2-cond ] ) and ( [ leb - int - l2-cond ] ) are satisfied ( the latter can be seen after using the sufficient conditions ) , and we find that is a square - integrable stochastic process .this section presents the new modelling framework for energy spot prices , which is based on processes . as before , for ease of exposition , we will disregard the drift part in the general process for most of our analysis and rather use with as the building block for energy spot price , see ( [ eqsimpleambit ] ) for the precise definition of all components . throughout the paper ,we assume that the corresponding integrability conditions hold. we can use the process defined in ( [ defyspot ] ) as the building block to define both a geometric and an arithmetic model for the energy spot price .also , we need to account for trends and seasonal effects .let denote a bounded and measurable deterministic seasonality and trend function . in a _geometric _ set up, we define the spot price by in such a modelling framework , the deseasonalised , logarithmic spot price is given by a process .alternatively , one can construct a spot price model which is of _ arithmetic _ type .in particular , we define the electricity spot price by ( note that the seasonal function in the geometric and the arithmetic model is typically not the same . ) for general asset price models , one usually formulates conditions which ensure that prices can only take positive values .we can easily ensure positivity of our arithmetic model by imposing that is a lvy subordinator and that the kernel function takes only positive values .we have formulated the new spot price model in the general form based on a process to be able to account for non - stationary effects , see , for example , burger _et al . 
_ , burger , graeber and schindlmayr .if the empirical data analysis , however , supports the assumption of working under stationarity , then we will restrict ourselves to the analysis of processes with stationary stochastic volatility . as mentioned in the , traditional models for energy spot pricesare typically based on mean - reverting stochastic processes , see , for example , schwartz , since such a modelling framework reflects the fact that commodity spot prices are equilibrium prices determined by supply and demand .stationarity can be regarded as a weak form of mean - reversion and is often found in empirical studies on energy spot prices ; one such example will be presented in this paper . in order to be able to have a stationary model , the lower integration bound in the definition of the process , and in particular for the process , is chosen to be rather than 0 .clearly , in any real application , we observe data from a starting value onwards , which is traditionally chosen as the observation at time .hence , while processes are defined on the entire real line , we only define the spot price for .the observed initial value of the spot price at time is assumed to be a _ realisation _ of the random variable and , respectively .such a choice guarantees that the deseasonalised spot price is a stationary process , provided we are in the stationary framework . since and processes are driven by a general lvy process , it is possible to account for price jumps and spikes , which are often observed in electricity markets . at the same time , one can also allow for brownian motion - driven models , which are very common in , for example , temperature markets , see , for example , benth , hrdle and cabrera .a key ingredient of our new modelling framework which sets the model apart from many traditional models is the fact that it allows for stochastic volatility .volatility clusters are often found in energy prices , see , for example , hikspoors and jaimungal , trolle and schwartz , benth , benth and vos , koopman , ooms and carnero , veraart and veraart .therefore , it is important to have a stochastic volatility component , given by , in the model .note that a very general model for the volatility process would be to choose an process , that is , and where denotes a deterministic , positive function and is a lvy subordinator . in fact , if we want to ensure that the volatility is stationary , we can work with a function of the form , for a deterministic , positive function . the kernel function ( or ) plays a vital role in our model and introduces a flexibility which many traditional models lack : we will see in section [ secevarcov ] that the kernel function together with the autocorrelation function of the stochastic volatility process determines the autocorrelation function of the process .hence our based models are able to produce various types of autocorrelation functions depending on the choice of the kernel function .it is important to stress here that this can be achieved by using _one _ process only , whereas some traditional models need to introduce a multi - factor structure to obtain a comparable modelling flexibility . 
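For the stochastic volatility factor mentioned above, one concrete stationary choice is an Ornstein-Uhlenbeck process driven by a Lévy subordinator, the specification underlying the BNS-type model referred to later. A rough Euler sketch with compound-Poisson jumps and purely illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(5)
dt, n, lam = 0.01, 5000, 1.0            # grid step, sample size, mean-reversion speed
jump_rate, jump_mean = 2.0, 0.5         # compound-Poisson intensity and mean jump size

# d(sigma^2)_t = -lam * sigma^2_t dt + dU_t,  U a compound-Poisson subordinator
sigma2 = np.empty(n)
sigma2[0] = jump_rate * jump_mean / lam          # start at the stationary mean
for i in range(1, n):
    dU = rng.exponential(jump_mean, size=rng.poisson(jump_rate * dt)).sum()
    sigma2[i] = sigma2[i - 1] * (1.0 - lam * dt) + dU
```

Feeding the square root of this factor into the kernel-weighted integral of the previous sketch produces a stationary spot factor with volatility clusters; the choice of kernel then governs the autocorrelation structure, as discussed next.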
also due to the flexibility in the choice of the kernel function, we can achieve greater flexibility in modelling the shape of the samuelson effect often observed in forward prices , including the hyperbolic one suggested by bjerksund , rasmussen and stensland as a reasonable volatility feature in power markets .note that we obtain the modelling flexibility in terms of the general kernel function here since we specify our model directly through a stochastic integral whereas most of the traditional models are specified through evolutionary equations , which limit the choices of kernel functions associated with solutions to such equations . in that context , we note that a or an process can not in general be written in form of a stochastic differential equation ( due to the non - semimartingale character of the process ) . in section [ sectsmcond ] , we will discuss sufficient conditions which ensure that an process is a semimartingale .as already mentioned above , energy spot prices are typically modelled in stationarity , hence the class of processes is particularly relevant for applications . in the following, we will show that many of the traditional spot price models can be embedded into our process - based framework .our new framework nests the stationary version of the classical one - factor schwartz model studied for oil prices . by letting a lvy process with the pure - jump part given as a compound poisson process , cartea and figueroa successfully fitted the schwartz model to electricity spot prices in the uk market .benth and altyt .e benth used a normal inverse gaussian lvy process to model uk spot gas and brent crude oil spot prices .another example which is nested by the class of processes is a model studied in benth in the context of gas markets , where the deseasonalised logarithmic spot price dynamics is assumed to follow a one - factor schwartz process with stochastic volatility .a more general class of models which is nested is the class of so - called carma - processes , which has been successfully used in temperature modelling and weather derivatives pricing , see benth , altyt .e benth and koekebakker , benth , hrdle and lpez cabrera and hrdle and lpez cabrera , and more recently for electricity prices by garca , klppelberg and mller , benth _ et al ._ .a carma process is the continuous - time analogue of an arma time series , see brockwell , brockwell for definition and details .more precisely , suppose that for nonnegative integers where and is a -dimensional ou process of the form with .\ ] ] here we use the notation for the -identity matrix , the coordinate vector ( where the first entries are zero and the entry is 1 ) and ] , and ( for ) being the log - moment generating function of , assuming that the moment generating function of exists .a special choice is the ` constant ' measure change , that is , letting in this case , if under the measure , has characteristic triplet , where is the drift , is the squared volatility of the continuous martingale part and is the lvy measure in the l ' evy khinchine representation , see shiryaev , a fairly straightforward calculation shows that , see shiryaev again , the esscher transform preserves the lvy property of , and the characteristic triplet under the measure on the interval ] , and being the log - moment generating function of . 
since is a subordinator , we obtain where and denotes the lvy measure associated with .our discussion above on choosing a starting value applies to the measure transform for the volatility process as well , and hence throughout the paper we will work under the assumption that for . note in particular , that this assumption implies that under the risk - neutral probability measure , the characteristic triplets of and only change on the time interval $ ] . on the interval , we have the same characteristic triplet for and as under .choosing , with a constant , an esscher transform will give a characteristic triplet , which thus preserves the subordinator property of under .for the general case , the process will be a time - inhomogeneous subordinator ( independent increment process with positive jumps ) .the log - moment generating function of under the measure is denoted by . in order to ensure the existence of the ( generalised ) esscher transforms , we need some conditions .we need that there exists a constant such that , and where .( similarly , we must have such a condition for the lvy measure of the subordinator driving the stochastic volatility , that is , ) .also , we must require that exponential moments of and exist .more precisely , we suppose that parameter functions and of the ( generalised ) esscher transform are such that the exponential integrability conditions of the lvy measures of and imply the existence of exponential moments , and thus that the esscher transforms and are well defined .we define the probability as the class of pricing measures for deriving forward prices . in this respect , be referred to as the market price of risk , whereas is the market price of volatility risk .we note that a choice will put more weight to the positive jumps in the price dynamics , and less on the negative , increasing the `` risk '' for big upward movements in the prices under .let us denote by the expectation operator with respect to , and by the expectation with respect to .suppose that the spot price is defined by the geometric model where is defined as in ( [ shortnot ] ) . in order to have the forward price well defined ,we need to ensure that the spot price is integrable with respect to the chosen pricing measure .we discuss this issue in more detail in the following .we know that is positive and in general not bounded since it is defined via a subordinator .thus , ( for ) is unbounded as well .supposing that has exponential moments of all orders , we can calculate as follows using iterated expectations conditioning on the filtration generated by the paths of , for : &=&\lambda(t){\mathbb{e}}_{\theta,\eta } \biggl [ { \mathbb{e}}_{\theta,\eta } \biggl[\exp \biggl(\int_{-\infty}^tg(t , s ) \omega_{s- } \,\mathrm{d}l_s \biggr ) \big| \mathcal{g}_t \biggr ] \biggr ] \\ & = & \lambda(t){\mathbb{e}}_{\eta } \biggl[\exp \biggl(\int_{-\infty}^0 \phi_l\bigl(g(t , s)\omega_s\bigr ) \,\mathrm{d}s \biggr ) \exp\biggl(\int_{0}^t\phi^{\theta}_l \bigl(g(t , s)\omega_s\bigr ) \,\mathrm{d}s \biggr ) \biggr ] .\end{aligned}\ ] ] to have that , the two integrals must be finite .this puts additional restrictions on the choice of and the specifications of and .we note that when applying the esscher transform , we must require that has exponential moments of all orders , a rather strong restriction on the possible class of driving lvy processes . 
in our empirical study ,however , we will later see that the empirically relevant cases are either that is a brownian motion or that is a generalised hyperbolic lvy process , which possess exponential moments of all orders .we are now ready to price forwards under the esscher transform .[ propforward - generalambit ] suppose that .then , the forward price for is given by .\ ] ] as a special case , consider , where is a two - sided standard brownian motion under . in this casewe apply the girsanov transform rather than the generalised esscher transform , and it turns out that a rescaling of the transform parameter function by the volatility is convenient for pricing of forwards . to this end , consider the girsanov transform that is , we set for . supposing that the novikov condition <\infty,\ ] ] holds , we know that is a brownian motion for under a probability having density process suppose that there exists a measurable function such that for all , with furthermore , suppose the moment generating function of exists on the interval .then , for all such that , the novikov condition is satisfied , since by the subordinator property of ( restricting our attention to ) and therefore \leq { \mathbb{e}}\biggl[\exp \biggl(\frac12 \int_{0}^{t^*}\frac{\theta^2(s)}{j(s ) } \,\mathrm{d}s \omega_0^{-2 } \biggr ) \biggr]<\infty.\ ] ] specifying , we have that , and condition ( [ suff - novikov ] ) holds with equality .we discuss the integrability of with respect to . by double conditioning with respect to the filtration generated by the paths of ,we find &=&\lambda(t)\exp \biggl(\int _ { 0}^{t}g(t , s)\theta(s ) \,\mathrm{d}s \biggr ) { \mathbb{e}}_{\theta,\eta } \biggl[{\mathbb{e}}_{\theta,\eta } \biggl[\exp \biggl(\int _{ -\infty}^tg(t , s)\omega_{s- } \, \mathrm{d}w_s \biggr )\big| \mathcal{g}_t \biggr ] \biggr ] \\ &= & \lambda(t)\exp \biggl(\int_{0}^{t}g(t , s ) \theta(s ) \,\mathrm{d}s \biggr){\mathbb{e}}_{\eta } \biggl[\exp \biggl(\frac12\int _ { -\infty}^tg^2(t ,s)\omega^2_s \,\mathrm{d}s \biggr ) \biggr ] .\end{aligned}\ ] ] from collecting the conditions on and for verifying all the steps above , we find that if is integrable on ( recall that for ) and is integrable on for all , then as long as we assume these conditions to hold .we state the forward price for the case and the girsanov change of measure discussed above .[ propforward - bmambit ] suppose that and that is defined by the girsanov transform in ( [ girsanovtransf ] ) .then , for , let us consider an example . in the bns stochastic volatility model, we have . hence , yields , this implies from proposition [ propforward - bmambit ] that the forward price is affine in , the ( square of the ) stochastic volatility .the stochastic volatility model studied in benth is recovered by choosing .suppose for a moment that the stochastic volatility process is identical to one ( i.e. , that we do not have any stochastic volatility in the model ) . in this case, the forward price becomes where for .hence , the logarithmic forward ( log - forward ) price is with for . note that , for , is a -martingale with the property ( for ) in the classical ornstein uhlenbeck case , with , for , we easily compute that and the forward price is explicitly dependent on the current spot price . 
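For orientation, the zero-volatility Gaussian case just discussed can be written out for the exponential (Schwartz/OU) kernel, where the forward is an explicit function of the current deseasonalised log-spot. The sketch below ignores the market price of risk and uses made-up values for the mean-reversion speed, the seasonality function and today's deseasonalised log-spot.

```python
import numpy as np

alpha, y_t = 0.1, 0.35      # mean-reversion speed and current deseasonalised log-spot (assumed)
Lambda = lambda T: 40.0 * (1.0 + 0.1 * np.cos(2.0 * np.pi * T / 365.0))   # toy seasonality

def forward_ou(t, T):
    """f(t,T) for g(T,s) = exp(-alpha*(T-s)), no stochastic volatility, zero risk premium."""
    w = np.exp(-alpha * (T - t))                                   # weight of today's spot
    var_term = (1.0 - np.exp(-2.0 * alpha * (T - t))) / (4.0 * alpha)
    return Lambda(T) * np.exp(w * y_t + var_term)

maturities = np.arange(1.0, 366.0)
curve = forward_ou(0.0, maturities)       # a full forward curve from today's spot alone
```

For a general kernel the corresponding stochastic term is a functional of the whole Brownian history rather than of the current spot alone, so no such one-to-one mapping from spot to forward is available.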
in the general case ,this does not hold true .we have that , not unexpectedly , since the forward price converges to the spot at maturity ( at least theoretically ) .however , apart from the special time point , the forward price will in general _ not _ be a function of the current spot , but a function of the process .thus , at time , the forward price will depend on whereas the spot price depends on the two stochastic integrals can be pathwise interpreted ( they are both wiener integrals since the integrands are deterministic functions ) , and both and are generated by integrating over the same paths of a brownian motion .however , the paths are scaled by two different functions and .this allows for an additional degree of flexibility when creating forward curves compared to affine structures . in the classical ornstein uhlenbeck case ,the forward curve as a function of time to maturity will simply be a discounting of today s spot price , discounted by the speed of mean reversion of the spot ( in addition comes deterministic scaling by the seasonality and market price of risk ) . to highlight the additional flexibility in our modelling framework of semistationary processes , suppose for the sake of illustration that .then if furthermore , we are in a situation where the long end ( i.e. , large ) of the forward curve is not a constant . in fact, we find for that since is random , we will have a randomly fluctuating long end of the forward curve .this is very different from the situation with a classical mean - reverting spot dynamics , which implies a deterministic forward price in the long end ( dependent on the seasonality and market price of risk only ) .various shapes of the forward curve can also be modelled via different specifications of .for instance , if is a decreasing function , we obtain the contango and backwardation situations depending on the spot price being above or below the mean .if has a hump , we will also observe a hump in the forward curve .for general specifications of we can have a high degree of flexibility in matching desirable shapes of the forward curve .observe that the time - dynamics of the forward price can be considered as correlated with the spot rather than directly depending on the spot . in the ornstein uhlenbecksituation , the log - forward price can be considered as a linear regression on the current spot price , with time - dependent coefficients .this is not the case for general specifications .however , we have that and are both normally distributed random variables ( recall that we are still restricting our attention to ) , and the correlation between the two is obviously , for , the correlation is 1 . in conclusion, we can obtain a weaker stochastic dependency between the spot and forward price than in the classical mean - reversion case by a different specification of the function . in the discussion above, we saw that the choice yielded a forward price expressible in terms of . in the next proposition , we prove that this is the only choice of yielding an affine structure . the result is slightly generalising the analysis of carverhill .[ profforwardaffine ] the forward price in proposition [ propforward - bmambit ] is affine in and if there exist functions and such that and .conversely , if the forward price is affine in and , and and are strictly positive and continuously differentiable in the first argument , then there exists functions and such that and . 
obviously , the choice of and coming from ou - models , satisfy the conditions in the proposition above .in fact , appealing to similar arguments as in the proof of proposition [ profforwardaffine ] above , one can show that this is the _ only _ choice ( modulo multiplication by a constant ) which is stationary and gives an affine structure in the spot and volatility for the forward price dynamics .in particular , the specification considered in example [ ex - bjerksund - g ] gives a stationary spot price dynamics , but not an affine structure in the spot for the forward price .next , we turn our attention to the risk - neutral dynamics of the forward price .[ propriskneutraldynamicsf ] assume that the assumptions of proposition [ propforward - bmambit ] hold and that is given by the ( simple ) esscher transform .then the risk - neutral dynamics of the forward price is given by where .moreover is a -martingale , where denotes the poisson random measure associated with , and is the lvy measure of under .we observe that the dynamics will jump according to the changes in volatility given by the process .as expected , the integrand in the jump expression tends to zero when , since the forward price must ( at least theoretically ) converge to the spot when time to maturity goes to zero .the forward dynamics will have a stochastic volatility given by .hence , whenever exists , and , we have , when passing to the limit , we have implicitly supposed that we work with the version of having left - continuous paths with right - limits . by the definition of our integral in , where the integrand is supposed predictable , this can be done .thus , we find that the forward volatility converges to the spot volatility as time to maturity tends to zero , which is known as the samuelson effect .contrary to the classical situation where this convergence goes exponentially , we may have many different shapes of the volatility term structure resulting from our general modelling framework . in bjerksund , rasmussen and stensland , a forward price dynamics for electricity contractsis proposed to follow where and are positive constants .they argue that in electricity markets , the samuelson effect is stronger close to maturity than what is observed in other commodity markets , and they suggest to capture this by letting it increase by the rate close to maturity of the contracts .this is in contrast to the common choice of volatility being , resulting from using the schwartz model for the spot price dynamics .there is no reference to any spot model in the bjerksund , rasmussen and stensland model .the constant comes from a non - stationary behaviour , which can be incorporated in the framework .however , here we focus on the stationary case and choose .then we see that we can model the spot price by the process thus , after doing a girsanov transform , we recover the risk - neutral forward dynamics of bjerksund , rasmussen and stensland .it is interesting to note that with this spot price dynamics , the forward dynamics is not affine in the spot .hence , the bjerksund , rasmussen and stensland model is an example of a non - affine forward dynamics .whenever , we do not have that , and thus the bjerksund , rasmussen and stensland model does not satisfy the samuelson effect , either .we end this section with a discussion of option pricing .let us assume that we have given an option with exercise time on a forward with maturity at time .the option pays , and we are interested in finding the price at time , denoted . 
from arbitrage theory , it holds that , \ ] ] where is the risk - neutral probability . choosing as coming from the esschertransform above , we can derive option prices explicitly in terms of the characteristic function of by fourier transformation .[ optionprice ] let be the probability measure obtained from the esscher transform .let , and suppose that . by applying the definitions of fourier transforms and their inverses in folland , we have that , with is the fourier transform of defined by suppose that . then the option price is given by where and one can calculate option prices by applying the fast fourier transform as long is known .if is not integrable ( as is the case for a call option ) , one may introduce a damping function to regularize it , see carr and madan for details .let us consider the arithmetic spot price model , we analyse the forward price for this case , and discuss the affinity .the results and discussions are reasonably parallel to the geometric case , and we refrain from going into details but focus on some main results . under a natural integrability condition of the spot price with respect to the esscher transform measure , we find the following forward price for the arithmetic model .[ propforward - arithmetic ] suppose that .then , the forward price is given as \int_t^tg(t , s)\mathbb{e}_{\eta } [ \omega_{s } | \mathcal { f}_t ] \,\mathrm{d}s \biggr\}.\ ] ] the price is reasonably explicit , except for the conditional expectation of the stochastic volatility . by the same arguments as in proposition [ profforwardaffine ] , the forward price becomes affine in the spot ( or in ) if and only if for sufficiently regular functions and .in the case , we can obtain an explicit forward price when using the girsanov transform as in ( [ girsanovtransf ] ) .we easily compute that the forward price becomes we note that there is no explicit dependence of the spot volatility except indirectly in the stochastic integral .this is in contrast to the lvy case with esscher transform .the dynamics of the forward price becomes if we furthermore let for some sufficiently regular functions and , we find that hence , the forward curve moves stochastically as the deseasonalised spot price , whereas the shape of the curve is deterministically given by .this shape is scaled stochastically by the deseasonalised spot price .in addition , there is a deterministic term which is derived from the market price of risk .we finally remark that also in the arithmetic case one may derive expressions for the prices of options that are computable by fast fourier techniques .in this section , we will show the practical relevance of our new model class for modelling empirical energy spot prices . herewe will focus on electricity spot prices and we will illustrate that they can be modelled by processes an important subclass of processes . note that the data analysis is exploratory in nature since the estimation theory for or processes has not been fully established yet .we study electricity spot prices from the european energy exchange ( eex ) .we work with the daily phelix peak load data ( i.e. , the daily averages of the hourly spot prices for electricity delivered during the 12 hours between 8 am and 8 pm ) with delivery days from 01.01.2002 to 21.10.2008 .note that peak load data do not include weekends , and in total we have 1775 observations .the daily data , their returns and the corresponding autocorrelation functions are depicted in figure [ eldata ] . 
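Returning briefly to the damped-Fourier option price of the proposition above: its numerical evaluation is straightforward once the characteristic function of the log-forward under the pricing measure is available. The sketch below uses plain quadrature rather than the FFT, with ad hoc numerical settings, and is checked against a lognormal toy forward rather than the dynamics of this paper.

```python
import numpy as np

def call_price_fourier(char_fn, k, r, T, alpha=1.5, umax=200.0, n=4000):
    """Damped-Fourier (Carr-Madan type) price of a call with log-strike k.

    char_fn : u -> E_Q[exp(i*u*log F_T)], accepted for complex arguments
    alpha   : damping parameter; umax, n: ad hoc quadrature settings
    """
    u = np.linspace(1e-8, umax, n)
    psi = np.exp(-r * T) * char_fn(u - 1j * (alpha + 1.0)) \
          / (alpha**2 + alpha - u**2 + 1j * (2.0 * alpha + 1.0) * u)
    integrand = np.real(np.exp(-1j * u * k) * psi)
    return np.exp(-alpha * k) / np.pi * np.trapz(integrand, u)

# sanity check against a lognormal forward (Black-76 setting, assumed parameters)
F0, sigma, T, r, K = 100.0, 0.3, 1.0, 0.02, 95.0
char_fn = lambda u: np.exp(1j * u * (np.log(F0) - 0.5 * sigma**2 * T)
                           - 0.5 * sigma**2 * T * u**2)
print(call_price_fourier(char_fn, np.log(K), r, T))   # ~14.0, matching the Black-76 value
```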
before analysing the data, we have deseasonalised the spot prices . here, we have worked with a geometric model , that is , .then where , as suggested in , for example , klppelberg , meyer - brandis and schmidt , which takes weakly and yearly effects and a linear trend into account .in order to ensure that the spikes do not have a big impact on parameter estimation , we have worked with a robust estimation technique based on iterated reweighted least squares .we have then subtracted the estimated seasonal function from the logarithmic spot prices from the time series and have worked with the deseasonalised data for the remaining part of the section .figure [ gdeseas ] depicts the deseasonalised logarithmic prices and the corresponding returns .the class of processes is very rich and hence in a first step we checked whether we can restrict it to a smaller class in our empirical work .we have carried out unit root tests , more precisely the augmented dickey fuller test ( where the null hypothesis is that a unit root is present in the time series versus the alternative of a stationary time series ) ; we obtained a -value which is smaller than 0.01 and , hence , clearly reject the unit root hypothesis at a high significance level . also the phillips perron test led to the same conclusion .hence , in the following , we assume that is an process . next , we study the question which distribution describes the stationary distribution of appropriately . we know that in the absence of stochastic volatility an process is a moving average process driven by a lvy process and hence the integral is itself infinite divisiblewe are hence dealing with a stationary infinitely divisible stochastic process , see rajput and rosiski , sato , barndorff - nielsen for more details .the literature on spot price modelling suggest to use semi - heavy and , in some cases , even heavy - tailed distributions in order to account for the extreme spikes in electricity spot prices , see , for example , klppelberg , meyer - brandis and schmidt and benth _et al . _ who suggested to use the stable distribution for modelling electricity returns .here we focus on a mixture of a normal distribution in the sense of mean - variance mixtures , see barndorff - nielsen , kent and srensen .in particular , we will focus on the generalised hyperbolic ( gh ) distribution , see barndorff - nielsen and halgreen , barndorff - nielsen , barndorff - nielsen , which turns out to provide a good fit to the deseasonalised logarithmic spot prices as we will see in the following .a detailed review of the generalised hyperbolic distribution can be found in , for example , mcneil , frey and embrechts and details on the corresponding implementation in r based on the ghyp package is provided in breymann and lthi .let and let denote a -dimensional random vector . 
Let and let denote a -dimensional random vector. It is said to have a multivariate generalised hyperbolic (GH) distribution if, where,,. Further, is a one-dimensional random variable, independent of and with generalised inverse Gaussian (GIG) distribution, that is,. The density of the GIG distribution with parameters is given by, where denotes the modified Bessel function of the third kind, and the parameters have to satisfy one of the following three restrictions. Typically, we refer to as the location parameter, to as the dispersion matrix and to as the symmetry parameter (sometimes also called skewness parameter). The parameters of the GIG distribution determine the shape of the GH distribution. The parametrisation described above is the so-called -parametrisation of the GH distribution. However, for estimation purposes this parametrisation causes an identifiability problem and hence we worked with the so-called -parametrisation in our empirical study. Note that the -parametrisation can be obtained from the -parametrisation by setting, while and remain the same; see Breymann and Lüthi for more details. In our empirical study, we work with the one-dimensional GH distribution. That is, and and are scalars rather than a matrix and vectors, respectively. We have fitted 11 distributions within the GH class to the deseasonalised log-spot prices using quasi-maximum likelihood estimation: the asymmetric and symmetric versions of the generalised hyperbolic distribution (GHYP), the normal inverse Gaussian (NIG) distribution, the Student-t distribution (with degrees of freedom), the hyperbolic distribution (HYP) and the variance gamma distribution (VG), together with the Gaussian distribution. We have compared these distributions using the Akaike information criterion, see table [aicfit], which suggests that the symmetric NIG distribution is the preferred choice for the stationary distribution of the deseasonalised logarithmic spot prices. The diagnostic plots of the empirical and fitted logarithmic densities and the quantile-quantile plots of the fitted symmetric NIG distribution are depicted in figure [gsymnig]. We see that the fit is reasonable.

Table [aicfit]. Fitted distributions for the deseasonalised log spot prices (parameters in the alpha-bar parametrisation; the last two columns give the AIC and the log-likelihood):

model      symmetric   lambda   alpha.bar   mu       sigma    gamma    AIC       llh
NIG        true        -0.5     0.431       -0.001   0.395    0        1313.14   -653.57
GHYP       true        -0.183   0.438       -0.001   0.392    0        1314.13   -653.06
NIG        false       -0.5     0.431       -0.003   0.395    0.002    1315.10   -653.55
GHYP       false       -0.184   0.438       -0.002   0.392    0.002    1316.10   -653.05
Student-t  true        -1.366   0           -0.001   0.458    0        1327.28   -660.64
Student-t  false       -1.365   0           -0.002   0.458    0.002    1329.26   -660.63
HYP        true        1        0.150       0.000    0.375    0        1331.38   -662.69
HYP        false       1        0.147       0.003    0.375    -0.003   1333.33   -662.66
VG         true        0.975    0           0.003    0.379    0        1333.85   -663.92
VG         false       0.970    0           0.007    0.379    -0.007   1335.42   -663.71
Gaussian   true        -        -           -0.000   0.395    0        1742.94   -869.47

In our empirical study, we have seen that the symmetric normal inverse Gaussian distribution fits the marginal distribution of the deseasonalised logarithmic electricity prices well. Hence, it is natural to ask whether there is a stationary or process with marginal normal inverse Gaussian or, more generally, generalised hyperbolic distribution. The answer is yes, as we will show in the following. Note that the following investigation extends the study of Barndorff-Nielsen and Shephard, where the background driving process of an Ornstein-Uhlenbeck process was specified, given a marginal infinitely divisible distribution. Let us focus on a particular process given by for constants and for
stationary and a standard brownian motion independent of .note that we have introduced a drift term in the process again in order to derive the general theoretical result . for our empirical example, however , it would be sufficient to set as suggested by our estimation results above .the conditional law of given is normal: now suppose that follows an process given by where is a subordinator . then , by a stochastic fubini theorem we find where , the convolution of and .similarly , with .let denote the gamma density with parameters and , that is, now we define for , which ensures the existence of the integral ( [ specialbss ] ) ; then we have hence , if , for , and if , moreover, we obtain in other words, where we define the subordinator with lvy measure by . then then one can easily show that the marginal distribution of does not depend on , and the parameter determines the autocorrelation structure of .it follows that if the subordinator is such that has the generalised inverse gaussian law then the law of is the generalised hyperbolic .is there such a subordinator ?the answer is yes . to see this ,let and note that is infinitely divisible with kumulant function on the other hand , the subordinator ( here assumed to have no drift ) has kumulant function where is the lvy measure of .combining we find that is , the lvy measure of is thus , the question is : does there exist a lvy measure on such that given by ( [ wv ] ) is the lvy measure of the law .that , in fact , is the case since the laws are self - decomposable , cf .halgreen and jurek and vervaat .next , we focus on the autocorrelation structure implied by the choice of the kernel functions which lead to a marginal gh distribution of the process .[ propcorgammakernel ] let be the process defined in the previous subsection with kernel function as defined in ( [ gfct ] ) .in the case when and , we have where and denotes the modified bessel function of the third kind .we have estimated the parameters and using a linear least squares estimate based on the empirical and the theoretical autocorrelation function using the first lags .we obtain and .figure [ gammaacf ] shows the empirical and the corresponding fitted autocorrelation function . and .] note that the estimate implies that the corresponding process is not a semimartingale , see , for example , barndorff - nielsen and schmiegel for details . in the context of electricity prices , this does not need to be a concern since the electricity spot price is not tradeable .we observe that the autocorrelation function induced by the gamma - kernel mimics the behaviour of the empirical autocorrelation function adequately .however , it does not fit the first 10 lags as well as , for example , the carma - kernel which we have fitted in the following subsection , but performs noticeably better for higher lags. 
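To make the kernel-fitting step just described concrete, here is a least-squares sketch that matches the empirical autocorrelations of a series to the closed-form autocorrelation implied by the gamma kernel. The Matérn-type expression used below, with index nu - 1/2, is the standard form obtained from a gamma kernel and stands in for the Bessel-function formula of proposition [propcorgammakernel]; the choice of 100 lags and the AR(1)-type stand-in data are illustrative assumptions.

```python
# Least-squares fit of the gamma-kernel autocorrelation (Matern-type form with
# index nu - 1/2) to the empirical autocorrelations over the first n_lags lags.
import numpy as np
from scipy.optimize import least_squares
from scipy.special import gamma as gamma_fn, kv

def empirical_acf(x, n_lags):
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-h], x[h:]) / denom for h in range(1, n_lags + 1)])

def gamma_kernel_acf(h, nu, lam):
    a = nu - 0.5
    z = lam * h
    return (2.0 ** (1.0 - a) / gamma_fn(a)) * z ** a * kv(a, z)

def fit_gamma_kernel(x, n_lags=100):
    h = np.arange(1, n_lags + 1, dtype=float)
    rho_hat = empirical_acf(x, n_lags)
    resid = lambda p: gamma_kernel_acf(h, p[0], p[1]) - rho_hat
    fit = least_squares(resid, x0=np.array([0.8, 0.05]),
                        bounds=([0.51, 1e-4], [5.0, 5.0]))
    return fit.x  # (nu, lam)

# usage on an AR(1)-type stand-in; the deseasonalised series would be used instead
rng = np.random.default_rng(2)
x = np.zeros(1775)
for i in range(1, 1775):
    x[i] = 0.9 * x[i - 1] + rng.normal(0.0, 0.1)
nu_hat, lam_hat = fit_gamma_kernel(x)
print(nu_hat, lam_hat)
```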
the fit could be further improved by choosing to be a gig supou process rather than a gig ou process .then one obtains an even more flexible autocorrelation structure .the recent literature on modelling electricity spot prices has advocated the use of linear models , that is , carma models , as described in detail in section [ sectunifying ] .since carma models are special cases of our general modelling framework , we briefly demonstrate their empirical performance as well .it is well known , see , for example , brockwell , davis and yang , that a discretely sampled process ( for ) has a weak arma( , ) representation .an automatic model selection using the akaike information criterion within the class of ( discrete - time arima ) models suggests that an arma(2,1 ) model is the best choice for our data .we take that result as an indication that a process ( which has a weak arma(2,1 ) representation ) might be a good choice .however , it should be noted that the relation between model selection in discrete and continuous time still needs to be explored in detail . we have estimated the parameters of the kernel function which corresponds to a process using quasi - maximum - likelihood estimation based on the weak arma(2,1 ) representation .diagnostic plots for the estimated model are provided in figure [ gcarma ] .first , we compare the empirical and the estimated autocorrelation function , see figure [ gcarma](a ) .recall that the autocorrelation of is given by ( [ cor ] ) and it simplifies to if either the driving lvy process has zero mean or if the stochastic volatility process has zero autocorrelation .after deseasonalising ( which also includes detrending ) the data , we have obtained data which have approximately zero mean . model . ]the empirical and the estimated autocorrelation function implied by a kernel function match very well for the first 12 lags .higher lags were however slightly better fitted by the gamma kernel used in the previous subsection .figure [ gcarma](b ) depicts the corresponding residuals from the weak arma(2,1 ) representation and figures [ gcarma](c ) and [ gcarma](d ) show the autocorrelation functions of the corresponding residuals and squared residuals .overall , we see that the fit provided by the kernel function is acceptable .note that in addition to estimating the parameters of the function coming from a carma process one can also recover the driving lvy process of a carma process based on recent findings by brockwell , davis and yang .this will make it possible to also address the question of whether stochastic volatility is needed to model electricity spot prices or not .see veraart and veraart for empirical work along those lines in the context of electricity spot prices , whose results suggest that stochastic volatility is indeed important for modelling electricity spot prices .this paper has focused on _ volatility modulated lvy - driven volterra _ ( ) processes as the building block for modelling energy spot prices . 
in particular , we have introduced the class of _ lvy semistationary _ ( ) processes as an important subclass of processes , which reflect the stylised facts of empirical energy spot prices well .this modelling framework is built on four principles .first , deseasonalised spot prices can be modelled directly in stationarity to reflect the empirical fact that spot prices are equilibrium prices determined by supply and demand and , hence , tend to mean - revert ( in a weak sense ) to a long - term mean .second , stochastic volatility is regarded as a key factor for modelling ( energy ) spot prices .third , our new modelling framework allows for the possibility of jumps and extreme spikes .fourth , we have seen that and , in particular , processes feature great flexibility in terms of modelling the autocorrelation function and the samuelson effect .we have demonstrated that processes are highly analytically tractable ; we have derived explicit formulae for the energy forward prices based on our new spot price models , and we have shown how the kernel function determines the samuelson effect in our model .in addition , we have discussed option pricing based on transform - based methods .an exploratory data analysis on electricity spot prices shows the potential our new approach has and more detailed empirical work is left for future research .also , we plan to address the question of model estimation and inference . it will be important to study efficient estimation schemes for fully parametric specifications of - and , in particular , -based models .proof of proposition [ thmsm ] in order to prove the semimartingale conditions suppose for the moment that is a semimartingale , so that the stochastic differential of exists .then , calculating formally , we find \\[-8pt ] \nonumber & & { } + \int_{-\infty}^tq'(t - s)a_s \,\mathrm{d}s \,\mathrm{d}t,\end{aligned}\ ] ] which indicates that can be represented , for , as clearly , under the conditions formulated in proposition [ thmsm ] , the above integrals are well defined , and , defined by ( [ smrep ] ) , is a semimartingale , and exists and satisfies equation ( [ dy ] ) .a direct rewrite now shows that ( [ dy ] ) agrees with the defining equation ( [ lss ] ) of , and we can then deduce that is a semimartingale .proof of proposition [ thmqv ] the result follows directly from the representation ( [ smrep ] ) and from properties of the quadratic variation process , see , for example , protter .proof of proposition [ propforward - generalambit ] first , write and observe that the first integral on the right - hand side is -measurable. the result follows by using double conditioning , first with respect to the -algebra generated by the paths of and , and next with respect to .proof of proposition [ propforward - bmambit ] by the girsanov change of measure , we have where we set for . by following the argumentation in the proof of proposition [ propforward - generalambit ] ,we are led to calculate the expectation .\ ] ] but , by the stochastic fubini theorem , see , for example , barndorff - nielsen and basse - oconnor , using the adaptedness to of the first integral and the independence from of the second , we find the desired result .proof of proposition [ profforwardaffine ] if it holds that similarly , if , and affinity holds in both the volatility and the spot price .opposite , to have affinity in we must have that for some function , which means that the ratio is independent of . 
is differentiable in as long as is .furthermore , by definition .thus , by first differentiating with respect to and next letting , it holds that where we use the notation and for the corresponding partial derivatives with respect to the first argument .hence , we must have that and the separation property holds . likewise , to have affinity in the volatility , we must have that must be independent of .denote the ratio by , and differentiate with respect to to obtain hence , for and .differentiating with respect to , and next letting gives whence , and the separation property holds for .the proposition is proved .proof of proposition [ propriskneutraldynamicsf ] let . from proposition [ propforward - bmambit ], we have that for a deterministic function given by note that the process is a ( local ) -martingale for . moreover , from the stochastic fubini theorem it holds that where we note that hence , the result is then a direct consequence of the it formula for semimartingales , see , for example , protter .proof of proposition [ optionprice ] from proposition [ propforward - bmambit ] , we know that we can write the forward price as let now , and suppose that . recall that , with is the fourier transform of defined by .suppose that .hence , we find next , by commuting integration and expectation using dominated convergence and -adaptedness , we obtain \,\mathrm{d}y , \end{aligned}\ ] ] which holds by the stochastic fubini theorem . using the independent increment property of and double conditioning , we reach \\ & = & { \mathbb{e}}_{\eta } \biggl[\exp \biggl(\mathrm{i}y \int_{t}^{\tau } \frac12 h_t(s , s)\,\mathrm{d}u_s-\frac12 y^2 \int_t^{\tau}g^2(t , s ) \omega_{s}^2 \,\mathrm{d}s - \mathrm{i}y \int_t^{\tau } \frac12 g^2(t , s)\omega_s^2\,\mathrm{d}s \biggr ) \big| \mathcal{f}_t \biggr ] \\ & = & { \mathbb{e}}_{\eta } \biggl[\exp \biggl(\mathrm{i}y \int_{t}^{\tau } \frac12 h_t(s , s)\,\mathrm{d}u_s+a \int _t^{\tau}g^2(t , s)\int _ { -\infty}^s i(s , v)\,\mathrm{d}u_v \ , \mathrm{d}s \biggr )\big| \mathcal{f}_t \biggr],\end{aligned}\ ] ] where we define . 
using the stochastic fubini theorem again ,we get \\ & = & { \mathbb{e}}_{\eta}\biggl[\exp\biggl(\int_{t}^{\tau } \mathrm{i}y\frac12 h_t(s , s)\,\mathrm{d}u_s \\ & & \hspace*{36pt}{}+a \int_{-\infty}^{t}\int_{v}^{\tau } g^2(t , s ) i(s , v ) \,\mathrm{d}s\,\mathrm{d}u_v + a \int _ { t}^{\tau}\int_{v}^{\tau } g^2(t , s ) i(s , v ) \,\mathrm{d}s \,\mathrm{d}u_v\biggr ) \big| \mathcal{f}_t\biggr ] \\ &= & \exp \biggl ( a \int_{-\infty}^{t}\int _ { v}^{\tau } g^2(t , s ) i(s , v ) \ , \mathrm{d}s \,\mathrm{d}u_v \biggr ) \\ & & { } \times { \mathbb{e}}_{\eta } \biggl[\exp \biggl(\mathrm{i}y \int_{t}^{\tau } \frac12 h_t(s , s)\,\mathrm{d}u_s + a \int _{ t}^{\tau}\int_{v}^{\tau } g^2(t , s ) i(s , v ) \,\mathrm{d}s\,\mathrm{d}u_v \biggr)\big | \mathcal{f}_t \biggr ] \\ & = & \exp \biggl ( a \int_{-\infty}^{t}\int _ { v}^{\tau } g^2(t , s ) i(s , v ) \ , \mathrm{d}s \,\mathrm{d}u_v \biggr ) \\ & & { } \times{\mathbb{e}}_{\eta } \biggl[\exp \biggl(\int_{t}^{\tau } \biggl\ { \mathrm{i}y \frac12 h_t(v , v ) + a \int_{v}^{\tau } g^2(t , s ) i(s , v ) \,\mathrm{d}s \biggr\ } \,\mathrm{d}u_v \biggr ) \big| \mathcal{f}_t \biggr].\end{aligned}\ ] ] altogether , we obtain the above expression can be further simplified by noting that then proof of proposition [ propforward - arithmetic ] observe that =(-\mathrm{i})\frac{\mathrm{d}}{\mathrm{d}x } { \mathbb{e}}_{\theta,\eta } \biggl[\exp \biggl ( \mathrm{i}x\int_{-\infty } ^tg(t , s)\omega_{s- } \,\mathrm{d}l_s \biggr ) \big|\mathcal{f}_t \biggr]_{x=0 } .\ ] ] we then proceed as in the proof of proposition [ propforward - generalambit ] , and finally we perform the differentiation and let .proof of proposition [ propcorgammakernel ] we have which is a probability density and hence .now we derive the explicit formula for the autocorrelation function . note that according to gradshteyn and ryzhik , formula 3.383.8 for and , where is the modified bessel function of the third kind .hence , now we apply gradshteyn and ryzhik , formula 8.335.1 , to obtain then since according to gradshteyn and ryzhik , formula 8.486.16 , the result follows .we would like to thank andreas basse - oconnor and jan pedersen for helpful discussions and constructive comments . also , we are a grateful to the valuable comments by two anonymous referees and by the editor .benth is grateful for the financial support from the project `` energy markets : modelling , optimization and simulation ( emmos ) '' funded by the norwegian research council under grant evita/205328 .financial support by the center for research in econometric analysis of time series , creates , funded by the danish national research foundation is gratefully acknowledged by a.e.d . veraart .
this paper introduces the class of _ volatility modulated lvy - driven volterra _ ( ) processes and their important subclass of _ lvy semistationary _ ( ) processes as a new framework for modelling energy spot prices . the main modelling idea consists of four principles : first , deseasonalised spot prices can be modelled directly in stationarity . second , stochastic volatility is regarded as a key factor for modelling energy spot prices . third , the model allows for the possibility of jumps and extreme spikes and , lastly , it features great flexibility in terms of modelling the autocorrelation structure and the samuelson effect . we provide a detailed analysis of the probabilistic properties of processes and show how they can capture many stylised facts of energy markets . further , we derive forward prices based on our new spot price models and discuss option pricing . an empirical example based on electricity spot prices from the european energy exchange confirms the practical relevance of our new modelling framework . ,
the most general problem setting of the wave turbulence theory can be regarded in the form of a nonlinear partial differential equation where and denote linear and nonlinear part of the equation correspondingly , the linear part has the standard wave solutions of the form },\ ] ] where the amplitude may depend on space variables but not on time , and a small parameter shows that only resonant wave interactions are taken into account. the dispersion function can be easily found by substitution of into the linear part of the initial pde , while and resonance conditions have the form for interacting waves with wave - vectors . for most physical applicationsit is enough to regard or , and the most common form of dispersion function is ( for instance , capillary , gravitational and surface water waves , planetary waves in the ocean , drift waves in tokamak plasma , etc . )+ the model of laminated wave turbulence describes two co - existing layers of turbulence - continuous and discrete - which are presented by real and integer solutions of sys.([open ] ) correspondingly .the continuous layer is described by classical statistical methods while for the discrete layer new algorithms have to be developed .it was shown in that an arbitrary integer lattice , each node of the lattice denoting a wave - vector , can be divided into some clusters ( classes ) and there are two types of solutions of sys.([open ] ) : those belonging to the same class and those belonging to different classes .mathematically , a class is described as a set of wave - vectors for which the values of the dispersion function have the same irrationality .for instance , if the dispersion function has the form , then a class is described as follows : where is a natural number and is a square - free integer .physically , it means that waves are interacting over the scales , that is , each two interacting waves generate a wave with a wavelength different from the wave lengths of the two initial waves .interactions between the waves of different classes do not generate new wavelengths but new phases . + in our preceding paper we presented a generic algorithm for computing all integer solutions of sys.([open ] ) within one class .four - wave interactions among 2-dimensional gravitational water waves were taken as the main example , in this case sys.([open ] ) takes form:[prosetdef2 ] ( m_1 ^ 2+n_1 ^ 2)^1/4 + ( m_2 ^ 2+n_2 ^ 2)^1/4=(m_3 ^ 2+n_3 ^ 2)^1/4+(m_4 ^ 2+n_4 ^ 2)^1/4 + m_1+m_2=m_3+m_4 + n_1+n_2=n_3+n_4 + and classes are defined as , where , called class index , are all natural numbers containing every prime factor in degree smaller and , called weight , all natural numbers .it can be proven that if all 4 wave - vectors constructing a solution of sys.([prosetdef2 ] ) * do not * belong to the same class , then the only possible situation is following : all the vectors belong to two different classes and the first equation of sys.([prosetdef2 ] ) can be rewritten then as[th2eq3 ] _ 1 + _2=_1+_2 with some and being class indexes . in the present paperwe deal with this two - class case .as in the previous paper , we are going to find all solutions of eq.([prosetdef2 ] ) in some finite domain , i.e. 
for some .the first case has been studied for , where classes have been encountered .the straightforward approach , not making use of classes , consumes , as for the first case , at least operations and is out of question ( see , sec.3.2.1 for discussion of this point ) .+ straightforward application of classes also does not bring much .the eq.([th2eq3 ] ) is now trivial - but classes are interlocked through linear conditions .even if for each pair of classes we could detect interlocking and find solutions , if any , in operations ( which is probably the case , though we did not prove it ) , the overall computational complexity is at least - i.e. not much less than . for implies pairwise class matches which is outside any reasonable computational complexity limits . + the trouble with this approach - as , for that matter , with virtually any algorithm consuming much more computation time than the volume of its input and output data implies - is , that we perform a lot of intermediary calculations , later discarded .we develop an algorithm performing every calculation with a given item of input data just once ( or a constant number of times ) .first of all we notice that eq.([th2eq3 ] ) can be rewritten as [ interl1 ] ( m_1l^2+n_1l^2)^1/4 = ( m_1r^2+n_1r^2)^1/4 = _ 1 + ( m_2l^2+n_2l^2)^1/4 = ( m_2r^2+n_2r^2)^1/4 = _ 2 + m_1l - m_1r = -m_2l + m_2r + n_1l - n_1r = -n_2l + n_2r + where are two different class indexes and - the corresponding weights . [[ definition . ] ] definition .+ + + + + + + + + + + for any two decompositions of a number into a sum of two squares ( see ( [ interl1 ] ) ) the value is called -_deficiency _ , is called -_deficiency _ and - _ deficiency point_. + we immediately see that for two interacting waves their deficiencies must be equal : . for a given weight , every two decompositions of into a sum of two squares yield , in general , four deficiency points with .consider unsigned decompositions .assuming the four points are and four ( symmetrical ) points in each of the other three quadrants of the plane .[ [ definition.-1 ] ] definition .+ + + + + + + + + + + the set of all deficiency points of a class for a given weight , , is called its _ -deficiency set_. the set of all deficiency points of a class , , is called its _ deficiency set_. + the objects defined aboveplay the main role in our algorithm , so we compute as an illustrative example the for the number 50 .the number has three decompositions into sum of two squares , namely , , and nonnegative deficiency points of decomposition pairs are they constitute a subset of the deficiency set , namely , the 12 points with . in each of three other quadrants of the plane therelie 36 more points of this set , symmetrical to the ones shown with respect to the coordinate axes .+ the crucial idea behind the algorithm of this paper is very simple and follows immediately from the exposition above : + * sys.([interl1 ] ) has a solution with vectors belonging to the two different classes if and only if their deficiency sets have a non - void intersection , , i.e. some elements belong to both classes . *calculation of relevant class indexes by a sieve - like procedure , admissible weights and decomposition of into sum of two squares have all been treated in full detail in .one new feature we introduced here is , that immediately after generating the array of class bases we purge away those which , whatever the admissible weight , do not have a decomposition into a a sum of two squares with both . 
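A small sketch of the ingredients just defined: the class decomposition m^2 + n^2 = gamma^4 * q (reading the class index as fourth-power-free, which is an assumption where the original inline condition was lost), the two-square representations of a number, and the first-quadrant deficiency points formed from pairs of distinct representations. The example reproduces the twelve points quoted for the number 50; the full algorithm then intersects such deficiency sets across classes.

```python
# Class decomposition m*m + n*n = gamma**4 * q with q free of fourth powers,
# two-square representations, and first-quadrant deficiency points of a number.
from itertools import combinations
from math import isqrt

def class_decomposition(m, n):
    """Return (gamma, q) with m*m + n*n == gamma**4 * q and q fourth-power-free."""
    q, gamma, p = m * m + n * n, 1, 2
    while p * p <= q:
        if q % p == 0:
            e = 0
            while q % p == 0:
                q //= p
                e += 1
            gamma *= p ** (e // 4)      # pull out complete fourth powers
            q *= p ** (e % 4)           # keep the fourth-power-free remainder
        p += 1
    return gamma, q

def two_square_decompositions(N):
    """Ordered representations N = a^2 + b^2 with a, b >= 1."""
    reps = []
    for a in range(1, isqrt(N) + 1):
        b = isqrt(N - a * a)
        if b >= 1 and a * a + b * b == N:
            reps.append((a, b))
    return reps

def deficiency_points(N):
    """First-quadrant deficiency points from pairs of distinct representations."""
    pts = set()
    for (a, b), (c, d) in combinations(two_square_decompositions(N), 2):
        pts.update({(abs(a - c), abs(b - d)), (abs(a - c), b + d),
                    (a + c, abs(b - d)), (a + c, b + d)})
    return pts

print(class_decomposition(1, 7))        # 50 = 1**4 * 50, so gamma = 1, q = 50
print(two_square_decompositions(50))    # [(1, 7), (5, 5), (7, 1)]
print(len(deficiency_points(50)))       # 12, as in the worked example
```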
for the problem considered in would be superfluous because virtually all these classes have been anyhow filtered away according to another criterium ( ) which does not apply here . in this waywe exclude 100562 classes from the 384145 which the sieving procedure returns . + evidently for any deficiency point inequalities hold . andif deficiency sets of two classes have a non - void intersection , they also have an intersection over points with non - negative .so we start with declaring a two - dimensional array of type byte which serves storing and processing deficiency sets of the classes .the array is initialized with all zeroes . in the first pass for every class in the main domain we generate its deficiency set .notice that after generating deficiency set of the class for each weight and uniting them we must check for doubles and eventually get rid of them .next , for every deficiency point of the class we increment the value of the corresponding element of the array by 1 , except elements with value 255 whose values are not changed . in the second passwe generate deficiency sets once more and for every point of the deficiency set of a class check the values of the corresponding point of .if all these values are equal to , no waves of the class participate in resonant interactions and the class is discarded from further considerations . + for the problem considered this pass excludes just a few ( ) classes , so the time gain is very modest .however , we include this step into the presentation for two reasons .first of all , it _ had _ to be done as no possibility of reducing the number of classes considered as much as possible and as soon as possible ( before the most time - consuming steps ) may be neglected .second , though giving not much gain for solution of the problem at hand , this elimination techniques may play a major role in further applications of our algorithm . in the third passwe generate a more detailed deficiency set for each class , i.e. for all classes not discarded in the previous pass : for every deficiency point we store .we do * not * discard duplicates as we did in the previous two passes .then we revisit the corresponding points of and to each point whose value is larger than link the structure described above . in the fourth pass we go through the array once more and store every point with value greater than one in an array .we also relink structures linked to deficiency points to corresponding points of the new array .the four passes above leave us with an array of points and to each of these points a list of structures is linked ( no less than two different ) . in general , a linked list is here most appropriate . every combination of two structures linked to the same point and having different yields a solution of sys.([prosetdef2 ] ) . from every solution found ,we obtain four solutions changing signs of in the general case , i.e. for nonzero .+ notice that theoretically we could skip pass 4 and extract solutions directly from the array . however , this is not reasonable for implementation reasons , and pass 4 is not very time - consuming . implementing the algorithm described above, we took a few language - specific shortcuts that will be briefly described here .+ passes 1 and 2 have been implemented one - to - one as described above .however , manipulating linked lists in vba involves considerable overhead and for the problem considered in this paper we do not need the complete functionality of linked lists , i.e. 
inserting into / deleting from intermediate positions of the list .our main data structure for pass 3 - 5 is a simple two - dimensional array arsolhalves( ) and in a single line of this array we store : * the class base ; * the coordinates of deficiency point ; * the coordinates of two wave vectors belonging to this deficiency point ; * the index in the array arsolhalves of the next line belonging to the same deficiency point . which is demonstrated in fig.[f : arrlista ] below . here , is the number of -vectors linked to all deficiency points to which vectors belonging to two or more classes belong ( for ) .we generate the deficiency set of each class and fill all members of this line of the array except the last one in the process , deficiency point by deficiency point .the last member is filled later and in the following way .+ for this pass we also declare an auxiliary array ardeficiencesprev(1 .. , 1 .. ) initialized with zeroes .having added a new line to arsolhalves , we look up the value ardeficiencesprev( ) . if it is zero ( this deficiency point being visited the first time ) we just assign this point the value of the index of the new line in the array arsolhalves .otherwise we first assign arsolhalves( , 7 ) the value of the current line s index in arsolhalves , then write this number to ardeficiencesprev( ) ( see fig.[f : arrlistb ] ) .+ a numerical example for this procedure is given in table 1 . in this way, the array index of the next `` list '' member is stored with the previous one , except evidently the last one , where the corresponding field stays zero . [ cols="^,^,^,^,^,^,^,^,^",options="header " , ] + table 1 .a few lines of the table containing solution halves for the deficiency point ( beginning and end of the sequence ) .+ consider computational complexity of these steps .for a single class index and weight , generating deficiency points in the first step consumes operations because every number has no more than decompositions into two squares which we combine pairwise to find deficiency points .decompositions themselves can be found in time .there are admissible weights to class index , so the overall complexity for a class can be estimated from above as . merging deficiency points into can be done in time for number of points x , i.e. no more than + taking a rough upper estimate for the number of classes , we obtain an estimate . incrementing the points of linear on the point number of the set and need not be considered for computational complexity separately .the same complexity estimate holds for the second pass .notice that , having enough memory , or using partial data loading / unloading similar to that used in , we could preserve deficiency sets calculated on the first pass and not recalculate them here .however , this would not significally improve the overall computational complexity of the algorithm .+ we can not give an _ a priori _ estimate for the number of classes discarded at the second pass , so we ignore it and hold the initial rough upper estimate for the number of classes in our further considerations . in the third pass , to every point ( no more than of them )we link the values for which this point has been struck .this , as well as linking to the points of is , clearly , linear on the number of points and does not raise the computational complexity .complexity of the fourth pass can be estimated as follows .suppose the worst case , i.e. no classes are discarded at step 2 and every deficiency point is a solution point , i.e. 
for every no less than two classes have deficiency points with the same .then we must make no more than entries into the new array .we must relink no more than the mean of structures per point , which gives an upper estimate of time for the pass .however , remember that the estimate for the deficiency point number has been made on the assumption that all generate distinct deficiency points . in simple words , for every point linked to structures we obtain less solution points . nowelementary consideration allow us to improve the estimate to time . we did not manage to obtain a reasonable estimate for the computational complexity of the fifth step .for the worst case of all structures grouped at a single point , the estimate is - but this is not realistic .if the number of solution points is and the number of linked deficiencies is bounded by some number , then we can make an estimate .this , however , is also not quite the case as our numerical simulations show ) .however , this last step deals with solution extraction and extracts them in linear time per solution ._ any _ algorithm solving the problem has to extract solutions , so we can be sure that our step 5 is optimal - even without any estimate of its computational complexity . summing up , we obtain the overall upper estimate of computational complexity reached at steps and plus the time needed for solution extraction .our algorithm has been implemented in the vba programming language ; computation time ( without disk output of solutions found ) on a low - end pc ( 800 mhz pentium iii , 512 mb ram ) is about 10 minutes .some overall numerical data is given in the two figures below .the number of solutions for the 2-class - case depending on the partial domain is shown in the fig.[f : solsqsir ] .both curves are almost ideal cubic lines .very probably they _ are _ cubic lines asymptotically - the question is presently under study .+ partial domains chosen in fig.[f : solsqsir ] are of two types : squares , just for simplicity of computations , and circles , more reasonable choice from physical point of view ( in each circle all the wave lengths are ) .the curves in the fig.[f : solsqsir ] are very close to each other in the domain though number of integer nodes in a corresponding square is and in a circle with radius there are only integer nodes .this indicates a very interesting physical phenomenon : most part of the solutions is constructed with the wave vectors parallel and close to either axe or axe .+ on the other hand , the number of solutions in rings ( corresponds to the wavelengths between and ) grows nearly perfectly linearly .of course the number of solutions in a circle is _ not _ equal to the sum of solutions in its rings : a solution lies in some ring if and only if all its four vectors lie in that same ring .that is , studying solutions in the rings only , one excludes automatically a lot of solutions containing vectors with substantially different wave lengths simultaneously , for example , with wave vectors from the rings and .this `` cut '' sets of solutions can be of use for interpreting of the results of laboratory experiments performed for investigation of waves with some given wave lengths ( or frequencies ) only .another important characteristic of the structure of the solution set is multiplicity of a vector which describes how many times a given vector is a part of some solution .the multiplicity histogram is shown in fig.[f : vecmul222 ] .on the axis the multiplicity of a vector is shown and on the axis the 
number of vectors with a given multiplicity .the histogram of multiplicities is presented in fig .[ f : vecmul222 ] , it has been cut off - multiplicities go very high , indeed the vector ( 1000,1000 ) takes part in 11075 solutions .+ similar histograms computed for different 1-class - cases show that most part of the vectors , for different types of waves , take part in one solution , e.g. they have multiplicity 1 .this means that triads or quartets are , so to say , the `` primary elements '' of a wave system and we can explain its most important energetic characteristics in terms of these primary elements .the number of vectors with larger multiplicities decreases exponentially when multiplicity is growing .the very interesting fact in the 2-class - case is the existence of some initial interval of small multiplicities , from 1 to 10 , with very small number of corresponding vectors .for instance , there exist only 7 vectors with multiplicity 2 . beginning with multiplicity 11 ,the histogram is similar to that in the 1-class - case . +this form of the histogram is quite unexpected and demonstrates once more the specifics of the 2-class - case compared to the 1-class - case . as one can see from the multiplicity diagram in fig .[ f : vecmul222 ] , the major part in 2-class - case is played by much larger groups of waves with the number of elements being of order 40 : each solution consists of 4 vectors , groups contain at least one vector with multiplicity 11 though some of them can take part in the same solution .this sort of primary elements can be a manifestation of a very interesting physical phenomenon which should be investigated later : triads and quartets as primary elements demonstrate periodic behavior and therefore the whole wave system can be regarded as a quasi - periodic one . on the other hand ,larger groups of waves may have chaotic behavior and , being primary elements , define quite different way of energy transfer through the whole wave spectrum .
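As an illustration of how the multiplicity statistics discussed above can be tabulated, the following sketch counts, for a list of solution quartets, how often each wave vector occurs and builds the histogram of multiplicities. The toy solution list is a placeholder; in the actual computation the quartets come from the deficiency-set intersection described earlier.

```python
# Count vector multiplicities over a list of solution quartets and build the
# histogram "number of vectors with a given multiplicity".  The quartets below
# are placeholders, not real solutions of the resonance conditions.
from collections import Counter

solutions = [
    ((1, 7), (7, 1), (5, 5), (3, 3)),
    ((1, 7), (2, 6), (4, 4), (-1, 9)),
    ((5, 5), (2, 6), (0, 8), (7, 3)),
]

multiplicity = Counter(v for quartet in solutions for v in quartet)
histogram = Counter(multiplicity.values())   # multiplicity -> number of vectors

print(multiplicity.most_common(3))
print(sorted(histogram.items()))             # here: [(1, 6), (2, 3)]
```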
the model of laminated wave turbulence puts forth a novel computational problem - construction of fast algorithms for finding exact solutions of diophantine equations in integers of order and more . the equations to be solved in integers are resonant conditions for nonlinearly interacting waves and their form is defined by the wave dispersion . it is established that for the most common dispersion as an arbitrary function of a wave - vector length two different generic algorithms are necessary : ( 1 ) one - class - case algorithm for waves interacting through scales , and ( 2 ) two - class - case algorithm for waves interacting through phases . in our previous paper we described the one - class - case generic algorithm and in our present paper we present the two - class - case generic algorithm . pacs numbers : 47.10.-g , 47.27.de , 47.27.t , 02.60.pn
one of the key questions in cosmology today relates to the still - unsolved problem of what is causing the observed cosmic acceleration .primarily indicated by hubble curves constructed from luminosity distance measurements of type ia supernovae , this ( possibly apparent ) acceleration is just one aspect of the struggle for a consistent picture of the universe ; a picture that would also require an explanation of the gap between the observed clustering matter content of ( e.g. , * ? ? ?* ) and the value of as indicated by cosmic microwave background measurements of spatial flatness , as well as a solution of the `` age problem / crisis '' for matter - only ( i.e. , decelerating ) cosmologies in which the universe appears to be younger than some of its oldest constituents ( e.g. , * ? ? ?* ) , along with explanations of other important issues .the standard approach to solving these problems is to introduce some form of `` dark energy '' the simplest case being the cosmological constant , which can fill the gap via , which possesses negative pressure in order to achieve cosmic acceleration ( e.g. , * ? ? ?* ) , and which ( for non- cases ) recruits some form of internal nonadiabatic pressure ( e.g. , * ? ? ?* ) in order to avoid clustering as matter does .thus the introduction of dark energy ( often using spatially - flat models ) has led to a broadly - consistent `` cosmic concordance '' an empirical outline which has seemed so far to succeed fairly well ( e.g. , * ? ? ?* ) at developing into a consistent cosmological picture .there are serious aesthetic problems with dark energy , however , as is well known ; the most obvious being the problematical introduction of a completely unknown substance as the dominant component of the universe . beyond that , a static ( i.e. , cosmological constant ) form of dark energy suffers from two different fine - tuning problems : one being the issue that is orders of magnitude smaller than what would be expected from the planck scale ; and the other being a coincidence problem " ( e.g. , * ? ? ?* ) , questioning why observers _ today _ happen to live so near the onset of -domination , given .moving to a dynamically - evolving dark energy ( dde ) , however , invites other problems , since the _ self - attractive _ nature of negative - pressure substances ( i.e. , ) means that a dde may cluster spatially , possibly ruining it as a `` smoothly - distributed '' cosmic ingredient .this could potentially be solved through the ad - hoc addition of some form of nonadiabatic support pressure for the dde , but this is a possibility which we have argued against elsewhere on thermodynamically - based cosmological grounds .besides dark energy , various other methods have been used to attempt to explain the observed acceleration , such as employing modified gravity ( e.g. , * ? ? ?* ) , or assuming the existence of an underdense void centered not too far from our cosmic location ( e.g. , * ? ? 
?but to avoid the substantial ( and perhaps needless ) complications which arise when assuming departures from general relativity , as well as the non - copernican ` specialness ' implied by a local void , we will instead use the feedback from cosmological structure formation itself as a _trigger for the onset of acceleration a trigger that automatically activates at just the right time for observers to see it , due to the fact that all such observers will have been created by that very same structure formation which generates the observed cosmic acceleration .this approach , known generally as `` backreaction '' , was used by this author in ( henceforth bbi ) to find several clustering models which managed to precisely reproduce the apparent acceleration seen in hubble curves of type ia supernova standard candles , while simultaneously driving a number of important cosmological parameters to within a close proximity of their concordance values including the age of the universe , the matter density required for spatial flatness , the present - day deceleration parameter , and the angular scale of the cosmic microwave background .the ability of our models to achieve these goals , despite the generally pessimistic view of backreaction typically held by researchers currently ( e.g. , * ? ? ?* ) , was due to our adoption of an explicitly causal variety of backreaction , which admits the possibility of substantial backreaction from newtonian - strength perturbations .the standard formalism used for computing backreaction effects , developed through the extensive work of buchert and collaborators ( e.g. , * ? ? ? * ; * ? ? ? * ) , is non - causal in the sense that it drops all ` gravitomagnetic ' ( i.e. , velocity - dependent ) effects , thus rendering it unable to account for metric perturbation information flowing ( at the speed of null rays ) from structures forming in one part of the universe , to observers in another .in similar fashion , typical studies of cosmic structure formation are also non - causal in that they use the poisson equation without time derivatives of the perturbation potential , thus computing metric perturbations from _ local _ matter inhomogeneities only , disregarding all gravitational information coming in from elsewhere in space .the result is a mistaken ( but widespread ) notion that the entire newtonian backreaction can be expressed mathematically as a total divergence , thus ultimately rendering it negligible .but by restoring causality with a `` causal updating '' integral that incorporates perturbations to an observer s metric coming from inhomogeneities all the way out to the edge of their observational horizon , we find ( bbi ) that the sum of such ` innumerable ' newtonian - strength perturbations which increase in number as within a spherical shell at distance from the observer , more than compensating for their weakening with distance adds up to a total backreaction effect that is not only non - negligible ( regardless of the smallness of for most matter flows ) , but is in fact a dominant cosmological effect that is fully capable of reproducing the observed cosmic acceleration in a fully ` concordant ' manner . 
despite these successes , a major problem with our model in bbi is its utter simplicity : it is clearly a toy model , with the results presented there serving primarily as ` proof - of - principle ' tests , rather than as precision cosmological predictions .though the simplifications of the model are many , one in particular is serious in its consequences , while fortunately being not too difficult to fix : specifically , this is the dropping of what we have termed `` recursive nonlinearities '' .unrelated to _ gravitational _ nonlinearities , or to the nonlinear regime of density perturbations in structure formation , recursive nonlinearities embody the fact that the integrated propagation time of a null ray carrying perturbation information to an observer from a distant virializing structure would itself be affected by all of the other perturbation information that has already come in to cross that ray s path from everywhere else , during all times prior to arrival . in other words ,causal updating is itself slowed by the metric perturbation information carried by causal updating , creating an operationally nonlinear problem .this issue was necessarily neglected in bbi , as that work was devoted to introducing our ` zeroth - order ' approach to causal backreaction .but here we fix this problem , incorporating recursive nonlinearities into a new , ` first - order ' version of our phenomenological model. we will find that this alteration significantly changes our results , causing a profound weakening of the backreaction effects generated by a given level of clustering , as well as significantly damping the long - term effects of information from old perturbations coming in from extreme distances . in order to retain causal backreaction as a viable model for generating the observed cosmic acceleration presuming here that this should indeed be done it will be necessary to re - interpret the meaning of our ( inherently empirical ) ` clumping evolution functions ' to now consider the effects of hierarchical clustering on a variety of cosmic scales . doing this, we will show that a successful alternative concordance can once again be achieved , with the right amount ( and temporal behavior ) of acceleration , and with good cosmological parameters .this paper will be organized as follows : in section [ secrecnonlinform ] , we will re - introduce our original causal backreaction formalism , and then describe the changes implemented in order to incorporate recursive nonlinearities into the model . 
in section [ secrecnonlinresults ] , we will explore the results of the new formalism , and discuss the implications of the model parameters that are now needed to achieve good data fits .furthermore , we will discuss how the damping effects due to recursive nonlinearities would alter the key factors that determine the ` ultimate ' fate of the universe , as was discussed for our original formalism in bbi , given an acceleration driven by causal backreaction rather than by some form of dark energy .finally , in section [ secsummconclude ] , we conclude with a summary of these ideas and results , highlighting the role of causal backreaction as a fundamental component of cosmological analysis and modeling .we recall here that the basic premise of the formalism developed in bbi is to phenomenologically represent the physical processes of structure formation complex even at the level of newtonian - strength gravitational perturbations in a simple and convenient way .the physics at work within most clustering masses should be as follows : collapsing overdensities stabilize themselves and halt their collapse by concentrating their local vorticity ( or equivalently , by creating a large local velocity dispersion ) ; this concentrated vorticity or velocity dispersion leads to real , extra volume expansion in accordance with the raychaudhuri equation , representable ( in the final state ) at great distances by the tail of a newtonian potential perturbation to the background friedman robertson - walker ( frw ) metric ; and this newtonian tail propagates causally outward into space by inducing inward mass flows towards the virialized object from farther and farther distances as time passes .the total perturbation at time for any location in space which will be independent of position , assuming similar structure formation rates everywhere will then be the combined effects of innumerable newtonian tails of this type , coming in towards the observer from the virializing masses ( in all directions ) which by that observation time have entered within the observer s cosmological `` clustering horizon '' .as is well known , the expansion evolution ( i.e. , the friedmann equation ) for some spherical volume can be derived using nonrelativistic newtonian equations , in fact , for a matter - dominated universe without reference to anything outside of that sphere .in contrast , the newtonian - level backreaction terms which we utilize here are due to perturbation information coming in from structures located predominantly outside of ( since one must go to cosmological distances for the effects to add up significantly ) . 
for any condensed structure ( at great distance ) which provides a gravitational pull upon the mass in ,its main perturbative effect is simply to impose an extra ( newtonian ) perturbation potential upon as an addition to its original cosmological metric .our phenomenological approach , therefore , is one in which we model the inhomogeneity - perturbed evolution of with a metric that contains the individually - newtonian contributions to the perturbation potential within ( `` '' ) from all clumped , virialized structures outside of that have been causally ` seen ' within by time , superposed _ on top of _ the background friedmann expansion of .we will reiterate the mathematical essentials of this formalism below , in section [ subsecoldformalism ] ; but first we must recount the various approximations and simplifications which have gone into our analysis , to consider their importance and the feasibility of eliminating them in order to develop a greater degree of physical realism in these causal backreaction models .first of all , though our backreaction - induced metric perturbations will indeed be time - dependent ( due to the causal flow of inhomogeneity information ) , they will be entirely spatially-_independent_. as noted above , we do not seek to achieve an observed acceleration through the mechanism of a local void ; but going even further , our model does not explicitly include any spatial variations whatsoever . rather , the system being modeled is what we term a `` smoothly - inhomogeneous '' universe , in which all perturbation information blends together evenly in a way that is essentially independent of cosmic position .now , this simplification is one made out of practical necessity , not physical realism .the smoothly - inhomogeneous approximation relies upon an assumption of randomly - distributed clustering which is certainly not true , as large clusters are not independent of each other , but preferentially clump near one another and are mutually correlated and this becomes ever less true during the ongoing cosmic evolution , as the universe grows more inhomogeneous with time .furthermore , this simplification relies upon the assumption that the region of space responsible for the dominant contributions to causal backreaction within volume will be large enough to contain a cosmologically - representative sample of both clusters and voids ; but as we will see below in section [ secrecnonlinresults ] , adding in recursive nonlinearities ( to correct another simplification , as described below ) greatly reduces the size of the cosmological region affecting from what it was in our original toy model , potentially calling this assumption into question .a proper accounting of causal backreaction in a realistically inhomogeneous universe would require the implementation of a fully spatially - detailed , three - dimensional cosmic structure simulation program perhaps along the lines of , for example but with newtonian - level backreaction effects from causal updating now added in .the development of such a 3d simulation model is far beyond the scope of this paper ( and beyond the efforts of any individual researcher ) , but would be a useful mission to be undertaken by the cosmological community at large .a second simplification is our use of newtonian potential terms i.e. , the long - distance approximation of the schwarzschild metric ( e.g. , * ? ? 
?* ) to represent the tail of each individual perturbation felt from far away , rather than using a long - distance approximation of the kerr metric for spinning masses , despite the crucial role of some form of vorticity in stabilizing most structures against singular collapse .in this case , however , the approximation is a good one .a relatively small amount of vorticity can suffice for the self - stabilization of a clumped mass , if it is applied perpetually ; and the specific angular momentum ( i.e. , ] , thus being entirely negligible at the huge distances relevant for causal backreaction .( and the leading - order _ off - diagonal _ kerr perturbation terms , though actually proportional to , will effectively cancel out due to angle - averaging in the smoothly - inhomogeneous approximation , as discussed in bbi . )thus it appears quite safe to ignore any kerr - specific perturbation effects for physically reasonable situations .third , our formalism neglects the purely observational effects of localized inhomogeneities , such as lensing along beam paths ( e.g. , * ? ? ?* ) for rays from standard candles , and similar perturbative effects upon the apparent luminosity distance relationship for rays passing through inhomogeneous regions , which in some models represents the primary ` backreaction ' effect resulting from structure formation ( e.g. , * ? ? ?even if such effects by themselves are too small to generate an observed cosmic acceleration , they will still alter the output parameters estimated while using any cosmological model ( including ours ) , and thus should be kept track of ; and in case our causal backreaction method also falls short of providing the full result of an apparent acceleration all by itself , it might successfully be combined with these other observational effects upon the light rays to produce an apparent acceleration once everything is added together ( this point to be discussed again in section [ subunityclumpweak ] ) .combining these purely observational effects with those from causal backreaction is therefore an important task , and likely quite a feasible one ; though not one addressed yet in this current paper .the next , most theoretically treacherous approximation is our neglect of nonlinear gravitational effects , a simplification made implicitly by our method of linearly adding together the individual metric perturbations contributed by different self - stabilized mass ` clumps ' in order to produce the total , summed , newtonian - strength perturbation potential .( note that this is not the full `` newtonian '' approximation usually employed , since while we do assume weak - gravity , we do not completely assume ` slow - motion ' in the sense of dropping all time derivatives of the perturbation potential as that would neglect the causal flow of perturbation information . 
)thus our formalism explicitly neglects the nonlinear , purely general - relativistic effects that most other researchers primarily focus upon when studying `` backreaction '' .this approximation becomes increasingly bad as the magnitude of the summed perturbation potential approaches unity ; but since ( as will be seen below ) this potential typically does not grow to values in excess of or so as for most of our best - fitting simulation runs , the approximation is probably good enough for our simulations to provide fairly accurate estimations of the cosmic evolution up to now , and of our measurable cosmological parameters .( and to the extent that it is not good enough , a significant contribution due to nonlinear gravitational terms would likely only help produce the desired acceleration even more easily . )thus it is probably not necessary for us to include higher - order gravitational terms in our formalism , in order to achieve a sufficiently reliable understanding of the currently - observable universe for our present purpose of pointing the way towards an alternative concordance ; a fortunate situation , since our model is fundamentally designed around a linearized - gravity approach , and it may be challenging to find any convenient way of modifying it to include nonlinear gravitational effects . on the other hand , given the ever - increasing strength of gravitational nonlinearities in the cosmos over time , a fully general - relativistic model of causal backreaction ( computed using a 3d simulation of realistically - distributed inhomogeneities ) would almost certainly be necessary for accurately predicting the long - term future evolution of the universe .lastly , there is the approximation regarding what we have referred to as `` recursive nonlinearities '' .as will be seen from the metric given below in section [ subsecoldformalism ] , one of the effects of causal backreaction is real extra volume creation .but since causal backreaction depends upon the propagation of inhomogeneity information through space , the extra volume produced by old information from perturbations will slow down the propagation of all future inhomogeneity information ( as well as carrying all perturbing masses farther away from all observation points ) , thus feeding back upon the causal backreaction process in such a way as to strongly dampen it . of all of the simplifications and approximationsdiscussed so far in this subsection , the neglect of these recursive nonlinearities most likely has the strongest impact upon the quantitative predictions emerging from our causal backreaction models .fortunately , however , fixing this problem by adding these recursive nonlinearities into our formalism is one of the simpler improvements in physical realism for us to make ; and hence , this paper focuses upon achieving this fix , and then calculating and interpreting the results produced by this ` second - generation ' causal backreaction formalism .here we recall the technical details of our original formalism developed in bbi , to set the stage for its further development to follow . to obtain the newtonian approximation of a single ` clumped ' ( i.e. 
, virialized , self - stabilized ) object of mass , embedded at the origin ( ) in an expanding , spatially - flat , matter - dominated ( md ) universe , one may linearize the mcvittie solution , as can be seen from the perturbed frw expression given in .the resulting newtonianly - perturbed frw cosmology is given by the metric : d t^{2 } + [ a_{\mathrm{md}}(t)]^{2 } [ 1 - ( 2/c^{2 } ) \phi ( t ) ] d r^{2 } + [ a_{\mathrm{md}}(t)]^{2 } r^{2 } [ d { \theta}^{2 } + \sin^{2}{\theta } d { \phi}^{2 } ] ~ , \label{newtpertsingleclump}\ ] ] where \} ] , and .( note that this factor of is not ` fundamental ' , but is merely the result of our approximating the linearized sum of many individual ` newtonian ' solutions , which effectively spreads out the total spatial perturbation among all three spatial metric terms , rather than confining it solely to , as is usual .thus the general relativistic expectation of equal temporal and spatial potentials i.e. , is not really violated here , and no actual new physics or modified gravity is implied by it . ) we regard the term multiplying ^{2 } \vert d \vec{r } \vert ^{2} ] .defining , and with , , we thus have : ] . the total effectis then computed by integrating all shells from out to ; but in order to compute the metric perturbation from each shell quantitatively , it is first necessary to relate this clumping function to an actual physical density of material . as discussed above , for now consider the function as representing the dimensionless ratio of matter which can appropriately be defined as ` clumped ' at a given time , expressed as a fraction of the total physical density . assuming a flat md cosmology as the initially unperturbed state , the total physical density at all timeswill merely be an evolved version of the unperturbed frw critical closure density from early ( pre - perturbation ) times .recalling equation [ eqnangavgbhmatdomweakflat ] , we have the perturbation term = \ { ( 2 g m / c^{2 } ) / [ r^{\prime } ~ a_{\mathrm{md}}(t ) ] \} ] .the value of to use here is given by the clumped matter density at coordinate distance times the infinitesimal volume element of the shell .the clumped matter density at time , as implied above ( and still as considered with respect to the unperturbed background metric ) , will equal ] .collecting these terms ( and letting ) , the integrand will thus be equal to : _ { r^{\prime } = \alpha \rightarrow ( \alpha + d \alpha ) } & = & \ { ( 2 g / c^{2 } ) ~ d m ~ / ~ [ a_{\mathrm{md}}(t ) ~ \alpha ] \ } \\ & = & \ { ( 2 g / c^{2 } ) ~ [ a_{\mathrm{md}}(t ) ~ \alpha]^{-1 } ~ [ \psi ( t^{\prime } ) ~ \rho _ { \mathrm{crit}}(t ) ] ~ [ 4 \pi r_{\mathrm{phys}}^{2 } d r_{\mathrm{phys } } ] \ } \\ & = & \ { ( 8 \pi g / c^{2 } ) ~ \psi ( t^{\prime } ) ~ [ a_{\mathrm{md}}(t ) ~ \alpha]^{-1 } ~ [ \rho _ { \mathrm{crit}}(t ) ~ [ a_{\mathrm{md}}(t)]^{3 } ] ~ { [ \alpha^{2 } d \alpha } ] \ } \\ & = & \ { ( 8 \pi g / c^{2 } ) ~ \psi ( t^{\prime } ) ~ a_{\mathrm{md}}(t)^{-1 } ~ [ \rho _ { \mathrm{crit}}(t_{0 } ) ~ a_{0}^{3 } ] ~ { [ \alpha d \alpha } ] \ } \\ & = & \ { ( 8 \pi g / c^{2 } ) ~ \psi ( t^{\prime } ) ~ [ ( t_{0 } / t)^{2/3 } ~ ( 3 c t_{0})^{-1 } ] ~ \ { [ 3 h_{0}^{2 } / ( 8 \pi g ) ] ~ ( 3 c t_{0})^{3 } \ } ~ { [ \alpha d \alpha } ] \ } \\ & = & \{12 ~ \psi ( t^{\prime } ) ~ [ ( t_{0 } / t)^{2/3 } ] ~ { [ \alpha d \alpha } ] \ } ~ , \end{aligned}\ ] ] [ eqniintegrandprelim ] where for simplification we have used and the fact that ] in equation [ newtpertsingleclump ] .the only relativistic " piece of propagating 
information which is causally delayed is the state of clumping , ] , as follows : ~ r^\mathrm{frw}_{\mathrm{sn}}(t ) ~ [ 1 + z^\mathrm{obs}(t ) ] \\ & = & \frac{1 + ( i_{0}/3)}{\sqrt{1 + [ i(t ) / 3 ] } } ~ \frac{c ~ t_{0}^{4/3}}{t^{2/3 } } ~ \int^{t_{0}}_{t } \ { ( t^{\prime})^{-2/3 } ~ \sqrt{\frac{1 - i(t^{\prime})}{1 + [ i(t^{\prime } ) / 3 ] } } \ } ~ d t^{\prime } \\ & = & \frac{1 + ( i_{0}/3)}{\sqrt{1 + [ i(t_{r } ) / 3 ] } } ~ \frac{c ~ t_{0}}{t_{r}^{2/3 } } ~ \int^{1}_{t_{r } } \ { ( t^{\prime}_{r})^{-2/3 } ~ \sqrt{\frac{1 - i(t^{\prime}_{r})}{1 + [ i(t^{\prime}_{r } ) / 3 ] } } \ } ~ d t^{\prime}_{r } ~ , \end{aligned}\ ] ] [ eqndlumdefn ] where , and , are dimensionless time ratios ( e.g. , ) , with no change to the essential form of ( i.e. , ) . all cosmologically - relevant curves , fits , and parameters can now be calculated from the metric ( equation [ eqnfinalbhpertmetric ] ) , and from this luminosity distance function ( equation [ eqndlumdefn ] ) and its derivatives , as investigated in bbi .important modeled quantities include : the residual distance modulus function ( with respect to an empty coasting universe ) , , and the quality and probability of its fit , and , to the type ia supernova ( snia ) data ; the observed ( versus unperturbed ) hubble constant , ( versus ) ; the physically - measurable age of the universe , ; the `` true '' value of the total cosmic matter density , , which determines the primordial spatial curvature ( i.e. , for flatness in the pre - perturbed epoch ) , which our model must relate to some measured value of the density , , for normalization ( here and in bbi we use ) ; the observable values ( defined for ) of the deceleration parameter , the effective ( total ) cosmic equation of state , and the jerk parameter ; and finally , for comparison with complementary data sets from much earlier cosmic epochs , we compute the acoustic scale of the cosmic microwave background ( cmb ) acoustic peaks , . as a technical note , we recall that all evolving quantities in our numerical simulation program are calculated as discrete arrays in ( and in ) , with a tested pixelization that is fine enough for great accuracy in all parameters .given that the discrete version of must be differentiated ( and evaluated specifically for ) to obtain cosmological parameters , we do so by using the definition of the derivative for each pixel , as follows : _ { \ { i \ } } = \frac{d}{d ~ z^\mathrm{obs } } [ d^{(n-1 ) \prime}_{\mathrm{l } , \mathrm{pert } } ] _ { \ { i \ } } \equiv \frac{d^{(n-1 ) \prime}_{\mathrm{l } , \mathrm{pert}}(t _ { \ { i+1 \ } } ) - d^{(n-1 ) \prime}_{\mathrm{l } , \mathrm{pert}}(t _ { \ { i \ } } ) } { z^\mathrm{obs}(t _ { \ { i+1 \ } } ) - z^\mathrm{obs}(t _ { \ { i \ } } ) } ~ , \label{eqnpixeldiffdef}\ ] ] and then obtain the limit from the last , latest - in - time pixel : _ { ( z \rightarrow 0 , ~ t \rightarrow t_{0 } ) } \equiv [ d^{n \prime}_{\mathrm{l } , \mathrm{pert } } ] _ { \ { n_\mathrm{pix } \ } } ~ . \label{eqnfirsteval}\ ] ] the cosmological results which we obtain from this procedure appear to be robust , with only minor difficulties , as will be discussed below ; and while the greatest discretization errors occur for , which requires three differentiations of , virtually all of the results which we quote here for should be well within in terms of numerical accuracy . 
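to make the differentiation recipe of equations [ eqnpixeldiffdef ] and [ eqnfirsteval ] concrete, the short numpy sketch below shows how the late-time observables could be read off a discretized luminosity distance array. it is an illustration only, not the authors' simulation code: the mapping from the z -> 0 derivatives of the luminosity distance to the hubble constant, deceleration parameter and jerk parameter uses the standard flat-space taylor coefficients, the total effective equation of state is taken as (2 q_0 - 1)/3, and the function and array names are invented here for the example.

```python
import numpy as np

def observables_from_dL(z_obs, d_L, c_km_s=2.998e5):
    """Estimate H0_obs, q0, w_eff and j0 from a discretized d_L(z_obs) curve.

    z_obs, d_L : 1-d arrays over the simulation pixels, ordered in cosmic time
    so that the final pixel corresponds to z -> 0 (t -> t0).  Derivatives are
    the simple pixel-by-pixel forward differences of eq. [eqnpixeldiffdef],
    each evaluated at the last pixel as in eq. [eqnfirsteval].  If d_L is in
    Mpc, H0_obs comes out in km/s/Mpc.
    """
    def ddz(f, z):
        return np.diff(f) / np.diff(z)

    d1 = ddz(d_L, z_obs)        # d d_L / dz          (length N-1)
    d2 = ddz(d1, z_obs[:-1])    # d^2 d_L / dz^2      (length N-2)
    d3 = ddz(d2, z_obs[:-2])    # d^3 d_L / dz^3      (length N-3)

    dL1, dL2, dL3 = d1[-1], d2[-1], d3[-1]   # z -> 0 limits

    # standard flat-space Taylor expansion of d_L about z = 0 (an assumption
    # about conventions, which may differ in detail from the paper's own):
    #   d_L = (c/H0) [ z + (1 - q0) z^2/2 - (1 - q0 - 3 q0^2 + j0) z^3/6 + ... ]
    H0_obs = c_km_s / dL1
    q0 = 1.0 - dL2 / dL1
    j0 = -dL3 / dL1 - 1.0 + q0 + 3.0 * q0**2
    w_eff = (2.0 * q0 - 1.0) / 3.0   # total effective equation of state
    return H0_obs, q0, w_eff, j0
```

as a sanity check, feeding this routine a flat lambda-cdm curve with matter density 0.3 returns q0 near -0.55, w_eff near -0.7 and j0 near 1, as expected for that case.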
in order to conduct specific calculations with our formalism , we must design a set of physically reasonable clumping evolution functions , , to serve as convenient proxies for the combined effects of the linear density evolution of early - stage clustering , the nonlinear regime and virialization for very dense clumps , and the initial development ( in many cases triggered by collisions ) of entirely new clumps with substantial mass .guided by general cosmological considerations and simplicity , in bbi we chose three different classes of time - dependent behaviors to examine : , which is proportional to the evolving contrast of a density variation , , in the linear regime ; , a generally sensible choice depending simply upon the amount of time available for clumping ; and , an ` accelerating ' clumping model which we initially chose as a test case to see whether that would possibly help in creating an observed acceleration .this last class of models will take on more significance in this paper , however , since our results below with recursive nonlinearities will force us to regard as not merely a simple percentage of clumped versus unclumped matter , but as a quantity reflecting the details of virialization brought to completion ( i.e. , extremely nonlinear density perturbations ) on multiple cosmic scales ; and appropriately , the density contrast evolution for inhomogeneities in the nonlinear regime goes as with that is , with for a matter - dominated universe .quantitatively , we defined our three different classes of clumping evolution models as follows : [ eqnclumpmodels ] here ( or equivalently , ] .now , the differential mass element in the shell , , does not actually change ( and therefore requires no correction factor ) , since the dilution of its mass density is precisely offset by its expanded volume .but what _ does _ change is the effective distance of those perturbing clumps from the observation point .recalling that the strength of a newtonian perturbation embedded in an expanding universe depends simply upon its instantaneous physical distance , we see that the denominator ] , which in the end just puts a factor of into the denominator of the integral for .therefore , the modified formula ( replacing equation [ eqnitotintegration ] ) for calculating the metric perturbation function with the incorporation of recursive nonlinearities ( `` rnl '' ) , is given as : ~ [ ( t_{0 } / t)^{2/3 } ] \ } ~ \alpha}{\sqrt{1 + ( i/3 ) } } ~ d \alpha ~ , \label{eqnitotrnlintegration1}\ ] ] where the term `` '' inside the integral on the right - hand side represents the actual function , , itself .but since the denominator term is in fact independent of the integration variable , we can remove it from the integral and bring it to the left - hand side of the equation. the metric perturbation function can therefore be ( numerically ) solved for any given as the solution of : } = \int^{\alpha _ { \mathrm{max } } ( t , t_\mathrm{init } , i)}_{0 } \{12 ~ \psi [ t_{\mathrm{ret } } ( t , \alpha , i ) ] ~ [ ( t_{0 } / t)^{2/3 } ] \ } ~ \alpha ~ d \alpha ~ .\label{eqnitotrnlintegration2}\ ] ] to evaluate the expressions and in this above integral , we utilize ( analogously with the discussion preceding equations [ eqntret],[eqnalphamax ] ) : } } \frac{d t^{\prime}}{(t^{\prime})^{2/3 } } ~ , \label{eqnalpharnlintegration2}\ ] ] with and .this expression for could in theory be inverted to produce ; and in addition , we have . 
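as an aid to the recursive construction just described, here is a minimal forward-marching solver for the implicit relation of equation [ eqnitotrnlintegration2 ] (read here as I * sqrt(1 + I/3) = rhs), with the information `reach' slowed by the already-computed perturbation as in equation [ eqnalpharnlintegration2 ]. everything in it (the toy clumping function, the unit choices c = t_0 = 1, the dropped normalization prefactors, and the use of scipy's root bracketing) is an illustrative assumption rather than a description of the actual simulation program.

```python
import numpy as np
from scipy.optimize import brentq

def psi(t, psi0=3.3, t_init=1e-3, t0=1.0, p=2.0):
    # toy clumping-evolution function: zero before t_init, growing as (elapsed time)^p;
    # psi0 and p are illustrative values only
    x = np.clip((t - t_init) / (t0 - t_init), 0.0, None)
    return psi0 * x ** p

def solve_I(times, t_init=1e-3, t0=1.0):
    """March forward over time pixels; each I(t_i) uses only earlier pixels."""
    I = np.zeros_like(times)
    for i in range(1, len(times)):
        t = times[i]
        tp = times[: i + 1]
        # causally slowed propagation speed of inhomogeneity information,
        # schematically c / (a_md * sqrt(1 + I/3)) with a_md ~ t^(2/3), c = 1;
        # the current pixel enters with its not-yet-updated I (explicit marching)
        speed = 1.0 / (tp ** (2.0 / 3.0) * np.sqrt(1.0 + I[: i + 1] / 3.0))
        # comoving reach alpha(t') = integral_{t'}^{t} speed dt''
        cum = np.concatenate(([0.0],
              np.cumsum(0.5 * (speed[1:] + speed[:-1]) * np.diff(tp))))
        alpha = cum[-1] - cum
        # right-hand side, parametrized by emission time t' (d alpha = speed dt')
        integrand = 12.0 * psi(tp, t_init=t_init, t0=t0) \
                    * (t0 / t) ** (2.0 / 3.0) * alpha * speed
        rhs = np.trapz(integrand, tp)
        if rhs > 0.0:
            # solve x * sqrt(1 + x/3) = rhs for x = I(t_i)
            I[i] = brentq(lambda x: x * np.sqrt(1.0 + x / 3.0) - rhs, 0.0, rhs)
    return I

# example usage: a coarse pixelization from just before t_init up to today (t0 = 1)
times = np.linspace(5e-4, 1.0, 2000)
print("I(t0) ~", solve_I(times)[-1])
```

the forward march mirrors the statement above that later pixels are computed purely from earlier ones; in this toy setting the damping of the reach, and hence of the late-time growth of the perturbation, is already visible as the computed values approach order unity.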
in practice , we perform these recursively - defined ` integrals ' by utilizing discrete arrays in cosmic coordinate time with some large number of pixels covering the range from to , for which the later pixels are calculated in terms of the earlier pixels . beginning with the first pixel at , we thus have the recipe ( with pixels , and ] ) , representing the time _ before _the present epoch at which point the growth of clustering became saturated to a final , fixed value , .this new clumping evolution function then fixes to remain equal to all the way from to .recalling from section [ subsuperunityclumpstrong ] that the and models only obtain their best fits and their most ` concordant ' cosmological parameters for very late values ( i.e. , ) , it then makes little sense to outfit those models with an even later ; we therefore make this modification only for the model , which was optimized for concordance with much earlier values . the resulting ,retrofitted function for early saturation is thus defined as : this is obviously a very elementary way of accounting for how astrophysical feedback acts to slow down the growth of clustering ( with the resultant lessening of its causal backreaction ) ; but even such an oversimplified model will provide us with valuable cosmological lessons . as a first run to explore the implications of early saturation ,we consider again our earlier , cosmologically successful run with ( functionally identical to with ) , which had used and its corresponding ; these results are now compared to those from a new run still using and ( the now non - optimal ) , but with now imposed a conservatively early value suggested by the onset of substantial hot baryon injection into the igm .the resulting residual hubble diagrams for the two cases are compared in figure [ figearlyvslatesat ] , where the powerful impact of moving from to upon the time - dependence of the observed backreaction can clearly be seen .not unexpected is the strongly - amplified apparent acceleration effect at early times ( ) , due to the greatly compressed timescale over which clustering grows from to .but what may be surprising is how dramatically the effects of these early - developing inhomogeneities become virtually irrelevant at late times , with the effective ` acceleration ' almost completely fading out by , and the residual hubble diagram relapsing to a nearly perfect scdm ( i.e. 
, decelerating ) cosmology not long after .this is in sharp contrast to what would be expected from the results explored in bbi , where early - developing clumping provided the strongest ongoing causal backreaction effects due to the ever - growing `` inhomogeneity horizons '' of observers .the fact that this is no longer the case is clearly due to rnl , which damps such ongoing backreaction effects by slowing down the causal propagation of inhomogeneity information ( as well as by diluting the perturbation effects of already - seen inhomogeneities via the extra volumetric expansion ) , thus greatly restricting the cosmological horizon out from which developing perturbations can affect the observer .as can be inferred from equations [ eqnroftintegration],[eqnalpharnlintegration2 ] above , for inhomogeneity information propagating towards an observer at null - ray speed ; and so models with very strong early clumping particularly those with a very early drive up the value of so close to unity , so early on , that the propagation of inhomogeneity information is practically frozen to a halt at later times .the cessation of new inhomogeneity information reaching the observer leads to a corresponding freeze in the continued evolution of ; and as is obvious from our metric given by equation [ eqnfinalbhpertmetric ] , a constant value can simply be transformed away via redefinitions of and , thus making a cosmology with static look exactly like a decelerating , matter - dominated scdm universe .( interestingly , a considerably stronger late - time acceleration effect can be generated by using a _ smaller _ value of for these very early runs , which lessens this ` freezing ' effect .the best possible snia fit for this case is therefore achieved with the relatively low value of ; though even that remains a very poor fit in absolute terms . )the importance of such behavior is that it limits the degree to which causal backreaction with rnl can be ` self - powered ' via the ever - expanding reach of observational horizons , in defiance of a static final value ; a result which increases its dependence upon being ` driven ' by a continually - growing function . as figure [ figearlyvslatesat ]has shown , an early saturation of clumping can lead to a virtually complete shutdown of apparent acceleration not much later .( though noting again the caveat that nonlinear gravitational effects are not accounted for in any of our models and that they may indeed be important here , since all of the very - early saturation models with which we tested do have excessively large values , all significantly exceeding , with some nearly approaching unity . )these considerations will have important implications for the eventual fate of the universe , as will be discussed below in section [ subnewfuture ] ; but first , we consider their ramifications regarding the _ past _ cosmic ` acceleration ' that has already been observed via the supernova data sets . since a value of effectively terminates the impact of causal backreaction far too prematurely for it to account for the apparent acceleration seen in the snia data from around to now , we consider runs with values that are similar to or smaller ( i.e. , later ) than this epoch of acceleration . specifically , we choose values of for study i.e. , representing times by which the mass fraction of cosmic baryons possessing temperatures of has increased to reach ( * ? ? 
?* figure 1b ) .these values of are also well - placed to further illuminate the results of , which demonstrated the significant alteration in clustering behavior ( due either to dark energy or to causal backreaction ) between one cluster sample with ( and particularly its most distant subsample with ) , versus their more nearby cluster sample at .re - optimizing for each of these chosen values ( with fixed for all of these runs ) , the resulting ` best runs ' with early saturation are those with the parameters : , , and ; we compare this to the best - fitting run , which had .residual hubble curves for these four runs are plotted in figure [ figvaryingearlysatzval ] , where we see that while these early - saturation runs are no longer effectively indistinguishable from best - fit ( as was the run ) , they are nevertheless very close to it particularly within the redshift range containing most of the snia thus providing quite good fits to these data . the complete fit quality results and cosmological output parameters for these runs are presented in table [ tablernlearlysatruns ] .first , we note that there are limits to the largest value of which may practically be used , since we see that as gets as high as , the fit probability gets progressively worse ( i.e. , begins to grow unacceptably large ) , the cosmological parameters become increasingly ` discordant ' ( particularly and ) , and there are even concerns that the formalism itself begins to break down due to gravitationally nonlinear effects , given the disturbingly large value of . if we restrict ourselves to , however , then at a cost of only a small increase in , the situation gets considerably better with the use of nonzero in several crucial ways . considering the case in particular , moving away from improved the match to the cmb data and the verification of spatial flatness ( better and values , respectively ) , while remaining essentially as good in terms of the other cosmological parameters , with just a tiny degradation in the fit probability ( by ) .but most importantly , we see that all of this been achieved with a _ much _ lower value table [ tablernlearlysatruns ] showing how decreases with increasing , in general dropping all the way from for , to for .this is now well within the range of ` reasonable ' values for hierarchical clustering , as was specified earlier in section [ subsuperunityclumpstrong ] .( in addition , we have also conducted fits to a more recent supernova data set , the scp union2 snia compilation ; and when we minimize for the case with respect to the union2 data , rather than the union1 data , we still get similar results : an excellent fit with ( compared to for ) and ; only a slightly higher clumping strength value of ; and similar cosmological parameters that are also quite acceptable .only the unperturbed matter density is slightly high with ; but as discussed above , this may be due to the adopted value of being slightly too large . 
) in consequence , it is justified to say that we have in every sense re - obtained a successful alternative concordance with causal backreaction in the presence of recursive nonlinearities , having generated the proper amount ( and temporal behavior ) of apparent acceleration as evidenced by a fit to the snia data that is very nearly as good as that achievable with best - fit ( even _ without _ performing a rigorous -minimization search over our model parameter space ) as well as having produced output cosmological parameters that are more than acceptably consistent with a variety of complementary cosmic measurements . the price to be paid to achieve this goal , due tothe incorporation of recursive nonlinearities , is the necessity of permitting values in excess of unity which though going against the original interpretation of this ( heuristic ) model parameter , as defined in bbi , does seem justifiable in a realistic cosmology exhibiting hierarchical structure formation on a variety of scales . on the other hand , a new benefit of these results is the shifted focus from models to models , equivalent to a shift in emphasis from linearized matter density perturbations to nonlinear density perturbations ; a new emphasis which in fact makes much more sense for a causal backreaction paradigm depending upon vorticity- and velocity - dispersion - generated virialization as the fundamental origin of structure - induced perturbations to the observed cosmological metric .one last ( yet very important ) consideration still remains regarding early saturation , though , relating to its impact upon the very late - time behavior of the cosmic evolution .one way to approach this , is to consider _ why _ the optimized value of drops so precipitously with increasing , so conveniently solving our problem by reducing to believable values . to understand why this happens ,consider the clumping evolution functions , themselves , for the early saturation models that we have studied ; plots of these functions are shown in figure [ figpsitplotsvszsat ] . from these plotted curves ,we see that despite the vast differences in for the case versus those with , as long as one does not use too large a value of ( sticking to , say , ) , then the degree of clumping at _ mid - range _ values of ( e.g. , ) actually remains fairly similar from run to run .the distinctively huge increase from to for the case does not actually happen until very late times , .but why should completely different behaviors of at late times have such a small effect upon the fits to the snia data ?the answer is twofold : first , due to a lack of sensitivity of the data to such changes ; and second , due to the nature of causal backreaction , itself .for the first issue , consider that most of the snia included in these supernova compilations for measuring the cosmic acceleration are located at fairly high redshift ; it is therefore unsurprising that the evolution of _ after _ the epoch of these supernovae should have little effect upon the acceleration measured by those high- snia , regardless of whether experiences a continued increase or a saturation .but , even if lower- snia were also included in the study , and could be trusted to accurately map out the hubble flow despite their local peculiar motions , there should still be little effect other than a late - time offset , perhaps registered as a small change in . 
as noted by , simple cosmic evolution functions that are relatively insensitive to timevariations tend to measure an averaged cosmological equation of state around a ` pivot ' redshift of about .and so it is understandable why the detailed behavior of after this crucial epoch should end up having little effect upon the precise amount of apparent acceleration measured . for the second issue , we recall from equation [ eqniintegrandprelim ] that the amount of causal backreaction due to a spherical shell at coordinate radius will be proportional to , and so very late clumping which corresponds to relatively small look - back times , and thus small distances will simply not involve a sufficiently large amount of inhomogeneous mass to generate significant causal backreaction .thus the detailed dynamics of inhomogeneities at higher ( and larger ) assuming distances not so far away as to get damped by rnl , or having look - back times so far back relative to as to have very small clustering , will be more important in determining the detailed effects of causal backreaction than would the very late ( ) behavior of clumping .the overall result is that there is a parameter degeneracy for causal backreaction models , where one can reduce in tandem with an increase in , without much change in the quality of the fit to the actual snia data .this degeneracy has a beneficial aspect , in that it has allowed us to successfully create an alternative concordance with causal backreaction and to do so in a flat , matter - only universe without any form of dark energy while using astrophysically realistic values of .but it has a negative aspect , as well , in that it lessens our ability to predict precise ranges for the late - time cosmological parameters that would be output by the various ( successfully concordant ) models in this formalism .measurable cosmological parameters such as , , and are defined in theory , at least in terms of the behavior of taylor expansions of the luminosity distance function , .hence , while a variety of functions may all provide good fits to the key snia located at mid - to - high redshifts , these functions may all have quite different behaviors , and thus very significant differences in their cosmological output parameters . in particular , consider again the first three of the four runs in table [ tablernlearlysatruns ] .as one goes from to , we see that decreases below the concordance value of ; but yet , the difference is small enough to not be a serious concern . for ,obtained from one more differentiation of , the difference is larger , going from to ; and yet , the precise numerical value of the cosmic ` equation of state ' is of no great concern to us here since it is not our task to pin down the physics of some hypothesized form of dark energy , but merely to reproduce the cosmological observations so long as a good fit with a sufficient amount of apparent acceleration can be produced , as has indeed been accomplished .the biggest change , however , occurs for obtained via yet another differentiation of which drops all the way from for , down through the value of and far past it , even going negative for values that still provide decent snia fits .a small part of this change may be due to the highly simplified nature of the functions that we use ; in particular , since we obtain a hubble curve through two integrations of that is , , and then ( cf . 
equations [ eqnitotintegration]-[eqndlumdefn ] ) the parameter ( obtained from three differentiations of ) essentially contains one differentiation of .as is obvious from figure [ figpsitplotsvszsat ] , our simple functions are not differentiable at , meaning that ( and thus ) will also be non - differentiable there , and so will pick up an actual discontinuity there .still , the discontinuity is not large enough to account for the huge overall differences in for the different values ; and most of the effect of the jump in at is almost certainly real , due to the real change in the cosmic evolution at that point , and would likely still take place to a very similar degree ( just more spread out in time ) for some more complicated functions designed to apply smoothing at . in any case, the set of runs in table [ tablernlearlysatruns ] does fairly clearly establish the trend that increasing away from zero leads to a steady decrease in the final value of .what we can conclude with some certainty , unfortunately , is that we have lost any reasonable degree of predictability for this parameter in the causal backreaction paradigm , due the weak dependence of the backreaction upon the very - late - time behavior of , and the resulting degeneracy in .this is significant , because of the different possible ways suggested in bbi to distinguish between causal backreaction and cosmological constant ( or any similar form of dark energy ) , in order to provide our paradigm with a falsifiable test , the clearest signature by far was the search for .for all intents and purposes , the reliability of this signature now appears to be gone , and some more intricate means will obviously be needed to distinguish causal backreaction from even the simplest , -like version of dark energy ( though would still rule out in favor of _ some _ alternative model ) .thus while the use of early saturation has greatly enhanced the prospects for achieving an alternative cosmic concordance with realistic values of the clumping parameter , the incorporation of this additional degree of freedom in the models has made it significantly more challenging to definitively replace the paradigm of dark energy with that of causal backreaction , without resorting to aesthetic arguments of a subjective nature .if appropriate observational tests should eventually be able to demonstrate causal backreaction as superior to dark energy as the driver of the cosmic evolution , then one final question of great importance would of course be : what is the ultimate fate of the universe ?this topic was discussed in detail in bbi , in which it was found that the long - term cosmic fate depends upon which way the balance tips between the forces working to power the cosmic ` acceleration ' , versus the opposing factors acting to restrain it . for the former ,the only real influence working to keep the apparent acceleration going ( and perhaps ultimately promote it in strength to a real volumetric acceleration ) was the ever - expanding causal horizon , growing in time , out from which an observer can ` see ' substantial inhomogeneities i.e. , the farthest distance from the observer out to which , are still true . for opposition to the acceleration , one source of restraint is the frw expansion of the universe itself , which exerts a natural damping effect upon causal backreaction by diluting the density of inhomogeneities , simply by pulling them farther away from one another ( and from any given cosmological observer ) over time . 
also acting to limit a possible long - term acceleration was the inevitable saturation presumed for , which in our original interpretation of this measure of clumping , as described in bbi would be limited by an upper bound of .considering all of these factors , analytical approximations were derived for the ( pre - rnl ) metric perturbation function as ; and it was found that always asymptotes to a constant numerical value for functions evolving as a simple power of , .this asymptotic value of is larger for smaller , with it being equal to unity ( implying a complete breakdown of the newtonianly - perturbed metric ) for ( i.e. , ) .thus a fully general - relativistic acceleration at late times due to causal backreaction did appear to be realistically possible , according to the formalism of bbi though of course that conclusion was made pending the still - undetermined effects of recursive nonlinearities , and other complications . in this paper, however , we will have to revise our expectations , since the incorporation of rnl ( recursive nonlinearities ) has made the prospects of an ` eternal ' , self - powered acceleration seem far less likely. we can conclude this despite the fact that is no longer necessarily amenable to a simple analytical analysis , even for .one point in favor of such a conclusion is that rnl has forced us to change to functions with larger exponents in order to re - establish a concordance i.e. , moving from and before , to now and as our previous results have shown , larger exponents in lead to smaller late - time values of ( even if we no longer know its exact asymptotic behavior ) , thus implying a less general - relativistic , less ` accelerative ' long - term evolution . butan even more important factor is what we learned about the impact of the ultimate ` saturation ' of the clumping evolution function at some final , fixed numerical value , as was depicted in figure [ figearlyvslatesat ] .this result ( and other simulation runs that we have done along these lines ) clearly demonstrate that rnl , which acts to stall the expansion of an observer s `` inhomogeneity horizon '' particularly so in cosmologies with strong early - time backreaction will largely ` shut off ' all apparent acceleration effects not long after the ongoing clustering and virialization have ceased , once has settled down to some mostly - constant final value .this behavior certainly works against the possibility of an ` eternally ' ( or even long - term ) ` accelerating ' universe .the one enhancement introduced in this paper which might possibly help lead to a long - term acceleration , is the now fundamentally unbounded nature of due to hierarchical clustering ( assuming that an effective end to clustering is not imposed explicitly by early saturation , as we did in fact choose to impose for our models in section [ subearlysatclumping ] ) .but while this change in interpretation allowing is indeed based upon real physics , is it strong enough to keep the ` acceleration ' going continually , deep into the future ?the answer to this question is naturally uncertain : on ( relatively ) small scales , clustering has never ceased , as star clusters and small galaxies continue to merge ( with new virialization ) into large galaxies ; galaxies continue to merge into galactic clusters ; and so on .yet , one would realistically expect to find steadily diminishing returns on such ` small ' scales .any real hope for a continuing acceleration would seem to rest upon the future clustering behavior on 
extraordinarily _ large _ scales a realm of structure formation with no real upper limit , as superclusters eventually manage to self - virialize internally , then begin themselves to merge into even larger structures , and so on , ad infinitum .higher and higher scales of clustering take exponentially longer to complete than the levels below them , though , and it is not clear how large the effective ` inhomogeneity horizon ' for causal backreaction can ever practically become especially given the fact that the cosmic acceleration itself tends to impede the expansion of observational horizons . a truly long - term cosmic acceleration ( apparent or real ) into the far futurewould therefore seem very difficult to accomplish using causal backreaction with recursive nonlinearities ; though such conclusions can not be considered definitive without more detailed cosmological simulations , employing a significantly more sophisticated treatment of the effects of vorticity and virialization than we have used in these calculations so far , given our toy models for , and therefore .there are still two remaining wildcards , however , which were discussed in bbi and must be mentioned again here .the first one is the possible advent of truly general - relativistic perturbations , causing the breakdown of our newtonianly - perturbed metric approximation ( equation [ eqnangavgbhmatdomweakflat ] ) , and the failure of our treatment of the individual perturbations due to the multitude of locally - clumped masses as being linearly summable .the possibility of such a complication still exists , as is made obvious ( for example ) by the very large values that we typically find for early - saturation runs with large settings ( e.g. , the case from table [ tablernlearlysatruns ] ) . as noted above in section [ subearlysatclumping ], the fact that the propagation of inhomogeneity information obeys the proportionality , means that the flow of such information will be choked off whenever approaches unity , thus freezing the expansion of the relevant inhomogeneity horizons , thereby locking nearly in place at whatever actual value that it has managed to grow to by that time .( this is in fact the biggest reason why the model from figure [ figearlyvslatesat ] experienced such an abrupt shut - off of its apparent acceleration so quickly after the growth of its clumping evolution function had ceased . )what we can not know from our formalism , however , is how to physically interpret the effects of a metric perturbation function that is hovering around a nearly fixed value , but where that value is always slowly asymptoting towards unity . does it look almost exactly like a decelerating scdm universe , since a constant can simply be transformed away through coordinate redefinitions ? or would it instead be a universe perpetually appearing to be right on the verge of undergoing a runaway acceleration , just never quite doing it , basically ` riding the edge ' forever ? ( perhaps succeeding , at least , at producing a hesitating , stop - and - go accelerating behavior . 
) or on the other hand , does the proximity of to unity manage to override all other considerations , and lead to an actual , volumetric , runaway acceleration ?the real answers to these questions can not be determined without at least some nonlinear gravitational terms being added into the formalism , if not actually requiring a fully general - relativistic treatment .but while the quantitative calculation of such effects to determine their physical implications is beyond the scope of the causal backreaction formalism that has been presented here ( or in bbi ) , it is clear that the sum of innumerably many newtonian - level perturbations is indeed capable of driving the total cosmological metric perturbation right up to the breaking point , where a ` real ' cosmic acceleration ( by any definition ) may very possibly take over , and for an indeterminate period of time .last of all considerations , though , is the one that can least safely be neglected : and that is the eventual breakdown of our smoothly - inhomogeneous approximation , as increasing clumpiness leads to fundamentally non - negligible anisotropies across much of the observable universe itself ( e.g. , the dark flow of * ? ? ? * ) ; a scenario which in bbi was termed the `` big mess '' .this final , virtually certain breakdown of the cosmological principle is the ultimate game - changer , and questions about the possibility of a ` permanent ' cosmic acceleration due to causal backreaction then become just as hard to define as they are to answer , as the fundamental frw basis of cosmological analysis finally breaks down entirely .in this paper , we have revisited the causal backreaction paradigm introduced in , for which the apparent cosmic acceleration is generated not by any form of dark energy , but by the causal flow of information coming in towards a typical cosmological observer from a multitude of newtonian - strength perturbations , each one due to a locally clumped , virializing system .self - stabilized by vorticity and/or velocity dispersion , such perturbations are capable of generating positive volume expansion despite their individually - newtonian natures . noting that previous ` no - go ' arguments against newtonian - level backreaction are based upon non - causal backreaction frameworks , we see that the sum total of these small but innumerable perturbations adds up to an overall effect that is strong enough to explain the apparent acceleration as detected by type ia supernovae , as well as permitting the formulation of an alternative cosmic concordance for a matter - only , spatially - flat universe .our purpose here has been to develop and test a second - generation version of this causal backreaction formalism , filling in one of the most important gaps of the original ` toy model ' by including what we have termed `` recursive nonlinearities '' specifically referring to the process by which old metric perturbation information tends to slow down the causal propagation of all future inhomogeneity information , therefore reducing the effective cosmological range of causal backreaction effects , and thus damping the strength of their overall impact upon the cosmic evolution and upon important cosmological observations. 
utilizing the new simulation program introduced here , which now incorporates recursive nonlinearities into causal backreaction , we find profound differences in the resulting cosmological model calculations .for a given magnitude of self - stabilized clustering assumed for large - scale structure , denoted by dimensionless model input parameter , the overall power of causal backreaction is now considerably weaker , in addition to fading out relatively rapidly after the growth of clustering ceases .this is unlike the results of the original model , in which causal backreaction effects would continue to grow regardless of any late - time saturation of clustering , due to the causally - expanding `` inhomogeneity horizon '' seen by an observer which continually brings more ` old ' inhomogeneities into view from ever - greater cosmic distances .after discussion of some of the possible reasons for which causal backreaction may now appear to fall short of its cosmological goals either due to issues regarding the fundamental mechanism itself , or due to our highly simplified treatment of it we then considered a very straightforward way in which the paradigm may be fully recovered as a cosmological replacement for dark energy : all that is needed is the adoption of values greater than unity . though representing an ad - hoc modification of the original formalism, the change makes astrophysical sense in a number of ways . rather than viewing the clumping evolution function as simply representing the fraction of cosmic matter in the ` clumped ' versus ` unclumped ' state at any given time , can now be recognized ( more realistically ) as representing the total backreaction effect of hierarchical structure formation in the universe , where clustering and virialization take place simultaneously on a number of different cosmic length scales from stellar clusters , to individual galaxies , to galaxy clusters , etc .model input parameter is now interpreted as the effective number of ` levels ' of completed clustering that exists ( at current time ) in the large - scale structure when one sums over the ( partial or total ) clustering of matter on all relevant cosmic scales .given this enlarged parameter space with now permitted , we once again find a selection of model cosmologies that succeed ( despite the damping effects of recursive nonlinearities ) at reproducing the observed cosmic acceleration , while also re - establishing an alternative cosmic concordance by producing output parameters that match the observables derived from several of the most important cosmological data sets .furthermore , astrophysical considerations regarding the necessary _ input _ parameters for these apparently successful models specifically , the need to assume a sufficiently early beginning of clustering results in a preference by the new formalism for models that reflect the late , nonlinear phase of structure formation .this is an improvement over the old formalism without recursive nonlinearities which had preferred models that embody the early phase of clustering , with linearized matter fluctuations since it is this final , nonlinear stage of clustering during which virialization occurs via the generation of vorticity and velocity dispersion , and hence represents the more astrophysically reasonable source of substantial causal backreaction . 
noting that the only problem still remaining for this new concordance is the somewhat excessively large amount of clustering required to achieve it that is , , rather than what we consider to be more reasonable values like we then determined that this problem could be successfully fixed ( i.e. , a good concordance generated with ) by introducing `` early saturation '' , in which the clumping evolution function reaches its ultimate value of somewhat in the past ( ) , and then changes little thereafter .this is a highly reasonable adjustment to the formalism , since in the real universe `` gastrophysics '' feedback exists which creates superheated baryons , sending large amounts of material back into the intergalactic medium , thereby slowing down the continued clustering of matter at late times ; not to mention the likely slowdown of clustering due to the feedback effects of the backreaction , itself .the only major drawback of this new feature is the addition of an extra model input parameter the epoch of saturation , which results in a degeneracy within -space , providing a range of models that all fit the type ia supernova data well , yet lead to significant differences for certain output cosmological parameters . the greatest variation in the output results due tothis degeneracy occurs for the observable jerk parameter , , hence implying a loss of predictability for by our causal backreaction formalism .this is a significant loss , given the previous findings from ( without recursive nonlinearities ) which had indicated that was the most distinctive signature of causal backreaction , thus serving as the clearest way for distinguishing it from cosmological constant ( or from anything close to it ) , since flat always requires .it thus becomes more difficult to find a falsifiable test of the causal backreaction paradigm , a test that is needed to definitively distinguish it from dark energy in order to eventually rule out one cosmological approach in favor of the other .finally , concerning the ` ultimate ' fate of the universe , we note that the incorporation of recursive nonlinearities tends to shut down any strong apparent acceleration effects fairly quickly once the ongoing clustering ( i.e. , the continued growth of ) finally stops .even more dramatic is the way in which the metric perturbation function , , becomes essentially locked in place when approaching too close to unity , making it an even greater obstacle in terms of preventing the acceleration ( apparent or otherwise ) from completely taking over the cosmic evolution .this makes the scenario of a perpetual , ` eternal ' acceleration seem even less likely than it already did in ; though the now - unbounded nature of could potentially provide some aid in producing a long - term acceleration , as long as virialized structure can continue to form on ever - larger cosmic scales , without any fundamental upper limit to the sizes of coherent structures .furthermore , the question of the ultimate cosmic fate is once again complicated by the possible backreaction contributions of gravitationally nonlinear terms , and the ( unavoidable ) eventual breakdown of the approximation of the universe as `` smoothly - inhomogeneous '' both complications representing scenarios which our toy - model formalism is not presently designed to account for . 
in summary, we conclude that our causal backreaction formalism remains successful at generating an alternative cosmic concordance for a matter - only universe , without requiring any form of dark energy ; though the necessary incorporation of recursive nonlinearities into these models implies that a significantly stronger amount of such backreaction than before is now needed , acting throughout the crucial ` acceleration epoch ' of or so , in order to provide a degree of observed acceleration sufficient to match the cosmological standard candle observations .bochner , b. 2011 , preprint ( arxiv:1109.4686v3 ) ; a briefer overview version is available as : 2011 , preprint ( arxiv:1109.5155v3 ) buchert , t. , & ehlers , j. 1997 , 320 , 1 ; preprint ( arxiv : astro - ph/9510056v3 ) with , where the upper solid line represents the old version of the simulated cosmology without recursive nonlinearities ( rnl ) , and the lower solid line represents the new version with rnl .shown along with them for comparison ( broken lines ) are the flat scdm and ( union1-best - fit , ) concordance cosmologies . ]runs ( as described in the text ) , selected from the new simulations with rnl but with the choice of model input parameters restricted to those used in bbi ; specifically , these and curves both have .also plotted here are the union1-best - fit flat scdm and concordance ( ) cosmologies .shown along with these curves are the scp union1 snia data , here binned and averaged for visual clarity ( bin size = 0.01 ] ) .each theoretical curve is displaced vertically , relative to the snia data , to depict its individualized -optimization . ] , plotted versus $ ] . in order of the positions of the functions ` corners ' from left to right ( increasing ) , the lines depict functions with the parameters : ; ; ; and . ]ccccccccccccc & & & & & & & & & & & & + + 1.5 & 2.8 & 314.7 & 0.324 & 0.53 & 1.16 & 69.54 & 41.96 & 13.95 & 0.948 & -0.639 & 0.59 & 297.8 + 2 & 2.9 & 315.1 & 0.319 & 0.62 & 1.17 & 69.58 & 38.62 & 14.44 & 1.162 & -0.639 & 0.51 & 288.7 + + 2 & 3.2 & 313.6 & 0.340 & 0.53 & 1.16 & 69.75 & 42.10 & 13.88 & 0.946 & -0.675 & 0.89 & 297.3 + 3 & 3.3 & 314.0 & 0.334 & 0.62 & 1.16 & 69.68 & 38.72 & 14.30 & 1.159 & -0.667 & 0.83 & 287.2 + + 25 & 4.1 & 311.8 & 0.367 & 0.53 & 1.14 & 70.07 & 42.32 & 13.64 & 0.943 & -0.751 & 1.73 & 294.5 + + & & 311.9 & 0.380 & & 1.0 & 69.96 & 69.96 & 13.64 & 0.287 & -0.713 & 1.0 & 285.4 + + & & 608.2 & 3.4e-22 & & 1.0 & 61.35 & 61.35 & 10.62 & 1.0 & 0.0 & 1.0 & 287.3 + ccccccccccccc & & & & & & & & & & & & + + 0 & 4.1 & 311.8 & 0.351 & 0.53 & 1.14 & 70.07 & 42.32 & 13.64 & 0.943 & -0.751 & 1.73 & 294.5 + 0.25 & 2.6 & 313.5 & 0.326 & 0.58 & 1.15 & 69.60 & 40.24 & 14.00 & 1.054 & -0.620 & 0.15 & 289.7 + 0.5 & 2.3 & 316.6 & 0.284 & 0.68 & 1.15 & 69.40 & 36.32 & 14.65 & 1.338 & -0.585 & -0.14 & 279.8 + 1 & 2.2 & 320.2 & 0.238 & 0.80 & 1.14 & 68.77 & 29.54 & 15.75 & 2.086 & -0.488 & -0.94 & 259.9 + + & & 311.9 & 0.380 & & 1.0 & 69.96 & 69.96 & 13.64 & 0.287 & -0.713 & 1.0 & 285.4 +
we revisit the causal backreaction paradigm , in which the need for dark energy is eliminated via the generation of an apparent cosmic acceleration from the causal flow of inhomogeneity information coming in towards each observer from distant structure - forming regions . a second - generation version of this formalism is developed , now incorporating the effects of `` recursive nonlinearities '' : the process by which metric perturbations already established by some given time will subsequently act to slow down all future flows of inhomogeneity information . in this new formulation , the long - range effects of causal backreaction are damped , substantially weakening its impact for simulated models that were previously best - fit cosmologies . despite this result , we find that causal backreaction can be recovered as a replacement for dark energy through the adoption of larger values for the dimensionless ` strength ' of the clustering evolution functions being modeled , a change justified by the hierarchical nature of clustering and virialization in the universe , occurring as it does on multiple cosmic length scales simultaneously . with this , and with the addition of one extra model parameter used to represent the slowdown of clustering due to astrophysical feedback processes , an alternative cosmic concordance can once again be achieved for a matter - only universe in which the apparent acceleration is generated entirely by causal backreaction effects . the only significant drawback is a new degeneracy which broadens our predicted range for the observed jerk parameter , thus removing what had appeared to be a clear signature for distinguishing causal backreaction from a cosmological constant . considering the long - term fate of the universe , we find that incorporating recursive nonlinearities appears to make the possibility of an ` eternal ' acceleration due to causal backreaction far less likely ; though this conclusion does not take into account potential influences due to gravitational nonlinearities or the large - scale breakdown of cosmological isotropy , effects not easily modeled within this formalism .
delayed feedback was found to be a highly efficient tool for controlling the coherence of noisy oscillators. even weak feedback with a sufficiently long delay time can diminish or enhance the phase diffusion constant, a quantitative measure of coherence, by an order of magnitude or even more. the theory of the delayed-feedback control of the phase diffusion was developed in . however, for long delay times and vanishing noise, multistability of the mean frequency can occur. in the presence of multistability, noise results in intermittent switchings between states with different mean frequencies, and thus the phase diffusion receives contributions not only from fluctuations around the mean linear growth of the phase but also from the alternation of `local' mean growth rates. for weak noise, which is the physically relevant situation, it turns out to be possible to find a natural variable in terms of which the switching between the two stable phase growth rates becomes a perfect telegraph process. in , frequency multistability and noise-induced switchings were studied for extremely long delay times, although without consideration of coherence and phase diffusion. in this work, on the basis of this `telegraphness' property, we construct an analytical theory of the effect of delayed feedback on the phase diffusion in the presence of multistability. in agreement with the results of numerical simulation, we derive analytically that the phase diffusion constant has giant peaks close to the points where the residence times in the two states are equal. the dynamics of a limit-cycle oscillator subject to weak action can be described within the framework of the phase reduction, where the system state is determined solely by the oscillation phase. the phase reduction can be used as well for the case of weak noise, including delta-correlated noise. for nearly harmonic oscillators subject to white gaussian noise and linear delayed feedback, the phase equation reads \[ \dot{\varphi}(t) = \omega + a \sin[\varphi(t-\tau) - \varphi(t)] + \varepsilon\xi(t)\,, \label{eq-01} \] where \omega is the natural frequency of the oscillator, a and \tau are the strength and delay time of the feedback, respectively, \varepsilon is the noise strength, and \xi(t) is normalised delta-correlated gaussian noise. recently, a strong impact of weak anharmonicity has been reported for recursive delay feedback for the effects under our consideration; however, for a `simple' delay the harmonic approximation and model reduction remain well justified for many physical, chemical and biological systems (_e.g._, see ). noise disturbs the linear growth of the phase, resulting in its diffusion according to the law \langle [\varphi - \langle\varphi\rangle]^{2} \rangle = D t, where D is the phase diffusion constant. notice that, since the mean residence times depend exponentially strongly on , with a small number in the denominator of the argument of the exponential, we linearise not but next to . the coefficients are positive numbers of order of magnitude . hence, the exponential in the numerator is a small correction to the cube of the hyperbolic cosine in the denominator. one therefore expects a strong peak near , with height somewhat above . notice that, for weak noise, the latter expression is exponentially large, as is exponentially large. moreover, for vanishing noise the width of the peak , while its height is exponentially large in , _i.e._, the integral of this peak over diverges exponentially fast.
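as a quick numerical check of the setup, the sketch below integrates equation ( [ eq-01 ] ) with an euler-maruyama scheme and a delay buffer, and estimates the phase diffusion constant from the linear growth of the ensemble variance of the phase, following the variance-growth law given above. the parameter values (omega, a, tau, eps), the assumed noise normalisation (increment variance 2 dt), the free-running history taken for t < 0, and the function name simulate_D are all illustrative choices made here, not values or code from the paper.

```python
import numpy as np

def simulate_D(omega=1.0, a=0.2, tau=30.0, eps=0.05,
               dt=0.01, t_total=5.0e3, n_ens=100, seed=0):
    """Crude estimate of the phase diffusion constant D from an ensemble of
    delayed-feedback phase oscillators, dphi = [omega + a sin(phi(t-tau)-phi)] dt + eps dW,
    with <xi(t) xi(t')> = 2 delta(t-t') assumed for the noise normalisation."""
    rng = np.random.default_rng(seed)
    n_delay = int(round(tau / dt))
    n_steps = int(round(t_total / dt))
    # ring buffer of the last tau of phase history; assume free running (phi = omega t) for t < 0
    buf = np.tile(omega * dt * np.arange(-n_delay, 1), (n_ens, 1)).astype(float)
    phi = buf[:, -1].copy()
    ptr = 0
    times, var_t = [], []
    for k in range(n_steps):
        phi_delayed = buf[:, ptr]                      # phi(t - tau)
        noise = rng.standard_normal(n_ens) * np.sqrt(2.0 * dt)
        phi = phi + dt * (omega + a * np.sin(phi_delayed - phi)) + eps * noise
        buf[:, ptr] = phi                              # overwrite the oldest slot
        ptr = (ptr + 1) % (n_delay + 1)
        if (k + 1) % 1000 == 0:
            times.append((k + 1) * dt)
            var_t.append(np.var(phi))
    times, var_t = np.array(times), np.array(var_t)
    tail = times > 0.5 * times[-1]                     # late-time linear regime
    D = np.polyfit(times[tail], var_t[tail], 1)[0]     # slope ~ D
    return D

print("estimated phase diffusion constant D ~", simulate_D())
```

for delay times inside a multistability domain, the estimate obtained this way automatically includes the telegraph-switching contribution discussed above, which is why such a direct simulation is the natural benchmark for the analytical theory.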
in figure [ fig3 ] one can see these peaks in the results of numerical simulation. notably, these peaks have a well-pronounced triangular shape on the linear-logarithmic scale, meaning that equation represents the behaviour of the phase diffusion remarkably well over the major part of each multistability domain, not only at its centre. in figure [ fig4 ] one can see how the noise strength influences the peaks in the multistability domains. the numerical simulation data in all the figures are calculated with time series of length . in this paper we have developed the theory of the effect of delayed feedback on the coherence of noisy phase oscillators in the presence of frequency multistability induced by the time delay. the coherence has been quantified by the phase diffusion constant. the process of alternation between the two states has been demonstrated to be well represented by an asymmetric markovian `telegraph' process. exploiting this `telegraphness' property allows an analytical treatment of the problem to be constructed. the behaviour of the phase diffusion constant has been shown to be smooth at the edges of the multistability domains. giant peaks in the dependence of the phase diffusion constant on the delay time have been found near the points where the mean residence times in the two states are equal (see equation and figures [ fig3 ] and [ fig4 ]). remarkably, their `integral strength' increases for vanishing noise: their width shrinks while their height is exponentially large in . for longer delay times the peaks become taller.
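for orientation, the expression referred to above (lost here in extraction) can be compared with the textbook result for an asymmetric two-state markov (`telegraph') process; this standard form is offered as an assumption consistent with the stated behaviour, not as the article's own derivation. if the instantaneous frequency alternates between two values \Omega_1 and \Omega_2 with mean residence times T_1 and T_2, the alternation alone contributes an effective diffusion
\[ D_{\mathrm{tel}} = 2\,(\Omega_1 - \Omega_2)^2\,\frac{T_1^{2}\,T_2^{2}}{(T_1 + T_2)^{3}}\,, \]
which is strongly suppressed (as T_2^{2}/T_1, up to a factor of two) when one residence time is much shorter than the other, and reaches (\Omega_1 - \Omega_2)^2\,T/4 at the matching point T_1 = T_2 = T. since the residence times of a weakly noisy system are exponentially long, this reproduces the giant, sharply peaked behaviour of the diffusion constant near the matching points described above.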
for self-sustained oscillators subject to noise, the coherence, understood as the constancy of the instantaneous oscillation frequency, is one of the primary characteristics. delayed feedback has previously been shown to be an efficient tool for controlling the coherence of a noise-driven self-sustained oscillator, and the effect of the delayed-feedback control on coherence is stronger for longer delay times. meanwhile, the instantaneous frequency of a noise-free oscillator can exhibit multistability for long delay times. the impact of this delay-feedback-induced multistability on the oscillation coherence of a noisy oscillator, measured by the phase diffusion constant, is studied in this work both numerically and analytically.
it is shown in that any tensor ( i.e. , multi - dimensional array ) can be decomposed into the product of orthogonal matrices and an _ all - orthogonal _ core tensor .this decomposition generalizes the matrix svd and is today commonly called higher - order singular value decomposition ( hosvd ) or multilinear svd . in applications , people are usually interested in seeking a low - multilinear - rank approximation of a given tensor , such as the multilinear subspace learning and multilinear principal component analysis . unlike the matrix svd ,_ truncated _ hosvd can give a good but not necessarily the best low - multilinear - rank approximation of the given tensor . to obtain a better approximation , people ( e.g. , ) solve the best rank- approximation problem where is a given tensor, denotes mode- tensor - matrix multiplication ( see the definition in below ) , and with fixed , the optimal core tensor is given by . absorbing this into the objective, one can write equivalently to ( see ( * ? ? ? * theorem 3.1 ) for detailed derivation ) one popular method for solving is the higher - order orthogonality iteration ( hooi ) ( see algorithm [ alg : hooi ] ) .although hooi is commonly used and practically efficient ( already coded in the matlab tensor toolbox and tensorlab ) , existing works only show that the objective value of at the generated iterates increasingly converges to some value while the iterate sequence convergence is still an open question ( c.f . ) . the iterate sequence convergence ( or equivalently the multilinear subspace convergence ) is important because without convergence , running the algorithm to different numbers of iterations may give severely different multilinear subspaces , and that will ultimately affect the results of applications . in this paper , we address this open question .our main results are summarized in the following theorem .[ thm : main ] let be the sequence generated by the hooi method .we have : _ ( i)_. if has a block - nondegenerate ( see definition [ def : nondeg ] ) limit point , then is a critical point and also a block - wise maximizer of .in addition , , where _ ( ii)_. if the starting point is sufficiently close to any block - nondegenerate local maximizer of , then the entire sequence must converge to some point and is a local maximizer of .we make some remarks on the assumption and the convergence results .[ rm : main ] the block - nondegeneracy assumption is also necessary because even starting from a critical point , the hooi method can still deviate from if it is not block - nondegenerate ( see remark [ rm : non - deg ] ) , that is , a degenerate critical point is not stable ( see for the perturbation analysis ) . in practice ,the block - nondegeneracy is always observed because .] , and it is implied by , where is defined by .the assumption is similar to the one assumed by the orthogonal iteration method for computing -dimensional dominant invariant subspace of a matrix .typically , the convergence of the orthogonal iteration method requires that there is a positive gap between the -th and -th largest eigenvalues of in magnitude , because otherwise , the -dimensional dominant invariant subspace of is not unique . 
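Before turning to the assumptions in more detail, the HOOI iteration referred to above (algorithm [alg:hooi]) — alternately replacing each factor matrix by an orthonormal basis of the dominant left singular subspace of the partially projected tensor — can be sketched in a few lines of NumPy. This is only an illustrative reimplementation with a truncated-HOSVD initialization; the helper names are ours, and it is not the Tensor Toolbox or Tensorlab code mentioned in the text.

```python
import numpy as np

def unfold(T, mode):
    # mode-n matricization: bring axis `mode` to the front and flatten the rest
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    # mode-n product T x_n M, where M has shape (new_dim, T.shape[mode])
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=(1, 0)), 0, mode)

def hooi(A, ranks, n_iter=50):
    # illustrative HOOI sketch with truncated-HOSVD initialization
    N = A.ndim
    U = [np.linalg.svd(unfold(A, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    for _ in range(n_iter):
        for n in range(N):
            B = A
            for m in range(N):
                if m != n:
                    B = mode_mult(B, U[m].T, m)   # project onto the other factors
            U[n] = np.linalg.svd(unfold(B, n), full_matrices=False)[0][:, :ranks[n]]
    core = A
    for m in range(N):
        core = mode_mult(core, U[m].T, m)
    return U, core, np.linalg.norm(core)          # ||core||_F is the objective value

# small usage example on a random tensor
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 30, 40))
U, core, obj = hooi(A, ranks=(5, 5, 5))
print("objective ||G||_F =", obj)
```

The sketch makes the source of the analytical difficulty visible: the call to `np.linalg.svd` returns one particular orthonormal basis of the dominant left singular subspace, and nothing forces consecutive iterates to pick bases that are close to each other.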
for a block - wise maximizer , its block - nondegeneracy is equivalent to negative definiteness of each block hessian over the stiefel manifold .the definition of our block - nondegeneracy is different from the nondegeneracy in .a nondegenerate local maximizer in is one local maximizer that has negative definite hessian , so the nondegeneracy assumption in is strictly stronger than our block - nondegeneracy assumption . since the solution to each subproblem( see ) of the hooi method is not unique and actually still a solution after multiplying any orthogonal matrix to its right , we can only hope to establish convergence of the projection matrix sequence instead of itself . before proceeding with discussion ,we first review some basic concepts about tensor that we use in this paper ; see for more review .the -th component of an -way tensor is denoted as . for ,their inner product is defined in the same way as that for matrices , i.e. , the frobenius norm of is defined as a _ fiber _ of is a vector obtained by fixing all indices of except one .the mode- _ matricization _ ( also called _ unfolding _ ) of is denoted as , which is a matrix with columns being the mode- fibers of in the lexicographical order .the mode- product of with is written as which gives a tensor in and is defined component - wisely by if , then for any , the hooi method updates by maximizing the objective of alternatingly with respect to , one factor matrix at a time while the remaining ones are fixed . specifically , assuming the iterate to be at the beginning of the -th iteration , it performs the following update sequentially from through : where we have used , and any orthonormal basis of the dominant -dimensional left singular subspace of is a solution of .the pseudocode of hooi is given in algorithm [ alg : hooi ] .* input : * and * initialization : * choose with it is easy to implement algorithm [ alg : hooi ] by simply setting to the left leading singular vectors of .this implementation is adopted in the matlab tensor toolbox and tensorlab .however , such choice of causes difficulty to the convergence analysis of the hooi method . while preparing this paper, we did not find any work that gives an iterate sequence convergence result of hooi , except for our recent paper that establishes subsequence convergence by assuming a strong condition on the entire iterate sequence .the essential difficulty is the non - uniqueness of the solution of , and the leading singular vectors are not uniquely determined either . to tackle this difficulty, we first analyze a greedy method , which always chooses one solution of that is closest to as follows : where the pseudocode of the greedy implementation is shown in algorithm [ alg : ghooi ] .the subproblem in can be solved by the method given in remark [ rm : sol - sub ] .although can in general have multiple solutions , we will show that near any limit point of the iterate sequence , it must have a unique solution . with the greedy implementation , we are able to establish iterate sequence convergence of the greedy hooi method ( i.e. , algorithm [ alg : ghooi ] ) , as shown in sections [ sec : subseq ] and [ sec : glb - cvg ] . through relating( see and figure [ fig : cvg - bh ] ) the two iterate sequences generated by the original ( i.e. 
, algorithm [ alg : hooi ] ) and greedy hooi methods , we then establish the iterate sequence convergence of the original hooi method , as shown in section [ sec : pf - of - main ] .* input : * and * initialization : * choose with besides the hooi method , several other methods have been developed for solving the low - multilinear - rank tensor approximation problem. one of the earliest methods , called tuckals3 , was proposed in .tuckals3 also sequentially updates through and then cycles the process , but different from hooi , it obtains approximate leading left singular vectors of by carrying out only one step of the so - called bauer - rutishauser method starting from .this update is equivalent to solving a linearized version of the subproblem , and it prevents being far away from .subsequence convergence of tuckals3 was established under the assumption that is positive definite for all and .although tuckals3 has slightly lower per - iteration complexity than hooi , it does not converge as fast as hooi as demonstrated in figure [ fig : cvg - bh ] .recently , some newton - type methods on manifolds were developed for the low - multilinear - rank tensor approximation problem such as the newton - grassmann method in and the riemannian trust region scheme in .these methods usually exhibit superlinear convergence .numerical experiments in demonstrate that for small - size problems , the riemannian trust region scheme can take much fewer iterations and also less time than the hooi method to reach a high - level accuracy based on the gradient information .however , for medium - size or large - size problems , or if only medium - level accuracy is required , the hooi method is superior over the riemannian trust region scheme and also several other newton - type methods . under negative definiteness assumption on the hessian of a local maximizer ,the newton - type methods are guaranteed to have superlinear or even quadratic local convergence ( c.f .compared to our block - nondegeneracy assumption , their assumption is strictly stronger because as mentioned in remark [ rm : main ] , for a local maximizer , its block - nondegeneracy is equivalent to the negative definiteness of each block hessian . only with block - nondegeneracy assumption ,it is not clear how to show the local convergence of the newton - type methods .[ fig : cvg - bh ] [ cols="^,^ " , ] we summarize our contributions as follows .* we propose a greedy hooi method .for each update , we select from the best candidates one that is closest to the current iterate . with the greedy implementation ,we show that any block - nondegenerate limit point is a critical point and also a block - wise maximizer , and if a block - nondegenerate limit point exists , then the entire iterate sequence converges to this limit point .* through relating the iterates by the original hooi method to those by the greedy hooi method , we for the first time establish global convergence to a critical point by assuming the existence of a block - nondegenerate limit point and local convergence to a local maximizer by assuming sufficient closeness of the starting point to a block - nondegenerate local maximizer .* as a result , we show that the iterate sequence converges to a globally optimal solution , if the starting point is sufficiently close to any block - nondegenerate globally optimal solution .we use bold capital letters to denote matrices , caligraphic letters for ( set - valued ) mappings , and bold caligraphic letters for tensors . 
denotes an identity matrix , whose size is clear from the context .the -th largest singular value of a matrix is denoted by .the set of all orthonormal matrices in is denoted as . throughout the paper ,we focus on real field , but our analysis can be directly extended to complex field .[ def : nondeg ] a feasible solution of is block - nondegenerate if , where [ rm : non - deg ] in general , we are only able to claim convergence with existence of a block - nondegenerate limit point .the original hooi method can deviate from a critical point if it is block - nondegenerate . to see this ,suppose is a block - wise maximizer and thus a critical point .assume .let the original hooi method start from and update the first factor to .then may not span the same subspace as that by because has more than one dominant -dimensional left singular subspaces .therefore , we can not guarantee the convergence of the learned multilinear subspace .the rest of the paper is organized as follows .section [ sec : subseq ] shows subsequence convergence of the greedy hooi . in section [ sec : glb - cvg ] , global convergence of the greedy hooi is established under the assumption of the existence of a block - nondegenerate limit point .the convergence of the original hooi is shown in section [ sec : pf - of - main ] .finally , section [ sec : discussion ] concludes the paper .in this section , we show the subsequence convergence of algorithm [ alg : ghooi ] , namely , the criticality on the limit point of the iterates . if is a critical point of , then letting , we have to be a critical point of .therefore , our analysis will only focus on .the lagrangian function of is where is the lagrangian multiplier .the kkt conditions or first - order optimality conditions of can be derived by , namely , [ eq : kkt0 ] where is defined in . from , we have .hence , the condition in can be written to [ eq : kkt ] we say a point is a critical point of if it satisfies the conditions in and. the following result is well known , and we will use it several times in our convergence analysis . [lem : von - ineq ] for any matrices , it holds that the inequality holds with equality if and have the same left and right singular vectors . to show the convergence of algorithm [ alg : ghooi ] , we analyze the solution of the subproblem , which can be written in the following general form : where and are given , and given a matrix and positive integer , define for any , if , i.e. , they span the same subspace , we say they are equivalent . by this equivalence relation , we partition to a set of equivalence classes and form a quotient set denoted as . throughout the paper , we regard as the finite set of orthonormal matrices , and each of its elements is a representative of the bases that span the same subspace .if , then has a unique dominant -dimensional left singular subspace , and is a singleton .however , if , then has multiple dominant -dimensional left singular subspaces , and has more than one element .[ prop : projh ] the problem has a unique solution if the following two conditions hold : 1 . if , then is nonsingular ; 2 . 
for any , if , then ; where denotes matrix nuclear norm , defined as the sum of all singular values of a matrix .assume and are both solutions of .note that in is exactly the set .hence , and for and some .note then by lemma [ lem : von - ineq ] and the optimality of on solving , we have hence , from items 1 and 2 , it follows that , and similarly .let be the full svd of and , so .then from , it holds that note that and .the equality holds only if .since is orthogonal , we must have .hence , and . for the same reason , .therefore , , and the solution of is unique .it is easy to see that the two conditions in items 1 and 2 are also necessary for uniqueness of the solution of .define then for any , has a unique solution , which we denote as . in this way, defines a mapping on .[ rm : sol - sub ] the proof of proposition [ prop : projh ] provides a way for finding a solution of .find and get full svd of .then is a solution of .using proposition [ prop : projh ] , one can easily show the following two corollaries .if is sufficiently close to one in , then the solution of is unique . if , then , i.e. , is a fixed point . furthermore , we can show the continuity of .[ thm : ops ] the mapping is continuous on . for convenience of the description , in this proof, we simply write and to and , respectively . for any , let . if is not continuous at , then there exists and a sequence in such that and , where . by the definition of , we know that there is such that for any . similarly , there is a sequence in such that for each , for any .let .there is a sufficiently large integer such that for all , it holds and .note .hence , , i.e. , . therefore , by the definition of , it must hold that .hence , we can write and for all , where . note as .then from the proof of proposition [ prop : projh ] , we have and thus as .this contradicts to .therefore , is continuous at . since is an arbitrary point in , this completes the proof .one can also show the following result .[ thm : conty ] assume and as .if , then there is a sufficiently large integer such that for all , and by the assumption , is a singleton .let . then from , it follows that is nonsingular . since as , there exists an integer , such that , i.e. , is a singleton for all .let .we can choose the representative satisfying , since .therefore , taking another larger if necessary , we have that is nonsingular and thus for all . finally , using remark [ rm : sol - sub ] and , we have and complete the proof .we also need the following result .[ lem : crit - pt ] for any feasible solution , if , then is a critical point and also a block - wise maximizer of , where note that implies that is a basis of the dominant -dimensional left singular subspace of . hence , , is a critical point . in addition, implies that is a solution to over for all .hence , is a block - wise maximizer .this completes the proof .now we are ready to show the subsequence convergence result .[ thm : subseq ] let be the sequence generated from algorithm [ alg : ghooi ] .then any block - nondegenerate limit point of is a critical point and a block - wise maximizer of .suppose that is one block - nondegenerate limit point and the subsequence converges to .from the update rule in , it is easy to see we claim that is a solution of .otherwise , .note which contradicts to .hence , .note that as and as is sufficiently large . 
from the block - nondegeneracy of and theorems [ thm : ops ] and [ thm : conty ] ,we have hence , taking a sufficiently large , we can make sufficiently small , and thus we can repeat the above arguments for to conclude therefore , from the definition of , it holds that , and is a critical point and a block - wise maximizer of from lemma [ lem : crit - pt ] .the result in is a key step to have the subsequence convergence . in general , without the block - nondegeneracy assumption , it may not hold .in this section , we assume the existence of one block - nondegenerate limit point and show global convergence of algorithm [ alg : ghooi ] .the key tool we use is the so - called kurdyka - ojasiewicz ( kl ) property ( see definition [ def : kl ] below ) .let and be the indicator function on for .also let then is equivalent to , and is a critical point of _ if and only if _ , where denotes the limiting frchet subdifferential ( see for example ) .we show the global convergence of algorithm [ alg : ghooi ] also by analyzing the solution of the subproblem .as shown below , if there is a positive gap between and , the distance between and can be bounded by the objective difference .[ thm : key - ineq ] given and , any solution of satisfies note for some and .let be the full svd of .also , let and then and from . as in the proof of proposition [ prop : projh] , we have and where the last equality is from lemma [ lem : von - ineq ] and the optimality of for .also , note that assume to be the full svd of . then let be the first largest singular values of .then , and using lemma [ lem : von - ineq ] again , we have and hence , from and through , we have where the last inequality is from . using the fact $ ] , we have and thus from , it follows that plugging the above inequality into , we have the desired result . using theorem [ thm : key - ineq ] , we show the following result . 
[lem : sq - bd ] let be the sequence generated from algorithm [ alg : ghooi ] .assume it has a block - nondegenerate limit point .then there is a constant such that if is sufficiently close to , we have it is easy to see that there exists a small positive number such that if , then where the strict inequality is from the block - nondegeneracy of .assume is sufficiently close to such that from theorem [ thm : key - ineq ] , it follows that where is defined in , and we have used .hence , and repeating the above arguments , in general , we have for all that and therefore , every intermediate point is in , and thus for all , let .summing the above inequality from to gives the desired result .using lemma [ lem : sq - bd ] and the kl property of , we show the global convergence of algorithm [ alg : ghooi ] .[ def : kl ] a function satisfies the kl property at point if there exists such that is bounded around under the notational conventions : in other words , in a certain neighborhood of , there exists for some and such that the kl inequality holds where and .the kl property was introduced by ojasiewicz on real analytic functions , for which the term with in is bounded around any critical point .kurdyka extended this property to functions on the -minimal structure in .recently , the kl inequality was extended to nonsmooth sub - analytic functions .the works give a lot of concrete examples that own the property .the function is one of their examples and thus has the kl property .[ thm : glb - cvg ] if is a block - nondegenerate limit point of the sequence generated from algorithm [ alg : ghooi ] , then is a critical point of , and from theorem [ thm : subseq ] , we have the criticality of , so we only need to show . note that is nondecreasing with repsect to and thus converges to .we assume otherwise , if for some , , we must have .since has the kl property , then in a neighborhood , there exists for some and such that if necessary , taking a smaller , we assume where and s are defined in the same way as those in the proof of lemma [ lem : sq - bd ] .note that there is a constant such that where since is a limit point , there is a subsequence convergent to .hence , we can choose a sufficiently large such that is sufficiently close to .without loss of generality , we assume ( otherwise set as a new starting point ) is sufficiently close to such that and which can be guaranteed from lemma [ lem : sq - bd ] and where is the same as that in lemma [ lem : sq - bd ] .assume for .we go to show and thus by induction . for any , from the optimality of on problem , it holds that hence , letting and , we have which implies summing the above inequality from to and simplifying the summation gives and thus therefore , and , by induction .hence , holds for all , and letting , we conclude that is a cauchy sequence and converges .since is a limit point , then as .this completes the proof .as long as the starting point is sufficiently close to any block - nondegenerate local maximizer , algorithm [ alg : ghooi ] will yield an iterate sequence convergent to a local maximizer as summarized below .[ thm : loc - min ] assume algorithm [ alg : ghooi ] starts from any point that is sufficiently close to one block - nondegenerate local maximizer of . then the sequence converges to a local maximizer .first , note that if some is sufficiently close to and , then must also be a local maximizer and block - nondegenerate . 
in this case , .hence , without loss of generality , we can assume .secondly , note that in the proof of theorem [ thm : glb - cvg ] , we only use and the sufficient closeness of to to show to be a cauchy sequence .therefore , repeating the same arguments , we can show that if is sufficiently close to , then is a cauchy sequence and thus converges to a block - nondegenerate point near . from theorem [ thm : subseq ], it follows that is a critical point .we claim , i.e. , is a local maximizer .if otherwise , then by the kl inequality , it holds that , which contradicts to .hence , .this completes the proof . from theorem [ thm : loc - min ] , we can easily get the following local convergence to a globally optimal solution .[ thm : glb - opt ] assume algorithm [ alg : ghooi ] starts from any point that is sufficiently close to one block - nondegenerate globally optimal solution of. then the sequence converges to a globally optimal solution .in this section , we analyze the convergence of the original hooi method by relating its iterate sequence to that of the greedy hooi method .because any solution to each subproblem of the original hooi method is still a solution after arbitrary rotation , we do not hope to establish convergence on the iterate sequence itself .instead , we show the convergence of the projection matrix sequence .first note that we also need the following two lemmas .[ limit - pt ] if and is a critical point of , then is also a critical point .since is a critical point of , it holds that and for all .note that implies .hence , for any , multiplying to both sides and noting gives and thus is a critical point .[ lem : nchg ] let be the sequence generated by the original hooi method and assume it has a block - nondegenerate limit point . if for some , , then there is an integer such that .because is nondecreasing and upper bounded , we have and , so if , then .since is a limit point , there must be an integer such that is sufficiently close to and is block - nondegenerate .hence , has a unique dominant -dimensional left singular subspace .note therefore , and both span the dominant -dimensional left singular subspace of , and thus . using , we can repeat the arguments to have , i.e. , . now starting from and repeating the arguments, we have the desired result . by lemma [ lem : nchg ] , without loss of generality , we assume in the remaining analysis . with lemmas [ limit - pt ] and [ lem : nchg ] , we are now ready to prove the main theorem .* part ( i ) : * since is a limit point of , there is a subsequence convergent to , and there is such that is sufficiently close to . without loss of generality ,we assume that is sufficiently close to because otherwise we can set as a new starting point and the convergence of is equivalent to that of .let be the sequence generated by the greedy hooi method starting from .we go to show that if is sufficiently close to , then repeating the same arguments in the proof of lemma [ lem : sq - bd ] , we have that if is sufficiently close to , then is also sufficiently close to .note that when is sufficiently close to , it is block - nondegenerate and .hence , and both span the dominant -dimensional left singular subspace of and thus . since both and are sufficiently close to , we have .note .hence , and both span the dominant -dimensional left singular subspace of and thus . repeating the above arguments , we have , i.e. 
, .assume that for some integer , it holds and for all , where is sufficiently small and plays the same role as that in the proof of theorem [ thm : glb - cvg ] . from, it follows that . through the same arguments as those in the proof of theorem [ thm : glb - cvg ] , we have , and thus by the above arguments that show . by induction, we have the result in . taking another subsequence if necessary, we can assume converging to and thus by .note that the block - nondegeneracy of is equivalent to that of .hence , is block - nondegenerate and is a critical point and a block - wise maximizer by theorem [ thm : subseq ] , and converges to by theorem [ thm : glb - cvg ] .therefore , converges to . from lemma [ limit - pt ], we have that is a critical point of , and from , is a block - wise maximizer .this completes the proof of part ( i ) . *part ( ii ) : * let be the sequence generated by the greedy hooi method starting from . from theorem [ thm : loc - min ] , it follows that converges to a local maximizer of .in addition , by similar arguments as those in the proof of part ( i ) , we can show that still holds .hence , converges to , and this completes the proof .we proposed a greedy hooi method and established its iterate sequence convergence by assuming existence of a block - nondegenerate limit point . through relating the iterates by the original hooi to those by the greedy hooi ,we have shown the global convergence of the hooi method , for the first time .in addition , if the starting point is sufficiently close to any block - nondegenerate locally optimal point , we showed that the original hooi could guarantee convergence to a locally optimal solution ., _ proximal alternating minimization and projection methods for nonconvex problems : an approach based on the kurdyka - lojasiewicz inequality _ , mathematics of operations research , 35 ( 2010 ) , pp. 438457 .height 2pt depth -1.6pt width 23pt , _ perturbation theory and optimality conditions for the best multilinear rank approximation of a tensor _ , siam journal on matrix analysis and applications , 32 ( 2011 ) , pp .14221450 . ,_ a block coordinate descent method for regularized multiconvex optimization with applications to nonnegative tensor factorization and completion _, siam journal on imaging sciences , 6 ( 2013 ) , pp .
The higher-order orthogonality iteration (HOOI) is widely used for finding a best low-multilinear-rank approximation of a tensor. However, the convergence of its iterate sequence is still an open question. In this paper, we first analyze a greedy HOOI, which updates each factor matrix by selecting, from among the best candidates, the one that is closest to the current iterate. Assuming the existence of a block-nondegenerate limit point, we establish its global convergence through the so-called Kurdyka-Łojasiewicz (KL) property. In addition, we show that if the starting point is sufficiently close to any block-nondegenerate globally optimal solution, the greedy HOOI produces a sequence convergent to a globally optimal solution. Relating the iterate sequence of the original HOOI to that of the greedy HOOI, we then show that the same convergence results hold for the original HOOI, and thus answer the open question in the affirmative.

Keywords: higher-order orthogonality iteration (HOOI), global convergence, Kurdyka-Łojasiewicz (KL) property, greedy algorithm, block coordinate descent
the famous supernova sn1987a in the large magellanic cloud ( lmc ) brought the field of supernova neutrino astrophysics to life .two water cherenkov detectors , kamiokande ii and imb , detected 20 events between them ; two scintillator detectors , baksan and lsd also reported observations .the sparse sn1987a neutrino data were sufficient to confirm the baseline model of gravitational collapse causing type ii sne and to put limits on neutrino properties ( such as a mass limit of around 20 ev . ) to make distinctions between different theoretical models of core collapse and supernova explosions and to extract more information about neutrino properties , we await the more copious neutrino signal which the new generation of large neutrino experiments will detect from the next such event in our galaxy . when the core of a massive star at the end of its life collapses , less than 1% of the gravitational binding energy of the neutron star will be released in the forms of optically visible radiation and the kinetic energy of the expanding remnant .the remainder of the binding energy is radiated in neutrinos , of which % will be electron neutrinos from an initial `` neutronization '' burst and the remaining 99% will be neutrinos from the later cooling reactions , roughly equally distributed among flavors .average neutrino energies are expected to be about 13 - 14 mev for , 14 - 16 mev for , and 20 - 21 mev for all other flavors .the neutrinos are emitted over a total timescale of tens of seconds , with about half emitted during the first 1 - 2 seconds .reference summarizes the expected features of a core collapse neutrino signal ; more recent simulation work can be found in _e.g_. .a core - collapse supernova in our galaxy will bring a wealth of scientific information .the neutrino signal will provide information about the properties of neutrinos themselves and astrophysicists will learn about the nature of the core collapse .one unique feature of the neutrino signal is that it is _ prompt _ neutrinos emerge on a timescale of tens of seconds , while the first electromagnetic signal may be hours or days after the stellar collapse .therefore , neutrino observation can provide an _ early alert _ that could allow astronomers a chance to make unprecedented observations of the very early turn - on of the supernova light curve ; even observations of sne as young as a few days are rare for extra - galactic supernovae .the environment immediately surrounding the progenitor star is probed by the initial stages of the supernova .for example , any effects of a close binary companion upon the blast would occur very soon after shock breakout .uv and soft x - ray flashes are predicted at very early times .finally , there may be entirely unexpected effects no supernova has ever been observed very soon after its birth . 
although the neutrino signal will be plentiful in practically all galactic core collapses , it is possible that there will be little or no optical fireworks ( the supernova `` fizzles '' ) ; the nature of any observable remnant would then be very interesting .this paper focuses on the prompt alert which is possible using the neutrino signal .we will describe the technical aspects of the system .section [ overview ] gives an overview of snews , and section [ signal ] briefly covers the expected signal in current detectors .section [ 3ps ] discusses some issues associated with snews .section [ indiv ] introduces the individual experiments monitors .section [ implementation ] covers snews implementation and defines the coincidence conditions and alert scheme .section [ highrate ] describes the results of the `` high - rate '' system test performed in 2001 .section [ alert ] describes the alert to the astronomical community .section [ future ] gives future directions .the final section summarizes .the snews ( supernova early warning system ) collaboration is an international group of experimenters from several supernova neutrino - sensitive experiments .the primary goal of snews is to provide the astronomical community with a prompt alert for a galactic supernova .an additional goal is to optimize global sensitivity to supernova neutrino physics , by such cooperative work as downtime coordination . the idea of a blind central coincidence computer receiving signals from several experiments has been around for some time ( _ e.g. _ . )in addition to the basic early warning advantages of a neutrino detector , there are several benefits from a system involving neutrino signals from two or more different detectors .first , if the supernova is distant and only weak signals are recorded , a coincidence between signals from different detectors effectively increases the sensitivity by allowing reductions in alarm thresholds and allowing one to impose a minimum of ( possibly model - dependent ) expectations on the form of the signal .second , even if a highly sensitive detector such as super - k is online , __ requiring a coincidence among several detectors effectively reduces the `` non - poissonian '' background present for any given detector and enormously increases the confidence in an alert . __ background alarms at widely separated laboratories are highly unlikely to be correlated . without the additional confidence from coincident neutrino observations, it would be very difficult for any individual detector to provide an _ automated _ alert to astronomers .finally , using signals from more than one detector , there is some possibility for determining the direction of the source when a single detector alone can provide no information ( see reference . )unfortunately triangulation is in practice quite difficult to do promptly , and can not point as well as individual detectors .an important question for snews is : how often is a galactic supernova likely to occur ?estimates vary widely , but are typically in the range of about one per 30 years ( _ e.g. _ . 
)this is frequent enough to have a reasonable hope of observing one during the next five or ten years , but rare enough to mean that we must take special care not to miss anything when one occurs .the charter member experiments of snews are super - kamiokande ( super - k ) in japan , the sudbury neutrino observatory ( sno ) in canada and the large volume detector ( lvd ) in italy .representatives from amanda , icecube , kamland , borexino , mini - boone , icarus , omnis , and ligo participate in the snews working group , and we hope will eventually join the active coincidence . there is currently a single coincidence server , hosted by brookhaven national laboratory .we expect that additional machines will be deployed in the future .the bnl computer continuously runs a coincidence server process , which waits for alarm datagrams from the experiments clients , and provides an alert if there is a coincidence within a specified time window ( 10 seconds for normal running . )we have implemented a scheme of `` gold '' and `` silver '' alerts : gold alerts are intended for automated dissemination to the community ; silver alerts will be disseminated among the experimenters , and require human checking . as of this writing , no inter - experiment coincidence , real or accidental , has ever occurred ( except in high rate test mode ) , nor has any core collapse event been detected within the lifetimes of the currently active experiments .there are several classes of detectors capable of observing neutrinos from gravitational collapse .most supernova neutrino detectors are designed primarily for other purposes , _ e.g. _ for proton decay searches , solar and atmospheric neutrino physics , accelerator neutrino oscillation studies , and high energy neutrino source searches ..supernova neutrino detector types and their primary capabilities.[tab : detector_types ] [ cols="^,^,^,^,^,^",options="header " , ] although these somewhat non - stationary data , taken at lowered threshold , do not necessarily imply that rates will also be non - stationary when thresholds are raised and running conditions are normal , one can never be completely sure that individual experiment rates will not increase unexpectedly .this is the motivation for the rate - dependent gold suppression scheme of section [ suppression ] .the coincidence server now has capability for continuous high rate testing , using tagged test alarms in parallel with normal alarms .at the supernova early alert workshop of 1998 , the conclusion from the astronomer working group was that `` the message will spread itself '' and that snews will need to do no more than send out emails to as many astronomers as possible .snews maintains a mailing list of interested parties , including both professionals and amateurs , to be alerted in the case of a coincidence . in an ideal case, the coincidence network provides the astronomical community with an event time and an error box on the sky at which interested observers could point their instruments . in a realistic case ,the size of the error box is dependent on the location of the supernova and the experiments which are online , and may be very large ( and at this time will not be available in the initial alert message . 
)however , members of the mailing list with wide - angle viewing capability ( satellites , small telescopes ) should be able to pinpoint an optical event quickly .although an unknown fraction of galactic supernovae will be obscured by dust , many will be visible to amateurs with modest equipment .regardless of the quality of neutrino pointing available , however , the advance warning alone gives observers of all kinds valuable time to get to their observatories and prepare to gather data as soon as an accurate position is determined .a target of opportunity proposal for the hubble space telescope , `` observing the next nearby supernova '' , aiming to take advantage of early supernova light based on an early warning , was approved for cycle 13 and was operational for cycles 8 through 12 .the large pool of skilled and well - equipped amateur astronomers is also prepared to help locate a nearby supernova .the editors of _ sky & telescope _ magazine have set up a clearinghouse for amateur observers in search for first light ( and a precise optical position as early as possible ) , via their astroalert service .this was started by former editor - in - chief leif robinson , and has the continued support of current editor - in - chief rick fienberg . in collaboration with the american association of variable star observers ,they have developed a set of criteria for evaluating amateur responses to an alert , so that a reliable precise position can be disseminated as early as possible .for instance : there must be at least two consistent reports , demonstrated lack of motion , lack of identification with known asteroid and variable star databases , variability consistent with supernova light curves and , if the information is available , a spectrum consistent with known supernova types . on february 14 2003 , _ sky & telescope _performed a test for amateurs . a transient target ( the asteroid vesta at a near - stationary point in its retrograde loop )was selected , which at the time was about magnitude 6.7 ._ sky & telescope _ issued an alert ( very carefully tagged as a test ) to their mailing list , with a given 13-degree uncertainty radius .they received 83 responses via the web response form , and more by email .the responses were of world - wide distribution , and although many observers experienced poor conditions , six were successful in identifying the target . from this experience , they have suggested refinements to optimize amateur astronomer strategy .a second test is planned soon , and should be a regular occurrence .we maintain two alert mailing lists which will be sent to automatically by the snews coincidence software in the case of an alert .the first is the gold alert list , which includes all astronomers who have signed up , including _ sky & telescope _ and the hst astronomers , and is to be an _ automated _ alert . the second mailing list will be for silver alerts , and is to be sent to neutrino experimenters only .these alerts will be checked out by shiftworkers at their respective experiments before an alert is issued ; each experiment is responsible for making sure the silver alert messages reach shiftworkers .each experimental collaboration defines its own protocol for acting on a snews silver or gold alert . 
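The coincidence and alert logic described above can be condensed into a deliberately simplified sketch: alarms from at least two distinct experiments within the 10-second window trigger an alert, with the gold/silver distinction decided by the alarm quality. The class and function names below are our own, and the real coincidence server additionally applies the per-experiment conditions and the rate-dependent gold suppression discussed elsewhere in this paper, none of which are modeled here.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Alarm:
    experiment: str   # e.g. "LVD", "SNO", "Super-K"
    utc_time: float   # candidate burst time, seconds (UTC)
    quality: str      # "GOOD" or "POSSIBLE"

def evaluate(alarms: List[Alarm], window: float = 10.0) -> Tuple[Optional[str], List[str]]:
    """Toy coincidence check: alarms from at least two distinct experiments within
    `window` seconds form a coincidence; at least two GOOD alarms upgrade it to gold."""
    alarms = sorted(alarms, key=lambda a: a.utc_time)
    for i, first in enumerate(alarms):
        group = [a for a in alarms[i:] if a.utc_time - first.utc_time <= window]
        experiments = sorted({a.experiment for a in group})
        if len(experiments) >= 2:
            n_good = len({a.experiment for a in group if a.quality == "GOOD"})
            return ("GOLD" if n_good >= 2 else "SILVER"), experiments
    return None, []

# example: two GOOD alarms 4 s apart -> gold; a lone alarm -> no alert
print(evaluate([Alarm("LVD", 1000.0, "GOOD"), Alarm("Super-K", 1004.0, "GOOD")]))
print(evaluate([Alarm("SNO", 2000.0, "POSSIBLE")]))
```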
for both silver and gold cases , a message containing the following information : * utc time of the coincidence , * all detectors involved in the coincidence , and * the types of alarms ( good , possible ) for each experiment involved in the coincidence will be automatically sent by the server to the snews subgroup members .the information may also be posted to a restricted snews subgroup page for silver , and a public page for gold . to allow the confirmation of a snews alert as really coming from snews, any alerts will be public key signed using the snews key .this key has the i d # 68df93f7 , and is available on the network of public pgp keyservers such as ` http://pgp.mit.edu/ ` note that there is no restriction on individual experiments making any announcement based on individual observation in the case of absence of a snews alert , silver or gold , or preceding or following any snews alert message . any individual experiment may publicly announce a supposed supernova signal following a dispatched silver alert which has not yet been upgraded to gold . in this case the information that a previous silver alert from the snews server(s ) has been received should be cited .at the time of this writing , silver alerts only between super - k and lvd are activated .we are working towards having the operational mode described in this paper to be activated in the very short term , comprising automated good alarms from super - k and lvd , but automated possible alarms only from sno , such that sno will participate in a gold alert only if at least two other experiments good alarms are present .we also expect snews to incorporate more galactic - supernova - sensitive neutrino detectors over the next few years .in addition , we may expand the network of servers with additional secure sites .in summary , several supernova neutrino detectors are now online . if a stellar core collapse occurs in our galaxy , these detectors will record signals from which a wealth of physical and astrophysical information can be mined .an early alert of a gravitational collapse occurrence is essential to give astronomers the best chance possible of observing the physically interesting and previously poorly observed early turn - on of the supernova light curve .a coincidence of several neutrino experiments is a very powerful technique for reducing `` non - poissonian '' false alarms to the astronomical community , in order to allow a prompt alarm .we have implemented such a system , currently incorporating several running detectors : lvd , sno and super - k .we expect to expand the network in the near future , and move to a more automated mode in the near future .
This paper provides a technical description of the Supernova Early Warning System (SNEWS), an international network of experiments with the goal of providing an early warning of a galactic supernova.
network - based dynamical systems feature agents that communicate via a dynamic graph while acting on the information they receive .these systems have received increasing attention lately because of their versatile use in modeling social and biological systems . typically , they consist of a fixed number of agents , each one located at on the real line .the agents positions evolve as interactions take place along the edges of a dynamical graph that evolves endogenously .the motivation behind the model is to get a better understanding of the dynamics of collective behavior .following , we express the system as a set of coupled stochastic differential equations : where is the magnitude of the noise , are independent wiener processes , and the influence " parameter is a function of the distance between agents and ; in other words , , where is nonnegative ( to create attractive forces ) and compactly supported over a fixed interval ( to keep the range of the forces finite ) .intuitively , the model mediates the competing tension between two opposing forces : the sum in ( [ sde ] ) pulls the agents toward one another while the diffusion term keeps them jiggling in a brownian motion ; the two terms push the system into ordered and disordered states respectively . in the mean field limit , , equation ( [ sde ] )induces a nonlinear fokker - planck equation for the agent density profile : the function is the limit density of , as goes to infinity , where denotes the dirac measure with point mass at . in the classic hegselmann - krause ( _ hk _ ) model , one of the most popular systems in consensus dynamics , each one of the agents moves , at each time step , to the mass center of all the others within a fixed distance .the position of an agent represents its `` opinion '' .if we add noise to this process , we obtain the discrete - time version of ( [ sde ] ) for }\left(y\right) ] in ( [ pde ] ) . for concreteness ,let us denote ] for , and consider the following periodic problem for the _ hk _ system : \\ \rho & = \rho_{0 } & & \text{on}\ u\times\left\ { t=0\right\ } \end{aligned } \end{cases}\label{eq : evo_equation}\end{aligned}\ ] ] where and the initial condition is assumed to be a probability density , i.e. , and .the positive constants , and are fixed with .note that we have to periodically extend outside of in order to make sense of the integral above .the periodicity of , together with eq .( [ eq : evo_equation ] ) , immediately implies the normalization condition for all .[ [ main - results . ] ] main results .+ + + + + + + + + + + + + we establish the global well - posedness of eq .( [ eq : evo_equation ] ) , which entails the existence , uniqueness , nonnegativity and regularity of the solution .in addition , we prove a global stability condition for the uniform solution , representing the state without any clustering of opinions .this gives a sufficient condition involving and for which no consensus can be reached regardless of the initial condition .the paper is organized as follows . in section [ sec : apriori ] , we first derive the aforementioned global stability condition by assuming that a sufficiently smooth solution exists . more precisely , we show that as long as , the uniform solution is unconditionally stable in the sense that exponentially as for any initial data . 
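Before outlining the proof ingredients, it may be useful to see the agent-level system (1) in action. The following sketch integrates the interacting diffusions with the HK influence function φ = 1_{[0,R]} on a periodic domain using the Euler–Maruyama scheme. The 1/N mean-field normalization of the drift, the nearest-image periodic convention, and all parameter values are choices made for the illustration rather than prescriptions taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative parameters (assumed): N agents on the periodic domain [0, 2L)
N, L, R, sigma = 500, 1.0, 0.25, 0.15
dt, n_steps = 0.01, 5000

x = rng.uniform(0.0, 2 * L, size=N)   # initial opinions

for _ in range(n_steps):
    # pairwise displacements x_j - x_i with the nearest periodic image
    diff = x[None, :] - x[:, None]
    diff = (diff + L) % (2 * L) - L
    neighbours = np.abs(diff) <= R                   # HK influence phi = 1_[0, R]
    drift = (neighbours * diff).sum(axis=1) / N      # mean-field 1/N scaling
    x = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)
    x %= 2 * L

# crude diagnostic: a flat histogram indicates the disordered (uniform) state,
# pronounced spikes indicate clusters of opinions
hist, _ = np.histogram(x, bins=40, range=(0.0, 2 * L), density=True)
print("empirical density range:", hist.min(), hist.max())
```

Varying sigma relative to R in this sketch reproduces the qualitative dichotomy analyzed below: sufficiently large noise keeps the empirical density nearly flat, while small noise lets clusters of opinions form and persist.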
an important ingredient inthe proof is a estimate on the solution ( lemma [ lem : l1 ] ) .interestingly , this estimate immediately implies the nonnegativity of the solution while no arguments using maximum principles are required .the latter may not be easy to obtain for nonlinear partial integro - differential equations , such as the one we consider here .we close the section by discussing the physical significance of the stability result and how it relates to other works in the opinion dynamics literature . in section [ sec : existenceuniqueness ] , we prove the global existence and uniqueness of the weak solution to ( [ eq : evo_equation ] ) when the initial data . here , we construct approximate solutions by solving a series of linear parabolic equations obtained from ( [ eq : evo_equation ] ) by replacing with . using energy estimates ,we find that the sequence of solutions forms a cauchy sequence in and we use this strong convergence result to simplify the existence proof . finally , in section[ sec : regularity ] we establish improved regularity properties of the weak solution if for some .this allows us to remove the a priori smoothness assumptions in the stability and positivity results of section [ sec : apriori ] .the main results in this paper are then summarized in theorem [ thm : mainresult ] .[ [ notation . ] ] notation .+ + + + + + + + + as customary in the literature , we often treat ( and other functions on ) not as a function from to , but from ] can be split into two integrals : ^{1/2}\right\vert _ { 2}^{2 } & = & \int_{u}\rho^{2}\chi_{\epsilon}^{\prime\prime}\left(\rho\right)\ , dx\nonumber \\ & = & \int_{u}\rho^{2}\chi_{\epsilon}^{\prime\prime}\left(\rho\right)\mathbf{1}_{\left\ { \left|\rho\right|>\epsilon\right\ } } \ , dx\nonumber \\ & & + \int_{u}\rho^{2}\chi_{\epsilon}^{\prime\prime}\left(\rho\right)\mathbf{1}_{\left\ { \left|\rho\right|\leq\epsilon\right\ } } \ , dx .\label{eq : bef_second_one } \end{aligned}\ ] ] for , by construction , and hence the first integral above is zero .the second integral is estimated as : therefore , by ( [ eq : gtilda ] , [ eq : bef_second_one ] , [ eq : second_one ] ) , eq . ( [ eq : l1_beforelim ] ) becomes ^{2}.\ ] ] applying grnwall s inequality , we get .\label{eq : l1_gronwall } \end{aligned}\ ] ] since is continuous , the integral in the exponential is finite . therefore , taking the limit yields for every , as required .incidentally , lemma [ lem : l1 ] establishes the nonnegativity of .this is important because represents the density of opinions of individuals and , as such , is necessarily nonnegative at all times .it is interesting that a estimate suffices to show nonnegativity and no arguments from maximum principles are required .[ cor : positivity ] if is a solution of ( [ eq : evo_equation ] ) , with and , then and in for all . since ,the normalization condition ( [ eq : normalization_cond ] ) is satisfied for . applying lemma [ lem : l1 ] , we have hence , . but these equations imply that , and hence , a.e . in . by continuity , in for all . 
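Before turning the estimate of lemma [lem:l1] into the stability bound, a numerical experiment can make the mechanism concrete. The sketch below advances a perturbed uniform profile with an explicit finite-difference scheme for the periodic equation ρ_t = (σ²/2) ρ_xx − ∂_x[ρ u], written here with the attractive drift u(x) = ∫_{x−R}^{x+R} (y − x) ρ(y) dy; the sign convention for G_ρ, the discretization, and all parameter values are assumptions of the illustration and are not part of the analysis.

```python
import numpy as np

# explicit finite-difference sketch (assumed parameters and scheme) for
#   rho_t = (sigma^2 / 2) rho_xx - d/dx [ rho(x) * u(x) ],
#   u(x)  = int_{x-R}^{x+R} (y - x) rho(y) dy   (periodic extension),
# starting from a perturbed uniform profile on [0, 2L)
L_dom, R, sigma = 1.0, 0.25, 0.6
M, dt, n_steps = 200, 1e-4, 20000
dx = 2 * L_dom / M
x = dx * np.arange(M)

rho = 1.0 / (2 * L_dom) + 0.1 * np.cos(np.pi * x / L_dom)
rho /= rho.sum() * dx                    # normalize total mass to 1

offsets = np.arange(-int(R / dx), int(R / dx) + 1)

for _ in range(n_steps):
    u = np.zeros(M)
    for k in offsets:                    # nonlocal drift as a periodic Riemann sum
        u += (k * dx) * np.roll(rho, -k)
    u *= dx
    flux = rho * u
    lap = (np.roll(rho, -1) - 2 * rho + np.roll(rho, 1)) / dx**2
    div = (np.roll(flux, -1) - np.roll(flux, 1)) / (2 * dx)
    rho += dt * (0.5 * sigma**2 * lap - div)

print("max deviation from the uniform state:", np.abs(rho - 0.5 / L_dom).max())
```

With noise this large relative to R, the perturbation should decay toward the uniform profile, in line with the stability analysis carried out next; weakening the noise instead allows the perturbation to grow into a cluster.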
with lemma [ lem : l1 ], it follows from ( [ eq : g_infty_est ] ) that next , we also have consequently , with estimates ( [ eq : est_1 ] ) and ( [ eq : est_2 ] ) , ( [ eq : energy_est1 ] ) becomes hence , we have by construction , for all .thus , satisfies the poincar inequality for the interval $ ] , the optimal poincar constant is .therefore , ( [ eq : energy_est2 ] ) becomes but thus we obtain the integral form of ( [ eq : energy_est2 ] ) : in particular , if , the constant factor in the exponential is negative , therefore as long as .we summarize these results : let be a solution of ( [ eq : evo_equation ] ) with , , and .if , then in exponentially as .[ thm : apriori_stability ] the noisy _ hk _ model generally exhibits two types of steady - states . the first is a spatially uniform steady - state , i.e. , is constant .this represents the case where individuals have uniformly distributed opinions , without any local or global consensus .the second involves one or more clusters of individuals having similar opinions , in which case is a multi - modal profile . which of these two steady - states appear in the long - time limitdepends on the interaction radius and noise , as well as the initial profile . in this aspect ,theorem [ thm : apriori_stability ] gives a sufficient condition for the spatially uniform solution to be the globally attractive state , irrespective of the initial profile .in other words , as long as , any initial profile converges to the spatially uniform state .in particular , clustered profiles do not even have local stability .this immediately indicates a forbidden zone for consensus : when the volatility of one s opinion is too large compared to the interaction radius , there can be no clustering of opinions regardless of the initial opinion distribution .it should be noted that this is the first result regarding the nonlinear stability of the _ hk _ system . on the other hand , it is straightforward to perform linear stability analysis of eq .( [ eq : evo_equation ] ) at the uniform solution to derive a linear stability condition for the uniform solution .the combination of these two results indicate a region where it is possible to have both clustered and uniform states as locally stable solutions ( see figure [ fig : pd ] ) . , above which the spatially uniform solution ( ) is unconditionally stable , i.e. no clustering of opinions is possible .the bottom ( blue ) curve is obtained from linear stability analysis around the spatially uniform solution , and has the form . below this curve ,the uniform solution loses linear stability and only clustered solutions are permitted . between these two curvesis the region for which both clustered and uniform solutions can be stable with respect to small perturbations . ]our discussion so far has assumed the existence of a solution to ( [ eq : evo_equation ] ) . in this section ,we prove the existence and uniqueness of the weak solution by defining a sequence of linear parabolic equations , whose solutions converge strongly to a function that solves a weak formulation of eq .( [ eq : evo_equation ] ) . to begin with ,let and consider a sequence of linear parabolic equations \\ \rho_{n } & = \ , \rho_{0 } & & \text{on}\ u\times\left\ { t=0\right\ } \end{aligned } \end{cases}\label{eq : sequence_evolution}\end{aligned}\ ] ] for , with for all . for convenience , we assume that the initial condition satisfies , and . 
the smoothness condition will be relaxed later ( see theorem [ thm : existnece-1 ] at the end of this section ) .consider the case .since and both and are bounded , by standard results on linear parabolic evolution equations , there exists a unique satisfying ( [ eq : sequence_evolution ] ) for .iterating this for implies that there exists a sequence of smooth functions satisfying ( [ eq : sequence_evolution ] ) .next , we establish some uniform energy estimates on .let and suppose satisfy ( [ eq : sequence_evolution ] ) with .then , for all and .[ prop : unif_l1_est ] since we know that for all and all , we can proceed exactly as in the proof of lemma [ lem : l1 ] . in this case , instead of ( [ eq : l1_gronwall ] ) we have .\end{aligned}\ ] ] since is smooth , the integral in the exponential is finite , hence we take the limit to obtain for all .let and suppose satisfy ( [ eq : sequence_evolution ] ) with , and .then , and in for all and for all .[ cor : positivity_n ] since the functions are all periodic , we have ; hence the proof is identical to corollary [ cor : positivity ] .let and suppose satisfy ( [ eq : sequence_evolution ] ) with , and .then , there exists a constant such that [ prop : unif_l2_est ] we proceed as in section [ sec : apriori ] by multiplying ( [ eq : sequence_evolution ] ) by and integrating by parts .this gives us using proposition [ prop : unif_l1_est ] and corollary [ cor : positivity_n ] we have and hence ( [ eq : unif_l2_main1 ] ) becomes which implies , by integration , that for all , and let and suppose satisfy ( [ eq : sequence_evolution ] ) with , and .then , there exists a constant such that [ prop : unif_l2_est_higher ] multiplying equation ( [ eq : sequence_evolution ] ) by and integrating by parts over , it follows from cauchy - schwarz , young s inequality , and ( [ grhoub ] ) that now , \nonumber \\ & = & -r\left(\rho_{n-1}\left(x+r , t\right)+\rho_{n-1}\left(x - r , t\right)\right ) \nonumber \\ & & + \int_{x - r}^{x+r}\rho_{n-1}\left(y , t\right)dy .\label{eq : g_diff_first } \end{aligned}\ ] ] by ( [ rhoncub ] ) and morrey s inequality , it follows that ( [ eq : regularity_exp_1 ] ) becomes integrating over , we have applying the estimates in proposition [ prop : unif_l2_est ] , we find that with the uniform estimates above , we can now show that converges strongly to a limit .let and suppose that satisfies ( [ eq : sequence_evolution ] ) with , and .then there exists such that in .[ lem : cauchy ] we set for . for ,the evolution equation for reads let . 
multiplying the equation above by ( see definition ( [ eq : chi_def ] ) ) and integrating by parts yields ^{1/2}\phi_{n_{x}}\left(t\right)\right\vert _ { 2}^{2}\nonumber \\ & \leq & \int_{u } \left\vert\chi^{\prime\prime}_{\epsilon}\left(\phi_n\right ) \phi_{n_{x}}\phi_{n } g_{\rho_{n-1}}\right\vert dx + \int_{u}\left\vert\chi^{\prime}_{\epsilon}\left(\phi_n\right ) \left(\rho_{n-1}g_{\phi_{n-1}}\right)_x\right\vert dx\nonumber \\ & \leq & \frac{\sigma^{2}}{2 } \left\vert \left[\chi^{\prime\prime}_{\epsilon}\left(\phi_n\left(t\right)\right)\right]^{1/2}\phi_{n_{x}}\left(t\right)\right\vert _ { 2}^{2 } \nonumber \\ & & + \frac{1}{2\sigma^2 }\left\vert g_{\rho_{n-1}}\left(t\right)\right\vert^{2}_{\infty } \left\vert \left[\chi^{\prime\prime}_{\epsilon}\left(\phi_n\left(t\right)\right)\right]^{1/2 } \phi_{n}\left(t\right)\right\vert^2_{2 } \nonumber \\ & & + \int_{u}\left\vert\chi^{\prime}_{\epsilon}\left(\phi_n\right ) \left(\rho_{n-1}g_{\phi_{n-1}}\right)_x\right\vert dx .\label{eq : phi_n_main1 } \end{aligned}\ ] ] by corollary [ cor : positivity_n ] , . also , as in ( [ eq : second_one ] ) from the proof of lemma [ lem : l1 ] , we have ^{1/2}\phi_{n}\left(t\right)\right\vert _ { 2}^{2 } \leq c \epsilon.\ ] ] to estimate the last integral in ( [ eq : phi_n_main1 ] ) , observe that , hence but we know that and that ( see expression ( [ eq : g_diff_first ] ) ). moreover , morrey s inequality implies .hence , it follows that , as tends to , ( [ eq : phi_n_main1 ] ) becomes where in the last line we used proposition [ prop : unif_l2_est_higher ] and the shorthand now , for we define by ( [ eq : phi_n_main2 ] ) and corollary [ cor : positivity_n ] , moreover , the s coincide at , so .thus by grnwall s inequality , uniformly in and .furthermore , for each , is a bounded monotone sequence in , hence there exists such that , pointwise in . by the monotone convergence theorem , result immediately implies that is a cauchy sequence in .indeed , for we can pick such that .hence , for all , therefore , is a cauchy sequence and there exists such that in .note that we can extract from a subsequence that converges weakly in smaller spaces .we denote by the dual space of .since periodic boundary conditions allows integration by parts without extra terms , most characterizations of carries over to .we have , with , and the estimate moreover , there exists a subsequence such that and [ lem : weakconv ] from proposition [ prop : unif_l2_est ] , we have next , observe that from the evolution equation of , we have hence , where in the last step we used proposition [ prop : unif_l2_est ] .therefore , we have the uniform estimate hence , , with and they satisfy the same estimate ( [ eq : unif_3_ineq ] ) .furthermore , there exists such that following , we can deduce from lemma [ lem : weakconv ] the following result : suppose with , then up to a set of measure zero .further , the mapping is absolutely continuous , with for a.e . . here, denotes the pairing between and .[ thm : evans ] the proof is identical to the proof in evans section 5.9 theorem 3 .the only difference here is that we are considering and , instead of and . since periodic conditionsstill guarantees integration by parts without extra terms , all proofs follow through . 
now , we are ready to prove the existence of a weak solution to equation ( [ eq : evo_equation ] ) .we say that with is a _ weak solution _ of equation ( [ eq : evo_equation ] ) if for every , and .note that since ( theorem [ thm : evans ] ) , the last condition makes sense as an initial condition .[ thm : existnece](existence and uniqueness ) let , and .then , there exists a unique weak solution , with , to equation ( [ eq : evo_equation ] ) with the estimate for each , we multiply equation ( [ eq : sequence_evolution ] ) ( with ) by and integrate over to obtain there are no boundary terms due to periodic boundary conditions .now , we know from lemma [ lem : weakconv ] that in .moreover , is uniformly bounded so that .thus , also , but .hence , by the strong convergence result in lemma [ lem : cauchy ] , we have and thus combining ( [ eq : conv0 ] ) , ( [ eq : conv1 ] ) and ( [ eq : conv2 ] ) , we have by the weak convergence results established in lemma [ lem : weakconv ] , we also have putting together ( [ eq : conv_1 ] ) and ( [ eq : conv_2 ] ) , we obtain in the limit , for every . finally , we have to show that .pick some with .then , we have from ( [ eq : weak_form ] ) that similarly , we also have where we have used the fact that for all . taking the limit and comparing ( [ eq : comp_1 ] ) and ( [ eq : comp_2 ] ) , we have since is arbitrary , we conclude that .this completes the proof of the existence of a weak solution .now , we prove its uniqueness .let and be weak solutions to ( [ eq : evo_equation ] ) and set .then , for every , we have adding and subtracting , we obtain but , and so that now , set , and use theorem [ thm : evans ] , we have since this holds for all , we must have and hence for a.e . . but and the s are continuous in time , we have for all . finally , the energy estimate is from lemma [ lem : weakconv ] . the strong convergence result ( lemma [ lem : cauchy ] ) is important here because without it , we could not have concluded that expression ( [ eq : strongconv ] ) converges to , because it involves a different subsequence .strong convergence ensures that all subsequences converge in . throughout this section we assumed that the initial condition is smooth , i.e. .we can in fact relax this condition to by mollifying the initial data .[ thm : existnece-1](existence and uniqueness with relaxed regularity assumption on the initial condition ) let , and .then , there exists a unique weak solution , with to equation ( [ eq : evo_equation ] ) with the estimate let and consider the modified problem \\ \rho_{n}^{\epsilon } & = \rho_{0}^{\epsilon } & & \text{on}\ u\times\left\ { t=0\right\ } \end{aligned } \end{cases}\label{eq : sequence_evolution_mollify } \end{aligned}\ ] ] where and . here, is a standard positive mollifier with compact support on and =1 . with mollification , is now smooth and we can apply theorem [ thm : existnece ] to conclude that there exists a unique weak solution , with to equation ( [ eq : sequence_evolution_mollify ] ) with the estimate but for all , we have .hence , there exists , with , satisfying and a sequence , with , such that as .we now show that is in fact a weak solution to ( [ eq : evo_equation ] ) . 
since each solves the weak formulation of ( [ eq : sequence_evolution_mollify ] ) ( albeit with different initial data ), we have using ( [ eq : weak_conv_mollify ] ) , we can replace by in the first two integrals above in the limit .moreover , as in ( [ eq : conv0 ] ) , we write the last integral as since , we have and hence next , we can write dxdt \nonumber \\ & = & \int_{0}^{t } \int_{u } h \left(\rho^{\epsilon_k}-\rho\right ) dy dt , \label{eq : h } \end{aligned}\ ] ] where we have defined clearly , so that in particular , and from ( [ eq : h ] ) we obtain thus , we have shown that satisfies to show that , we again take with . since uniformly , we have ( c.f . expressions ( [ eq : comp_1 ] ) and ( [ eq : comp_2 ] ) ) since is arbitrary , we have .the uniqueness follows from exactly the same argument in the proof of theorem [ thm : existnece ] and we omit writing it again here .in this section , we prove improved regularity of the weak solution to ( [ eq : evo_equation ] ) .this allows us to put the results in section [ sec : apriori ] on a rigorous footing . as in the previous section, we always mollify by so that the resulting evolution equations ( [ eq : sequence_evolution ] ) admit smooth solutions .this allows us to differentiate the equation as many times as required , and we take the limit at the end . for simplicity of notation, we drop the superscripts on and implicitly assume that we perform the limit at the end .first , we prove a useful estimate . [prop : induction_estimate]let .then for we have the estimate we have where denotes the derivative with respect to . applying the leibniz rule , we have but , +\int_{x - r}^{x+r}v\left(y\right)dy & i=1\\ -r\left[v^{\left(i-1\right)}\left(x+r\right)+v^{\left(i-1\right)}\left(x - r\right)\right]\\ + v^{\left(i-2\right)}\left(x+r\right)-v^{\left(i-2\right)}\left(x - r\right ) & i\geq2 \end{cases}\label{eq : diff_g_high_order}\ ] ] hence we have the bound for , and for , keeping only the highest sobolev norms , we have now , we assume that for some and prove the corresponding regularity of .[ thm : x_regular](improved regularity ) let and suppose with and . then the unique solution to ( [ eq : evo_equation ] ) satisfies with the estimate where we prove the statements by proving uniform estimates on by induction on . the base case is provided in proposition [ prop : unif_l2_est ] . the case is proposition [ prop : unif_l2_est_higher ] .suppose for some , for all .we differentiate equation ( [ eq : sequence_evolution ] ) times with respect to , multiply it by and integrate over to get using proposition [ prop : induction_estimate ] with and , we have integrating over time , we get ^{4}\nonumber \\ & \leq & c\left(\rho_{0};k+1,t\right)^{2 } \end{aligned}\ ] ] this completes the induction . 
taking limits , we obtain with the estimate sofar we have only considered regularity in space .the same can also be done in the time domain .[ thm : reg_xt](improved regularity ) let and suppose with and .then , \(i ) for every , the unique solution to ( [ eq : evo_equation ] ) satisfies with the estimate where \(ii ) moreover , with the estimate let us prove that for all , where we have defined this is done by induction on up to .the case is theorem [ thm : x_regular ] .suppose we have for some the estimate ( [ eq : inductive_hyp-1 ] ) .differentiating equation ( [ eq : sequence_evolution ] ) times with respect to and using the leibniz rule , we have where we used the shorthand .thus , we have using proposition [ prop : induction_estimate ] with and , we have integrating over time then gives since , we can apply the inductive hypothesis ( [ eq : inductive_hyp-1 ] ) to conclude that similarly , this completes the induction on up to .putting into ( [ eq : inductive_hyp-1 ] ) and taking limits proves part ( i ) . to prove the second part , notice that hence , taking limits then proves part ( ii ) .[ cor : last_reg ] let and with and .then the unique solution to ( [ eq : evo_equation ] ) satisfies after possibly being redefined on a set of measure zero . by theorem [ thm : x_regular ] , , i.e. . hence there exists a version of with , so that in particular , .next , using theorem [ thm : reg_xt ] , we have and , hence by theorem [ thm : evans ] there is a version of so that .hence we have up to a set of measure zero .this result allows us to restate the results in section [ sec : apriori ] without the a priori smoothness assumption .we summarize the main results of this paper in the following : [ thm : mainresult ] let with and .then , there exists a unique weak solution to equation ( [ eq : evo_equation ] ) , with \(i ) ( regularity ) \(ii ) ( nonnegativity ) for all .\(iii ) ( stability ) furthermore , if , then in exponentially as . existence and uniqueness follow from theorem [ thm : existnece-1 ] .( i ) follows from corollary [ cor : last_reg ] .having established ( i ) , ( ii ) and ( iii ) then follows from corollary [ cor : positivity ] and theorem [ thm : apriori_stability ] respectively .this work was completed while q. jiu was visiting the department of mathematics at princeton university .the authors are grateful for many discussions with prof .
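as a purely illustrative complement to theorem [ thm : mainresult ] , the construction used in the existence proof ( solve a linear parabolic problem for each iterate and pass to the limit ) can be mimicked numerically . the sketch below assumes the hegselmann - krause interaction g_rho ( x ) = int_{x - r}^{x + r } ( y - x ) rho ( y ) dy on the periodic domain u = [ 0 , 1 ) and an explicit finite - difference scheme ; the discretisation , the parameter values and the sign convention of the drift are our choices and are not taken from the paper .

```python
# minimal numerical sketch (not the paper's code): run the linearized iteration
#   d/dt rho_n = (sigma^2/2) (rho_n)_xx - (rho_n * G_{rho_{n-1}})_x
# on a periodic grid and watch mass conservation, (approximate) nonnegativity and
# the contraction of consecutive iterates. all parameter values are invented.
import numpy as np

N, sigma, r, T = 100, 0.3, 0.2, 0.2              # grid points, noise, radius, horizon
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / sigma**2                      # explicit-scheme stability restriction
nsteps = int(T / dt) + 1

def hk_drift(rho):
    """assumed interaction G_rho(x_i) = sum_{|y-x_i|<=r} (y - x_i) rho(y) dx (periodic)."""
    out = np.zeros(N)
    for i in range(N):
        d = (x - x[i] + 0.5) % 1.0 - 0.5         # signed periodic distance y - x_i
        m = np.abs(d) <= r
        out[i] = np.sum(d[m] * rho[m]) * dx
    return out

def linear_solve(rho_init, prev_traj):
    """solve the *linear* equation whose drift is built from the previous iterate."""
    traj = np.empty_like(prev_traj)
    rho = rho_init.copy()
    traj[0] = rho
    for k in range(1, nsteps):
        flux = rho * hk_drift(prev_traj[k - 1])
        dflux = (np.roll(flux, -1) - np.roll(flux, 1)) / (2.0 * dx)
        lap = (np.roll(rho, -1) - 2.0 * rho + np.roll(rho, 1)) / dx**2
        rho = rho + dt * (0.5 * sigma**2 * lap - dflux)
        traj[k] = rho
    return traj

rho0 = 1.0 + 0.5 * np.cos(2.0 * np.pi * x)       # smooth, positive, mass-one initial datum
prev = np.tile(rho0, (nsteps, 1))                # iterate 0: frozen-in-time guess
for n in range(1, 6):
    cur = linear_solve(rho0, prev)
    print(f"iterate {n}: mass={cur[-1].sum()*dx:.4f}  min={cur[-1].min():.4f}  "
          f"sup-change={np.abs(cur - prev).max():.2e}")
    prev = cur
```

the printed diagnostics track the total mass ( which the scheme conserves ) , the minimum of the iterate ( nonnegativity holds only approximately for this explicit scheme ) , and the sup - distance between consecutive iterates , which is the quantity controlled by the cauchy argument of lemma [ lem : cauchy ] .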
this paper establishes the global well - posedness of the nonlinear fokker - planck equation for a noisy version of the hegselmann - krause model . the equation captures the mean - field behavior of a classic multiagent system for opinion dynamics . we prove the global existence , uniqueness , nonnegativity and regularity of the weak solution . we also exhibit a global stability condition , which delineates a forbidden region for consensus formation . this is the first nonlinear stability result derived for the hegselmann - krause model .
keywords : hegselmann - krause model , nonlinear fokker - planck equation , well - posedness , global stability .
msc classification : 35q70 , 35q91 .
random numbers are important for an array of applications from encryption and authentication systems , to monte carlo simulations for molecular dynamics , nuclear reactors , and others . as a result , a variety of classical methods ( computational pseudo - random number generators , sampling stochastic physical processes , etc . ) to generate random number sequences have been developed .an attendant host of tests to certify that a given data sequence is `` random '' has been also been created . while pseudo - random numbers are useful for many of these applications , including simulations and encryption with suitably high quality sources , their inherent determinism means that any encryption or authentication scheme is in principle breakable with sufficient computational power .this principle applies to any deterministic system , including processes described by classical physics . on the other hand ,the only nondeterministic physical theory with experimentally accessible applications is quantum mechanics .the additional security provided by non determinism is a requirement for quantum key distribution , for instance , whose security proofs often rely on the concept of true , nondeterministic randomness in order to guarantee successful secret key sharing .thus , a wide array of so - called quantum random number generators ( qrng ) have been developed . from radioactive decay to quantum optical techniques , a host of methods involving photon arrival time and vacuum noise measurements have been demonstrated . despite the prevalence of qrngs and their acknowledged need , many implementations use extractors ( such as hashes ) to remove large amounts of bias computationally , exposing a potential weakness in their physical implementations .for instance , if an adversary is able to computationally reverse the extractor function that a given qrng implements in order to achieve random number uniformity and the underlying ( `` physical '' ) distribution is strongly biased then he or she will have a best - guess strategy against the qrng device .therefore , one s ability to detect and remove bias before applying an extractor function improves the qrng s security .one of the major sources of bias in qrngs , aside from environmental noise , is the lack of knowledge of precise values of the qrng s physical parameters .the best one may do is to estimate the parameters statistically .but because the estimates are statistical they are intrinsically noisy , and thus assigning a single value to a parameter can lead to errors and bias .nevertheless , parameter estimator errors are usually ignored in qrng design and simple point estimators are used . 
here ,we show that using point estimators may introduce possible binning bias .we argue that using a bayesian statistical inference method removes this type of bias and propose a binning scheme that extracts the optimum number of bits possible for a given entropy from a given physical random number distribution .when used as a diagnostic for qrngs in combination with maximum likelihood estimators ( mle ) , uniform distributions can be generated from sources of quantum randomness .using bayesian hypothesis updating techniques , our scheme allows for a test of the quantum model that produced a given set of numbers , potentially allowing for a fast , on - line quantum test of randomness .this technique has applications to high bit rate qrngs which need testing and verification to ensure the device remains bias - free during use .let be a continuous random variable with probability density function ( _ pdf _ ) , where is a fixed- ( but unknown- ) value parameter . the particular form and parametric dependence of determined by the experimental setup at hand .our goal in this section is to introduce a typical problem of physical random number generation that can be formulated as follows : provided independent samples of , are measured in an experiment , convert , if possible , each measurement outcome into a discrete random variable with the probability mass function ( _ pmf _ ) and corresponding domain .a uniform distribution is often important in applications and here we will also concentrate on the case of . then the problem essentially reduces to constructing a surjection from the set onto the set . traditionally , the problem is solved by dividing the domain of , , into mutually non - intersecting bins such that . when bins are selected such that the probability of the random variable to fall into the -th bin is then the surjection can be constructed by following a simple rule : if a measurement result for some then we assign . of coursethis mapping works only if the value of the model parameter is known .since it is usually not the case in the majority of experimental situations , the first order of business is to find a good estimate of the value of . in many cases ,the number of possible ways to construct an estimator that provides an unbiased estimate of is infinite .moreover , it is not always possible to find an estimator that has minimal uncertainty , and often one is forced to choose one from a set of almost optimal candidates . in practicethe maximum likelihood estimator ( mle ) is a common choice .given a set of independent samples of , , we can introduce the _ likelihood _ function , the likelihood indicates which values of are more likely given measurement data .we can also compute , at least numerically , the value of that maximizes , provided the likelihood function is convex .the resulting estimator is mle , i.e. .using as the `` true '' parameter value for binning purposes in eq.([eq : binprobability1 ] ) might at first appear as a reasonable choice , and this approach is a mainstay in qrng design .but what happens if instead of one uses some other estimate that differs from only slightly in the value of the likelihood , i.e. ?choosing over will have an effect on the size of bins generated via eq.([eq : binprobability1 ] ) .we illustrate this situation in fig .[ fig : binningexample1 ] , where the random variable follows gamma distribution ( i.e. 
) and we are interested in converting each measurement outcome into a uniformly distributed discrete random variable k that can take on values \{0,1,2,3}. we fit the same measurement data using two slightly different values of the parameter .the red solid line represents the fit with and the blue solid line has .the vertical dashed blue ( red ) lines represent bins calculated using eq.([eq : binprobability ] ) with and ( ) .the green circle is a particular measurement outcome that we would like to assign a discrete value to . according to our previous discussion , and if we use values and respectively .now imagine that and are such that the likelihood function does not provide a reliable differentiation between them , i.e. .which value of , if any , should we then adopt ?there are four possible options : * choose when is the true estimate ( ) .* choose when is the true estimate ( ) .* choose when is the true estimate ( ) .* choose when is the true estimate ( ) .the first two choices are trivial since they obviously result in a uniform pmf . the last two choices , however , generate a bias that distorts the uniformity of . to see that we calculate the probability of occupying the -th bin provided that is chosen when is the true estimate , ^{\frac{\theta_{1}}{\theta_{2 } } } - [ \frac{n - i-1}{n}]^{\frac{\theta_{1}}{\theta_{2}}},\label{eq : binprobunder}\end{aligned}\ ] ] where , , and .we notice that , by definition , .similarly , if we choose when is the true estimate then the -th bin probability reads , ^{\frac{\theta_{2}}{\theta_{1 } } } - [ \frac{n - i-1}{n}]^{\frac{\theta_{2}}{\theta_{1 } } } , \label{eq : binprobover}\end{aligned}\ ] ] where . finally , the plot of pmfs and in fig .[ fig : binprobability ] , calculated using eqs.([eq : binprobunder ] ) and ( [ eq : binprobover ] ) respectively for , illustrates the effect of parameter under(over)-estimation on the uniformity of the random numbers generated using the continuous distribution binning method .the horizontal axis represents the bin number where a measurement outcome is placed as the result of binning .the vertical axis is the probability for different values of to occur .ideally , if the value of was known exactly , the probability of or would be the same at .this situation is represented by the solid blue line .when the value of is overestimated , the corresponding pmf in eq.([eq : binprobover ] ) depicted by green crosses , exhibits a bias towards placing measurement outcomes into the first two bins . in a similar fashion , in eq.([eq : binprobunder ] ) , represented by red circles , corresponds to the situation when the parameter is underestimated and demonstrates bias towards . to quantify the amount of introduced bias we compute values of kullback - leibler ( kl ) divergence and between the bin pmf ( ) and the ideal uniform pmf respectively . by definition ,kl divergence measures the information lost when the uniform pmf is used to approximate or .we find that bits and bits .this example shows that discrete random number generation procedures relying on binning a continuous probability distribution with a parametric dependence potentially introduces bias .this happens because the point parameter estimation approach is prone to over(under)-estimating the true value of the parameter .hence , the question arises : is there a binning method that does not introduce bias ?the short answer is yes , and such a method will be introduced in section [ sec : bayesianinference ] . 
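the size of this effect is easy to reproduce . the sketch below uses an exponential model ( the simplest member of the gamma family used in the figure ) with an assumed true scale and an assumed mis - estimated scale : bins built to be equiprobable under the estimate are evaluated under the truth , and the kullback - leibler divergence to the uniform pmf is reported in bits . the specific numbers are ours and are chosen only for illustration .

```python
# illustrative sketch: bin probabilities and KL divergence when n = 4 equiprobable
# bins are built with a mis-estimated scale (assumed exponential model).
import numpy as np

def bin_probs(theta_true, theta_est, n_bins):
    """P(outcome in bin i) when bin edges are equiprobable under theta_est but the
    data actually follow an exponential law with scale theta_true."""
    q = 1.0 - np.arange(n_bins + 1) / n_bins          # 1 - i/n at the edges
    with np.errstate(divide="ignore"):
        edges = -theta_est * np.log(q)                # last edge is +infinity
    cdf_true = 1.0 - np.exp(-edges / theta_true)      # true CDF at those edges
    return np.diff(cdf_true)

def kl_to_uniform(p):
    """Kullback-Leibler divergence D(p || uniform), in bits."""
    return float(np.sum(p * np.log2(p * len(p))))

theta_true, n = 1.0, 4
for theta_est in (1.15, 1.0 / 1.15):                  # over- and under-estimated scale
    p = bin_probs(theta_true, theta_est, n)
    print(f"theta_est = {theta_est:.3f}: bin pmf = {np.round(p, 4)}, "
          f"KL = {kl_to_uniform(p):.5f} bits")
```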
in the next section , a slightly different approach to binning is shown in order to motivate the discussion .a measurement outcome does not depend on the value of the pdf parameter . however , the probability of the outcome does .as we have already seen , this means that the size of the bins also depends on , which makes binning procedure problematic .the reverse situation would be more practical , in which the bin size is fixed ( independent of ) but the measurement outcome depends on the pdf parameter .of course , this does not remedy the problem of bias discussed earlier , but it will be useful in formulating a solution in the next section . for a given fixed value , the probability that the continuous random variable is less than reads , where we have assumed that .by definition ] interval provided is a continuous function of .the proof is straightforward and can be found elsewhere . on the other hand , if the value of is fixed , e.g. , and the value of is unknown then is clearly a function of with the range ] interval into uniform bins , each of the size , then for every measurement outcome a discrete random number can be generated by finding such that .this is exactly what we were looking for . by replacing the random variable with using the integral transform in eq.([eq : inttrans ] )we switched from having bins that explicitly depended on the model parameter to having constant bin size .the parametric dependence is now shifted to the random variable that we bin , i.e. , and now we need to figure out a way to assign a value to which does not create bias .we could try to fix the value of by using an estimate of ( e.g. mle ) as was done previously in section [ sec : directbinning ] . however , this approach is inherently flawed because any finite data sample estimator though it can be very close to the true parameter value will over(under)-estimate the true parameter value . however , the concept of likelihood , or , more precisely , the concept of treating the distribution parameter as an unknown ( but not random ) variable given a set of measurements can be inverted using bayesian inference to compute the probability of occupying a given bin .indeed , the bayesian approach treats as a quantity whose variation is described by a probability distribution usually referred to as the _prior_. the prior is a subjective distribution determined by experimenter s personal beliefs and knowledge about the system of interest prior to any observations on the system .once is formulated , an observation on the system is made .the prior is then updated with the result of the observation using bayes rule and the next measurement is taken with the updated prior , often called _ posterior _ , as the new prior . if the sampling distribution , i.e. the distribution we draw measurement outcomes from , is ( the pdf to observe as a result of our measurement , given the parameter value ) and the measurement result is then the posterior distribution is given by where is the marginal distribution of : the posterior distribution can be subjectively interpreted ( since it does depend on the choice of the prior ) as a conditional distribution ( conditioned on the observed sample ) for the parameter . 
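a minimal numerical illustration of the update rule above : for a one - parameter model ( here , for concreteness , an exponential law with an unknown rate ) a flat prior on a grid of parameter values is multiplied by the likelihood of each new observation and renormalised , and the posterior concentrates around the value that generated the data . the model , the grid and the sample size are our choices , made only for illustration .

```python
# sketch of Bayes-rule updating on a parameter grid (assumed exponential model).
import numpy as np

rng = np.random.default_rng(1)
lam_true = 2.0
data = rng.exponential(scale=1.0 / lam_true, size=500)     # simulated measurements

lam = np.linspace(0.05, 6.0, 1200)                         # grid of candidate rates
dlam = lam[1] - lam[0]
post = np.ones_like(lam)                                   # flat (non-informative) prior
post /= post.sum() * dlam

for x in data:                                             # sequential Bayes updates
    post *= lam * np.exp(-lam * x)                         # likelihood f(x | lambda)
    post /= post.sum() * dlam                              # renormalize (marginal m(x))

mean = (lam * post).sum() * dlam
sd = np.sqrt(((lam - mean) ** 2 * post).sum() * dlam)
print(f"posterior mean rate = {mean:.3f} +/- {sd:.3f}   (true rate = {lam_true})")
```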
on the other hand , we know that is a function of given the measurement outcome .therefore , can also be interpreted as a random variable on ] ) .it is equivalent to the probability that the random variable falls into the interval given by , this means that we now can assign a bin to a measurement outcome using a simple acceptance / rejection test : we accept into the -th bin if and reject in the -th bin otherwise . here is the user - defined acceptance probability .the binning bias can be completely eliminated by setting the value of high ( ) .this means that only the measurement outcomes that have more than 95% of their distribution function localized within a certain bin will be accepted and converted into a discrete random number .all other measurements will be rejected . on the other hand ,if is set too low , say , then less measurements will be rejected . however, this may lead to conflicting situations when a measurement outcome could be placed into two or more different bins which , in turn , may lead to binning bias .let us consider an example depicted in figure [ fig : binvsparam ] where two distribution functions ( red solid line ) and ( green solid line ) for two independent samples and are plotted .we are interested in converting each measurement outcome into an integer value . using our acceptance / rejection test with conclude that is an acceptable measurement that can be converted to . on contrary , will be rejected and no integer value will be assigned to it .we finally summarize our approach to qrng data processing as the following 5 step algorithm : 1 .run qrng and collect independent samples from the distribution defined by the qrng .2 . construct a prior for all possible values of .3 . update the prior times using the bayes rule eq.([eq : bayesrule ] ) .compute the posterior .4 . for each measurement outcome compute the correspondent distribution using eq.([eq : gofu ] ) and eq.([eq : inttrans ] ) . set the acceptance probability value 5 .use the proposed acceptance / rejection test to convert the measured sequence into integer values .it is worth mentioning that alternatively , instead of waiting to collect a measurement record , one could choose to update the prior on - line i.e. after each measurement . in this caseit is likely that a few first measurement results will be discarded as we accumulate information about the qrng device at hand .however , after enough information is received to narrow down the parameter distribution , it will be possible to convert upcoming measurements into random bit values .to illustrate how our approach works in an experiment we consider two physical implementations of qrngs .we first introduce mathematical models to describe the qrngs of interest in the section [ sec : physmodels ] and then proceed with the analysis of the experimental data and numerical simulations results in section [ sec : randd ] .let us first consider a qrng based on measuring time - of - arrival statistics of a coherent light source .our experimental setup consists of a tapered amplifier , emitting spontaneously and subsequently attenuated to a coherent state , that continuously illuminates the surface of a free - running single photon counting module with 80 ns dead time . using a gated fpga essentially acting as a time to digital converter , we measure the time interval between two consecutive photodetection events .the time interval plays the role of a physical random variable that we would like to convert into a discrete uniform random variable . 
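before working out the quantum model of this device , it may help to see the five - step procedure above run end to end on a toy model . for concreteness the sketch assumes an exponential law with an unknown rate and a flat prior , for which the posterior after n observations is a gamma distribution ( by conjugacy ) ; the acceptance probability , the sample size and the bit depth are arbitrary choices of ours , and scipy is used only to evaluate the gamma cdf .

```python
# end-to-end sketch of the proposed procedure (assumed exponential model, flat prior):
# 1) collect samples, 2)-3) posterior for the rate (a gamma law, by conjugacy),
# 4) posterior-induced distribution of U_i = 1 - exp(-lambda * x_i),
# 5) accept x_i into bin k only if P(U_i in bin k) >= p_star.
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(7)
lam_true, n_data, n_bits, p_star = 2.0, 20_000, 3, 0.95   # invented values
n_bins = 2 ** n_bits

x = rng.exponential(scale=1.0 / lam_true, size=n_data)    # step 1: "measurements"
post = gamma(a=n_data + 1, scale=1.0 / x.sum())           # steps 2-3: posterior on the rate

edges = np.linspace(0.0, 1.0, n_bins + 1)                 # fixed bins for U in [0, 1]
bits, rejected = [], 0
for xi in x:                                               # steps 4-5
    # U_i is increasing in lambda, so P(U_i <= u) = P(lambda <= -log(1 - u) / x_i)
    with np.errstate(divide="ignore"):
        lam_edges = -np.log1p(-edges) / xi                 # last entry is +infinity
    p_bin = np.diff(post.cdf(lam_edges))                   # P(U_i in bin k)
    k = int(np.argmax(p_bin))
    if p_bin[k] >= p_star:
        bits.append(k)
    else:
        rejected += 1

counts = np.bincount(bits, minlength=n_bins)
print(f"accepted {len(bits)} / {n_data} samples ({rejected} rejected)")
print("empirical bin frequencies:", np.round(counts / counts.sum(), 4))
```

with these numbers most samples are accepted ; raising the bit depth or shrinking the sample reduces the acceptance rate , which is exactly the conservative behaviour discussed in the results section below .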
to determine statistical properties of , a quantum model of phododetection process is needed . for this purposewe introduce a positive - operator valued measure ( povm ) , where is a projection operator that corresponds to `` no - click '' measurement of the detector and represents a `` click '' detection event .note that .then the detector click rate ( i.e the click probability per unit time ) reads , ,\ ] ] here is the density operator of the laser field and describes the overall detection efficiency .therefore , the probability to get a click in a short time interval is ] . in the limit of large obtain , we now can compute the conditional probability to detect a click at given a click was detected at , finally , the probability density for the random variable can be obtained by taking a derivative of , two main assumptions were made in the derivation of eq.([eq : toapdf ] ) .first , the detection events are independent and identically distributed .this assumption is justifiable in case of moderate laser powers .second , we have assumed noiseless detection .the later assumption is , unfortunately , not very realistic .avalanche photodiode detectors usually introduce two main sources of noise that affect the value of afterpulsing and timing jitter .afterpulsing is a false detection event in which electrons that were trapped by quenching in a previous detector gate are rereleased in subsequent detector gates , usually occuring after a true click due to a photon absorption event .the time interval between a true detection and an afterpulse event can be well characterized experimentally and the raw data can be filtered to remove the afterpulsing events by only accepting measurements with .the filtering procedure effectively results in rescaling of the probability density in eq.([eq : toapdf ] ) , where is a characteristic afterpulsing time .the time jitter is a small error in the measurement of .the recorded time interval between two sequential clicks is a sum of two random variables , where is the `` true '' time interval with pdf given in eq.([eq : toapdfrenorm ] ) and is a time jitter random variable .one can show that the probability density for reads , \end{aligned}\ ] ] where denotes the error function .notice that if the time jitter is small ( ) , eq.([eq : toapdfnoise ] ) coincides with eq.([eq : toapdfrenorm ] ) .since the observed time jitter is indeed small we will model the time - of - arrival qrng using the probability density in eq.([eq : toapdfrenorm ] ) with one parameter .let us also discuss how to implement the qrng data processing algorithm described earlier for this model .the model pdf is given in eq.([eq : toapdfrenorm ] ) .an obvious choice for the prior is a non - informative ( uniform ) prior that assigns constant weight to all values of the parameter .it turns out that in this case the posterior distribution after measurements can be calculated even analytically ( instead of standard numerical updating ) as , where , and we assume that the characteristic afterpulsing time is known ( not a parameter ) and denotes the gamma distribution function . 
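the model above ( a dead - time - shifted exponential waiting time , thinned by the afterpulse filter ) is easy to simulate , which is useful for testing the processing chain . the sketch below does this with invented values of the rate , dead time , afterpulsing probability , delay scale and threshold ; none of them are the experimental parameters .

```python
# synthetic time-of-arrival data (illustrative parameters only): Poissonian photon
# clicks, a dead-time shift, a small afterpulsing probability, and the tau > tau_a
# filter applied before any binning.
import numpy as np

rng = np.random.default_rng(3)
rate = 5e5            # assumed mean click rate, 1/s
dead_time = 80e-9     # detector dead time, s
p_after = 0.02        # assumed probability that an interval is an afterpulse delay
after_scale = 50e-9   # assumed afterpulse delay scale, s
tau_a = 200e-9        # filtering threshold (exceeds dead time plus typical afterpulse delay)
n_clicks = 200_000

# waiting times between clicks, shifted by the dead time
tau = rng.exponential(scale=1.0 / rate, size=n_clicks) + dead_time
# inject afterpulses: with probability p_after the interval is a short delay instead
is_after = rng.random(n_clicks) < p_after
tau[is_after] = dead_time + rng.exponential(scale=after_scale, size=is_after.sum())

kept = tau[tau > tau_a]                      # the afterpulse filter
print(f"kept {kept.size} of {tau.size} intervals after the tau > tau_a cut")
print(f"mean of (tau - tau_a) over kept data: {np.mean(kept - tau_a) * 1e6:.3f} us "
      f"(model value 1/rate = {1e6 / rate:.3f} us)")
```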
using eq.([eq : inttrans ] ) we introduce random variables , where and compute their probability distribution using eq.([eq : gofu ] ) and eq.([eq : posteriortoa ] ) , ^{n}(1-u_{i})^{\frac{1}{t(\tau_{i}-\tau_{a})}-1}}{n!(t(\tau_{i}-\tau_{a}))^{n+1}}.\ ] ] and finally we calculate the probability that falls into the -th bin ( ] .the second system that we consider here is a popular qrng implementation based on vacuum quadrature measurement .quantum vacuum fluctuations of the electromagnetic field are measured routinely at optical wavelengths using homodyne detection techniques .a typical homodyne detector consists of a beam splitter with two input ( ) and two output ports ( ) .suppose that the input port carries a laser field described by a density operator and the port carries the vacuum . by placing a photodetector in each of the output ports we measure the photon number difference operator between and , ^{\dagger}a + [ ( \eta_{1}r)^2 - ( \eta_{2}t)^2]b^{\dagger}b + \nonumber \\ & & rt(\eta_{1}^2 + \eta_{2}^2)[a^{\dagger}b + ab^{\dagger}],\end{aligned}\ ] ] where ( ) are the transmittance ( reflectance ) of the beam splitter , are detector 1,2 detection efficiencies and ( ) are creation / annihilation operators for the input port ( ) .therefore , in a general experimental situation , will depend on three parameters ( note that ) and the laser field .but since we only perform a numerical simulation of an experiment here and thus can `` control '' the parameters perfectly , we will assume that we have a 50/50 beam splitter ( ) and 100 percent efficient detectors ( ) .we will also assume that the laser field is in a coherent state , i.e. .therefore , the expectation value of , is proportional to the expectation value of the vacuum quadrature operator . setting conclude that by measuring the photon number difference in the output ports and we effectively measure the vacuum quadrature , and hence , a particular measurement outcome in a normal random variable with pdf . in reality measurement results are always affected by electronic noise .the noise is usually model by a normal distribution and thus the outcome of the quadrature measurement is a sum of the `` true '' quadrature random variable and the noise i.e. .since and are independent and normally distributed , their sum is also a normally distributed random variable with pdf .therefore , we will model the output of vacuum quadrature measurement based qrng as a continuous random variable with the distribution function , where is an unknown parameter . with the qrng model at handwe can now discuss how to apply the data processing algorithm developed in section [ sec : bayesianinference ] to the vacuum quadrature measurement qrng .once again we start by choosing a prior .we propose to use a non - informative prior as in the previous example .the posterior distribution after measurements can then be calculated analytically and reads , where and denotes the gamma function .the next step in our procedure is to introduce random variables that later on will be binned . unlike the previous example , where eq.([eq : inttrans ] ) was used for that purpose, we will rely on _ box - muller _transform here . recall that and , two independent uniform(0,1 ) random variables , can be converted into two independent normal random variable and using the following transformation , on the other hand , a pair of measurement outcomes and can be converted into two random variables and [ 0,1 ] . 
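written out explicitly , inverting the box - muller map for a pair of measurements that are , by assumption , centred gaussian with scale sigma gives u_1 = exp ( - ( x_1 ^ 2 + x_2 ^ 2 ) / ( 2 sigma ^ 2 ) ) and u_2 = ( 1 / ( 2 pi ) ) atan2 ( x_2 , x_1 ) mod 1 ; only u_1 involves the unknown scale , which is why it is the variable that has to be handled through the posterior . the short check below uses an invented scale .

```python
# inverse Box-Muller sketch: from Gaussian quadrature samples (x1, x2) recover
# u2 (parameter-free, uniform regardless of sigma) and u1 (depends on sigma).
import numpy as np

rng = np.random.default_rng(11)
sigma_true = 1.3                                   # invented scale of the measurements
x1 = rng.normal(0.0, sigma_true, size=100_000)
x2 = rng.normal(0.0, sigma_true, size=100_000)

u2 = (np.arctan2(x2, x1) / (2.0 * np.pi)) % 1.0    # no sigma needed
u1_true = np.exp(-(x1**2 + x2**2) / (2.0 * sigma_true**2))
u1_wrong = np.exp(-(x1**2 + x2**2) / (2.0 * 1.0**2))   # what a wrong sigma produces

for name, u in [("u2", u2), ("u1 (true sigma)", u1_true), ("u1 (wrong sigma)", u1_wrong)]:
    counts, _ = np.histogram(u, bins=8, range=(0.0, 1.0))
    print(f"{name:17s} bin frequencies:", np.round(counts / counts.sum(), 3))
```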
since does not depend on the parameter ( it is constant for a given pair ) , it can be immediately placed into the -th bin that satisfies . as to which indeed is a function of , we can derive its probability distribution function using the posterior distribution in eq.([eq : posteriorhqrng ] ) , finally , the probability that falls into the -th bin ( ] .therefore , a pair of normally distributed outputs of the vacuum homodyne measurement and converts into two uniformly distributed integer random numbers and .we collected a sample containing 256,000 measurements of the time interval between two consecutive detection events .the raw data was filtered and all entries were removed from the sample to mitigate the effect of detector afterpulsing .the resulting filtered sample consisted of 221,890 measurements .we binned the filtered data into 100 bins of equal size and calculated the probability of each bin .the corresponding probability distribution is depicted in fig .[ fig : toadatapdf ] with red circles . based on the qrng model discussed in section [ sec : physmodelstoa ] we calculated , mle for the parameter .we used in conjunction with the probability density function in eq.([eq : toapdf ] ) to fit the experimental data .the result is depicted on fig .[ fig : toadatapdf ] with the solid blue line .not surprisingly , given the number of measurements , the ml curve fits the data well .next we applied our data processing algorithm to the filtered data .we set the acceptance probability and proceeded to convert the data into a set of 4-bit random numbers ( i.e. measurement results are binned among bins ) .the number of measurements that passed acceptance / rejection criterion ( ) , and were assigned a bin value ( ) , was 215,538 ( out of 221,890 ) .the resulting bin probability distribution is depicted in fig .[ fig : toa4bitpdf ] using green triangles .the solid blue line corresponds to the ideal 4-bit uniform distribution and the red crosses represent a 4-bit probability distribution obtained from the same data set using the conventional fixed - parameter binning technique with .both methods generate a visually uniform distribution .the uniformity is also confirmed by the values of shannon entropy per bit for each distribution . for the conventional binning bits andthe entropy of the distribution generated by our binning method is bits . in conventional bin assignment methods ,once the distribution parameter value is estimated from a given set of measurements , the number of random bits that can be generated per single measurement is , in principle , only limited by the number of measurements .this is because the mean error ( standard deviation ) of the parameter estimator is ignored in conventional binning .however , if the parameter estimation error is greater than the width of the bin where the measurement result is placed then such a bin assignment is erroneous and this measurement must be ignored and removed from the data .but this is exactly what our bin assignment method with the acceptance probability does .it effectively requires that the bin width should be greater than 4 standard deviations of the random variable .if this requirement is not fulfilled the -th measurement can not be assigned a bin reliably and the measurement is discarded . hence , in contrast to the conventional binning our approach reduces the overall number of measurements .therefore , for a given initial set of data , the number of random bits per measurement is naturally less in our method . 
in other wordsbayesian updating provides a more conservative estimate of randomness of a qrng when compared to ad - hoc binning .to illustrate this we generated 7- and 8-bit random number distributions from the same filtered data that we used for the 4-bit distribution above and the acceptance probability .the resulting distributions are depicted with green triangles on fig .[ fig : toa7bitpdf ] and fig . [ fig : toa8bitpdf ] . asbefore the solid blue line corresponds to the ideal 7(8)-bit uniform distributions and the red crosses represent 7(8)-bit probability distribution obtained from the same size data sets using the conventional fixed - parameter binning technique with .the number of measurements that have passed acceptance / rejection criteria ( ) is 172,736 ( 122,927 ) for the 7-(8-)bit distribution .we also calculated shannon entropy for the conventional binning , bits , and the entropy of our binning method is bits . on the other handthe entropy in the 8-bit case for conventional binning is bits and for the proposed binning method is bits . as previously suspected , we observe a drop in the entropy of the 8-bit distribution generated using our technique .this implies that the collected data can reliably be converted into random bit sequences up to 7 bits long .note that the conventional binning method does not provide us with such a conclusion .we simulated vacuum homodyne measurements using a pseudo random number generator .two independent sets of 50,000 random numbers were created by sampling the normal distributions and respectively .the first set with represents noiseless vacuum quadrature measurement whereas the second set with corresponds to the electronics noise .thus , the sum of the sets simulates the vacuum homodyning based qrng that we previously modeled using eq.([eq : hqrngpdf ] ) .we used the data to produce sets of 6- and 7-bit random numbers implementing both the conventional ( mle based ) and proposed ( bayesian ) binning methods .the resulting distributions are depicted in fig .[ fig : vachomodyne6bitpdf ] and fig .[ fig : vachomodyne7bitpdf ] .the green triangles correspond to the probability distributions generated using our technique ( ) , the red circles depict the results of the conventional mle based binning and black crosses represent conventional binning with the `` true '' value of the parameter .examining fig .[ fig : vachomodyne6bitpdf ] and fig .[ fig : vachomodyne7bitpdf ] visually we observe that our method fails to produce a uniform 7-bit distribution indicating that the maximum number of random bits per measurement outcome can not exceed 6 for the simulated data sample .this is also confirmed by the values of shannon entropy versus . of course, generating a larger sample of measurements would allow a higher number of bits per measurement outcome as was the case in the previous section .this illustrates the interplay between the number of measurement in a sample , acceptance probability , and the number of random bits that can be extracted from the sample .in this manuscript we have demonstrated a new binning technique for qrngs , as well as a formalized approach to characterize traditional binning methods . 
in particular , ad - hoc binning approaches are shown to result in possible bias when the model of the physical qrng system is not taken into account .using bayesian hypothesis updating , a physical model can be used to quickly characterize experimental data .this has implications for new types of quantum statistical tests for randomness in a potentially more accessible manner than loop - hole - free bell inequality violation tests .p. l. would like to thank bing qi , ryan bennink and travis humble for useful discussions .this work was performed at oak ridge national laboratory , operated by ut - battelle for the u.s .department of energy under contract no .de - ac05 - 00or22725 .increasing the number of bins for a fixed number of measurements also increases fluctuations in the probability of occurrence of a given bin , ultimately reducing the entropy of the distribution to zero .
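the last remark above can be checked directly : for a fixed number of ideally uniform samples , the empirical entropy per output bit of the binned sequence decays as the number of bins grows , because finite - sample fluctuations and , eventually , empty bins dominate . the sample size below is invented .

```python
# empirical entropy per bit versus bit depth for a *fixed* number of ideal
# uniform samples (invented sample size), illustrating the remark above.
import numpy as np

rng = np.random.default_rng(5)
n_samples = 50_000
u = rng.random(n_samples)                       # ideal uniform(0,1) "measurements"

for n_bits in (4, 8, 12, 16, 20):
    n_bins = 2 ** n_bits
    counts, _ = np.histogram(u, bins=n_bins, range=(0.0, 1.0))
    p = counts[counts > 0] / n_samples
    h = -(p * np.log2(p)).sum() / n_bits        # entropy per output bit
    print(f"{n_bits:2d} bits ({n_bins:7d} bins): entropy per bit = {h:.4f}")
```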
the majority of quantum random number generators ( qrng ) are designed as converters of a continuous quantum random variable into a discrete classical random bit value . for the resulting random bit sequence to be minimally biased , the conversion process demands an experimenter to fully characterize the underlying quantum system and implement parameter estimation routines . here we show that conventional approaches to parameter estimation ( such as e.g. _ maximum likelihood estimation _ ) used on a finite qrng data sample without caution may introduce binning bias and lead to overestimation of the randomness of the qrng output . to bypass these complications , we develop an alternative conversion approach based on the bayesian statistical inference method . we illustrate our approach using experimental data from a time - of - arrival qrng and numerically simulated data from a vacuum homodyning qrng . side - by - side comparison with the conventional conversion technique shows that our method provides an automatic on - line bias control and naturally bounds the best achievable qrng bit rate for a given measurement record .
let be a killed markov process with law , taking its values in , where is a cemetery point .we denote by the killing time of . a probability measure on called a * quasi - stationary distribution * ( qsd ) if , for all , the distribution of the process , initially distributed with respect to and conditioned to be not killed before time , is still at time , that is for every and . without loss of generality , we suppose that is an absorbing point , so that .let be a probability measure on . if it exists and provided it is a probability , the limiting conditional distribution is called the yaglom limit for , from the russian mathematician a.m. yaglom .he showed in that the limiting conditional distribution of the number of descendants in the generation of a galton - watson process always exists in the subcritical case .the existence or uniqueness of such invariant conditional distributions have been proved in a host of contexts .when is finite , it is proved in that there exists a unique qsd and that the yaglom limit converges to independently of the initial distribution . in , the case of a birth and death process on is studied .for this process , the set of qsds is either empty , or a singleton , or a continuum indexed by a real parameter and given by an explicit recursive formula .this is an exception : most of the known results on qsds are related with existence or uniqueness problems . in , the existence of a quasi - stationary distribution for a continuous time markov chain on killed at is proved under conditions on moments of the killing time , using an original renewal dynamical approach . in , the case of -dimensional diffusion on with drift andkilled at are studied , with the assumption that is a natural boundary .the dependence between the initial measure and the yaglom limit is explored in ( for a brownian motion with constant drift killed at ) and ( for the orstein - uhlenbeck process killed at ) . in ,the case of -dimensional diffusions with general killing on the interior of a given interval is investigated . in ,the authors study the existence and uniqueness of the qsd for -dimensional diffusions killed at and whose drift is allowed to explode at the boundary , which is the case under study in the present paper .see for a regularly updated extensive bibliography on qsd . in this paperwe are concerned with -dimensional it diffusions with values in ,+\infty[\cup\{\partial\} ] . in , the yaglom limit of this process is studied and the authors give some conditions on the drift , which are sufficient for the existence and the uniqueness of the qsd . in particular , they allow the drift to explode at the origin .as explained in the paper , this diffusion is closely related with some markov mortality models .such applications need the computation of the process qsd , but the tools used in are based on spectral theory s arguments and do nt allow us to get explicit values .our aim is to give an easily simulable approximation s method of this qsd .the problem of qsd s approximation has been already explored in , when is a bounded open set of and is a brownian motion killed at the boundary of .the authors proved an approximation s method exposed in , which is based on a fleming - viot type system of interacting particles whose number is going to infinity . 
in , it is proved that this method works well for a continuous time markov chain in a countable state space under suitable assumption on the transition s rates ( moreover , the existence of a qsd is a consequence of the approximation s method ) .new difficulties arise from our case with unbounded drift .for instance , the interacting particle process introduced in is nt necessarily well defined .to avoid this difficulty , we begin by proving that one can approximate our qsd by the qsd s of diffusions with bounded drifts .let us denote by the law of a diffusion with values in \epsilon,1/\epsilon[ ] for the family to be tight and to converge , when , to a qsd for the law .we point out the fact that this result remains valid in the case of an unbounded drift diffusion with values in a bounded interval . in a second part , we prove an approximation method for each probability measure , based on the interacting process introduced in .fix and let us describe the interacting particle process of size : each particle moves independently in \epsilon,1/\epsilon[ ] , driven by the stochastic differential equation where is a -dimensional brownian motion and are two positive constants . clearly , is an absorbing state for this diffusion . in a second timewe study in detail the case of the wright - fisher diffusion on ,1[ ] , and is defined by ,1[,\ ] ] where is a -dimensional brownian motion .this diffusion is absorbed at .let be the law of a diffusion process taking its values in ,+\infty[\cup\{\partial\} ] .we denote by the infinitesimal generator associated with .we define , ,+\infty[ ] , we define as the law of the diffusion taking its values in \epsilon,1/\epsilon[ ] .we choose it so that let us recall some results of : [ th9 ] the yaglom limit associated with exists for all initial distributions , \epsilon,1/\epsilon[ ] denotes the space of probability measures on ,+\infty[ ] .the hypotheses ( h1 ) and ( h2 or h3 ) are the assumptions that are made in to prove the existence of the yaglom limit .if a process satisfies the hypotheses of theorem [ th1 ] , then it is killed in finite time a.s or it is never killed a.s . indeed , assume that the process can be killed in finite time with a positive probability . then ( see ) and ( as a consequence of ( h1 ) and ( h2 or h3 ) ) . but this two conditions are fulfilled if and only if the process is killed in finite time almost surely ( see ( * ? ? ?* theorem 3.2 p.450 ) ) .[ re1 ] the existence of a qsd for can be seen as a consequence of theorem [ th1 ] .the existence of the yaglom limit is proved in ( * ? ? ?* theorem 5.2 ) . in part[ par3 ] , we give the counterpart of theorem [ th1 ] for diffusions with values in a bounded interval .the end of the section is devoted to the proof of theorem [ th1 ] .this part is devoted to the proof of the following result , [ pr1 ] assume that the hypotheses ( h1 ) and ( h2 or h3 ) are satisfied .then the family is tight .moreover , every limit point is absolutely continuous with respect to the lebesgue measure .we know that , ) ] is the vector space of infinitely differentiable functions with compact support in \epsilon,1/\epsilon[ ] , then , applying the cauchy - schwarz inequality to the right term above , from ( [ eqc3 ] ) , the integral product is bounded by , thus , independent from , such that where for all ,+\infty[ ] such that ,+\infty[\setminus k_{\delta}}{v_{\epsilon}^2(x)dx}\leq\delta,\ ] ] for all ,1/2[ ] satisfies . 
[ le6 ]assume that ( h1 ) is satisfied .then is uniformly bounded below by a constant ._ proof of lemma [ le6 ] : _ assume that is nt uniformly bounded below : one can find a sub - sequence , where , which tends to . from lemma [ le5 ] , is uniformly bounded , so that .the family being tight , one can find ( after extracting a sub - sequence ) a positive map such that , for all continuous and bounded , indeed , being uniformly bounded , all limit measure is absolutely continuous with respect to the lebesgue measure .in particular , then but is continuous and positive on , so that vanishes almost every where .finally , by the convergence property ( [ eqc5 ] ) applied to equal to almost everywhere , we have what is absurd .thus , one can define . [ le7 ] assume that ( h1 ) and ( h2 ) are satisfied .then the family is tight ._ proof of lemma [ le7 ] : _ by , we have for all , using cauchy - schwarz inequality , we get on one hand on the other hand , thanks to ( h2 ) , both terms are going to uniformly in , when and tend respectively to and . as a consequence ,the family is tight . [ le8 ] assume that ( h1 ) and ( h3 ) hold .then the family is tight ._ proof of lemma [ le8 ] : _ the first part of the hypothesis ( h3 ) is the same as ( h2 ) s one , then when goes to infinity , uniformly in .moreover , there exists a constant such that , for any ,1] ] , this is a consequence of ( * ? ? ?* proposition 4.3 ) whose proof is still available under our settings .this inequality allows us to conclude the proof of lemma [ le8 ] . thanks to equality ( [ eqb1 ] ) and lemmas [ le6 ] , [ le7 ] and [ le8 ] , the first part of proposition [ pr1 ] is proved .moreover , has a density with respect to the lebesgue measure which is bounded on every compact set , uniformly in . thus every limit point is absolutely continuous with respect to the lebesgue measure .[ pr11 ] assume that hypotheses ( h1 ) and ( h2 or h3 ) are fulfilled and let be a probability measure which is the limit of a sub - sequence , where when .then is a qsd with respect to ._ proof of proposition [ pr11 ] : _ from proposition [ pr1 ] , the family is tight .let be a limit point of the family .there exists a sub - sequence which converges to , where is a decreasing sequence which tends to .we already know that is absolutely continuous with respect to the lebesgue measure .that implies that , for all open intervals , d[\subset\mathbb{r}_+ ] , , d[\subset\mathbb{r}_+ ] and all , and , for all , replacing by , which is decreasing to when , and by monotone convergence theorem , we have ,m[}{\left[\mathbb{p}^{\epsilon_k}_x(\omega_t\in d)-\mathbb{p}^0_x(\omega_t\in d)\right]d\nu_{\epsilon_k}(x)}{\smash{\mathop{\longrightarrow}\limits_{k\rightarrow+\infty}^{}}}0\ ] ] and ,+\infty[}{\left[\mathbb{p}^{\epsilon_k}_x(\omega_t\in d)-\mathbb{p}^0_x(\omega_t\in d)\right]d\nu_{\epsilon_k}(x)}{\smash{\mathop{\longrightarrow}\limits_{k\rightarrow+\infty}^ { } } } 0.\ ] ] finally , the density of being bounded above in every compact set ] is continuous , and is increasing to when .thus the denominator can be treated in the same way . we can now conclude the proof of theorem [ th1 ] : [ pr12 ] assume that ( h1 ) and ( h2 or h3 ) hold .the limit measure in the statement of proposition [ pr11 ] is unique .moreover is the yaglom limit associated with , ,+\infty[ ] when goes to . 
as a consequence , in this case , the density of with respect to is an eigenfunction of with eigenvalue , where is the adjoint operator of ( this is a consequence of the spectral decomposition proved in ( * ? ? ? * theorem 3.2 ) ) .as defined , is at the bottom of the spectrum of .thanks to ( * ? ? ?* theorem 3.2 ) , this eigenvalue is simple .moreover , ( * ? ? ?* theorem 5.2 ) states that this qsd is equal to , what concludes the proof . theorem [ th1 ] is stated for -dimensional diffusions with values in ,+\infty[ ] , where , defined by the sde , b[,\ ] ] and killed when it hits or . here is a standard brownian motion and , b[) ] , driven by the sde +\epsilon , b-\epsilon[,\ ] ] and killed when it hits or . as proved in , there exists a unique qsd associated with . we define and .the counterpart of theorem [ th1 ] under these settings is [ th10 ] assume that the following hypotheses are fulfilled : is uniformly bounded below by , where is a positive constant . or is integrable on a neighbourhood of . or is integrable on a neighbourhood of .then the family of qsd is tight as family of measures on , b[ ] in place of \epsilon,1/\epsilon[ ] .the law of will be denoted by . from ,the qsd of is unique and equals the yaglom limit .it will be denoted by . for notational convenience ,new notations have been defined for this section , which is totally independent of the previous one .fix and let us define formally the interacting particle process with particles described in the introduction .let be independent brownian motions and ,1[^n ] and satisfies the sde ( and then it is independent of the others ) until . * at time , the path of a particle , denoted by ( it is unique ) , has a left limit equal to or . *a particle is chosen in .the particle jumps on the position of the particle : we set . * after time , each particle evolves in ,1[ ] , so that we have for all almost surely . * at time ( which is then strictly bigger than ) , a unique particle has a path whose left limit is equal to or . * a particle is chosen in .the particle jumps on the position of the particle : we set . * after time , the particles evolve independently from each other and so on . following this way , we define the strictly increasing sequence of stopping times , the time and the interacting particle system for all .the law of will be denoted by .[ [ section ] ] we can now state the main result of this section : [ th2 ] is well defined , that means almost surely .it is geometrically ergodic , with unique stationary distribution .let be the empirical stationary measure of the interacting particle process with particles , that is the empirical measure of a random vector ,1[^n ] .this coupling will be useful in each step of the proof .define ,1[}{|q(x)|} ] by the sde with and as reflecting boundary ( see for the definition of a reflected diffusion ) .the coupling inequality is fulfilled at time .the brownian motion will depend on and on the position of . if belongs to ] , the coupling inequality is obviously fulfilled , thanks to the reflection of on .assume that is in ,1/3] ] for all and /3,1[ ] of , where , and we have , for all , so that , by left continuity , and then . therefore , we have ,1/3] ] , , where and is an increasing process due to the reflecting property of the boundary for ( note that , so that , for small enough , for all ) . then stays non - negative between times and , what contradicts the definition . the time is then a jump time for the particle . 
if , then , by definition of , and , by left continuity of the process , , so that , what is impossible .therefore , and , by existence of left limits for the two processes , such that .define .we have then , conditionally to , . by symmetry , conditionally to the event , we have , then and therefore .finally , almost surely . [ pr3 ] for all , the interacting particle system is well defined , that is almost surely ._ proof of proposition [ pr3 ] : _ let be the size of the interacting particle system and fix arbitrarily its starting point ,1[^n ] , with . will jump on the position of infinitely often .then it will come back from \epsilon,1-\epsilon[ ] .let ( resp . ) be the empirical measure of the system of particles ( resp . ) , that is we will suppose that , at time , the sequence of empirical measures satisfies the following non - degeneracy property , which ensures that the mass of does nt degenerate at the boundary , uniformly in : the family of random probabilities is said to verify the non - degeneracy property if , for any , where ,r]\cup[1-r,1[ ] , defined by the sde with and as reflecting boundary . herethe are independent brownian motions .the random processes are independent , identically distributed with values in ,\mathbb{r}) ] . with such a coupling, we have by adding the contribution of the which starts in ,a[ ] converge in law to } ] when . here,1[) ] equipped with the weak topology ._ proof of proposition [ pr5 ] : _ for all maps ) ] is finite almost surely : let us denote by the increasing sequence of jump times of the particle .we have because and .that implies by summing over , we obtain where is the continuous martingale and is the pure jump martingale now , we interpret each jump as a killing .then we introduce a loss of of the total mass at each jump : we look at the measure process decreased by a factor at each jump .more precisely , we set where denotes the total number of jumps before time .[ led1 ] the sequence of measure processes converges in law to in the skorokhod topology ,{\cal m}(]0,1[)) ] vanishes on , and is the semigroup associated with the diffusion defined by . from kolmogorov s equation ( see ( * ? ? ?* proposition 1.5 p.9 ) ) , we deduce from , and , that } \left| \nu^n(t , f)-\int_0 ^ 1{p_{t}f(x)}d\mu^n(0,x ) \right|^2\right ) \leq \frac{1}{n } c(f),\end{aligned}\ ] ] where is a positive constant , which only depends on . for each map ) ] is equal to on , 1 - 2r[ ]. then where ( see proposition [ pr4 ] ) and are going to when tends to , uniformly in , and ) ] .that means }{\smash{\mathop{\longrightarrow}\limits_{n\rightarrow\infty}^{law}}}(\mathbb{p}_{\mu(0,dx)}(x_t\in]0,1[),\mathbb{p}_{\mu(0,dx)}(x_t\in dx))_{t\in[0,t]}.\ ] ] the process ,1[) ] .that means } { \smash{\mathop{\longrightarrow}\limits_{n\rightarrow\infty}^{law } } } ( \mathbb{p}_{\mu(0,dx)}(x_t\in dx|x_t\neq\partial))_{t\in[0,t]}\ ] ] in the skorokhod topology ,{\cal m}_1(]0,1[)) ] such that , ,1[^n,\ \forall t\in\mathbb{r}_+,\ ] ] where is finite , and is the total variation norm . in particular , is a stationary measure for the process .when exists , we denote by the empirical stationary measure associated with , that is a random probability which is distributed as , where is a random vector in ,1[^n ] such that ,1[^n,\ \forall n\in\mathbb{n},\end{aligned}\ ] ] where is finite and . 
to prove the geometrical ergodicity of the 1-skeleton , let us introduce the following definition : ,1[^n ] ( with ) is a small set for the -skeleton .moreover , so that where is the return time to ._ proof of lemma [ le4 ] : _ fix and let be the event `` the process has no jumps between times and '' . define ^n}\mathbb{p}^{ipp}_x({\cal f}) ] , }(x ) dx ] is a small set . for all ,1[^n ] , and then satisfies condition . [ [ section-2 ] ]the chain is aperiodic .moreover , if the lebesgue measure of a subset ,1[^n ] , where is the first hitting time on for the chain .thanks to ( * ? ? ?* theorem 2.1 p.1673 ) , if such a markov chain has a small set which satisfies , then it is geometrically ergodic . as a consequence , lemma [ le4 ] allows us to conclude the proof of proposition [ pr9 ] .we are interested in proving the following result , which is the second part of theorem [ th2 ] [ pr10 ] the sequence of random measures converges in law to the deterministic measure , qsd of the process ._ proof of proposition [ pr10 ] : _ for each ,1/4[ ] to , equal to on and equal to on .we have ,1/4[.\ ] ] thanks to proposition [ pr9 ] , the sequence of random measures converges in law to when tends to .that implies ,1/4[.\ ] ] we denote by the empirical measure of .let us choose monotone on ,2r[ ] . from the coupling inequality , for all and .then which tends to when tends to and to , uniformly in . as a consequence , where is the deterministic measure defined by , for all measurable set . that yields uniformly in .the family of intensity measures is then tight .this is a sufficient condition for the family of random variables to be tight , as shown in ( * ? ? ?* corollary 2.2 ) .we conclude that it exists a sub - sequence which converges in law to a random probability measure .choose .the non - degeneracy property is fulfilled .thanks to proposition [ pr5 ] , } { \smash{\mathop{\longrightarrow}\limits_{n\rightarrow\infty}^{law } } } ( \mathbb{p}_{\cal x}(x_t\in dx|x_t\neq \partial))_{t\in[0,t]},\ \forall t>0\ ] ] in the skorokhod topology ,{\cal m}_1(]0,1[)) ] being almost surely continuous , with respect to the weak topology of ,1[) ] , where is the killing time of the process and its unique qsd ._ proof of lemma [ le3 ] : _ let be a probability measure on ,1[ ] ( see r.g .pinsky s explanations ( * ? ? ?* hypotheses 2 and 3 ) ) : here is uniformly bounded above in the variables and ( see ( * ? ? ?* proof of the equality 7.2 , p27 ) ) , then , by dominated convergence , one can integrate with respect to under the limit in , the same holds for : then , by fubini s theorem , that is , from ( [ eq40 ] ) and ( [ eq41 ] ) with ,1[ ] , driven by the stochastic differential equation and killed when it hits . here is a -dimensional brownian motion and are two positive constants . as proved in , ( h1 ) and ( h3 )are fulfilled in this case .thanks to theorem [ th1 ] and denoting by the yaglom limit associated with , we have in the numerical simulations below , we set equal to . by theorem [ th2 ] , we have where is the empirical measure of the system studied in section [ par2 ] . in the numerical simulations , we set and , because of the randomness of , we approximate using the ergodic theorem : we compute .the graphic below ( see figure [ fig10 ] ) shows this approximation for different values of and .we illustrate the result of section [ par3 ] by an application to the wright - fisher diffusion with values in ,1[ ] , driven by the sde ,\pi[,\ ] ] killed when it hits ( is never reached ) . 
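the particle-approximation procedure described above — run the interacting particle system and average its empirical measure over time, as justified by theorem [ th2 ] and the ergodic theorem — can be sketched numerically. the following python code is our own illustration, not the code used for figure [ fig10 ]: the drift and diffusion coefficient are placeholders (here the neutral wright-fisher coefficients), the euler time step, particle number and run length are arbitrary, and simultaneous exits, which do not occur in the continuous-time model, are handled naively.
....
import numpy as np

rng = np.random.default_rng(0)

def b(x):            # drift; placeholder (assumption), the paper's conditioned diffusion differs
    return 0.0 * x

def sigma(x):        # diffusion coefficient; neutral wright-fisher form assumed for the sketch
    return np.sqrt(x * (1.0 - x))

def fleming_viot(n_particles=200, dt=1e-4, n_steps=200_000, burn_in=50_000):
    """euler scheme for the interacting particle system: particles diffuse
    independently; a particle leaving ]0,1[ jumps onto the position of another
    particle chosen uniformly at random.  the time-averaged empirical measure
    approximates the quasi-stationary distribution."""
    x = rng.uniform(0.25, 0.75, size=n_particles)   # arbitrary initial condition
    edges = np.linspace(0.0, 1.0, 101)
    hist = np.zeros(len(edges) - 1)
    for step in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_particles)
        x = x + b(x) * dt + sigma(x) * dw
        killed = (x <= 0.0) | (x >= 1.0)
        alive = np.flatnonzero(~killed)             # in the continuous-time model this is never empty
        for i in np.flatnonzero(killed):
            x[i] = x[rng.choice(alive)]             # jump onto a particle still inside ]0,1[
        if step >= burn_in:
            hist += np.histogram(x, bins=edges)[0]
    return edges, hist / hist.sum()

edges, qsd_estimate = fleming_viot()
....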
for all ,\pi/2[]-valued random variables , _ theor . _ * 8 * , 70 - 74 .
d. steinsaltz and s.n. evans (2004) markov mortality models: implications of quasi-stationarity and varying initial distributions, _theor. popul. biol._ *65*, 319-337.
d. steinsaltz and s.n. evans (2007) quasistationary distributions for one-dimensional diffusions with killing, _trans. amer. math. soc._ *359*, 1285-1324.
a.m. yaglom (1947) certain limit theorems of the theory of branching processes (in russian), _dokl. akad. nauk sssr_ *56*, 795-798.
k. yosida (1966) _functional analysis_, 2nd edition, springer-verlag.
the long time behavior of an absorbed markov process is well described by the limiting distribution of the process conditioned not to be killed at the time it is observed. our aim is to give an approximation method for this limit when the process is a one-dimensional itô diffusion whose drift is allowed to explode at the boundary. in a first step, we show how to restrict the study to the case of a diffusion with values in a bounded interval and whose drift is bounded. in a second step, we give an approximation method for the limiting conditional distribution of such diffusions, based on a fleming-viot type interacting particle system. we end the paper with two numerical applications: to the logistic feller diffusion and to the wright-fisher diffusion with values in ]0,1[ conditioned to be killed at 0. _key words:_ quasi-stationary distribution, interacting particle system, empirical process, yaglom limit, diffusion process. _msc 2000 subject:_ primary 65c50, 60k35; secondary 60j60
forward simulations of population genetics , track either the genotype of every individual in the population , or the number of individuals that carry a particular genotype .the former has been implemented in a number of very flexible simulation packages . in large populations with a moderate number of loci ,storing the abundance of all possible genotypes is often faster .simulating such large populations with a small number of loci is for example essential when studying the evolution of drug resistance in viral or bacterial pathogens .individual - based population genetic simulations are quite straightforward and usually employ a discrete generation scheme in which processes such as mutation , selection , and migration are applied at every generation to every individual .individuals are then paired up via a mating scheme and recombinant offspring is produced .existing toolboxes often emphasize biological realism and allow the user to specify complex life cycles , see e.g. .our emphasis here is on efficient simulation of large populations . instead of tracking individuals , we keep track of the distribution of gametes across all possible genotypes , denoted by where .this genotype distribution changes due to mutation , selection and recombination .the former two are again straightforward and require at most operations ( each genotype can mutate at any one of the loci ) . in our implementation , selection acts on haploid gametes , precluding dominance effects .recombination , however , is a computationally expensive operation since it involves pairs of parents ( of which there are ) which can combine their genome in many different ways ( ) . as a consequence, a naive implementation requires operations to calculate the distribution of recombinant genotypes .it is intuitive that the complexity of this algorithm can be reduced : given a recombination pattern , only a fraction of the genome is passed on in sexual reproduction and all genotypes that agree on that fraction contribute identically. we will show below that exploiting this redundancy allows to reduce the number of operations from to .after selection , mutation , and recombination , the population distribution contains the expected number of individuals of genotype in the next generation . for stochastic population genetics ,we still need to resample the population in way that mimics the randomness of reproduction .this is achieved by resampling individuals according to a poisson distribution with mean for each genotype , which will result in a population size of approximately .the probability of producing a genotype by recombination is where specifies the particular way the parental genomes are combined : if locus is derived from the mother ( resp .the genotype is summed over ; it represents the parts of the maternal ( ) and paternal ( ) genotypes that are not passed on to the offspring .we can decompose each parent into successful loci that made it into the offspring and wasted loci , as follows : and , where and a bar over a variable indicate respectively the elementwise and and not operators .the function assigns a probability to each inheritance pattern , see for a more detailed explanation . in a facultatively sexual population ,a fraction of is replaced by , while in an obligate sexual population .the central ingredient for the efficient recombination is a fast - fourier algorithm for functions on the l - dimensional binary hypercube .every function on the hypercube can be expressed as where and takes the values . 
in total, there are coefficients for every subset of loci out of a total of loci .similarly , each coefficient is uniquely specified by these nominally operations can be done in via the fft scheme illustrated in fig .[ fig : fft ] . operations .arrow going up indicate addition , going down substraction . for the general dimensional hypercubes cycles are necessary where terms differing at different bits are combined . ] with some algebra ( see online supplement ) , one can show that the generic fourier coefficient of is given by where the sum runs over all partitions of into groups of and denoted by and .variables such as are the fourier coefficients of the genotype distribution , , and the crossover function is expanded into the latter coefficients can be calculated efficiently by realizing that for , there is exactly one term unequal to zero .all subsequent terms can be calculated by successive marginalization of unobserved loci . in total, calculating requires operations .since there are terms of order , the entire calculation requires operations . in case of single crossover recombination, the algorithm can be sped up further to .ffpopsim is implemented in c++ with a python2 wrapper .documentation , a number of examples , and test routines are provided . as an example, we discuss here the problem of fitness valley crossing , which has received attention recently in the population genetics literature .consider a fitness landscape where the wild - type genotype has ( malthusian ) fitness , while the quadruple mutant has fitness .all intermediate genotypes have the same slightly deleterious fitness ( relative to wild - type ) . the time required for crossing the valleycan be computed by the following routine :.... import ffpopsim l = 4 # number of loci n = 1e10 # population size # create population and set rates c = ffpopsim.haploid_lowd(l ) c.set_recombination_rates([0.01 ] * ( l-1 ) ) c.set_mutation_rate(1e-6 ) # start with wildtype : 0b0000 = 0 c.set_genotypes([0b0000],[n ] ) # set positive relative fitness for wildtype # and quadruple mutant : 0b1111 = 15 c.set_fitness_function([0b0000 , 0b1111 ] , [ s1 , s1+s2 ] ) # evolve until the quadruple mutant spreads while c.get_genotype_frequency(0b1111)<0.5 : c.evolve(100 ) print c.generation .... the runtime and memory requirements of still preclude the simulation of more than loci . for this reason , we also include a streamlined individual based simulation package with the same interface that can simulate arbitrarily large number of loci and has an overall runtime and memory requirements in the worst case scenario .to speed up the simulation in many cases of interest , identical genotypes are grouped into clones .this part of the library was developed for whole genome simulations of large hiv populations ( , ) and a specific wrapper class for hiv applications is provided . as of now , the library does not support dominance effects which would require a fitness function that depends on pairs of haploid genomes .such an extension to diploid populations is straightforward .we would like to thank boris shraiman for many stimulating discussion and pointing us at the fft algorithm .this work is supported by the erc though stg-260686 .12 natexlab#1#1bibnamefont # 1#1bibfnamefont # 1#1citenamefont # 1#1url # 1`#1`urlprefix[2]#2 [ 2][]#2 , , * * ( ) , , issn , http://dx.doi.org/10.1016/s0167-739x(02)00171-1 . , and , , * * ( ) , . , , * * ( ) , ,issn , http://dx.doi.org/10.1109/mcse.2007.55 . , , * * ( ) , . , and , , * * ( ) , . 
, , __ , volume ( ) . , and , , * * ( ) ,. , and , , * * ( ) , . , , * * , . , , , and , , * * ( ) , . , , and , , * * ( ) ,. , , and , , * * , .the toolbox makes extensive use of the gnu scientific library ( http://www.gnu.org/software/gsl/ ) and the boost c++ library ( http://www.boost.org/ ) .the python wrapper further requires numpy , scipy and matplotlib ( http://www.scipy.org ) .if all of these are installed and the appropriate path are set , ffpopsim can be compiled using make .installation instructions are provided in the ` install ` file .the building process creates files inside the folder ` pkg ` ; c++ headers are created in ` pkg / include ` , the static c++ library in ` pkg / lib ` , and the python module files in ` pkg / python ` .` ffpopsim ` contains two packages of c++ classes and python wrappers for forward population genetic simulations .one for large populations and relatively few loci ( ) , another one for longer genomes .the former is called ` hapoid_lowd ` and tracks the abundance of all possible genotypes .the other one is called ` haploid_highd ` and tracks only genotypes present in the population .the latter only allows for limited complextity of fitness functions and crossover patterns .these two parts of the library have very similar syntax but work quite differently under the hood .we will therefore describe them separately below .a complete documentation in html is generated automatically from the source using doxygen and can be found in ` doc / html / index.html ` . since we assume that each locus is in one of two possible states , the genotype space is an dimensional binary hypercube .the population is a distribution of individuals on that hypercube , and so are the mutant and recombinant genotypes .also the fitness function , which assigns a number to each genotype , is a function on the hypercube .for this reason , ` hapoid_lowd ` makes extensive use of a class ` hypercube_lowd ` that stores an dimensional hypercube and implements a number of operations on the hypercube , including a fast - fourier transform ( fft ) .every function on the hypercube can be expressed as where , i.e. , simply a mapping from to .there are coefficients for every subset of loci out of loci , so in total coefficients .a coefficient is uniquely specified by these nominally operations ( for each coefficient ) can be done in via the fft scheme illustrated in fig .[ fig : fft ] .both the forward and reverse transform are implemented in ` hypercube_lowd ` .arrow going up indicate addition , going down substraction . for the general hypercubes cycles are necessary where terms differing at different bits are combined . ]an instance of ` hypercube_lowd ` can be initialized with the function values of the hypercube or with its fourier coefficients .the population class , ` haploid_lowd ` , holds instances of ` hypercube_lowd ` for the population , the mutant genotypes , the recombinant genotypes and the fitness function . from a practical point of view, an instance of a low - dimensional population is initialized in three steps , a. the class is instantiated + ....haploid_lowd::haploid_lowd(int l=1 , int rng_seed=0 ) .... + where l is the number of loci and ` rng_seed ` is a seed for the random number generator ( a random seed by default ) ; b. the initial population structure is set by the functions + .... 
int haploid_lowd::set_allele_frequencies(double * freq , unsigned long n ) int haploid_lowd::set_genotypes(vector < index_value_pair_t >gt ) int haploid_lowd::set_wildtype(unsigned longn ) .... + set the population size and composition .the first function initializes random genotypes in linkage equilibrium with the specifiec allele frequencies ` freq ` , the second explicitely sets a number of individuals for each genotype using the new type ` index_value_pair ` , and the last one sets a wildtype - only population of size ` n ` ; c. the fitness hypercube is initialized directly by accessing the attribute + .... haploid_lowd::fitness .... + in the population class . in traditional wright - fisher type models , in each generation ,the expected frequencies of gametes with a particular genotype after mutation , selection , and recombination are calculated and then the population resampled from this gamete distribution .we will now outline the steps required to update the population .a more detailed discussion can be found in .all the following steps are called by either one of the following functions : * .... int haploid_lowd::evolve(int gen=1 ) ....+ updates the population looping over a specified number of generations ` gen ` ; * .... int haploid_lowd::evolve_norec(int gen=1 ) .... + is an alternative version that skips the resampling ( deterministic evolution ) ; * .... int haploid_lowd::evolve_deterministic(int gen=1 ) .... + is another alternative that skips the recombination step ( asexual evolution ) . let be the genotype distribution at the beginning of a generation . denoting the mutation rate towards the or state at locus by , the expected after mutation will be where denotes genotype with locus flipped from to or vice versa .the first term is the loss due to mutation , while the second term is the gain due to mutation from neighbouring genotypes ( in terms of hamming distance ) .the mutation rates can be specified by the folowing set of overloaded functions : either a single double rate ( same for every position and in both forward and backward sense ) , two double rates ( forward and backward rates ) , a dimensional double array ( site - specific , identical forward and backward rates ) , or a dimensional double array , * .... int haploid_lowd::set_mutation_rate(double rate ) ; .... + takes a single rate and sets it for every position and in both forward and backward sense ; * .... int haploid_lowd::set_mutation_rate(double rate_forward , double rate_backward ) ; .... + takes two rates and sets ` rate_forward ` as the forward rate ( ) and ` rate_backward ` as the backward rate ( ) * .... int haploid_lowd::set_mutation_rate(double * rates ) ; .... + takes the pointer to an array of length l and sets site - specific rates , the same for forward and backward mutations ; * .... int haploid_lowd::set_mutation_rate(double * * rates ) .... + takes a pointer to a pair of ( pointers to ) arrays , each of length l , which contain the site - specific rates for forward ( ` rates[0 ] ` ) and backward ( ` rates[1 ] ` ) mutations .selection reweighs different the population of different genotypes according to their fitness as follows where is the population average of , which is required to keep the population size constant .the corresponding function is .... int haploid_lowd::select_gametes ( ) .... for deterministic modeling , one generation would be completed at this point and one would repeat the cycle , starting with mutation again . 
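as an illustration of the two updates just described, a minimal numpy sketch operating directly on the genotype-distribution vector could look as follows. this is our own code, not part of ffpopsim; equal forward and backward mutation rates and an exponential (malthusian) fitness weight are assumptions of the sketch.
....
import numpy as np

L = 4                      # number of loci (illustrative)
mu = 1e-3                  # symmetric per-locus mutation rate (assumption)
G = np.arange(2 ** L)      # genotypes encoded as integers 0 .. 2**L - 1

def mutate(P, mu):
    """one generation of mutation on the genotype distribution P (length 2**L):
    each genotype loses probability mu per locus and gains it from the L
    single-mutant neighbours obtained by flipping one bit."""
    P_new = P * (1.0 - L * mu)
    for locus in range(L):
        P_new += mu * P[G ^ (1 << locus)]      # neighbour with this locus flipped
    return P_new

def select(P, fitness):
    """reweight genotypes by exp(fitness) and renormalise so that the total
    stays constant; the exponential form is our assumption for the sketch."""
    w = P * np.exp(fitness)
    return w / w.sum()
....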
for stochastic population genetics , we still need to resample the population in a way that mimics the randomness of reproduction . the easiest and most generic way to do this is to resample a population of size using a multinomial distribution with the current as sampling probabilities of different genotypes .alternatively , one can sample individuals according to a poisson distribution with mean for each genotype , which will result in a population of approximately size . for large populations ,the two ways of resampling are equivalent and we chose the latter ( much faster ) alternative .the function .... int haploid_lowd::resample ( ) .... samples the next generation the expected genotype frequencies .the expected population size used in the resampling is the carrying capacity . the computationally expensive part of the dynamics is recombination , which needs to consider all possible pairs of pairs of parents and all different ways in which their genetic information can be combined . in a facultatively sexual population ,a fraction of the individuals undergo mating and recombination . in obligate sexual populations , .the genotype distribution is updated according to the following rule : the distribution of recombinant gametes would naively be computed as follows : where specifies the particular way the parental genomes are combined : if locus is derived from the mother ( resp .father ) . the genotype is summed over ; it represents the part of the maternal ( ) and paternal ( ) genotypes that is not passed on to the offspring .we can decompose each parent into successful loci that made it into the offspring and wasted loci , as follows : and , where and a bar over a variable indicate respectively the elementwise and and not operators ( i.e. , ) .the function assigns a probability to each inheritance pattern . depending on whether the entire population undergoes sexual reproduction or only a fraction of it , the entire population or a fraction replaced with .the central ingredient for the efficient computation of is the fourier decomposition introduced above . the generic fourier coefficient of is given by just as and be expressed as a combination of and , we can invert the relation and express the generic as a function of and , as follows : . using this new basis and exchanging the order of summations ,we obtain notice that can be pulled out of the two inner sums , because the odds of inheriting a certain locus by the mother / father is independent of what their genetic makeup looks like .next we expand the product and introduce new labels for compactness , where is the number of loci inherited from the mother among the in . runs from ( everything happens to be contributed by the father ) to ( everything from the mother ) . and are all ( unordered ) partitions of into sets of size and , respectively .now we can group all in the inner sum with , all with , and all with .the three sums ( over , , and ) are now completely decoupled . 
moreover, the two sums over the parental genotypes happen to be the fourier decomposition of .hence , we have the quantity can be calculated efficiently , for each pair of partitions , by realizing that ( a ) for , there is exactly one term in the sum on the right that is non - zero and ( b ) all lower - order terms can be calculated by successive marginalizations over unobserved loci .for instance , let us assume that and that the only missing locus is the m - th one .we can compute there are ways of choosing loci out of , which can be inherited in different ways ( the partitions in and in eq .( [ eq : reccoeff ] ) ) such that the total number of coefficients is .note that these coefficients are only calculated when the recombination rates change .furthermore , this can be done for completely arbitrary recombination patterns , not necessarily only those with independent recombination events at different loci . `haploid_lowd ` provides a function to calculate from recombination rates between loci assuming a circular or linear chromosome .the probability of a particular crossover pattern is calculated assuming independent crossovers .the function .... int haploid_lowd::set_recombination_rate(double * rec_rates ) .... assumes a double array of length for a linear chromosome and of length for a circular chromosome . for a linear ( resp .circular ) chromosome , the i - th element of the array is the probability of recombining after ( resp . before ) the i - th locus .furthermore , the mating probability must be specified explicitely via the attribute .... haploid_lowd::outcrossing_rate .... the default is obligate sexual reproduction .the code offers a simpler alternative for free recombination . in this case , only the global mating probability needs to be entered .if the user does not set the recombination rates via ` set_recombination_rate ` , free recombination is the default behaviour .otherwise , this option is controlled by the following boolean attribute ....haploid_lowd::free_recombination .... note that , in a circular chromosome , there is effectively one more inter - locus segment ( between the last and the first locus ) in which crossovers can occur , and the total number of crossovers has to be even . assuming independent crossovers , the global recombination rate of circular chromosomes is lower than a linear chromosome of the same length by a factor of , where is the recombination rate between the first and last loci .the recombination process itself is initiated by .... int haploid_lowd::recombine ( ) .... for more than loci , storing then entire genotype space and all possible recombinants becomes prohibitive .hence we also include a streamlined individual based simulation package that can simulate arbitrarily large number of loci and has an overall runtime and memory requirements in the worst case scenario .the many - loci package uses the same interface as the few - loci one .this makes it easy , for example , to first test an evolutionary scenario using many ( all ) loci and to focus on the few crucial ones afterwards . to speed up the program in many cases of interest, identical genotypes are grouped into clones .this part of the library was developed for whole genome simulations of large hiv populations ( , ) . a specific wrapper class for hiv applicationsis provided . 
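before turning to the details of the many-locus implementation, a minimal numpy sketch of the hypercube transform used above may be helpful. it is our own illustration of the l-pass butterfly scheme of fig. [ fig:fft ], not library code; ffpopsim's sign and normalisation conventions may differ.
....
import numpy as np

def hypercube_transform(f):
    """fast walsh-hadamard-type transform of a function on the l-dimensional
    binary hypercube, given as an array of length 2**l.  l passes, each
    combining entries whose genotypes differ at one bit; total cost O(l 2**l).
    sketch only -- normalisation is left out."""
    f = np.array(f, dtype=float)
    n = f.size
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            a = f[i:i + step].copy()
            c = f[i + step:i + 2 * step].copy()
            f[i:i + step] = a + c              # "arrow up": addition
            f[i + step:i + 2 * step] = a - c   # "arrow down": subtraction
        step *= 2
    return f
....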
for more than 20 loci ,it becomes infeasible to store the entire hypercube .instead , we store individual genotypes as bitsets .each genotype , together with the number of individuals that carry it , as well as traits and fitness associated with it is stored for as long as it is present in the population .all of this is aggregated in the structure ` clone_t ` .the population is a vector of clones .each generation clones size are updated and added to the new generation , new clones are produced , and empty ones deleted .fitness functions are again functions on the hypercube .the latter is implemented as ` hypercube_highd ` . instead of storing all possible fitness values , `` stores non - zero fourier coefficients .whenever a new genotype is produced , its fitness is calculated by summing the appropriate coefficients . to implement mutation , a poisson distributed number with mean drawn for each locus and mutations are introduced at locus into randomly chosen genotypes .mutations are bit - flip operations in the bitset .only a global mutation rate is currently supported .prior to selection , the population average and a growth rate adjustment are computed .the latter is used to keep the population size close to the carrying capacity .the size of each clone is then updated with a poisson distributed number with mean , where is the recombination rate .another poisson distributed number with mean is set aside for recombination later .the individuals marked for sexual reproduction during the selection step are shuffled and paired .for each pair , a bitset representing the crossover pattern is produced and two new clones are produced from the two parental genomes .alternatively , all loci of the two genomes can be reassorted at random .the c++ library includes python bindings that greatly simplify interactive use and testing .the wrapping itself is done by swig .most notably , the c++ classes ` haploid_lowd ` , ` haploid_highd ` and the hiv - specific subclass are fully exposed to python , including all their public members . the performance speed for evolving a populationis unchanged , since the ` evolve ` function iterates all steps internally for an arbitary number of generations. the bindings are not completely faithful to the c++ interface , to ensure a more intuitive user experience .for instance , c++ attribute set / get members are translated into python properties via the builtin ` property ` construct .furthermore , since direct access to the ` hypercube_lowd ` instances from python is not straightforward , a few utility functions have been written to do common tasks .the fitness hypercube can be set easily by either one of ....haploid_lowd.set_fitness_function(genotypes , fitnesses ) haploid_lowd.set_fitness_additive(fitness_main ) .... the former function is used to set specific points on the hypercube : it takes a sequence of genotypes ` genotypes ` ( integers or binary literals using ` 0b ` notation ) and a sequence of fitness values ` fitnesses ` , corresponding to those genotypes .any missing point on the fitness hypercube will be consider neutral .the second function creates an additive fitness landscape , in which main effects are specified by the l - dimensional input sequence ` fitness_main ` .after installation , the ffpopsim library can be used in python as a module , e.g. .... from ffpopsim import haploid_lowd .... 
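a minimal usage sketch assembled from the functions documented above might then look as follows. all parameter values are arbitrary, and we assume that `set_wildtype` and the `outcrossing_rate` attribute are exposed to python as described for the c++ class.
....
import ffpopsim

L = 3                                        # number of loci (illustrative)
pop = ffpopsim.haploid_lowd(L)
pop.set_wildtype(10**8)                      # wildtype-only population of 10^8 individuals
pop.set_mutation_rate(1e-5)
pop.set_recombination_rates([1e-2] * (L - 1))
pop.set_fitness_additive([0.01, 0.0, -0.005])   # one additive effect per locus
pop.outcrossing_rate = 0.1                   # facultative sexual reproduction

pop.evolve(500)
print(pop.get_genotype_frequencies())        # frequencies of all 2**L genotypes
....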
the bindings make heavy use of the numpy library and its swig fragments and typemaps .we therefore recommend to import numpy before ffpopsim , although this is not strictly necessary .moreover , the python binsings include a few functions for plotting features of the population , such as genetic diversity .the python module matplotlib is required for this purpose .the hiv - specific part of the code has been expanded further in python to enable quick simulations of viral evolution under commonly studied conditions .in particular , random genotype - phenotype maps for viral replication capacity and drug resistance can be generated automatically from a few parameters , via the functions ....hivpopulation.set_resistance_landscape .... the input parameters reflect a number of typical properties of hiv populations , such as the fraction of sites carrying potentially adaptive mutations .see the inline python documentation for further details on these functions . moreover , since studies of hiv evolution often involve a large number of genotypic data , a function for saving the genotype of random individuals from the current population in a compressed format has been added .the syntax is the following : .... hivpopulation.write_genotypes_compressed(filename , number_of_individuals ) .... where ` filename ` is the name of the file , in which the data are to be stored , and ` number_of_individuals ` if the size of the random sample .the data can be accessed again by the standard numpy ` load ` function .one of the most striking effects of genetic epistasis is the slowdown of evolution when a combination of mutations is beneficial , but intermediate mutants are deleterious compared to wildtype .such scenario is relevant in applications , for instance for the emergence of bacterial or viral resistance to drugs .not surprisingly , recombination plays a central role in this process . on the one hand, it enhances the production rate of multiple mutants , on the other it depletes the class of complete mutants by back - recombination with deleterious backgrounds .if the script is run with different recombination rates , the effect of this parameter on the time for valley crossing can be investigated , as shown in fig .[ fig : valley ] .the full scripts producing the figures is provides as separate file . during an hiv infection ,the host immune system targets several viral epitopes simultaneously via a diverse arsenal of cytotoxic t - cells ( ctls ) .mutations at several loci are thus selected for and start to rise in frequency at the same time but , because of the limited amount of recombination , end up in wasteful competition ( interference ) at frequencies of order one .the theoretical description of genetic interference is involved and often limited to two - loci models , but ffpopsim makes the simulation of this process straightforward .the following script evolves a 4-loci population under parallel positive selection and tracks its genotype frequencies : # evolve until fixation of the quadruple mutant , # storing times and genotype frequencies times = [ ] genotype_frequencies = [ ] while pop.get_genotype_frequency(0b1111 ) < 0.99 and pop.generation<1e7 : pop.evolve ( ) times.append(pop.generation ) genotype_frequencies.append(pop.get_genotype_frequencies ( ) ) .... hiv evolution during chronic infection is determined by a number of parallel processes , such as mutation , recombination , and selection imposed by the immune system . 
in combination, these processes give rise to complicated dynamics, and it is generally not understood how population features such as genetic diversity depend on the model parameters. hence simulations are an important source of insight. ffpopsim offers a dedicated dual c++/python interface to this problem via its class `hivpopulation`. the following script simulates an hiv population for one thousand generations under a random static fitness landscape and stores a hundred random genomes from the final population in a compressed file (a schematic sketch of such a script is given below). numpy can be used subsequently to analyze the genome sequences. alternatively, the internal python functions can be used, e.g. for calculating the fitness distribution directly using `hivpopulation.get_fitness_histogram`, as shown in fig. [ fig:hivfitness ]. the full scripts producing the figures are provided as separate files.
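since the listing itself is not reproduced here, the following schematic sketch shows how such a script could be assembled from the functions named above; the constructor argument and the landscape call are assumptions and should be checked against the class documentation.
....
import ffpopsim

# the constructor argument (population size) is an assumption
pop = ffpopsim.hivpopulation(10**6)

# a random genotype-phenotype map would be set here, e.g. via
# pop.set_resistance_landscape(...); its arguments are not reproduced in the text

pop.evolve(1000)                                          # one thousand generations
pop.write_genotypes_compressed('hiv_sample.npz', 100)     # store 100 random genomes
....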
the analysis of the evolutionary dynamics of a population with many polymorphic loci is challenging since a large number of possible genotypes needs to be tracked. in the absence of analytical solutions, forward computer simulations are an important tool in multi-locus population genetics. the run time of standard algorithms to simulate sexual populations increases as with the number of loci, or with the square of the population size. we have developed algorithms that allow the simulation of large populations with a run-time that scales as . the algorithm is based on an analog of the fast-fourier transform (fft) and allows for arbitrary fitness functions (i.e. any epistasis) and genetic maps. the algorithm is implemented as a collection of c++ classes and a python interface. *availability:* http://code.google.com/p/ffpopsim/
in the past two decades , improved understanding of dimension has been stimulated by studies of nonlinear dynamical systems and their strange attractors .canonical dimensions used to characterize strange attractors have been defined in terms of zero - scale limits .these limit - based dimensions are inferred by varying a partition scale and extrapolating results to an invariant limit . in this paperwe consider _ scale - local _ rnyi dimensions of a strange attractor of the hnon map as explicit functions of scale .we demonstrate that approximations to the zero - scale limit by extrapolation represent scale averages of scale - local dimension .we also contrast dimension defined as a topological invariant with dimension based on numerical analysis of arbitrary distributions and the problem of estimation based on computationally - accessible scale intervals , for which extrapolations based on _ a priori _ considerations are not relevant .in this paper we review limit - based and scale - local dimension definitions , identify limit - based dimensions as scale averages of scale - local dimensions , survey previous limit - based dimension analyses of a hnon attractor , present a scale - local dimension analysis of this attractor , and examine the general structure of scale - local and running - average dimension distributions and their monotonicity with rnyi index .we conclude by examining the status of limit - based dimensions as topological invariants and their relation to scale - local dimension , information and dimension transport .dimensionality is a fundamental property of distributions characterizing their correlation structure .modern dimension theory is based on the work of carathodory and hausdorf .dimension is there defined in terms of asymptotic limits of set measures depending on partition scale .limit - based dimensions are invariant properties of distributions also invariant under certain transformations .hausdorff - besicovitch dimension is based on the set measure where is a partition of the embedding space , is the _ support _ of set and is the size of the partition element .the characteristic scale of a hausdorff partition is an_ upper limit _ on partition - element size .if the point measure or dimension function is assumed to have a power - law form with arbitrary , then hausdorff - besicovitch dimension is the value of for which remains finite , .box - counting dimension is based on partition elements ( boxes ) which have a common size ( in contrast to hausdorff partitions ) .the number of boxes of size required to cover a set is for self - similar sets is expected to follow a power law on scale : , where is the fractal or box - counting dimension , in which case with the boundary size of . by analogyone can define a box - counting set measure the limit should remain finite for = . 
box - counting dimension can be evaluated with the computationally convenient eq .( [ boxdim ] ) , whereas hausdorff set measure , an _ infinite _ sum over partition elements , can not be factored as in the second line of eq .( [ hausbox ] ) .hausdorff and box - counting dimensions were initially applied to strange attractors with the goal of better understanding the nonlinear processes which define such distributions .the dimension concept was later expanded to information , correlation and higher rnyi dimensions , reflecting multipoint ( -point ) correlations of measure distributions .these dimensions are also conventionally defined in terms of zero - scale limits .box - counting and hausdorff dimensions correspond to .information dimension is defined application of lhpital s rule in the limit .correlation dimension measures two - point ( ) correlations as defined by the correlation integral c_2(e , e ) & = & 1 n^2(e ) _i , j=1^n ( e-|x_i - x_j| ) + & & _ i=1^m(e , e ) n_i^[2](e , e)/n^[2](e ) , with as in the case of a fractal . in general , , and , with the rnyi dimensions defined by and , \end{aligned}\ ] ] where we approximate the rank- normalized correlation integral by } / n^{[q]} ] , is the normalized contents ( point count or measure total ) of the partition element , }/n^q \rangle \approx \langle n_i / n \rangle^q ] then ranges from the limit - based ] and may be chosen to minimize boundary - scale and void - bin biases in a scale - local dimension distribution .variation of with decreasing lower bound may give a misleading impression of convergence to .integration over an increasing scale interval attenuates scale - local variation in proportion to the averaging interval , suggesting convergence to a zero - scale dimension value ( if it exists ) which is not necessarily the limit of the mean .for illustration we consider limit - based and scale - local dimensions of a well - known strange attractor of the hnon map . the attractor we analyze corresponds to parameter values and , whose properties have been extensively described in the literature , and falls in the bounded region ] [ null ] d_i & & = i .these six values are compared ( solid points ) to other averaging schemes and measurements in fig .[ dimsurv ] and to the full scale - local dimension distributions in fig .[ henplotsa1 ] , where we compare to obtained from a scale - local distribution .value was obtained in by extrapolating the from eq .( [ null ] ) to a ` zero - scale ' limiting value , driving the estimate somewhat high compared to a uniformly - weighted average over the entire ] also used to form running - average distributions in fig .[ henplotsa2 ] . within this intervalthe distributions closely approximate those of the parent .extensions beyond this optimum interval ( gray curves ) are subject to significant bias ( boundary and void - bin ). 
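the procedure behind these scale-local estimates can be sketched in a few lines of python. the code below is our own illustration, not the analysis code used for the figures: the canonical hénon parameters a = 1.4, b = 0.3 are assumed, far fewer iterations are used than in the 10 m runs above (so void-bin bias sets in at larger scale), and only the q = 0 (box-counting) case is shown.
....
import numpy as np

a, b = 1.4, 0.3        # canonical henon parameters (assumption)

def henon_points(n_iter=200_000, discard=1000):
    x, y = 0.1, 0.1
    pts = np.empty((n_iter, 2))
    for _ in range(discard):                       # discard the transient
        x, y = 1.0 - a * x * x + y, b * x
    for i in range(n_iter):
        x, y = 1.0 - a * x * x + y, b * x
        pts[i] = (x, y)
    return pts

def box_counts(pts, scales):
    """number of occupied boxes N(e) at each partition scale e (q = 0)."""
    counts = []
    for e in scales:
        boxes = np.floor(pts / e).astype(np.int64)
        counts.append(len(np.unique(boxes, axis=0)))
    return np.array(counts, dtype=float)

pts = henon_points()
scales = np.logspace(-0.5, -3.0, 26)               # roughly 2.5 decades of scale
N = box_counts(pts, scales)

# scale-local dimension: local slope of log N against log(1/e);
# a running average of these slopes estimates the limit-based dimension
logN, loge = np.log(N), np.log(1.0 / scales)
d_local = np.gradient(logN, loge)
....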
dotted curves at smaller scale ( for 1 m iterations ) illustrate void - bin bias .also plotted are the local - average values from ( solid dots ) shown in fig .[ dimsurv ] .averages of the scale - local distribution over nearly matching scale intervals ( open boxes ) are in good agreement .( solid ) and ( dashed ) over four scale decades for 10 m map iterations .dotted curves are for 1 m map iterations showing void - bin bias .solid dots are local scale averages from .open boxes are comparable averages of scale - local dimensions showing good agreement .horizontal lines correspond to limit - based estimates of ( solid ) and of and = 1.26 ( dashed).,width=384 ] the distributions are highly structured ( structure in fig .[ henplotsa1 ] limited by scale resolution ) and very reproducible ( typical variation with different mapping seeds is within two line widths ) .scale - local dimension is clearly not point - wise monotonically ordered with ( discussed further in sec .[ mono ] ) .there is no trend within this scale interval for decreasing structure at smaller scale .it is suggested in another high - resolution study over 11 decades that the dimension distribution may itself be self - similar on scale _ resolution _ and does not obviously converge to a limiting value at smaller scale . for for distributions in fig .[ henplotsa1 ] with .open triangles show data from also as a running average according to eq .( [ double ] ) .lower curves show boundary bias due to starting point .uppermost curves show results from modified limit - based definition eq .( [ single ] ) and .open triangles show data from as a running average according to eq .( [ double]).,width=384 ] in fig .[ henplotsa2 ] are shown running averages of two scale - local distributions of the form eqs .( [ dlim],[double ] ) based on distributions in fig . [ henplotsa1 ] .middle curves ( solid : , dashed : ) correspond to chosen to minimize bias near the boundary scale .these running averages continued over a semi - infinite scale interval would terminate in limit - based dimension values .the low - scale limiting values in this plot furnish the best _ estimates _ of limit - based dimensions given finite computing resources .lower curves correspond to and are severely biased .horizontal lines correspond to limit - based estimates of ( solid ) and of and = 1.26 ( dashed ) .open triangles are data from treated as a running average according to eq .( [ double ] ) .uppermost two curves illustrate eq .( [ single ] ) for showing a different aspect of boundary bias for this dimension definition .similar results obtain for eq .( [ avdim ] ) ( there is a small vertical shift due to the different scale weighting ) . for ] ( increasing from top to bottom curve ) .we observe large excursions in dimension values ( up to 30% over the scale interval ) .these results compare well with in fig . 2 of . increasing acts as a contrast adjustment , enhancing sensitivity to large density variations on the attractor .there is a suggestion of periodicity on scale for larger values .the right panel shows running averages , also for ] .if a transformation generates dimension transport over a _ finite _ scale interval its integral in eq . 
( [ diminv ] ) , which is information , should be zero in the asymptotic limit , and limit - based dimensions remain invariant .the scale dependence of dimension transport reflects the scale structure of transformations and thus provides a basis for classifying them .an analogy can be drawn between the logarithmic system of entropy , information , dimension and dimension transport which describes correlation over arbitrary scale intervals and the linear system of correlation integral , cumulant , autocorrelation and autocorrelation difference suitable for correlation structure restricted to small scale intervals .difference quantities which compare two distributions ( possibly related by a transformation ) can be defined in the two systems by c_q(e ) & = & ^e_0 a_q(e ) d e + c_q(e ) & = & ( 1-q ) _e^l d_q(e ) d e , where is an autocorrelation difference or _net _ autocorrelation , is dimension transport , can be related to cumulants and to information .autocorrelation and scale - local dimension play analogous roles as local or differential correlation measures on scale , in linear and logarithmic systems respectively .entropies and correlation integrals are the corresponding integral quantities .changes in correlation result in _ transport _ of autocorrelation or dimension on scale as conserved quantities .choice of linear or logarithmic correlation measures depends on the scale structure of distributions .the linear system ( with a long history of development ) is better suited for linear periodic systems ; the logarithmic system for complex , nonlinear aperiodic systems .topology seeks static properties of transformation - invariant sets amenable to rigorous treatment .science seeks to describe the evolution of transient complex systems viewed imperfectly .science thus requires a broader descriptive system , a generalized , self - consistent treatment containing topological invariance and limit - based measures as a special case .studies of nonlinear dynamics have provided a fruitful interface between mathematical idealization and real - world arbitrariness . after a period of rapid developmentone could pursue a path toward greater rigor which emphasizes a subset of interesting dynamics and methods and favors limit - based measures .alternatively , one could strive to encompass arbitrarily scale - dependent systems in a generalized scale - local theory of dimension , with limit - based dimension as a special case . in this paperwe compare scale - local and limit - based dimensions through analysis of a well - known distribution a strange attractor of the hnon map .we find that limit - based dimension is an asymptotic scale average of scale - local dimension .the former is not , a limiting scale - local value at zero scale , but rather , an average over a semi - infinite scale interval .the general quantity is , an average over arbitrary scale interval with limiting cases ( limit - based dimension ) and ( scale - local dimension at scale and resolution ) . running average descending from a boundary scale _ estimates _ limit - based dimension in experimental or computational contexts .scale - local rnyi dimensions of the hnon map attractor analyzed here are _ highly scale dependent _ , with variations on scale up to 30% of mean values , no indication of convergence to a fixed limit and no consistent monotonicity on index .running averages based on scale - local distributions do exhibit monotonicity on index for significant averaging intervals . 
running scale averagesprovide the best means for estimating limit - based dimension values from numerical or experimental data , but as statistical estimators these averages are subject to several significant sources of bias , as demonstrated in this and other analyses .we re - express dimension invariance in terms of scale averages , information and dimension transport .limit - based dimension defined on a semi - infinite scale interval is invariant under transformations corresponding to _ finite information _ , or equivalently dimension transport restricted to a finite scale interval ( recursive transformations for example do not satisfy this condition ) . by the same argument ,dimension _ estimators _ derived from finite scale intervals are not generally invariant under transformations , in fact may be used to study the scale - dependent structure of transformations .dimension transport provides an alternative classification basis .7 pesin y b 1997 _ dimension theory in dynamical systems _ ( the university of chicago press ) .hausdorff f 1919 math .ann . * 79 * 157 .czy j 1994 _ paradoxes of measures and dimensions originating in felix hausdorff s ideas _ ( world scientific , singapore ) .falconer k 1990 _ fractal geometry _( wiley , chichester ) .rudolph o 1994 _ thermodynamic and multifractal formalism and the bowen - series map _ , desy preprint ( issn 0418 - 9833 ) 94 - 122 .baker g l and gollub j p 1990 _ chaotic dynamics _ ( cambridge , cambridge university ) .grassberger p 1983 _ phys .lett . _ * 50 * 346 - 349 rnyi a 1960 _ mta iii .* 10 * 251 - 282 .reid j g and trainor t a math - ph/0304010 , submitted to _hnon m 1976 _ commun .phys . _ * 50 * 69 .hunt b r 1996 _ nonlinearity _ * 9 * 845 - 852 .grassberger p 1983 _ phys .lett . _ a * 97 * 227 - 230 .russell d a , hansen j d and ott e 1980 _ phys .* 45 * 1175 .grassberger p 1983 _ phys .lett . _ a * 97 * 224 - 226 . , edited by shlesinger m f , zaslovsky g m and frisch u 1995 ( springer - verlag , berlin ) .feller w 1971 _ an introduction to probability theory and its applications vol ii _( wiley , ny ) .sprott j c 2003 _ chaos and time - series analysis _ ( oxford university press ) .grassberger p 1988 _ phys .lett . _ a * 128 * 369 - 373 .trainor t a 1998 _ scale - local topological measures _ preprint , university of washington .
we compare limit-based and scale-local dimensions of complex distributions, particularly for a strange attractor of the hénon map. scale-local dimensions as distributions on scale are seen to exhibit a wealth of detail. limit-based dimensions are shown to be averages of scale-local dimensions, in principle over a semi-infinite scale interval. we identify some critical questions of definition for practical dimension analysis of arbitrary distributions on bounded scale intervals. keywords: scale, dimension, information, correlation integral, hénon map
* samples . * both als and flash experiments used samples microfabricated on silicon nitride membranes supported in a silicon window frame .the ura and test pattern for the als experiments were fabricated using center for x - ray optics nanowriter , lawrence berkeley national lab .polymethyl methacrylate , a positive resist , was patterned on si membrane substrate and subsequently gold electroplated ( 10 nm au ) .patterning dose was achieved in a fraction of a second , enabling potential mass production .following glutaraldehyde fixation , spiroplasma cells were air dried from a solution containing s. melliferum on a si3n4 window covered by a 10 nm poly - l - lysine .the ura was fabricated next to a spiroplasma cell using focused ion beam milling at the national center for electron microscopy .* at the als , diffaction patterns were collected using a coherent portion of a soft x - ray beam ( =2.3 nm ) from an undulator source selected by a 5 m pinhole .the hologram was collected with an in - vacuum back illuminated ccd ( 1300x1340 20 m pixels ) placed at 200.5 mm from the specimen .the direct beam is blocked by a beamstop placed in front of the detector to prevent damage to the camera .the beamstop is moved during data collection to recover a large portion of the diffraction pattern .total collection time was 5 seconds .at flash the same ccd is placed at 54.9 .mm from the specimen and collects elastically scattered x - rays filtered by a graded multilayer mirror .a hole in the mirror allows the direct beam through , removing the need for a beamstop . * reconstruction . * the autocorrelation map , obtained by fourier transform of the diffraction pattern ,was multiplied by a binary mask 0 in the region outside the cross correlation between the object and the ura ( the hologram ) . ) .the reconstruction was performed by applying the same processing used for pinhole camera images , by replacing the recorded intensity image with the masked complex valued cross correlation term . a cyclical convolution with a ura decodes the hologram .the ura autocorrelation is a delta function in periodic or cyclical systems , not when surrounded by empty space . to mimic the cyclical correlation, a mosaic of 3x3 binary uras is convolved with the holographic cross term .the reconstruction procedure retrieves the hologram of the object to a resolution determined by the size the ura elements ( at sub - array spacing resolution ) .however when the spacing between dots is larger than the dots , the array no longer scatters optimally the available light , decreasing the snr .the signal is proportional to the number of elements in the array , and the noise is proportional to the square root of the number of elements used to deconvolve the image .only 2x2 uras contribute to the reconstructed image since the holographic term is at most twice the size as the array itself ( the object is smaller than the array ) .the mosaic uras are made of + 1 and -1 ( instead of 1 and 0 ) terms , yielding an additional factor of the noise .the signal to noise ratio therefore increases as . 
for a phase urathe signal to noise ratio would increase by a factor of 2 .* phase retrieval .* the reference points in the ura and a few dust particles where fixed in space throughout the optimization process , facilitating phase retrieval optimization .the addition of a redundant linear constraint yields more reliable reconstructions , de facto increasing the resolution of the retrieved complex valued images .the rest of the illuminated sample was reconstructed with dynamic support .the reproducibility of the image as a function of spatial frequency drops at 75 nm resolution .99 stroke , g. w. _ introduction to coherent optics and holography _ , ( academic press , new york , ny 1969 ) .mcnulty , i. et al .high - resolution imaging by fourier transform x - ray holography. _ science _ * 256 * , 1009 - 1012 ( 1992 ) .bajt , s chapman , h. n. spiller , e. a. alameda , j. b. woods , b. w. frank , m. bogan m. j. , barty a. , boutet s. , marchesini s , hau - riege s. p. , hajdu j. , shapiro d. a camera for coherent diffractive imaging and holography with a soft - x - ray free electron laser , appl .opt . in press .
advances in the development of free - electron lasers offer the realistic prospect of high - resolution imaging to study the nanoworld on the time - scale of atomic motions . we identify x - ray fourier transform holography , ( fth ) as a promising but , so far , inefficient scheme to do this . we show that a uniformly redundant array ( ura ) placed next to the sample , multiplies the efficiency of x - ray fth by more than one thousand ( approaching that of a perfect lens ) and provides holographic images with both amplitude- and phase - contrast information . the experiments reported here demonstrate this concept by imaging a nano - fabricated object at a synchrotron source , and a bacterial cell at a soft x - ray free - electron - laser , where illumination by a single 15 fs pulse was successfully used in producing the holographic image . we expect with upcoming hard x - ray lasers to achieve considerably higher spatial resolution and to obtain ultrafast movies of excited states of matter . , the ideal microscope for the life and physical sciences should deliver high - spatial - resolution high - speed snapshots possibly with spectral , chemical and magnetic sensitivity . x - rays can provide a large penetration depth , fast time resolution , and strong absorption contrast across elemental absorption edges . the geometry of fourier transform holography is particularly suited to x - ray imaging as the resolution is determined by the scattering angle , as in crystallography . the problem with conventional fourier holography is an unfavourable trade off between intensity and resolution . we show here how recent technical developments in lensless flash photography can be used to reduce this effect . around the century , european painters used the camera obscura , a dark camera with a pinhole to form an image on a canvas . even earlier , pinhole cameras had been used to image solar eclipses by chinese , arab and european scientists . scientists and painters knew that a small pinhole was required for reaching high resolution , but the small pinhole also dimmed the light and the image . it was eventually discovered that lenses could collect a larger amount of light without degrading resolution . shutters and stroboscopic flash illumination allowed recording of the time evolution of the image . the development of the computer allowed a resurgence of pinhole techniques and random arrays of pinholes were used , initially in x - ray astronomy . each bright point of a scene deposits a shadow - image of the pinhole array on the viewing screen . depth information about the object is encoded in the scaling of the shadow image of the object points . knowledge of the geometry of the pinhole array ( the `` coded aperture '' ) allowed numerical recovery of the image . eventually the pinholes were replaced by binary uras which were shown to be optimal for imaging . their multitudes of sharp features contain equal amounts of all possible spatial frequencies , thereby allowing high spatial resolution without sacrifice of image brightness . ura coded apertures are now commonly used in hard x - ray astronomy , medical imaging , plasma research , homeland security and spectroscopy to improve brightness where lenses are not applicable . the forerunner of our x - ray holography with a ura reference source is conventional visible - light fth . the interference pattern between light scattered by an object and light from a nearby pinhole is recorded far downstream ( fig . 1 ) . 
when this recording ( the hologram ) is re - illuminated by the pinhole reference wave , the hologram diffracts the wavefront so as to produce an image of the object . a second inverted ( `` twin '' ) copy of the object is produced on the opposite side of the optical axis . under far - field measurement conditions , a simple inverse - fourier transform of the fth recording produces an image of the specimen convolved with the reference pinhole source . as in the pinhole image of the camera obscura , the brightness of the image ( or equivalently , the signal to noise ratio ) increases as the reference pinhole increases in size , at the expense of image resolution . in general the solution to the problem of weak signal from a single pinhole ( and resulting long exposure time ) is to use multiple reference sources . for example , a unique mesoscale object has been imaged by x - ray fth using 5 pinholes sources . no two pairs of pinholes were the same distance apart , so that each holographic term could be isolated in the autocorrelation map . this geometry blocks much of the available light and limits the number of pixels available to image the specimen . hitherto efforts to produce strong signal have been pursued using complicated reference objects . the possible improvement in signal to noise of fourier - transform holograms with a strong reference saturates at 50% of that for an ideal lens - based amplitude image with loss of phase information . however there is still the difficulty of deconvolution due to the missing frequency content of the reference . the flat power spectrum of the ura is designed to optimize the reference to fill the detector with light uniformly . in summary , the optimum method of boosting the holographic signal is to use a ura as the reference object . the gain in flux compared to a single pinhole is the number of the opaque elements in the ura ( twice as much for phase uras ) , in our case , n=2592 and n=162 . the signal to noise ratio ( snr ) of the ura - produced image increases initially with the square root of the number of pixels in the ura ( see methods ) , in our case by a factor of 18 and 4.5 for n=2592 and n=162 respectively . the x - ray fth experiment is conceptually simple : a coherent beam of x - rays impinges on a specimen and the coded array , and the diffraction pattern is recorded far downstream ( fig 1 ) . we report here two experiments both using an area detector fitted to an existing experimental end station , and already used in several recent coherent x - ray diffractive imaging experiments . the first experiment was carried out at beam line 9.0.1 at the advanced light source ( als ) at the lawrence berkeley national laboratory . an x - ray beam defined using a 4 m coherence - selecting pinhole was used to image a test object placed next to a 44 nm resolution array ( figure 2 ) . the flux advantage of the ura method is illustrated by the calculations shown in figs . 2 ( ) and ( f ) and explained in the caption is made evident from our results . 73 array of 43.5 nm square gold scattering elements , imaged by scanning electron microscopy . scale bar is 2 m . ( b ) diffraction pattern collected at the advanced light source ( =2.3 nm , photons in 5 s exposure , 200 mm from the sample ) . ( c ) real part of the reconstructed hologram ( linear grayscale ) , the smallest features of the sample of 43.5 nm are clearly visible . ( d ) simulation with photons . ( e ) simulation with the same number of photons , but a single reference pinhole . 
( f ) cut through two dots separated by 120 nm . [ fig:2 ] , scaledwidth=48.0% ] the second series of experiments was carried out at the flash soft x - ray free electron laser ( =13.5 nm ) in hamburg . a 15 fs pulse of photons traverses the sample and ura , and is diffracted just before they both turn into a hot plasma , and become vapourised . a bacterium was imaged to demonstrate that the experiment was feasible with the lower scattering strength of biological material ( fig . 3 ) . more information on these experiments is given in the figure captions and methods section . in fth , the final image is produced by an especially simple one - step calculation . a fourier transformation of the measured intensity pattern delivers the autocorrelation of the wavefield that exits the object plane . in standard fth with a point reference source such an autocorrelation would already contain the image . for fth with a ura reference source , it includes the convolution of the object and the ura . positioning the sample and the array with a sufficient spacing between them ensures that this information - containing term will be separated from the autocorrelations of the array and the object . to extract the final image , we convolved the holographic term ( with the rest of the autocorrelation map set to zero ) using a mosaic of uras , that is using the same delta - hadamard transform used to reconstruct the pinhole camera image intensities , but replacing the intensity image with the complex valued cross correlation term . the reconstructed images made by this method are shown in figure 2 using 44-nm ura elements . the forward or small angle scattering was discarded during data collection , yielding edge enhancement in the image . for the biological image additional refinement was demonstrated . by using the fourier - hadamard - transform image provided by a 150 nm resolution ura as the starting point for a phase retrieval algorithm , we refined the 15-fs flash image of a small helical bacterium ( spiroplasma melliferum ) to the full extent of the recorded diffraction pattern at half the pinhole - size resolution ( 75 nm ) ( fig . 3 and methods section ) . m ) . ( b ) diffraction pattern collected at flash in a single 15 fs ( =13.5 nm ) pulse . ( c ) reconstructed image by phase retrieval methods . ( d ) the reproducibility of the recovered image as a function of spatial frequency drops at 75 nm resolution . [ fig:3 ] , scaledwidth=48.0% ] the resolution of the holograms corresponds to the resolution of the fabricated uras : 44 nm for the lithographic pattern used at als , and 150 nm ( refined to 75 nm ) for the bacterium used at flash . these values are among the best ever reported for holography of a one - micron - sized object , and we believe resolution will improve in the future with the development of nano - arrays . uras with 25 nm resolution and 9522 elements have already been produced by conventional methods . in conclusion , we have successfully demonstrated holographic x - ray imaging with uras and obtained amplified x - ray holographic images at attractive resolutions . these images were orders of magnitude more intense than those from conventional fourier transform holography , enabling the potential use of novel tabletop sources . since ura diffraction uniformly filled the detector with light , image reconstruction could be performed by a fourier - hadamard transform in a single step without iterations . 
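The one-step decoding described above can be illustrated with a minimal numerical sketch. Everything in it is an assumption made for illustration: the grid size, the test object, the random binary block standing in for the fabricated URA, and the decoding by a single Fourier-space multiplication (which amounts to convolving the autocorrelation map with the known reference pattern; the experiments used the Hadamard-type decoding described in the text). A real URA has an essentially flat power spectrum, which is what makes this last step exact rather than approximate.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256                                   # simulation grid, illustrative size

# sample plane: small test object plus an extended binary reference,
# placed far enough apart that the reconstruction terms do not overlap
plane = np.zeros((N, N))
plane[56:68, 56:60] = 1.0                 # test object (an "L" shape)
plane[56:60, 56:68] = 1.0

ura = np.zeros((N, N))                    # random binary stand-in for the URA
ura[128:152, 128:152] = rng.integers(0, 2, size=(24, 24))
plane += ura

# far-field intensity = the measured hologram (idealized, noise-free)
hologram = np.abs(np.fft.fft2(plane)) ** 2

# one-step reconstruction: multiply by the known reference spectrum and
# inverse-transform; the term O|R|^2 puts the object back at its true
# position, exactly so if |R|^2 is flat (the URA design goal), only
# approximately here because the reference is a random binary block
R = np.fft.fft2(ura)
image = np.abs(np.fft.ifft2(hologram * R))

print("mean |image| in object region:", image[50:74, 50:74].mean())
print("mean |image| in empty region :", image[0:24, 200:224].mean())
```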
the technique shows good prospects for improvements in the spatial resolution , and the results verify the predicted high performance values with respect to high brightness and ultrafast time resolution . imaging with coherent x - rays will be a key technique for developing nanoscience and nanotechnology , and or massively parallel holography will be an enabling tool in this quest . we are grateful to the staff of flash and als for help , and to d. a. fletcher for the spiroplasma samples . this work was supported by the following agencies : the u.s . department of energy by lawrence livermore national laboratory under contract w-7405-eng-48 and de - ac52 - 07na27344 ; the advanced light source ; national center for electron microscopy ; center for x - ray optics at lawrence berkeley laboratory under doe contract de - ac02 - 05ch11231 ; the stanford linear accelerator center under doe contract de - ac02 - 76-sf00515 ; the european union ( tuixs ) ; the swedish research councils , the dfg - cluster of excellence through the munich - centre for advanced photonics ; the natural sciences and engineering research council of canada to m. b. , and the sven and lilly lawskis foundation of sweden to m.m.s .
the relativistic contribution to the rate of precession of perihelion of mercury is calculated accurately using general relativity .however , the problem is commonly discussed in undergraduate and graduate classical mechanics textbooks , without introduction of an entirely new , metric theory of gravity .one approach is to define a lagrangian that is consistent with both newtonian gravity and the momentum - velocity relation of special relativity .the resulting equation of motion is solved perturbatively , and an approximate rate of precession of perihelion of mercury is extracted .this approach is satisfying in that all steps are proved , and a brief introduction to special relativity is included . on the other hand , one must be content with an approximate rate of precession that is about one - sixth the correct value .another approach is that of a mathematical exercise and history lesson .a modification to newtonian gravity is given , without proof , resulting in an equation of motion that is the same as that derived from general relativity .the equation of motion is then solved using appropriate approximations , and the correct rate of precession of perihelion of mercury is extracted .both approaches provide an opportunity for students of classical mechanics to understand that relativity is responsible for a small contribution to perihelic precession and to calculate that contribution .we present a review of the approach using only special relativity , followed by an alternative solution of the equation of motion derived from lagrange s equations. an approximate rate of perihelic precession is derived that agrees with established calculations .this effect arises as one of several small corrections to kepler s orbits , including reduced radius of circular orbit and increased eccentricity .the method of solution makes use of coordinate transformations and the correspondence principle , rather than the standard perturbative techniques , and is approachable by undergraduate physics majors . a relativistic particle of mass orbiting a central mass is commonly described by the lagrangian where : ; ; and .( is newton s universal gravitational constant , and is the speed of light in vacuum . )the equations of motion follow from lagrange s equations , for each of , where .the results are : = 0 , \label{eq_angmom}\ ] ] which implies that ; and the first of these [ eq . 
( [ eq_angmom ] ) ] is the relativistic analogue to the newtonian equation for the conservation of angular momentum per unit mass , and is used to eliminate in eq .( [ eq_eom ] ) , time is eliminated by successive applications of the chain rule , together with the conserved angular momentum : and , therefore , substituting eqs .( [ eq_part1 ] ) and ( [ eq_part2 ] ) into the equation of motion eq .( [ eq_eom ] ) results in we anticipate a solution of eq .( [ eq_eom2 ] ) that is near keplerian and introduce the radius of a circular orbit for a nonrelativistic particle with the same angular momentum , .the result is where is a velocity - dependent correction to newtonian orbits due to special relativity .the conic - sections of newtonian mechanics are recovered by setting : which implies that where is the eccentricity .the planets of our solar system are described by near - circular orbits and require only small relativistic corrections .mercury has the largest eccentricity , and the next largest is that of mars .therefore , [ defined after eq .( [ eq_sr ] ) ] is taken to be a small relativistic correction to near - circular orbits of newtonian mechanics ( keplerian orbits ) . neglecting the radial component of the velocity , which is small compared to the tangential component , and expanding to first order in results in ( see sec .[ sec_discussion ] and app .[ app_series ] for a thorough discussion of this approximation . ) once again using the angular momentum to eliminate , this may be expressed as , or the equation of motion eq .( [ eq_sr ] ) is now expressed approximately as where .the conic - sections of newtonian mechanics , eq . ( [ eq_newton0 ] ) and eq . ( [ eq_newton ] ) , are now recovered by setting .the solution of eq .( [ eq_sr2 ] ) for approximately describes keplerian orbits with small corrections due to special relativity . if is taken to be a small relativistic correction to keplerian orbits , it is convenient to make the change of variable .the last term on the right - hand - side of eq .( [ eq_sr2 ] ) is then approximated as , resulting in a linear differential equation for : the additional change of variable results in the familiar form : where .the solution is similar to that of eq .( [ eq_newton0 ] ) : where is an arbitrary constant of integration . in terms of the original coordinates ,( [ eq_sr4 ] ) becomes where according to the correspondence principle , kepler s orbits , eq .( [ eq_newton ] ) , must be recovered in the limit , so that is the eccentricity of newtonian mechanics .to first order in , eqs .( [ eq_s_coeff_r0])([eq_s_coeff_phi0 ] ) become so that relativistic orbits in this limit are described concisely by this approximate orbit equation has the same form as that derived from general relativity , and clearly displays three characteristics : precession of perihelion ; reduced radius of circular orbit ; and increased eccentricity .the approximate orbit equation eq .( [ eq_class_rel ] ) predicts a shift in the perihelion through an angle per revolution .this prediction is identical to that derived using the standard approach to incorporating special relativity into the kepler problem , and is compared to observations assuming that the relativistic and keplerian angular momenta are approximately equal . 
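For reference, with the Keplerian substitution for the angular momentum, \(\ell^{2}\approx GMa(1-e^{2})\), the per-revolution perihelion shift takes the standard closed form below. The displayed equations did not survive extraction, so these are restored from the standard literature together with the statement in the text that the present result is one-sixth of the general-relativistic one.

```latex
\Delta\phi_{\mathrm{SR}} \;\simeq\; \frac{\pi G M}{c^{2}\, a\,(1-e^{2})},
\qquad
\Delta\phi_{\mathrm{GR}} \;\simeq\; \frac{6\pi G M}{c^{2}\, a\,(1-e^{2})}
\;=\; 6\,\Delta\phi_{\mathrm{SR}} .
```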
for a keplerian orbit , where , kg is the mass of the sun , and and are the semi - major axis and eccentricity of the orbit , respectively .therefore , the relativistic correction defined after eq .( [ eq_sr2 ] ) , is largest for planets closest to the sun and for planets with very eccentric orbits . for mercury m and , so that . ( the speed of light is taken to be . ) according to eq .( [ eq_s_def_precess ] ) , mercury precesses through an angle per revolution .this angle is very small and is usually expressed cumulatively in arc seconds per century .the orbital period of mercury is 0.24085 terrestrial years , so that the general relativistic ( gr ) treatment results in a prediction of 43.0arcsec / century , and agrees with the observed precession of perihelia of the inner planets .historically , this contribution to the precession of perihelion of mercury s orbit precisely accounted for the observed discrepancy , serving as the first triumph of the general theory of relativity .the present approach , using only special relativity , accounts for about one - sixth of the observed discrepancy , eq .( [ eq_prec_num ] ) .precession due to special relativity is illustrated in fig .[ fig1 ] . the approximate relativistic orbit equation eq .( [ eq_class_rel ] ) [ or eq .( [ eq_s_eoo ] ) together with eqs .( [ eq_s_coeff_r])([eq_s_coeff_phi ] ) ] predicts that a relativistic orbit in this limit has a reduced radius of circular orbit .this characteristic is not discussed in the standard approach to incorporating special relativity into the kepler problem , but is consistent with the gr description .an effective potential naturally arises in the gr treatment of the central - mass problem , that reduces to the newtonian effective potential in the limit . in the keplerian limit, the gr angular momentum per unit mass is also taken to be approximately equal to that for a keplerian orbit .minimizing with respect to results in the radius of a stable circular orbit , so that the radius of circular orbit is predicted to be reduced : ( there is also an unstable circular orbit .see fig .[ fig2 ] . )this reduction in radius of circular orbit is six times that predicted by the present treatment using only special relativity , for which [ see eq .( [ eq_s_coeff_r ] ) . ]most discussions of the gr effective potential eq .( [ eq_schwarz_eff_pot ] ) emphasize relativistic capture , rather than reduced radius of circular orbit .the term in eq .( [ eq_schwarz_eff_pot ] ) contributes negatively to the effective potential , resulting in a finite centrifugal barrier and affecting orbits very near the central mass ( large - velocity orbits ) .( see fig .[ fig2 ] . )this purely gr effect is not expected to be described by the approximate orbit equation eq .( [ eq_class_rel ] ) , which is derived using only special relativity and assumes orbits very far from the central mass ( small - velocity orbits ) .an additional characteristic of relativistic orbits is that of increased eccentricity .equation ( [ eq_class_rel ] ) predicts that a relativistic orbit will have increased eccentricity , when compared to a keplerian orbit with the same angular momentum : .[ see eq .[ eq_s_coeff_e ] .] this characteristic of relativistic orbits also is not discussed in the standard approach to incorporating special relativity into the kepler problem , but is consistent with the gr description . 
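A quick numerical cross-check of the precession figures quoted above (a sketch only; the orbital elements are standard textbook values, and the Keplerian substitution \(\ell^{2}\approx GMa(1-e^{2})\) is assumed):

```python
import math

GM_sun = 1.32712440018e20     # m^3 s^-2, standard value
c      = 2.99792458e8         # m / s
a      = 5.79e10              # Mercury semi-major axis, m
e      = 0.2056               # Mercury eccentricity
period = 0.24085              # orbital period, terrestrial years

# per-revolution perihelion shift with l^2 ~ GM a (1 - e^2) for a Kepler orbit
dphi_gr = 6.0 * math.pi * GM_sun / (c**2 * a * (1.0 - e**2))   # general relativity
dphi_sr = dphi_gr / 6.0                                        # special-relativistic treatment

orbits_per_century = 100.0 / period
rad_to_arcsec = 180.0 / math.pi * 3600.0

print(f"GR: {dphi_gr * orbits_per_century * rad_to_arcsec:.1f} arcsec/century")  # ~43.0
print(f"SR: {dphi_sr * orbits_per_century * rad_to_arcsec:.1f} arcsec/century")  # ~7.2
```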
the gr orbit equation in this keplerian limit( [ eq_gen_rel ] ) predicts an increase in eccentricity , which is six times that predicted by the present treatment using only special relativity .the approximate orbit equation in eq .( [ eq_class_rel ] ) provides small corrections to kepler s orbits due to special relativity .[ compare eqs .( [ eq_class_rel ] ) and ( [ eq_newton ] ) . ] a systematic verification may be carried out by substituting eq .( [ eq_class_rel ] ) into eq .( [ eq_sr2 ] ) , keeping terms of orders , , and only .the domain of validity is expressed by subjecting the solution eq .( [ eq_class_rel ] ) to the condition for the smallest value of . evaluating the orbit equation eq .( [ eq_class_rel ] ) at the perihelion results in the substitution of this result into eq .( [ eq_dov ] ) results in the domain of validity : therefore , the relativistic eccentricity , and eq .( [ eq_class_rel ] ) is limited to describing relativistic corrections to near - circular ( keplerian ) orbits .also , the relativistic correction , and thus the orbit equation eq .( [ eq_class_rel ] ) is valid only for small relativistic corrections . in sec .[ sec_keplerianlimit ] the relativistic correction to keplerian orbits [ defined after eq .( [ eq_sr ] ) ] is approximated by : neglecting the radial component of the velocity , ; and keeping terms only up to first order in the expansion . neglecting the radial component of the velocity in the relativistic correction is consistent with the assumption of near - circular , approximately keplerian orbits , and is complementary to the assumption preceding eq .( [ eq_linearized ] ) .it is emphasized that the radial component of the velocity is neglected only in the relativistic correction ; it is not neglected in the derivation of the relativistic equation of motion eq .( [ eq_sr ] ) . that there is no explicit appearance of in the relativistic equation of motion eq .( [ eq_sr ] ) , other than in the definition of , is due to a fortunate cancellation after eq .( [ eq_part2 ] ) .furthermore , the approximate orbit equation eq .( [ eq_class_rel ] ) has the same form as that [ eq .( [ eq_gen_rel ] ) ] arising from the gr description , in which the radial component of the velocity is not explicitly neglected .a first - order series approximation for is consistent with the keplerian limit .interestingly , however , the problem is soluble without truncating the series .( see appendix [ app_series ] . )this more elaborate derivation yields diminishing returns due to the complementary assumption , which implicitly constrains the velocity to be much smaller than the speed of light .the resulting approximate orbit equation [ eq . ( [ eq_ssr ] ) ] , is almost identical to that derived using a much simpler approach in sec .[ sec_keplerianlimit ] [ eq .( [ eq_class_rel ] ) ] .although this alternative orbit equation predicts a relativistic correction to eccentricity that is nearly one - half that of the gr result , it lacks the symmetry of the gr orbit equation eq .( [ eq_gen_rel ] ) , and it does not provide significant further qualitative understanding of relativistic corrections to keplerian orbits .the present approach to incorporating special relativity into the kepler problem results in an approximate orbit equation [ eq . ( [ eq_class_rel ] ) ] that has the same form as that derived from general relativity in this limit [ eq . 
( [ eq_gen_rel ] ) ] and is easily compared to that describing kepler s orbits [ eq .( [ eq_newton0 ] ) ] .this orbit equation clearly describes three corrections to a keplerian orbit due to special relativity : precession of perihelion ; reduced radius of circular orbit ; and increased eccentricity .the predicted rate of precession of perihelion of mercury is identical to established calculations using only special relativity .each of these corrections is exactly one - sixth of the corresponding correction described by general relativity in the keplerian limit .this derivation of an approximate orbit equation is complementary to existing calculations of the rate of precession of perihelion of mercury using only special relativity .the central - mass problem is described by a lagrangian that is consistent with both newtonian gravity and the momentum - velocity relation of special relativity .however , coordinate transformations and the correspondence principle are used to solve the equations of motion resulting from lagrange s equations , rather than the standard perturbative methods .the resulting closed - form , approximate orbit equation exhibits several characteristics of relativistic orbits at once , but is limited to describing small relativistic corrections to approximately newtonian , near - circular orbits .this orbit equation , derived using only special relativity , provides a qualitative description of corrections to keplerian orbits due to general relativity .exact solutions to the special relativistic kepler problem require a thorough understanding of special relativistic mechanics and are , therefore , inaccessible to most undergraduate physics majors .the present approach and method of solution is understandable to nonspecialists , including undergraduate physics majors whom have not had a course dedicated to relativity . for near - circular orbitsthe radial component of the velocity is small compared to the tangential component and is neglected in the relativistic correction .[ see eq .( [ eq_sr ] ) . ] an exact infinite - series representation , is used , rather than the approximate first - order truncated series used in sec .[ sec_keplerianlimit ] .( the remainder of this derivation parallels that of sec .[ sec_keplerianlimit ] . ) conservation of angular momentum is used to eliminate , resulting in } - 1 , \label{eq_series_lambda}\ ] ] in terms of which the equation of motion eq .( [ eq_sr ] ) is now expressed approximately as the conic - sections of newtonian mechanics , eqs .( [ eq_newton0 ] ) and ( [ eq_newton ] ) , are recovered by setting . for near - circular ,approximately keplerian orbits it is convenient to make the change of variable , so that . in terms of this new variable , the relativistic correction eq .( [ eq_series_lambda ] ) is where , and the following series is identified : thus , the equation of motion eq .( [ eq_sr5 ] ) is linearized : the additional change of variable results in the familiar form : where .the solution is similar to that of eq .( [ eq_newton0 ] ) : where is an arbitrary constant of integration . in terms of the original coordinates ,( [ eq_sr6 ] ) becomes where ( including first - order approximations ) according to the correspondence principle , kepler s orbits [ eq .( [ eq_newton ] ) with must be recovered in the limit , so that is the eccentricity of newtonian mechanics .therefore , relativistic orbits in this limit are described concisely by this alternative orbit equation differs from that [ eq . 
( [ eq_class_rel ] ) ] derived using the much simpler approach in sec .[ sec_keplerianlimit ] only in the relativistic correction to eccentricity .see the discussion at the end of sec .[ sec_discussion ] .a. einstein , `` explanation of the perihelion motion of mercury from the general theory of relativity , '' in _ the collected papers of albert einstein _ , translated by a. engel ( princeton university press , princeton , 1997 ) , vol .this article is the english translation of ref . .k. schwarzschild , `` ber das gravitationsfeld eines massenpunktes nach der einsteinschen theorie , '' sitzungsber ., phys .- math .* 1916 * , 189196 ( 1916 ) .reprinted in translation as `` on the gravitational field of a mass point according to einstein s theory , '' arxiv : physics/9905030v1 [ physics.hist-ph ] .j. droste , `` het veld van een enkel centrum in einstein s theorie der zwaartekracht , en de beweging van een stoffelijk punt in dat veld , '' versl .* 25 * , 163180 ( 1916 - 1917 ) .reprinted in translation as `` the field of a single centre in einstein s theory of gravitation , and the motion of a particle in that field , '' proc .. wetensch .* 19 * ( 1 ) , 197215 ( 1917 ) , link:<adsabs.harvard.edu / abs/1917knab ... 19 .. 197d>[<adsabs.harvard.edu / abs/1917knab ... 19 .. 197d > ] .a. m. nobili and i. w. roxburgh , `` simulation of general relativistic corrections in long term numerical integrations of planetary orbits , '' in _ relativity in celestial mechanics and astrometry : high precision dynamical thoeries and observational verifications _ , edited by j. kovalevsky and v. a. brumberg ( iau , dordrecht , 1986 ) , pp .105110 , link:<adsabs.harvard.edu / abs/1986iaus .. 114 .. 105n>[<adsabs.harvard.edu / abs/1986iaus .. 114 .. 105n > ] .n. ashby , `` planetary perturbation equations based on relativistic keplerian motion , '' in _ relativity in celestial mechanics and astrometry : high precision dynamical thoeries and observational verifications _ , edited by j. kovalevsky and v. a. brumberg ( iau , dordrecht , 1986 ) , pp .4152 , link:<adsabs.harvard.edu / abs/1986iaus .. 114 ... 41a>[<adsabs.harvard.edu / abs/1986iaus .. 114 ... 41a > ] .d. brouwer and g. m. clemence , `` orbits and masses of planets and satellites , '' in _ the solar system : planets and satellites _ , edited by g. p. kuiper and b. m. middlehurst ( university of chicago press , chicago , 1961 ) , vol .iii , pp .3194 . relativistic orbit in a keplerian limit ( solid line ) , as described by eq .( [ eq_class_rel ] ) , compared to a corresponding keplerian orbit ( dashed line ) [ eq .( [ eq_newton ] ) ] .the precession of perihelion is one orbital characteristic due to special relativity and is illustrated here for .this characteristic is exaggerated by both the choice of eccentricity and relativistic correction parameter for purposes of illustration .precession is present for smaller ( non - zero ) reasonably chosen values of and as well .( the same value of is chosen for both curves . ) ] effective potential commonly defined in the newtonian limit to general relativity ( solid line ) [ eq .( [ eq_schwarz_eff_pot ] ) ] , compared to that derived from newtonian mechanics ( dashed line ) .the vertical dotted lines identify the radii of circular orbits , and , as calculated using general relativity and newtonian mechanics , respectively .general relativity predicts [ eq . 
( [ eq_s_rc_eff_pot ] ) ] a smaller radius of the circular orbit than that predicted by newtonian mechanics .the value is chosen for purposes of illustration . ]
beginning with a lagrangian that is consistent with both newtonian gravity and the momentum - velocity relation of special relativity , an approximate relativistic orbit equation is derived that describes relativistic corrections to keplerian orbits . specifically , corrections to a keplerian orbit due to special relativity include : precession of perihelion , reduced radius of circular orbit , and increased eccentricity . the prediction for the rate of precession of perihelion of mercury is in agreement with existing calculations using only special relativity , and is one - sixth that derived from general relativity . all three of these corrections are qualitatively correct , though suppressed when compared to the more accurate general - relativistic corrections in this limit . the resulting orbit equation has the same form as that derived from general relativity and is easily compared to that describing kepler s orbits . this treatment of the relativistic central - mass problem is complementary to other solutions to the relativistic kepler problem , and is approachable by undergraduate physics majors who have not had a course dedicated to relativity .
multilayer neural networks ( mln ) are more powerful devices for information processing than the single - layer perceptron because of the possibility of _ different _ activation patterns , so - called internal representations ( ir ) , at the hidden units for the _ same _ input - output mapping .it is well known that the correlations between the activities at the hidden units are crucial for the understanding of the storage and generalization properties of a mln . a particular simple situation to study these correlationsis the implementation of random input - output mappings by the network , the so - called storage problem , near the storage capacity . using the replica trick and assuming replica symmetry the correlation coefficients building up in this casewere calculated in and shown to be characteristic for the prewired boolean function between hidden layer and output .conversely , _ prescribing _these correlations the storage properties of the networks change . the assumption of replica symmetry ( rs ) in this calculation is somewhat doubtful .in fact it is well known that the storage capacity of mln is strongly modified by replica symmetry breaking ( rsb ) , which is due to the very possibility of different internal representations . moreover, even the distribution of the output field of a simple perceptron is influenced by rsb effects . in the present paperwe elucidate the impact of rsb on the correlation coefficients between the activity of different hidden units in mln with one hidden layer and nonoverlapping receptive fields .the central quantity of interest is the joint probability distribution for the local fields at the hidden units . in the general part of this paperwe show how this distribution can be calculated both in rs and in one - step rsb . for a detailed analysiswe than specialize to mln with hidden units and discuss , in particular , the parity , committee and and machines . together with the corrections from one - step rsbthe rs results give insight in the division of labor between different subperceptrons in mln and the role of rsb .calculating finally the correlation coefficients we find that although modifying the local field distribution markedly rsb gives rise to minor corrections to the correlation coefficients only .we consider feed - forward neural networks with inputs , one hidden layer of units and a single output .the hidden units have nonoverlapping receptive fields of dimension ( tree structure ) .they are determined by the inputs via spherical coupling vectors according to with denoting the local fields .we call an activation pattern of the hidden units an internal representation ( ir ) .the output of the mln is a fixed boolean function of the ir .examples of special interest include the parity machine , , the committee machine , , and the and machine , .all ir consistent with a desired output are called legal internal representations ( lir ) .the number of and similarity between lir to a given output specifies the division of labor taking place between the different perceptrons forming the mln .it is quantitatively characterized by the correlation coefficients , where denotes the average over the inputs and the output and is a subset of natural numbers between and . 
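For reference, the three hidden-to-output boolean functions named above are, in their standard form (the displayed definitions did not survive extraction; these are the usual conventions, with \(\tau_{k}=\mathrm{sgn}(\lambda_{k})\) and odd \(K\) for the committee):

```latex
F_{\mathrm{parity}}(\tau)=\prod_{k=1}^{K}\tau_{k},
\qquad
F_{\mathrm{committee}}(\tau)=\mathrm{sgn}\!\Big(\sum_{k=1}^{K}\tau_{k}\Big),
\qquad
F_{\mathrm{AND}}(\tau)=
\begin{cases}
+1, & \tau_{1}=\dots=\tau_{K}=+1,\\
-1, & \text{otherwise.}
\end{cases}
```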
for permutation symmetric boolean functions ,the only depend on and not on the particular choice of this subset .we focus on the so - called storage problem in which the inputs and the outputs are generated independently at random according to the probability distributions and where , and .the basic quantity which gives us access to the probability of the lir and to the correlation coefficients is the distribution of the local fields at the hidden unit .it is given by denotes the average over all stored input - output patterns . denotes the partition function the measure on the gardner sphere and the integration measure we use the replica trick in eq .( [ p_h ] ) to perform the average over the inputs and introduce the overlaps between different replicas of a coupling vector of hidden unit .we will consider only permutation symmetric booleans .hence all hidden units have the same statistical properties implying and with .equation ( [ p_h ] ) takes on the form in terms of the -dimensional order parameter matrix where and .here -\sum_{k , a < b}x_k^{a}x_k^{b}q^{ab}\right ) \nonumber\\ & \times&\prod\limits_{a}\theta(\sigma f(\{{\text{sgn}}(\lambda_k^{a})\}))\delta(h_j-\lambda_1^{1 } ) \label{ph_central},\end{aligned}\ ] ] and the expression for is specified in the appendix , eq .( [ g1 ] ) , together with some more details of the calculation . in the limit the integral ( [ ph_1 ] ) is dominated by the saddle point values of the order parameters which extremize the partition function in the following , we simplify eqs .( [ ph_central ] ) and ( [ sad_cond ] ) using the assumption that the order parameter matrix is either replica symmetric or describes one - step replica symmetry breaking .we will always consider the saturation limit since the expressions then simplify and the correlations become most characteristic in this limit .the rs case is specified by the saturation limit is characterized by the existence of a unique solution , e.g. , .we then get for the conditional probability to find a specific value of the postsynaptical potential under the constraint of a given output .the terms abbreviated by ensure that only lir for the respective value of contribute to the sum in eq .( [ ph_rs ] ) . as usualwe have used the error function with .let us now turn to main features of the solution within the ansatz of one - step rsb .then the following form for the order parameter matrix is assumed : accordingly there are two overlap scales characterizing the similarity between coupling vectors belonging to the same and different regions of the solution space , respectively . using this ansatz we find after standard manipulations for the probability distribution of the local field for a specific output where now these expressions simplify in the saturation limit in which one finds and .the remaining order parameters are given by the saddle point equations corresponding to the following expression for the storage capacity : + q_0 w/[1+w(1-q_0 ) ] } { -2\lim\limits_{q_1\rightarrow 1 } \left\langle\!\!\left\langle{\displaystyle \int}\prod\limits_{k}dy_k\ln\left\ { { \displaystyle \int}\prod\limits_{k}dz_k\left(\phi_{\rm lir}(\sigma)\right)^m\right\ } \right\rangle\!\!\right\rangle_{\sigma } } \right]\quad .\label{saddle2}\end{aligned}\ ] ] as in the rs case the analytical and numerical analysis of these expressions for concrete situations needs some care ( see next section ) . 
to finally obtain must average eqs .( [ ph_rs ] ) and ( [ ph_rsb ] ) over the two possible outputs , from this probability distribution we find the distributions of the lir according to the correlation coefficients , , are then given by the kronecker in eq .( [ corr1 ] ) restricts the sum to all lir of the output .equation ( [ corr1 ] ) is valid as long as the pattern load of the mln does not exceed its saturation threshold .in this section we apply the general formalism developed above to the analysis of simple versions of three popular examples of mln , namely , the parity , committee and and machines , each with hidden units .we start with the rs results . in committee andparity machines there is for every lir of output an ir with all signs reversed that realizes output . therefore and the final average over in eq .( [ ph_rs_av ] ) is trivial . analyzing eqs .( [ lir1_rs ] ) and ( [ lir2_rs ] ) in the limit one realizes that they depend on both the sign and values of all integration variables .expression ( [ lir1_rs ] ) as well as eq .( [ lir2_rs ] ) are either equal to one or exponentially small in some or all integration variables .the quotient of both figuring in eq .( [ ph_rs ] ) can hence become one , zero , or singular with respect to .whenever it is one the integral in eq .( [ ph_rs ] ) gives rise to for .whenever the quotient is singular a contribution results . keeping track of the different contributions arising in this way we find for the committee machine and for the parity machine note that for the parity machine is an even function due to the additional symmetry of the boolean function for this case . in the and machine the output can be realized by one lir only whereas the output results from all the remaining ir .hence and differ significantly .in fact we find for the and machine and /2 $ ] .note that we have introduced two different singular contributions and in eqs .( [ ph_com_rs ] ) , ( [ ph_par_rs ] ) and eqs .( [ ph_andplus_rs ] ) , ( [ ph_andmin_rs ] ) .the reason for this is that the weight of adds to the probability of positive local fields whereas the weight of adds to that of negative local fields .this distinction will be important later when calculating the correlation coefficients from ( cf .( [ h1 ] ) ) . the results ( [ ph_com_rs ] ) , ( [ ph_par_rs ] ) and ( [ ph_andplus_rs ] ) , ( [ ph_andmin_rs ] ) are shown as the dashed lines in figs .[ fig1]-[fig3 ] respectively .these rs results are in fact very intuitive and can be even quantitatively understood by assuming that the outcome of a gardner calculation corresponds to the result of a learning process in which the initially wrong ir are eliminated with least adjustment . due to the permutation symmetry between the hidden units we may consider only the local field of the first unit of the hidden layer . before learning the couplings are uncorrelated with the patterns and the local field is consequently gaussian distributed with zero mean and unit variance .now consider , e.g. , the parity machine . due tothe discussed symmetries it is sufficient to analyze the case and .if and are equal in sign , which will occur with probability , there is no need to modify the couplings at all .this gives rise to the first term in eq .( [ ph_par_rs ] ) which is just the original gaussian and describes the chance that a randomly found ir with is legal .if and differ in sign the ir is illegal and the couplings have to be modified until one of the hidden units changes sign . 
in an optimal learning scenario the local field with the smallest magnitudewould be selected and the corresponding coupling vector would be modified such that the field just barely changes sign .hence remains still unmodified if either or is smaller in absolute value which gives rise to the second term in eq .( [ ph_par_rs ] ) .finally , if really is selected for the sign change , which will happen with probability for symmetry reasons , it will after learning be either slightly smaller or slightly larger than zero , which is the origin of the last two terms in eq .( [ ph_par_rs ] ) . with a similar reasoning it is possible to rederive the rs result for the committee machine .again it is sufficient to consider the case . if initially it will not be modified , which gives rise to the last term in eq .( [ ph_com_rs ] ) .if , on the other hand , , prior to learning it will not be modified only if both and are either positive from the start or easier to make positive than .hence a negative survives the learning process if the other two fields are both larger .this is described by the first term in eq .( [ ph_com_rs ] ) .finally , with probability we find that and either or is even smaller than and therefore harder to correct . in this casethe learning would shift to positive values as described by the second term in eq .( [ ph_com_rs ] ) .the resulting distribution of local fields will hence have a dip for negative values of small absolute value clearly visible in fig .[ fig1 ] .the case of the and machine is the simplest .the output requires all local fields to be positive .hence positive fields are not modified , negative ones are shifted to resulting immediately in eq .( [ ph_andplus_rs ] ) which is , of course , identical to the result for the single - layer perceptron . in the case of a negative output the ir is illegal and must be eliminated which is again done by changing the sign of the smallest field .this gives rise to eq .( [ ph_andmin_rs ] ) .it is finally interesting to compare the distribution of local fields found above with that for a single perceptron above saturation .the individual perceptrons in a mln certainly operate above their storage limit even when the storage capacity of the mln is not yet reached .the most remarkable feature of the distribution of local fields for a perceptron above saturation minimizing the number of misclassified inputs is a _gap _ separating positive from negative values .being intimately related to the failure of any finite level of rsb for this problem this gap is believed to exist even in the solution with continuous rsb . on the other hand , none of the distributions for mln showed a gap .as should be clear from the above qualitative discussion the reason for this is quite simple .the single perceptron above saturation has to reject some inputs as not correctly classifiable . in order to keep the number of these errors smallestit chooses those with negative fields of large absolute value .inputs with initially only slightly negative local fields will be learned whereby their local fields shift to values just above zero . in this waythe gap occurs .in mln , on the other hand , there is no reason to shift all negative local fields of small absolute value because the correct output may be realized by the other hidden units. therefore one will not find an interval of values for which is strictly zero . 
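The "least adjustment" picture sketched above is easy to put into code. The following Monte Carlo is only a toy implementation of that heuristic for the K=3 committee machine, not the Gardner/replica calculation itself, but it reproduces the RS numbers discussed later in this section: the all-correct internal representation occurs with probability 1/8, each of the three two-correct representations with probability 7/24 ≈ 0.2917, and the averages below come out with magnitudes 5/12, 1/6 and 3/4 (the precise sign convention in the paper's definition of the correlation coefficients did not survive extraction).

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
n_samples = 100_000
counts, c1, c2, c3 = Counter(), 0.0, 0.0, 0.0

for _ in range(n_samples):
    sigma = int(rng.choice([-1, 1]))          # desired output
    h = rng.standard_normal(3)                # local fields before learning
    tau = np.sign(h).astype(int)
    # least adjustment: while the majority vote is wrong, flip the
    # wrong-signed field of smallest magnitude to (just past) zero
    while np.sign(tau.sum()) != sigma:
        wrong = np.flatnonzero(tau != sigma)
        k = wrong[np.argmin(np.abs(h[wrong]))]
        h[k], tau[k] = 0.0, sigma
    counts[tuple((sigma * tau).tolist())] += 1    # internal representation relative to the output
    c1 += sigma * tau[0]
    c2 += tau[0] * tau[1]
    c3 += sigma * tau[0] * tau[1] * tau[2]

print({p: round(c / n_samples, 4) for p, c in counts.items()})  # (1,1,1): ~0.125, others: ~0.2917
print(round(c1 / n_samples, 3))   # <sigma tau_1>               ~  5/12 = 0.417
print(round(c2 / n_samples, 3))   # <tau_1 tau_2>               ~ -1/6  = -0.167
print(round(c3 / n_samples, 3))   # <sigma tau_1 tau_2 tau_3>   ~ -3/4  = -0.75
```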
on the other hand , the tendency that predominantly fields of small absolute value will be modified in the learning processis clearly shown by the dips of the distribution functions around ( cf . figs .[ fig1]-[fig3 ] ) .let us now discuss how the above results get modified by rsb .the analytical and subsequent numerical analysis of eqs .( [ ph_rsb]-[saddle2 ] ) for the machines under consideration needs some care in order not to miss the various singular contributions .we have first to determine the values of the order parameters at the saddle point using eq .( [ saddle2 ] ) . in the saturation limit , dominated by one specific lir which is selected among all other lir by the sign and absolute value of the compound variables . either tends to 1 or becomes exponentially small in one or more compound variables .transforming the integration from space to space allows us to reduce the -fold integral to a one - dimensional integral .this is performed numerically by rhomberg integration whereas the outer integrals are done using gauss - legendre quadrature .the saddle point equation ( [ saddle2 ] ) is solved with a standard minimization routine ( powells method in two dimensions ) .the values we get for the order parameters and for the storage capacity are consistent with those obtained earlier . for the parity machine we find , , and in agreement with . in the case of the committee machinewe get , , and , a result somewhat larger than reported previously .the and machine finally does not show rsb at all and we find accordingly , together with . in a second step, we use this values of the order parameters to calculate the respective distribution of local fields ( [ ph_rsb ] ) .the distribution functions obtained in this way are included as full lines in figs .[ fig1]-[fig3 ] .table [ tab1 ] quantifies the main changes .the main modification of the distribution functions of local fields that occurs in one - step rsb is a redistribution of probability from the peaks at to the continuous part of the distribution around zero resulting in a reduction of the weight of the singular parts of roughly 50% .this gives rise to a less pronounced dip of the distribution functions around and is qualitatively similar to the rsb modifications for a single perceptron above saturation . from the results for the parity machineit is conceivable that the central peak may get reduced further if higher orders of rsb are included and that it might eventually disappear completely in the full parisi solution using continuous rsb .for all machines the probability of fields with large absolute values is hardly affected by rsb . for the and machine we did not find rsb at all .the numerical solution of the saddle point equations only gave the rsb result , .we therefore suspect that replica symmetry is correct for the and machine .this is also in accordance with the rule of thumb that rsb is necessary if the solution space is disconnected . in the and machine the output be realized only by one lir which clearly corresponds to a connected ( even convex ) solution space .the output is realized by all remaining ir , which as the complement of the previous solution space must be connected too .we have finally to clarify how much the modifications found for the distributions of local fields will change the probabilities of the internal representations and the correlation coefficients depending only on the _ sign _ of the local fields .this question is , in fact , nontrivial only in the case of the committee machine . 
forthe and machine no rsb occurs at all and for the parity machine the correlation coefficients are completely determined by the symmetry of the boolean function between hidden units and output . for the committee machinewe find that the probability of the lir is shifted from its rs value to , which is an increase by roughly 13% whereas the probability of the three remaining lir ( consisting of two pluses and one minus each ) is reduced by 1.9% from 0.2917 to 0.2861 .qualitatively this means that more inputs are stored with the lir than the fraction that had this lir already by chance before learning .the learning process hence does not shift illegal ir just up to the decision boundary of the boolean but in some cases the correlations between inputs and couplings neglected in rs allow even the safer lir .( [ corr1 ] ) we can now also calculate the correlation coefficients and find that increases by 2.7% from its rs value 5/12 , decreases in absolute value by 13.3% from its rs value 1/6 and decreases in absolute value by 4.5% from its rs value 3/4 .this confirms the prediction of that although crucial for the storage capacity rsb will have only a minor influence on the correlation coefficients in mln .generalizing the calculation of the distribution function of local fields for the single - layer perceptron we introduced a general formalism to determine the joint probability distribution of local fields at the hidden units of a two - layer neural network of tree architecture with fixed boolean function between hidden layer and output both in replica symmetry and in one - step replica symmetry breaking .explicit results were obtained for the parity , committee , and and machine with hidden units in the saturation limit .although the individual perceptrons are by far overloaded there is no gap in the distribution of local fields as known from a single perceptron above saturation .there is no rsb for the and machine which we attribute to the connected solution space for this architecture . for the parity and committee machinewe find as a result of rsb a slight redistribution of probability from the singular parts at to the continuous part around the origin .the correlation coefficients characterizing the correlations between the legal internal representations are not modified by rsb for the parity machine since in this case they are fixed already by symmetries . for the committee machinethe changes of the correlation coefficients are rather small and the rs results derived in may serve as useful approximations .in this appendix we give some more details on the calculation of the distribution function of the local fields at the hidden units following gardner s approach . introducing the replica trick into eq .( [ p_h ] ) yields with replica index . in the integration measures ( [ measure_j ] ) , ( [ measure_fields ] ) we replace the functions by their integral form we now perform the average over the gaussian distributed patterns and introduce the overlaps of different replicas of the same perceptron as well as its conjugated variable . from the assumed permutation symmetry of the boolean function with respect to all hidden units we infer , , and for all .this gives rise to the form where and denote the symmetric matrices , and , . 
moreover , -\sum_{k , a < b}x_k^{a}x_k^{b}q^{ab}\right ) \nonumber\\ & \times&\prod\limits_{a}\theta(\sigma f(\{{\text{sgn}}(\lambda_k^{a})\}))\delta(h-\lambda_1^{1}),\end{aligned}\ ] ] -\sum_{k , a < b}x_k^{a}x_k^{b}q^{ab}\right ) \nonumber\\ & \times&\prod\limits_{a}\theta(\sigma f(\{{\text{sgn}}(\lambda_k^{a})\ } ) ) , \label{g1}\\ g_2(a)&=&\int\!\prod\limits_{k , a}\frac{dj_k^a}{\sqrt{2\pi e } } \exp\left(-\frac{1}{2}\sum\limits_{k;a , b } j_k^aa^{ab}j_k^b\right).\end{aligned}\ ] ] in the limit the integral in eq . ( [ ph_replic2 ] )is dominated by the saddle point values of the order parameters , , and . solving the saddle point equation with respect to and yields .( [ ph_replic2 ] ) takes the form can be calculated by assuming either rs or one - step rsb for the matrix resulting in eqs .( [ ph_rs ] ) and ( [ ph_rsb ] ) , respectively .the remaining saddle point condition for the matrix has in one - step rsb ( [ rsb ] ) the form } + \frac{1}{2m}\ln\left(1+\frac{m(q_1-q_0)}{1-q_1}\right ) + \frac{1}{2}\ln(1-q_1)\right .\nonumber\\ & + & \left.\frac{\alpha}{m } \left\langle\!\!\left\langle\int\prod\limits_{k}dy_k\ln\left\ { \int\prod\limits_{k}dz_k\left(\phi_{\rm lir}(\sigma)\right)^m\right\ } \right\rangle\!\!\right\rangle_{\sigma } \right ] .\label{saddle1}\end{aligned}\ ] ] it determines a set of order parameters for every pattern load below the storage capacity . the abbreviation is defined by eq .( [ sum_lir_1 ] ) .the angular brackets indicate the average over the two possible outputs . [99 ] m. mezard and s. patarnello , lptens report , 1989 ( unpublished ) .m. griniasty and t. grossman , phys .a * 45 * , 8924 ( 1992 ) .a. priel , m. blatt , t. grossman , e. domany and i. kanter , phys .e * 50 * , 577 ( 1994 ) .b. schottky , j. phys .a * 28 * , 4515 ( 1995 ) .r. monasson and r. zecchina , phys .lett . * 75 * , 2432 ( 1995 ) .a. engel , j. phys .a * 29 * , l323 ( 1996 ) .d. malzahn , a. engel and i. kanter , phys .e * 55 * , 7369 ( 1997 ) .e. barkai , d. hansel and i. kanter , phys .lett . * 65 * , 2312 ( 1990 ) .e. barkai , d. hansel and h. sompolinsky , phys .a * 45 * , 4146 ( 1992 ) .a. engel , h. m. khler , f. tschepke , h. vollmayr and a. zippelius , phys .a * 45 * , 7590 ( 1992 ) .p. majer , a. engel and a. zippelius , j. phys .a * 26 * , 7405 ( 1993 ) , whyte and sherrington , _ ibid . _* 29 * , 3063 ( 1996 ) .g. gyrgyi and p. reimann , phys .lett . * 79 * , 2746 ( 1997 ) .e. gardner , j. phys .a * 21 * , 257 ( 1988 ) , e. gardner , b. derrida , _ ibid . _* 21 * , 271 ( 1988 ) .m. mzard , g. parisi and m. virasoro , _ spin glass theory and beyond _( world scientific , singapore , 1987 ) .e. gardner , j. phys .a * 22 * , 1969 ( 1989 ) .m. opper , phys .a * 38 * , 3824 ( 1988 ) .d. j. amit , m. r. evans , h. horner and k. y. wong , j. phys .a * 23 * , 3361 ( 1990 ) . w. h. press , s. a. teukolsky , w. t. vetterling and b. p. flannery , _ numerical recipes _ ( cambridge university press , cambridge , england , 1992 ) . .saturated machines : integrated features of the probability distribution of the local field .corrections by one - step rsb are given in percent of the respective rs value .dashes indicate that a respective singular contribution does not occur ( committee ) or that we found no rsb ( and ) .[ cols="<",options="header " , ]
we consider feed - forward neural networks with one hidden layer , tree architecture and a fixed hidden - to - output boolean function . focusing on the saturation limit of the storage problem the influence of replica symmetry breaking on the distribution of local fields at the hidden units is investigated . these field distributions determine the probability for finding a specific activation pattern of the hidden units as well as the corresponding correlation coefficients and therefore quantify the division of labor among the hidden units . we find that although modifying the storage capacity and the distribution of local fields markedly replica symmetry breaking has only a minor effect on the correlation coefficients . detailed numerical results are provided for the parity , committee and and machines with k=3 hidden units and nonoverlapping receptive fields .
let be the graph with set of vertices and set of ( unoriented ) bonds , where denote the vectors in the canonical basis of .let be a sequence in the interval ] and } = b^{(\vec{x},\vec{y } ) } \cap [ a , b] ] that is right - continuous , constant between jumps and satisfies : , & \gamma(r ) \notin d^{\gamma(r)},\\ & r \in b^{(\gamma(r-),\gamma(r ) ) } \text { if } \gamma(r ) \neq \gamma(r-),\\ & |\gamma(r ) - \gamma(r-)| { \leqslant}k .\end{array } \end{aligned}\ ] ] we then define is then a markov process on the space for which the configuration that is identically equal to 0 ( denoted here by ) is absorbing . in case only for , is the contact process of harris ( ) . for all , if , then it is enough to prove the case .fix and .let , for .fix such that . for and ,let be the event } = \varnothing\}\cap \bigcup_{a \in \mathbb{z } } \left\{\begin{array}{l } d^{\vec{x } + a \vec{e}_1}_{[t_n , t_{n+1 } ] } = d^{\vec{x } + a \vec{e}_1 + b \vec{e}_2}_{[t_n , t_{n+1 } ] } = \varnothing,\\[.4 cm ] b^{(\vec{x } , \vec{x } + a \vec{e}_1)}_{[t_n , t_n + \delta/2]}\neq \varnothing,\ ; b^{(\vec{x}+a\vec{e}_1 , \vec{x } + a \vec{e}_1 + b\vec{e}_2)}_{[t_n+\delta/2 , t_{n+1 } ] } \neq \varnothing \end{array } \right\}.\ ] ] then , by first taking small and then taking large , the probability of these events can be made arbitrarily close to 1 .moreover , the proof is then completed with a comparison with oriented percolation almost identical to the one that established proposition [ prop : anis ] . in this sectionwe consider a graph .once more , the vertex set is .the set of bonds consists of two disjoint subsets ; one of them , denoted , only contains oriented bonds , and the other , , only unoriented bonds .these subsets are given by that is , we are considering the hypercubic lattice where there are nearest neighbour , oriented bonds along the vertical direction and long range , unoriented bonds parallel to all other coordinate axes .we consider an anisotropic oriented bernoulli percolation on this graph . given and a sequence in the interval $ ] , each bond is open with probability or , if or , respectively. given two vertices and with , we say that and are connected if there exists a path such that or ( and ) for all , and the bonds are open for all .that is , all allowed paths use vertical bonds only in the upward direction .we use the notation to denote the set of configurations in which there is an infinite open path starting at .we use also the notations and to denote the non - truncated and the truncated ( in the range ) probability measures , respectively . [ hex ] for any , any and any sequence such that , we have .a weaker result was proven in ( see theorem 6 therein ) in the context of non - oriented and isotropic percolation .the proof of theorem [ hex ] is inspirated by the proof thereof ( ) .it is sufficient to prove the theorem for .let be the function satisfying define the events clearly , for all .also note that , if , then the line that contains and does not share any bonds of with the line that contains and .hence , the events defined above are independent .moreover , we have ( a proof of this can be found in the first few lines of the proof of theorem 6 in ) .now , fix and .let be an integer satisfying . then , using , choose such that . 
then , for each , let be the indicator function of the event . the elements of the sequence of random variables are then independent and , by the choice of , each of them is equal to 1 with probability . now note that an infinite sequence such that , and for each , necessarily corresponds to an infinite open path in . moreover , the probability of existence of such a sequence can be taken arbitrarily close to 1 since is arbitrary . this work was done during b.n.b.l.'s sabbatical stay at impa ; he would like to thank rijksuniversiteit groningen and impa for their hospitality . the research of b.n.b.l . was supported in part by cnpq and fapemig ( programa pesquisador mineiro ) .
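As a rough numerical companion to the truncation results above, the following is a minimal Monte Carlo sketch. It does not reproduce the exact graphs treated in the paper (whose bond structure is partly lost in this extraction) but a simplified 1+1-dimensional oriented long-range model: from each site x at generation t, bonds to x±k at generation t+1 are open independently with probability p(k), and all bonds of range k > K are removed. The probability function p, the window size and the run parameters are illustrative assumptions; the sketch only shows how survival of the truncated model can depend on the truncation threshold K.

```python
import random

def survives(p, K, T, width=200):
    """One sample: does the oriented open cluster of the origin reach
    generation T after all bonds of range greater than K are removed?"""
    frontier = {0}
    for _ in range(T):
        nxt = set()
        for x in frontier:
            for k in range(1, K + 1):
                if random.random() < p(k):
                    nxt.add(x + k)
                if random.random() < p(k):
                    nxt.add(x - k)
        # restrict to a finite window so one sample stays cheap (small bias)
        frontier = {x for x in nxt if abs(x) <= width}
        if not frontier:
            return False
    return True

def survival_frequency(p, K, T=40, trials=40):
    return sum(survives(p, K, T) for _ in range(trials)) / trials

if __name__ == "__main__":
    lam = 0.4
    p = lambda k: min(1.0, lam / k)   # the sum over all k diverges before truncation
    for K in (1, 2, 5, 10):
        print(f"K = {K:2d}   estimated survival frequency = {survival_frequency(p, K):.2f}")
```

With these illustrative numbers, small truncation ranges are subcritical while larger ones appear to survive; the point is only to make the dependence on K visible, not to locate any threshold.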
we consider different problems within the general theme of long-range percolation on oriented graphs. our aim is to settle the so-called truncation question, described as follows. we are given probabilities that certain long-range oriented bonds are open; assuming that the sum of these probabilities is infinite, we ask whether the probability of percolation remains positive when we truncate the graph, disallowing bonds of range above a possibly large but finite threshold. we give some conditions under which the answer is affirmative. we also translate some of our results on oriented percolation to the context of a long-range contact process. keywords: contact processes; oriented percolation; long-range percolation. msc numbers: 60k35, 82b43
this paper deals with radial solutions of the system of equations where and is an ignition type nonlinearity , that is , there exists such that system arises as a model problem for some reaction diffusion systems in combustion theory. we will describe the connection with these models at the end of the section .we call the trivial solution of . note that for any solution of one has as can be deduced from the maximum principle .we are mainly interested in existence and multiplicity of nontrivial radial solutions of .the first observation in this direction is that for large has no nontrivial solutions .indeed , by the maximum principle and then if is large then for all ] is continuous , satisfies and for some .\end{aligned}\ ] ] then and there exist such that for there are at least 2 solutions of .one of them has bounded as and the other has in the range as , where is fixed .a very natural and interesting question is the stability of the solutions constructed in theorem [ thm existence1 ] .based on the works we conjecture that the solution with bounded is unstable and that in part of branch of solutions with large the solution is stable , at least with respect to radial perturbations . in some cases one may want to consider a discontinuous nonlinearity , such as the heavisde function } and exists } .\end{aligned}\ ] ] [ thm existence2 ] assume \to [ 0,+\infty) ] is interesting since it provides a situation where explicit calculations are possible .theorem [ thm existence2 ] shows that in part the conclusions obtained for } ] explicit calculations lead to an equation for and in order for a radial solution to exist . in figure [ fig1 ]we show the numerical solution for this relation when , with in the vertical axis and in the horizontal axis .it shows that for there are 2 solutions .solutions in the lower branch satisfy in that is , the reaction takes place in the ball of radius .the same happens for points in the upper branch which are to the right of the special point marked in the graph . to the left of that point the solution satisfies in an annulus of the form .thanks to the explicit form of the relation bewteen and we can compute the asymptotic behavior of the curve as , and we find that where is the unique solution of the system of equations [ fig1 ] because of the information on the heaviside nonlinearity one can conjecture that for general there should be a similar relation for and as .we present in section [ apriori estimates ] nonexistence results for general ignition nonlinearities satisfying and , that capture this relation , and roughly speaking say that no solution can exist if is either too large or too small , provided is taken large enough .using these nonexistence results and degree theory we can give a proof of theorem [ thm existence1 ] .this is done in section [ existence of solutions ] . in section [sect exsit discont ] we give the proof of theorem [ thm existence2 ] by approximating the discontinuous nonlinearity by continuous ones .section [ the heaviside ignition ] is devoted to the explicit computations for the heaviside function .finally section [ estimate on eps * ] contains a finer estimate of . as mentioned before , system arises in connection with some models in combustion theory , more precisely , in the flame ball problem for a weakly premixed gas sensitive to radiative heat losses . in such a mixture , it is known that apparently stationary spherical structures appear , which are called flame balls . 
in following reaction diffusion system has been used to model a combustion process where flame balls can appear : where is the temperature , the reactant concentration , , are the reactant concentration and tempereature at infinity , and , , and are respectively the specific heat capacity at constant pressure , the perfect gas constant , the chemical heat release and the molecular mass of the reactant .the term represents radiative losses .the reaction is characterized by the one - step arrhenius kinetics where is a constant .furthermore , the hydrodynamics effects are neglected , i.e the density , the thermal conductivity and the diffusion coefficient are constant .see also numerical simulations in . after the seminal work , the traveling front problem for systems like has been investigated by several authors , for example , . in absence of radiation ,i.e. when , there are many works dealing with , see for instance and the references therein .also we remark that the stationary version of without radiation , leads to system with which reduces to a scalar equation , since .there is a huge amount of literature concerning existence of radial ground states for semilinear equations , so we mention here only some classical references on the problem in entire space .when the problem is treated in a bounded domain see the book and for multiplicity results in the case of arrehnius non - linearity .the paper contains interesting numerical computations of the bifurcation diagram in the case of the full coupled system in a interior of a sphere. a common simplification of under the assumption of large activation energy , that is , is to assume that the source term for the reaction is concentrated on a very thin layer , typically a sphere .this approach is taken for example in and leads to the free boundary problem where is the radius of the front where the reaction takes place , is the dirac measure and is the front temperature . in authors analyze the stability of stationary solutions of . in a similar framework , existence and stability of flame balls and travelling flame ballshave also been studied in .we arrive at by introducing the following simplifications : * assume the radiative loss to be linear , i.e. , where , * approximate by where is an activation temperature and is a cut - off function satisfying in and in . as in one can model radiative heat losses using stefan s law for some constant .when is close to we can write . as a step towars understanding more general situations , we assume that this linear relation holds for all , that is , we assume a ) .other linear or piecewise linear approximations have been used before , for instance in .assumption b ) corresponds the a standard approximation in combustion theory to avoid the _ cold boundary difficulty _ , see . after introducing dimensionless variables , , corresponding to temperature and reactant concentration ,the stationary version of becomes where is the lewis number , , and is an ignition type function , that is , there is such that if .we stress that our results are valid for any value of . indeed ,since we are considering stationary solutions , the following change of variables will allow us to assume that .letting , transforms system into , where and .we observe that is still is an ignition type nonlinearity .the purpose in this section is to establish nonexistence results in some ranges of the parameters . 
given a nontrivial solution of let be such that setting , these new functions satisfy in the sequel we will study in the following set of functions let be such that we consider now functions such that [ nonexistence large epsilon beta2 ] there is no solution in to if since we have an explicit formula > from this we find similarly , since for and we have where .this yields integrating the equation for in ( 0,1 ) implies let be such that . then integrating the equation for in yields this formula together with and gives and it follows that [ nonexistence small epsilon beta2 ] there is , depending only on , , such that for all , and all satisfying , , there is no solution in to the system .we treat the case since the situation is similar .as before > from this we find similarly where .this yields integrating the equation for we see that > from this and it follows that integrating the equation for in ( 0,1 ) implies and combined with yields integrating the equation for in we obtain integrating this on with yields in particular , with this and give now going back to we obtain now consider the function which satisfies in . integrating the relation in so that it follows that observe that integrating on we find and since in particular , using , * step 1 . * for any there exists depending on , and only such that } u < \frac{1+\theta}{2 } \quad \forall \beta \ge \beta_1 .\end{aligned}\ ] ] to prove this , suppose that } u \ge \frac{1+\theta}{2 } ] be such that .let so that from then \quad \hbox{such that } \quad |s - r| \le \frac{1-\theta}{4 m}.\end{aligned}\ ] ] using and since if let ] . using and : we take small only depending on such that then and the claim follows . * step 2 .* for any there exists such that } u -\theta ) h_0\left(\frac{\max_{[r_0,1 ] } u-\theta}{2 } + \theta \right ) \le \frac{c}{\beta^2}.\ ] ] the constant depends only on and .the conclusion from this is that } ( u - \theta ) \le 0,\end{aligned}\ ] ] and this is uniform with respect to and . as before , setting we have for by .let ] and let us write } u ] ,\ ] ] where .let us consider with nonlinearity that is to apply the non - existence results of the previous section we need to exhibit a function satisfying and . for this purpose define \ : \f \ge t \hbox { in } [ r,1 ] \ } \quad \hbox{for all } t \ge 0,\ ] ] and .\ ] ] the following properties then follow from these definitions : 1 . is strictly increasing , continuous from the left and satisfies .2 . for ] , is nondecreasing and continuous in ] .the function satisfies and and the nonlinearity satisfies , , for all ] . then find and such that admits a solution .this problem has a unique solution which furthermore is ) ] then is a bounded set in ) \times \r ] . [ lemma unique eps=0 t=0 ] the operator has a unique fixed point in .in this situation and hence the system reduces to with the additional requirement that let .then the equation for becomes this equation is of logistic type and many properties are well known ( see ) : 1 .any solution to satisfies , and either or in .2 . has a nontrivial solution if and only if where is such that and denotes the first dirichlet eigenvalue of in the unit ball .3 . for is a unique non - trivial solution , which we write as .4 . 
is monotone increasing with respect to and uniformly on compact sets of and > from the above properties it follows that moreover it follows that there is a unique such that we call this value and the let .then is the fixed point of .[ sol is nondegenerate ] let denote the unique fixed point of found in lemma [ lemma unique eps=0 t=0 ] .then is nondegenerate .let us write the solution of of lemma [ lemma unique eps=0 t=0 ] .we have to verify that the linearization of around this solution is an invertible operator .note that the operator involves only the nonlinearity and that the fixed point from lemma [ lemma unique eps=0 t=0 ] satisfies in and .therefore for in a neighborhood of in the topology of ) ] there is no solution in to the system .this means that has no fixed point if and .let and define ) } < r_1 , \ ; 0 < \beta < \beta_1 \ } .\end{aligned}\ ] ] let . then is a bounded open set of and for and the operator has no fixed point in . indeed , suppose is a fixed point of .it is not possible that by lemma [ nonexistence small epsilon beta2 ] .the case is also impossible .this means that and ) } = r_1 ] . by lemmas [ lemma unique eps=0 t=0 ] and [ sol is nondegenerate ] we have and by homotopy invariance .\ ] ] this shows has at least one fixed point in for and ] . then using a homotopy along ] .since is uniformly bounded we may extract a further subsequence such that weakly- * in .the function then satisfies } f ] .since satisfies a linear equation for and vanishes at infinity it has an explicit formula setting for we see that satisfy the set is open ( relative to and for any we have as .it follows that in .outside we have a.e .indeed , let , .then } f_n(u_n ) \psi + \int_{d^c \cap [ 0,1 ] } f_n(u_n ) \psi.\ ] ] since in by dominated convergence we have } f_n(u_n ) \psi \to \int_{d \cap [ 0,1 ] } f(u ) \psi .\ ] ] on the other hand , if so that on . then } ( f_n(u_n ) -\eta ) \psi = \int_{d^c \cap[0,1 ] } ( f_n(u_n ) -\eta)^+ \psi - \int_{d^c \cap[0,1 ] } ( f_n(u_n ) -\eta)^- \psi\le o(1)\ ] ] where denotes a sequence converging to 0 as .it follows that } f_n(u_n ) \psi \le \int_{d^c \cap [ 0,1 ] } \eta \psi + o(1)\ ] ] and hence } f(u ) \psi + \int_{d^c \cap [ 0,1 ] } \eta \psi.\ ] ] this shows that a.e . in .our main objective is to show that the complement of is finite .if whenever then is discrete and since it is contained in ] we have and .let then and , .we assert that there is a small interval , such that in . to prove this we start ruling out the possibility that for some infinite sequence .we actually may assume that if is even then on and if is odd on .let us see that the following holds where and denotes a sequence bounded by with independent of as .suppose first that on and define by .\ ] ] then .\ ] ] this equation implies that and are uniformly bounded on ] .thus .\ ] ] multiplying this equation by and integrating in ] this means that a.e . in ] .actually in by the strong maximum principle and this finishes the proof in this case . if is not constant , then for all .let us verify that is discrete .suppose that for some we have .if then is isolated . if then integrating the equation for we have for near we have in the second line above we have used that , and . 
in the third line above we may say that if is sufficiently close to by continuity , and ( we may regard as continuous here since we are working with values above ) .this shows that is strictly convex in a neighborhood of and hence there are no other points in close to .this shows that is discrete , hence finite , and finishes the proof of the theorem .in this section we perform explicit calculations for the ignition nonlinearity } ] , we obtain the value .this estimate can be sharpened . for this , given us introduce for define also as the smallest zero of in the interval .we be the largest in ]where with a similar argument we can show that for the system has no nontrivial solution .indeed , given any choose a smooth function such that for and for . with the same argument as before , has no nontrivial solution for where is given by with replaced by .a computation then shows that as , . * acknowledgment . *j. coville warmly thanks professor sivashinsky for proposing the formulation of this problem and enlightening discussions .he is also indebted to professor dolbeault for useful conversations on this problem .j. coville was partially supported tel aviv university , universit paris dauphine and max planck institute for mathematics in the science .j. dvila was supported by fondecyt 1057025 , 1090167 .both authors acknowledge the support of ecos - conicyt project c05e04 .h. berestycki , b. larrouturou , _ quelques aspects mathmatiques de la propagation des flammes prmlanges ._ nonlinear partial differential equations and their applications . collge de france seminar , vol .x ( paris , 19871988 ) , 65129 , pitman res . notes math . ser ., 220 , longman sci . tech . ,harlow , 1991 .h. berestycki , b. larrouturou , j .-roquejoffre , _ mathematical investigation of the cold boundary difficulty in flame propagation theory ._ dynamical issues in combustion theory ( minneapolis , mn , 1989 ) , 3761 , i m a vol . math ., 35 , springer , new york , 1991 .j. brindley , n. a. jivraj ; j. h. merkin ; s. k. scott , _ stationary - state solutions for coupled reaction - diffusion and temperature - conduction equations .ii . spherical geometry with dirichlet boundary conditions .london ser . a 430 ( 1990 ) , no .1880 , 479488 .j. buckmaster , g. joulin , p. ronney , _ the structure and stability of nonadiabatic flame balls ._ combustion and flame * 79 * ( 1990 ) , no . 3 - 4 , 381392 .v. hutson , j. lpez - gmez , k. mischaikow , and g. vickers , _ limit behaviour for a competing species problem with diffusion _ , dynamical systems and applications , world sci ., vol . 4 , world sci .publ . , river edge , nj , 1995 , pp .343358 . c. lederman , j .-roquejoffre , n. wolanski , _ mathematical justification of a nonlinear integrodifferential equation for the propagation of spherical flames ._ ann . mat .pura appl .( 4 ) 183 ( 2004 ) , no .2 , 173239 .
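As a purely illustrative numerical companion to the radial analysis above: the coupled temperature–concentration system itself is not recoverable from this extraction, so the sketch below solves a scalar stand-in, a radial problem of the form u'' + (2/r)u' = εu − f(u) in three dimensions with a Heaviside-type ignition f(u) = 1 for u > θ, by shooting in the value u(0). The threshold θ, the loss coefficient ε, the outer radius and the scan range are arbitrary demonstration values and are not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

THETA, EPS = 0.5, 0.05        # ignition threshold and radiative-loss coefficient (illustrative)

def f(u):                      # heaviside-type ignition nonlinearity
    return 1.0 if u > THETA else 0.0

def rhs(r, y):
    u, v = y                   # v = u'
    # radial laplacian in three dimensions:  u'' + (2/r) u' = eps*u - f(u)
    return [v, -(2.0 / r) * v + EPS * u - f(u)]

def value_at_outer_radius(u0, R=40.0):
    sol = solve_ivp(rhs, (1e-6, R), [u0, 0.0], max_step=0.02)
    return sol.y[0, -1]

# scan the shooting parameter u(0); a sign change of u(R) between consecutive
# rows brackets a decaying radial profile, which can then be refined by bisection
for u0 in np.linspace(0.6, 3.0, 13):
    print(f"u(0) = {u0:5.2f}   u(R) = {value_at_outer_radius(u0): .4e}")
```

Repeating such a scan over ε is one crude way to reproduce, for the scalar stand-in, the kind of solution-count picture discussed above; nothing here is claimed about the full coupled system.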
in this paper, we construct radially symmetric solutions of a nonlinear noncooperative elliptic system derived from a model for flame balls with radiation losses. this model is based on a one-step kinetic reaction, and our system is obtained by approximating the standard arrhenius law by an ignition nonlinearity and by simplifying the term that models radiation. we prove the existence of two solutions using degree theory.
minimizing energy consumption is one of the primary technical challenges in sensor networking .many sensor applications such as habitat monitoring and industrial instrumentation envisage scenarios in which a large number of sensor nodes , powered by tiny batteries , will be actively deployed for months and even years . in many instances, it may not be possible to replace these sensor nodes once they run out of energy because the sensor nodes could be inaccessible ( for example , embedded in concrete structures to sense stress levels ) .replacing dead batteries in a sensor network consisting of a large number of nodes may also not be economically feasible .there is a now a broad consensus that aggressive system level strategies impacting many layers of the protocol stack need to be devised to meet the lifetime requirement of extant and future sensor networks . in this paper, we consider a single - hop sensor cluster .nodes in the cluster periodically sample a field and transmit the data directly to a central location or base - station .we are interested in minimizing the energy spent by these nodes in transmitting , with the objective of maximizing cluster lifetime .sensor nodes also spend energy in receiving data , sensing / actuating and computation . the energy spent insensing / actuating represents a fixed cost that can be ignored .the energy cost of receiving data can be easily incorporated in our optimization model .we assume computation costs are negligible compared to radio communication costs .this is debatable assumption in dense networks ; we intend to incorporate computation costs in future work .we believe that our model is useful because many popular proposals recommend organizing a sensor network into clusters . hereeach cluster elects a cluster - head ( which we call base - station ) .nodes communicate only through their respective cluster - heads .approaches that maximize cluster lifetime can be thought of as being complementary to network - wide approaches such as energy - efficient routing .moreover , our model is applicable to scenarios where a roving base - station moves from one cluster to another , gathering data .we define cluster lifetime as the time until the first node in the cluster runs out of energy .while this is a somewhat pessimistic definition , we argue that a cluster will consist of relatively few nodes .the failure of even one such node can have disastrous consequences on the cluster s performance ( for example , coverage ) .this definition also has the benefit of being simple and popular .other definitions proposed for network lifetime such as mean expiration time and time until a certain fraction of nodes fail are not appealing from a cluster viewpoint . somewhat to our surprise, we find that analyzing the performance of this simple model is far from trivial .the complexity arises from several competing system - level opportunities to be harnessed to reduce the energy consumed in radio transmission .first , sensor data in a cluster is spatially and temporally correlated . 
in , slepian and wolfshow that it is possible to compress a set of correlated sources down to their joint entropy , without explicit communication between the sources .this surprising existential result shows that it is enough for the sources to know the probability distribution of data generated .recent advances in distributed source coding allow us to take advantage of data correlation to reduce the number of bits that need to be transmitted , with concomitant savings in energy .second , it is also well known that channel coding can be used to reduce transmission energy by increasing transmission time . finally ,sensor nodes are cooperative , unlike nodes in an ad hoc network that are often modeled as competitive .this collaborative nature allow us to take full advantage of the first two opportunities for the purpose of maximizing cluster lifetime . motivated by our definition of cluster lifetime, we pose the problem of maximizing lifetime as a max - min optimization problem subject to the constraint of successful data collection and limited energy supply at each node .this turns out to be an extremely difficult optimization to solve .consequently , we employ a notion of instantaneous decoding to shrink the problem space .we show that the computational complexity of our model is determined by the relationship between energy consumption and transmission rate as well as model assumptions about path loss and initial energy reserves .we provide some algorithms , heuristics , and insights for several scenarios . in some situations , our problem admits a greedy solution while in others , the problem is shown to be -hard .the chief contribution of the paper is to illustrate both the challenges and gains provided by source - channel coding and scheduling .there is much related work in this area .energy conscious networking strategies have been proposed by many researchers mainly at the mac and routing layer .our study was motivated by previous research in , which explicitly incorporate aggregation costs in gathering sensor data . in ,the authors consider the problem of correlated data gathering by a network with a sink node and a tree communication structure .their goal is to minimize the total transmission ( energy ) cost of transporting information .the first part of considers a model similar to ours , namely , that of several correlated nodes transmitting directly to a base station .however , both and are interested in minimizing total energy expenditure , as opposed to maximizing network lifetime . in the latter case ,the optimal solution is shown in both papers to be a greedy solution based on ordering sensors according to their distance ( which reflects data aggregation cost ) from the base station .however , we show that this solution is not optimal for maximizing network lifetime. an early version of our ideas appeared in .this paper generalizes the work presented in and provides proofs of some key conjectures there . in section [ model ], we present our system model and describe our notion of instantaneous decoding . 
in section [ gc ], we consider a general channel model which allows us to consider the joint impact of cooperative nature of the sensor nodes and source and channel coding on system lifetime .we prove that the both , the static and dynamic scheduling problems for the general channel model , are -hard and the optimal dynamic scheduling strategy does better than optimal static scheduling strategy , in general .we also provide the geometric interpretation of the optimal solutions and the solution search procedures . as a special case of this problem , in section [ srra ], we consider a scenario which allows us to neglect the impact of transmission time allocation .this is similar to the scenario considered in and .here we provide some key insights into the nature of the optimal solutions for both , static and dynamic scheduling , derived in .we consider a network of battery operated sensor nodes strewn uniformly in a coverage area .time is divided into slots or rounds . in each slot , sensors take samples of the coverage area and transmit the information directly to the base station .we model the sensor readings at node by a discrete random process representing the sampled reading value at node in the time slot .we assume that sensor readings in any time slot are spatially correlated .we ignore temporal correlation by assuming that sensor readings in different time slots are independent .however , temporal correlation can easily be incorporated in our work for data sources satisfying the asymptotic equipartition property ( aep ) .the entropy of is denoted by .initially , sensor node , has access to units of energy .the wireless channel between sensor and the base station is described by a path loss factor , which captures various channel effects such as distance induced attenuation , shadowing , and multipath fading . for simplicity , we assume s to be constant .this is reasonable for static networks and also in the scenarios where the path loss parameter varies slowly and can be accurately tracked .the network operates in a time - division multiple access ( tdma ) mode . in each slot , every sensor is allotted a certain amount of time during which it communicates its data to the base station . the general problem is to find the optimal rate ( the number of bits to transmit ) and transmission times for each node , which maximize network lifetime .both the rate and time allocation are constrained .the rate allocation should fall within the slepian - wolf achievable rate region and the sum of transmission times should be less than the period of a time - slot ( which is taken to be unity ) . 
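To make the rate constraint just mentioned concrete, the following small sketch computes the Slepian–Wolf achievable region for two correlated sources from a joint probability mass function; the example joint distribution (two binary sensors that agree with probability 0.9) is an illustration, not data from the paper. The instantaneous-decoding schedules discussed next operate at corner points of this region, e.g. (H(X1), H(X2|X1)).

```python
import math

def entropy(p):
    """Shannon entropy in bits of a discrete distribution given as a list."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def slepian_wolf_region(joint):
    """For two sources with joint pmf joint[i][j], return the three constraints
    of the Slepian-Wolf region: R1 >= H(X1|X2), R2 >= H(X2|X1), R1+R2 >= H(X1,X2)."""
    p1 = [sum(row) for row in joint]            # marginal of X1
    p2 = [sum(col) for col in zip(*joint)]      # marginal of X2
    h12 = entropy([x for row in joint for x in row])
    return h12 - entropy(p2), h12 - entropy(p1), h12

joint = [[0.45, 0.05],
         [0.05, 0.45]]             # two binary sensors equal with probability 0.9
print(slepian_wolf_region(joint))  # ~ (0.469, 0.469, 1.469) bits
```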
finding the optimal rate allocation is a computationally challenging problem as the slepian - wolf achievable rate region for nodesis defined by constraints .we simplify the problem by insisting that decoding at the base - station be instantaneous in the sense that once a particular node has been polled , the data generated at that node is recovered at the base - station before the next node is polled .this reduces the rate allocation problem to finding the optimal scheduling order , albeit at some loss of optimality .this loss of optimality occurs because our problem formulation assumes the separation between source and channel coding and it is well - known , , that the source - channel separation does not hold for the multi - access source - channel coding problem and slepian - wolf coding followed by channel coding is not optimal for the joint source - channel coding problem .also , in general , turning a multiple - access channel into an array of orthogonal channels by using a suitable mac protocol ( tdma in our case ) is well - known to be a suboptimal strategy , in the sense that the set of rates that are achievable with orthogonal access is strictly contained in the ahlswede - liao capacity region .however , despite this fundamental sub - optimality , we argue like that there are strong economic gains in the deployment of networks based on such technologies , due to the low complexity and cost of existing solutions , as well as available experience in the fabrication and operation of such systems .let be the set of permutations of the set , .the polling schedule followed by the network in any time slot corresponds to a permutation , .let denote the node to be scheduled .instantaneous decoding implies that the amount of data to be transmitted by node is the conditional entropy of the data source at node , given the data generated by all previously polled nodes .we denote the amount of information generated by node by .our aim is to find the scheduling strategy ( scheduling order and transmission time allocation ) that maximizes network lifetime .in this section , we consider the general channel coding scenario where the transmission energy is the convex decreasing function of the transmission time .for example , by inverting shannon s channel capacity formula for the awgn channel , it is straight - forward to show that transmission energy is a strictly decreasing convex function of transmission time .other channel coding situations lead to a similar result .in such a scenario , we not only have to find the optimal scheduling order , but also the optimum transmission times for each node .we consider two kinds of schedules , namely , static and dynamic . in static scheduling, the nodes follow the same fixed scheduling order in all time slots until the network dies . 
under dynamic scheduling , we allow nodes to collaborate further by allowing them to employ different schedules in different time slots .more specifically , it is _ offline _ dynamic scheduling , where before the actual operation of network starts , the base - station has already computed the optimum set of schedules and the number of slots for which each schedule is used , rather than _ online _ dynamic scheduling , where only at the beginning of every polling slot , the base - station computes _ on fly _ the optimum schedule for that time - slot , based on its knowledge of the latest state of the network .let be the energy required to transmit bits of information in units of time with path loss factor .so , we can interpret as the energy required to transmit bits of information in units of time with unit path loss .based on our discussion , we model the energy function as follows . 1 . is a strictly _ decreasing _ continuous positive convex function in .2 . stated otherwise , we assume to be the one that is obtained by inverting shannon s awgn capacity formula , that is : in static scheduling , each permutation , corresponds to a tdma schedule .let denote the number of information bits transmitted per slot by node under the schedule .let be the corresponding transmission time alloted to node and be the lifetime achievable by the system under the schedule .note that lifetime is integer - valued but we will treat it as a real number .the optimal static schedule is the solution to the following optimization problem . however , using the `` channel aware '' algorithm proposed in for every schedule , we have the maximum lifetime and the corresponding transmission times allocation vector .further , for this transmission time allocation vector , all the sensor nodes achieve the same lifetime .so , the problem in reduces to the following optimization problem : before we analyze the solution of the optimal scheduling problem for the general case , it should be noted that given the cooperative nature of the sensor nodes , the nodes can collaborate with each other to a much greater extent by varying their transmission times .for example , nearby nodes can finish their transmissions sooner , allowing far away nodes more time to transmit in order to improve system lifetime .the structure of the problem in or is such that its computational complexity depends on the sensor node data correlation structure as well as the energy function .the following three examples amply illustrate this ._ example 1 : _ let us consider the following model for spatial correlation of the sensor data .let be the random variable representing the sampled sensor reading at node and denote the number of bits that the node has to send to the base - station .let us assume that each node has at most number of bits to send to the base - station , so .however , due to the spatial correlation among sensor readings , each sensor may send less than number of bits .let us define a data - correlation model as follows .let denote the distance between nodes and .let us define , the number of bits that the node has to send when the node has already sent its bits to the base - station , as follows : figure [ fig1 ] illustrates this for .it should be noted that when , the data of the nodes and differ in at most least significant bits .so , the node has to send , at most , least significant bits of its bit data .also note the following property of the correlation model : however , the definition of the correlation model is not complete yet 
and we must give the expression for the number of bits transmitted by a node conditioned on more than one node already having transmitted their bits to the base - station .there are several ways in which this quantity can be defined .presently , let us consider the following definition of the conditional information : _ part 1 : _ here let us assume that the ratio of energy of a node and path - loss between base - station and the node is equal for all the nodes .so , without the loss of generality , for every sensor , we can put .this assumption is only to simplify the solution , yet it is not such an unrealistic assumption when we consider the scenarios such as one where given the equal energies of all the nodes , the distance of the base - station from any node is much more than the distance between any two nodes .also , when we have the roving base - station for the data gathering , this assumption holds good .so , using the energy consumption model of , this makes the time to transmit depend only on the number of bits that a node has to send to the base - station , so a sensor polling schedule that results in larger value of , will also result in the larger value of the sum of transmission times of all the nodes . a greedy scheme that assigns information bits to the nodes according to , gives the solution for the static scheduling problem in in .start with any node as the first node of the schedule , then choose the next node in the schedule to be that node that minimizes the conditional number of bits .however , given the definition of correlation model in and , this amounts to finding the nearest node .so , the schedule that selects the nearest neighbor as the next node to be polled is the optimum schedule .we call this algorithm : _nearest neighbor next _ or_ nnn_. for a desired value of network lifetime , the _ nnn _ schedule will give the smallest value of and the smallest sum of the transmission times , so using the `` channel aware '' algorithm proposed in , we can prove that this schedule maximizes the network lifetime .it should be noted that it is the special property of this problem , due to the correlation model and assumption above , that the schedule that minimizes , also minimizes the sum of the transmission times , and subsequently maximizes the network lifetime . in general , it is not true that the schedule that minimizes also minimizes the sum of the transmission times . _part 2 : _ without the assumption in _ part 1 _ , here we consider the general problem .the following theorem proves that for the given spatial correlation model , the problem is -hard . the static scheduling problem in is -hard for the spatial correlation model of .an arbitrary instance of `` shortest hamiltonian path '' problem can be reduced to this problem by following the same sequence of steps as in the proof in _example 2_. _ example 2 : _ let us consider the spatial correlation model of , that is the one where the sensor data is modeled by gaussian random field .thus , we assume that an dimensional _ jointly gaussian multivariate distribution _ models the spatial data of sensor nodes . where is the ( positive definite ) covariance matrix of , and the mean vector .the diagonal entries of are the variances .the rest of depend on the distance between the nodes and : . 
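Returning to example 1 above, here is a small sketch of the nearest-neighbour-next (NNN) greedy schedule. The exact bit-count formulas of that example are stripped from this extraction, so the surrogate used below -- a node at distance d from an already-polled node needs at most min(b, ceil(log2(1+d))) bits, and conditioning on a set of polled nodes takes the minimum over that set -- is an assumption made only to have something runnable; the greedy rule itself (always poll the closest remaining node) is the one described above.

```python
import math

def cond_bits(dist, b):
    """Illustrative surrogate for the stripped formula: two nodes at distance d
    differ in at most ceil(log2(1 + d)) low-order bits, capped at b."""
    return min(b, math.ceil(math.log2(1.0 + dist)))

def nnn_schedule(coords, b):
    """Nearest Neighbour Next: poll first any node (node 0 here), then always
    the unpolled node closest to the previously polled one; each polled node
    sends the conditional bit count w.r.t. its nearest already-polled node."""
    dist = lambda i, j: math.dist(coords[i], coords[j])
    order, bits = [0], [b]
    remaining = set(range(1, len(coords)))
    while remaining:
        nxt = min(remaining, key=lambda j: dist(order[-1], j))
        remaining.remove(nxt)
        bits.append(min(cond_bits(dist(i, nxt), b) for i in order))
        order.append(nxt)
    return order, bits

coords = [(0, 0), (1, 0), (5, 0), (5, 4)]
print(nnn_schedule(coords, b=16))   # -> ([0, 1, 2, 3], [16, 1, 3, 3])
```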
without any loss of generality , here we use differential entropy instead of entropy , as we assume that the data at each sensor node is quantized with the same quantization step , and under such assumption , differential entropy differs from entropy by a constant .[ gc_ss_thrm ] the static scheduling problem in is -hard for the spatial correlation model of and energy function .let us consider the decision version of this problem : does there exist a schedule , for which the network achieves the lifetime , with the following constraints ?we will prove the -hardness of this problem as follows .we reduce an arbitrary instance of `` shortest hamiltonian path problem '' over euclidean and complete graph , which is well known to be -complete , to some instance of the problem in . for the given instance of shp problem , interpret the edge cost between nodes and , as the spatial distance between the nodes and of our problem .so , as we visit a node in the shp tour , we can compute the conditional entropy of that node using the knowledge of the model for spatial correlation among the sensor nodes as well as the history of the tour so far . with this computed conditional entropy value ,using the first constraint of , we can compute the minimum time that this node needs to transmit bits of information to the base - station .so , for every schedule , we can compute the sum of the minimum transmission times .let us consider an euclidean , complete graph of nodes ] and denote their data samples , respectively , then : where and , denote the covariance matrices of and , respectively . assume that the transmission time of node $ ] is exponentially dependent on the entropy of the node ( this follows from the empirical results obtained after numerically solving the equation ) .let us denote the transmission times of the nodes a , b , c , and d under schedule as and respectively . similarly , for the schedule , let the corresponding times be and respectively .note that and .now after substituting the values of , and and a little algebraic manipulation of the resulting expressions , we show that is true if ( with and as defined in ) : so , if a schedule has smaller hamiltonian path length , then the corresponding sum of the transmission times will be smaller too .this implies that the solution of shp gives the smallest value of the sum of the transmission times .so , for the schedule that gives shortest hamiltonian path , we can compute the sum of the transmission times and if this sum is less than , then we have at least one schedule that achieves the lifetime . in this section ,we explore how network lifetime can be increased by employing multiple schedules .instead of restricting the network to follow a single schedule , we allow the system to employ different schedules over time .there are possible schedules to choose from .let be the number of information bits generated per slot by node under the schedule , .two or more schedules can collaborate by having the nodes use non - optimal transmit energies over two or more data - transmission slots to increase the lifetime of the network .we have a total of schedules .if only schedules are going to cooperate , then there are possible combination of the schedules .let denote the number of time slots for which schedule is employed . the optimal network lifetime under dynamic scheduling is the solution to the following optimization problem \\ \pi_1 , \ldots , \pi_m \in \pi}}\sum\limits_{i = 1}^m \tau_{\pi_i } & \\ & \mbox{s. t. 
} \sum\limits_{i=1}^m f(h_{\pi_i(k ) } , t_{\pi_i(k)})d_k \tau_{\pi_i } \le e_k , \forall 1 \le k \le n & \nonumber \\ & \sum\limits_{k=1}^{n } t_{\pi_i(k ) } = 1 , \forall 1 \le i \le m & \nonumber\end{aligned}\ ] ] specifically for , we have also note that if , then the problem in reduces to the static scheduling problem in .so , the computational complexity of this problem can not be any less than that of the static scheduling problem , which is proven to be -hard in theorem [ gc_ss_thrm ] . herethe question we are concerned with is that if the dynamic scheduling can indeed increase the network lifetime . in the following , we prove that even for the simplest case of the network of two nodes , it is indeed so .[ gc_ds_thrm ] for , dynamic scheduling performs better than the optimal static scheduling . for the network of two nodes , let us consider two schedules where node is polled before node and , where the nodes are polled otherwise . now using our `` channel aware '' algorithm of the previous section , for a given polling schedule we can find the optimal allocation of the transmission times such that both the nodes spend same amount of energy , dying at the same timelet for schedule , this happens when the node 1 transmits for units of time and node 2 transmits for units of time .similarly , for schedule , let the corresponding times be and .let denote the entropy of first node polled in the schedule and denote the entropy of second node polled .so , for schedule : and for schedule : . given the optimality of and for the schedules and respectively , we have for the energy consumptions of the nodes : if we assume the schedule to be the optimum static schedule , then the following holds true : let us consider the plot where the horizontal axis corresponds to the energy consumption of the node 1 and the vertical axis corresponds to the energy consumption of the node 2 . 
in this plot , we draw the energy consumption curves for both the schedules and , for different values of and respectively .given the form of the energy consumption curves , it is easy to verify that these two curves corresponding to two different schedules , are convex and will intersect at one and only one point .let us consider the `` equal energy line '' which passes through the pair of points and , so the equation of this lines is now let us also consider a line that passes through the point on the curve corresponding to the schedule with , and the point with on the curve for schedule .the equation for such a line is now let us consider the point of intersection of these two lines .at the point of intersection , we have : now , if we want to prove that with the dynamic scheduling we can perform better than the static scheduling , then we must prove that there exists at least one pair of values , for which the following holds substituting the expressions of and from in , and using the properties of the energy consumption function , we prove that the dynamic scheduling performs better than static scheduling for all such that this result implies that two schedule can cooperate to give longer network lifetime compared to optimum static schedule .following subsection , generalizes this result .let us consider the scenario where we have sensor nodes to poll and this polling can be done in ways .let us consider an dimensional euclidean space , where an axis corresponds to the energy consumption of a node .given that the energy consumption can only assume positive real values , we are only concerned with the first orthant of this -dimensional space . for any given schedule , as the transmission time allocation to the different nodes changes , the corresponding energy consumption of the nodes changes .so , the point defining the energy consumption of the nodes in this dimensional space describes an -dimensional convex hypersurface . for possible schedules, we have such hypersurfaces .the computational complexity of the problem of finding the optimum static and dynamic schedules depends on the properties of the intersections of these hypersurfaces .also , note that the general shape of these surfaces is determined by the energy consumption function and the model of the spatial correlation in the sensor data .the optimal static schedule is the one whose energy hypersurface is intersected by the `` equal energy line '' closest to the origin .further , the dynamic scheduling helps us achieve all the points on the convex hull of convex hypersurfaces , as those are all the points achievable with the cooperation of any number of schedules .it is obvious that the network lifetime can not be increased by the cooperation of those schedules and their transmission time allocations , which give any point in the interior of this convex - hull , as there is always a point on the surface of the convex - hull that is closer to the origin and gives better network lifetime .so , when two schedules cooperate , then the optimum transmission time allocation is the one that gives the line connecting the two points on the surface of the convex hull. 
then the optimum network lifetime is achieved by some point on that line , specifically by the point on this line that is closest to the origin .similarly , when three schedules cooperate , then the respective optimum transmission times allocation for the those three schedules is the one that gives the plane connecting the energy consumption points corresponding to the three schedules , on the surface of the convex hull and then the optimum network lifetime is achieved by some point on this plane . formally , if are the cooperating schedules , then the plane defined by the points on the corresponding hypersurfaces , that is closest to the origin , must belong to the convex hull .for these schedules , the optimal lifetime of the network is obtained by that point on this plane that is closest to the origin . this is pointis obtained as the solution to the following optimization problem : with these optimum values of the parameters , we solve for the network lifetime as follows : the optimum network lifetime for the set of schedules , is obtained by solving above set of equations for all possible combinations of schedules , that is is obtained by : \\ \pi_1 , \ldots , \pi_m \in \pi } } l_{\pi_1\ldots\pi_m}\ ] ] further , to obtain the optimum value of network lifetime over all possible combinations of the schedules is obtained by : it should be noted that the equations - , essentially solve .however , this alternative formulation of the problem in , helps us to prove following two important theorems : [ geothrm1 ] the optimum network lifetime for schedule cooperation is no worse than the optimum network lifetime for schedule cooperation .that is . omitted for brevity .[ geothrm2 ] when , then the cooperation among or more schedules does not improve the network lifetime anymore .omitted for brevity .in this section , we assume that transmission rate is linearly proportional to signal power . this assumption is motivated by shannon s awgn capacity formula which is approximately linear for low data rates .the energy expended by a node to transmit units of information is given by , where is the suitably normalized path loss factor between the node and the base station .the linear rate assumption implies , as shown below , that transmit energy is independent of transmission time .hence , the optimal time allocation problem is trivial and we only need to find the optimal scheduling order . for the small data rates , the energy consumption function for the node under schedule reduces to . for example ,by inverting shannon s awgn channel capacity formula , we get the following as the energy consumption function : for the small data rates , this gives for some constant under the `` small rate region approximation '' , the static scheduling problem in reduces to : the objective function represents the lifetime of node under the given static schedule . in , we describe a greedy static scheduling strategy , minimum cost next ( mcn ) , and prove its optimality .the mcn schedule not only maximizes the minimum lifetime , but also maximizes all lifetimes from minimum lifetime to minimum lifetime .this is desirable in the situations , where the network has to continue to operate even when one or more nodes die out .also , note that the mcn solution is pareto - optimal . given an mcn schedule ,no other schedule can help increase any node s lifetime without decreasing some other node s lifetime . 
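A hedged sketch of the small-rate static problem just described: per-slot energy of a node is (path loss) x (bits sent), so a schedule's minimum lifetime is min over k of e_k / (d_k * H_k). The greedy below polls, at each step, the remaining node of smallest such per-slot cost, which is one natural reading of "minimum cost next"; the paper's exact cost definition and its optimality proof are not reproduced here. Conditional bit counts come from an assumed jointly Gaussian model with an exponential-decay covariance, again purely for illustration.

```python
import numpy as np

def cond_h(Sigma, j, polled):
    """Differential entropy h(X_j | X_polled) in bits for jointly Gaussian data
    (the fixed quantization offset is ignored, as in the text)."""
    var = Sigma[j, j]
    if polled:
        A, b = Sigma[np.ix_(polled, polled)], Sigma[np.ix_(polled, [j])]
        var -= float(b.T @ np.linalg.solve(A, b))
    return 0.5 * np.log2(2 * np.pi * np.e * var)

def mcn_schedule(Sigma, d, e):
    """Greedy 'minimum cost next' under the small-rate approximation:
    repeatedly poll the unscheduled node of smallest d_j * h(X_j | polled)."""
    n = len(d)
    order, per_slot = [], np.zeros(n)
    while len(order) < n:
        j = min((k for k in range(n) if k not in order),
                key=lambda k: d[k] * cond_h(Sigma, k, order))
        per_slot[j] = d[j] * cond_h(Sigma, j, order)
        order.append(j)
    lifetimes = e / per_slot
    return order, lifetimes, lifetimes.min()

dist = np.abs(np.subtract.outer(np.arange(4), np.arange(4)))
Sigma = np.exp(-0.4 * dist)                    # assumed correlation model
d = np.array([1.0, 1.5, 2.0, 4.0])             # normalized path-loss factors
e = np.array([100.0, 100.0, 100.0, 100.0])     # initial energy budgets
print(mcn_schedule(Sigma, d, e))
```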
in this section ,we explore how network lifetime can be increased by employing multiple schedules under the small data rate approximation . under this assumption , as the general static scheduling problem in reduced to , the general dynamic scheduling problem in reduces to the following optimization problem that gives the optimal lifetime is the number of slots for which the schedule is used .once more , the constraints ensure that the time assignment is feasible for each node with respect to its energy capability . also as in [ gc_ds ] ,can be treated as a linear program . a dynamic schedule , , is given by the set . given variables , in general , there seems to be no easy way to solve to compute the optimal values .however , in , we exploit the special nature of our problem to propose an algorithm , which we refer to as lifetime optimal clustering algorithm ( local ) , and prove its optimality . in this sectionwe discuss both the static and dynamic scheduling in `` small rate region approximation '' in the spirit of discussion of the nature of the solutions in [ geo_inter ] . under this approximation ,the energy consumption is independent of the transmission time , so the flexibility to change the energy consumption by varying the transmission time is not available .so , in the -dimensional space , where each axis corresponds to the energy consumption of a node , for every schedule , we get a point in this space , rather than a hypersurface . for possible polling schedules ,we get points . the optimal static schedule under this approximationis given the mcn algorithm .given the nature of the problem in , this corresponds to that schedule which gives a point closest to the `` equal energy line '' .this is not difficult to see if one notes that optimal static schedule attempts to ` equalize ' the lifetimes / energy consumption of the nodes .however , as noted above , the flexibility of varying the transmission times of the nodes to ` equalize ' their lifetimes is not available under this approximation , so the optimum static schedule does the best in providing a point closest to this `` equal energy line '' , if not at the line itself .note that the point corresponding to the optimum static schedule may not be the closest to the origin .similarly , the optimal dynamic scheduling under `` small rate region approximation '' corresponds to finding those schedules , the lines , planes , or the hyperplanes connecting which contain the point closest to the `` equal energy line '' .when more than one such points are possible , then one that is closest to the origin is taken to be the point of operation of the network .in this paper , we have considered the problem of maximizing the lifetime of a data gathering wireless network .our contribution differs from previous research in two respects .firstly , we proposed a combined source - channel coding framework to mitigate the energy cost of radio transmission . secondly , we have explicitly maximized network lifetime as opposed to other objective functions such as cumulative energy cost . to the best of our knowledge , boththese aspects have not been explored in the context of sensor networks previously . in our system model , nodes communicate directly to a base station in a time division multiplexed manner . 
with our notion of instantaneous decoding , we show that the network lifetime maximization problem reduces to finding an optimal scheduling strategy ( polling order and transmission time allocation ) .we considered the general channel problem and proved that there the optimal static and dynamic scheduling problems are -hard and the optimal dynamic scheduling strategy indeed does better than the optimal static scheduling strategy .then we considered the scenario where the energy consumption is independent of transmission time . for both , the general channel problem and its approximated version , we provided the geometric interpretation of the optimal solutions and the solution search processes .this paper assumed that source and channel coding is optimal , quantization is perfect and that a continuum of power levels can be employed .network lifetime obtained under these assumptions is an upper limit to practically achievable performance. it would be useful to consider the network lifetime problem with more realistic constraints .finally , the system model has to be generalized to the multi - hop case .
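As a concrete companion to the dynamic-scheduling formulation used above: in the small-rate regime the per-slot energy drain of every node under a fixed polling order does not depend on the time split, so the best mixture of schedules is a linear program in the slot counts. The sketch below enumerates all n! orders (only sensible for very small n), builds each order's per-node cost from an assumed Gaussian correlation model, and maximizes the total number of slots subject to each node's energy budget; it illustrates the LP structure and is not the LOCAL algorithm referred to in the text.

```python
import numpy as np
from itertools import permutations
from scipy.optimize import linprog

def per_slot_energy(Sigma, d, order):
    """Energy drawn per slot by each node under one polling order
    (small-rate regime: cost = path loss x conditional Gaussian bits)."""
    cost = np.zeros(len(d))
    for k, j in enumerate(order):
        S = list(order[:k])
        var = Sigma[j, j]
        if S:
            A, b = Sigma[np.ix_(S, S)], Sigma[np.ix_(S, [j])]
            var -= float(b.T @ np.linalg.solve(A, b))
        cost[j] = d[j] * 0.5 * np.log2(2 * np.pi * np.e * var)
    return cost

def optimal_dynamic_lifetime(Sigma, d, e):
    """maximize sum_i tau_i  s.t.  sum_i cost_k(pi_i) * tau_i <= e_k for every node k."""
    scheds = list(permutations(range(len(d))))
    A = np.column_stack([per_slot_energy(Sigma, d, s) for s in scheds])
    res = linprog(c=-np.ones(len(scheds)), A_ub=A, b_ub=e,
                  bounds=[(0, None)] * len(scheds), method="highs")
    used = {scheds[i]: round(t, 2) for i, t in enumerate(res.x) if t > 1e-9}
    return -res.fun, used

dist = np.abs(np.subtract.outer(np.arange(3), np.arange(3)))
Sigma = np.exp(-0.4 * dist)
d = np.array([1.0, 2.0, 3.0])
e = np.array([50.0, 50.0, 50.0])
print(optimal_dynamic_lifetime(Sigma, d, e))
```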
we consider a single-hop data gathering sensor cluster consisting of a set of sensor nodes that need to transmit data periodically to a base-station. we are interested in maximizing the lifetime of this network. even though the setting of our problem is very simple, it turns out that the solution is far from easy. the complexity arises from several competing system-level opportunities that can be harnessed to reduce the energy consumed in radio transmission. first, sensor data in a cluster is spatially and temporally correlated. recent advances in distributed source coding allow us to take advantage of these correlations to reduce the number of bits that need to be transmitted, with concomitant savings in energy. second, it is also well known that channel coding can be used to reduce transmission energy by increasing transmission time. finally, sensor nodes are cooperative, unlike nodes in an ad hoc network, which are often modeled as competitive. this collaborative nature allows us to take full advantage of the first two opportunities for the purpose of maximizing cluster lifetime. in this paper, we pose the problem of maximizing lifetime as a max-min optimization problem subject to the constraints of successful data collection and a limited energy supply at each node. this turns out to be an extremely difficult optimization to solve. consequently, we employ a notion of instantaneous decoding to shrink the problem space. we show that the computational complexity of our model is determined by the relationship between energy consumption and transmission rate, as well as by model assumptions about path loss and initial energy reserves. we provide some algorithms, heuristics and insights for several scenarios. in some situations, our problem admits a greedy solution, while in others the problem is shown to be np-hard. the chief contribution of the paper is to illustrate both the challenges and the gains provided by source-channel coding and scheduling.
we consider the problem of determining the coefficients and exponents of an unknown sparse multivariate polynomial , given a `` black box '' procedure that allows for its evaluation at any chosen point .our new technique is a variant on the classical kronecker substitution .say is an -variate polynomial with max degree less than and coefficients in a ring .the kronecker substitution produces a univariate polynomial ] , our main technical contribution is a way of choosing integers such that the substitution results in a lower degree than the usual kronecker substitution , while probably not introducing too many term collisions .we begin with the case of variables , where our result is stronger than the general case and we always choose the random substitution exponents to be prime numbers .bivariate polynomials naturally constitute a large portion of multivariate polynomials of interest , and they correspond to the important case of converting between polynomials in ] is an unknown bivariate polynomial , written as we further assume upper bounds on the and , respectively , and on the number of nonzero terms .the general idea here is to perform the substitution for random chosen prime numbers and .we want to choose and as small as possible , so as to minimize , but large enough so that there are not too many collisions .our approach to choosing primes is based on the following technical lemma , which shows how to guarantee a high probability success while minimizing the degree of .[ lem : lambda - pq ] let ] and ] with partial degrees less than and at most nonzero terms .then for any constant error probability , and primes chosen randomly as in lemma [ lem : lambda - pq ] , the substitution polynomial has degree at most the degree of a standard kronecker substitution is . because is always less than this, the randomized substitution will never have degree more than a logarithmic factor larger than .the benefit comes when the polynomial is sparse , i.e. , , in which case the randomized substitution has much smaller degree , albeit at the expense of a few collisions. when has at least 3 variables , the preceding analysis no longer applies .the new difficulty is that potentially - colliding terms could have exponents that differ in two _ or more _variables , meaning that the simple divisibility conditions are no longer sufficient to identify every possible collision .consequently , our randomly - chosen exponents in this case will be somewhat larger , and not necessarily prime .for this subsection , ] with max degree less than and at most nonzero terms , be a chosen bound on the failure probability , and be the index of a nonzero term in . define to be the least prime number satisfying .if integers are chosen uniformly at random from ] with max degree less than and at most nonzero terms . 
for any constant error probability , and integers chosen randomly as in lemma [ lem : lambda ] , the polynomial has degree at most .consider ] be an unknown subset of nonzero polynomials satisfying and with each having max degree less than .we say is a -_diversifying set _ if the probability is less than that any evaluation point , with entries chosen at random from , is a root of _ any _ of the .that is , \ge 1-\mu.\ ] ] from the discussion above , we see that the set the differences between any of the single terms and any of the sets of collisions , which is at most .let ] , is given by a numerical black box .their proof does not apply here as the polynomials in for us are not always binomials .we hope that a similar result would hold for multivariate diversification , but do not consider the question here .let bounds , , and be given as above , a prime power , and set then the set of -tuples from a size- extension of the finite field with elements is a -diversifying set is actually a -tuple of evaluations , and the coefficients in are actually -tuples in . ] .we show , more generally , that vectorization may be applied to any diversifying set .let be given and an unknown set of polynomials as in definition [ def : divset ] .if is a -diversifying set , then is a -diversifying set , where addition and multiplication in are component - wise . as is a -diversifying set , then by definition , a randomly selected row vector satisfies for all with probability at least .suppose is chosen randomly from , and note that .thus the probability that is the probability that for every , which is at most . rather than rehash the univariate diversification procedures , we refer the reader to the aforementioned results and provide the following connection which shows that univariate diversifying sets , with success probability scaled by a factor of , become multivariate diversifying sets .[ thm : mult - diverse ] let be an integral domain and be given and an unknown set of polynomials as in definition [ def : divset ] . if is a -diversifying set , then is also a -diversifying set .the proof is by induction on .when , the statement holds trivially .so assume and also that any -diversifying set is also a -diversifying set .we know that is a set of -variate polynomials , each with max degree less than .rewrite each as a polynomial in with coefficients in ] , and ] , where is the least prime greater than .given such choices of , the following lemma shows how many substitutions are required so that every term of appears without collisions in at least half of them .[ lem : nu ] let ] with max degree less than and at most nonzero terms .assume substitution vectors are chosen according to lemma [ lem : nu ] .set and choose a -diversifying set . then , with probability at least , any nonzero coefficient that appears in at least of the substitution polynomials is the image of a single term in . from the proof of lemma [ lem : nu ] ,the probability is at least that every term in appears without collision in at least of the substitutions . assuming this is the case, there can be at most sums of terms that collide in any image , since each collision involves at least two terms , there are at least terms that do not collide , and the total number of terms in all images is .hence the set of term differences will consist of the differences of any pair in a set of polynomials .the number of such pairs is less than given in the statement of the lemma . 
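the general substitution of lemma [lem:lambda] can be sketched in the same style: each exponent vector is mapped to its dot product with a random vector drawn from {1, ..., lambda}^n, and a helper counts how many terms keep a unique image degree, which is the quantity that the lemma (and, via repetition, lemma [lem:nu]) controls. the names and the way lambda is supplied below are assumptions made only for illustration:

    import random
    from collections import Counter

    def random_substitution(f, n, lam):
        """f: dict mapping length-n exponent tuples to coefficients.
        draws s uniformly from {1, ..., lam}^n and returns (s, g), where g
        is the univariate image obtained by sending each exponent vector e
        to the dot product s . e; colliding terms are summed."""
        s = [random.randint(1, lam) for _ in range(n)]
        g = {}
        for e, c in f.items():
            d = sum(si * ei for si, ei in zip(s, e))
            g[d] = g.get(d, 0) + c
        return s, g

    def collision_free_count(f, s):
        """how many terms of f keep a unique image degree under s."""
        images = Counter(sum(si * ei for si, ei in zip(s, e)) for e in f)
        return sum(1 for e in f
                   if images[sum(si * ei for si, ei in zip(s, e))] == 1)

repeating random_substitution several times and tallying, per term, the images in which that term is collision-free is exactly the majority condition quantified above.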
from the definition of , the probability that any of these polynomials in vanish on is less than , so the total probability that each term in is uninvolved in collisions in at least of the images , and all distinct terms and collisions in the image polynomials have distinct coefficients , is at least .a direct consequence is that any fixed subset of terms of must collide in fewer than half of the . for every nonzero coefficient that occurs in at least of the images , we know those terms with coefficient are probably images of the same fixed term of . an alternate method might be to allow the sums of terms that appear in collisions to sometimes share the same coefficient , as long as these coefficients are not the same as any of the coefficients of actual terms in .this would reduce the in determining the diversifying set to , a factor of improvement from the bound above .the cost of such weakened diversifying sets would be that some number of terms in the final recovered polynomial are not actually terms in . by iterating times , such `` garbage terms '' could be eradicated . for each coefficient that appears in at least of the images , we attempt to find linearly independent substitution vectors , call them , such that every substitution polynomial with substitution vector , for , contains the coefficient in a nonzero term . in the bivariate casethis is straightforward .any linear system formed by two substitution vectors must have nonzero determinant since in the bivariate case the entries are all distinct prime numbers .the general multivariate case is more involved , as substitution vectors may not always be linearly independent . for this case we will randomly select vectors ^n ] , and suppose that a term of avoids collision in an image for a randomly chosen ^n ] .given that a term of avoids collision in the images for , then has rank less than with probability at most .since all entries in each , and thereby everything in each and every element in , is less than , we can consider all these objects over the finite field .if has rank less than , then all lie in some dimension- subspace .the number of distinct substitution vectors that could lie in the same subspace is .each dimension- subspace may be specified by a nonzero vector spanning its orthogonal space , unique up to a scalar multiple .thus the number of such subspaces is less than , and so the number of possible -tuples comprised of substitution vectors that do not span is at most meanwhile , there are possible -tuples of substitution vectors , and so the probability that such a tuple does span is at most .furthermore , by the hypothesis , the probability that a term of avoids collision for each substitution is , and thus the conditional probability that is not full rank given that a fixed term of avoids collision for each is at most .given such a high probability of each term producing a rank- system of substitution vectors , it is a simple matter to show that with high probability _ every _ term of admits some such rank- linear system of substitutions without collisions. let ] with , , and , procedure [ proc : interp ] succeeds in finding with probability at least and requires calls to univariate interpolation with nonzero terms and degree , a -diversifying set in , and additional bit operations , where . 
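the exponent-recovery step described above reduces, for each recovered coefficient, to a small integer linear system: if the rows of a full-rank matrix s are the substitution vectors under which that coefficient appears collision-free, and d holds the degrees at which it appears, then the exponent vector e solves s e = d. a minimal numpy sketch (a floating-point solve followed by an integer rounding check; an exact rational solve would be used in practice) might look as follows:

    import numpy as np

    def recover_exponent(S, d):
        """rows of S: the substitution vectors under which one coefficient
        was found collision-free; d: the degrees at which it appeared.
        returns the integer exponent vector e with S e = d, or None if the
        system is rank-deficient or inconsistent."""
        S = np.asarray(S, dtype=float)
        d = np.asarray(d, dtype=float)
        if np.linalg.matrix_rank(S) < S.shape[1]:
            return None
        e = np.linalg.solve(S, d)
        e_int = np.rint(e).astype(int)
        return e_int if np.allclose(S @ e_int, d) else None

    # toy usage: recovering the exponent (3, 5) from two substitutions
    S = [[7, 11], [13, 5]]
    d = [7 * 3 + 11 * 5, 13 * 3 + 5 * 5]
    print(recover_exponent(S, d))        # -> [3 5]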
since , we have .this is the number of calls to the univariate interpolation algorithm , and the degree bound comes from corollary [ cor : deg - bivar ] .the size of in the diversifying set comes from the fact that .two steps dominate the bit complexity .first , we must choose primes in ] with and , procedure [ proc : interp ] succeeds in finding with probability at least and requires calls to univariate interpolation with nonzero terms and degree , a -diversifying set in , and additional bit operations , where is the exponent of matrix multiplication .the analysis of the first two parts is the same as in the bivariate case . for the bit complexity, we do not have to worry about primality testing here .however , the size of all exponents in the polynomials becomes , and the cost of performing each lu factorization on a matrix is operations on integers with bits .as such lu factorizations are required , the total bit cost of the linear algebra is .interpolating entails the probabilistic steps of ( 1 ) selecting a set of randomized substitutions that produce few collisions and ( 2 ) selecting from a diversifying set such that all term sums in all images have distinct coefficients in those images .the case has in addition the probabilistic step ( 3 ) of guaranteeing that we can construct a full - rank linear system in order to solve for every exponent of .the probability of failure in each of these steps has been controlled above so that the overall success probability is at least .if a higher success probability , say , is desired , we simply run the interpolation algorithm described in sections [ sec : fam - subs][sec : lin - sys ] with some times . again using hoeffding s inequality , the probability that the algorithm fails at least times is at most .thus , if we wish to discover with probability , we merely run the algorithm as suggested some times , and select the the polynomial that is returned a majority of the time . with probability at least ,such an exists and is in fact the correct answer .we have presented a new randomization that maps a multivariate polynomial to a univariate polynomial with ( mostly ) the same terms .this improves on the usual kronecker map by reducing the degree of the univariate image when the polynomial is known to be sparse .we have also shown how a small number of such images can be combined to recover the original terms of the unknown multivariate polynomial .there are numerous questions raised by this result .perhaps foremost is whether there is any practical gain in any particular application by using this approach .we know that the randomized kronecker substitution will result in smaller degrees than the usual kronecker substitution whenever the polynomial is sufficiently large and sufficiently sparse , so in principle the applications should include any of the numerous results on sparse polynomials that use a kronecker substitution to accommodate multivariate polynomials . 
unfortunately , in practice , the situation is not so clear .many of the aforementioned results that rely on a kronecker substitution either do not have a widely - available implementation , or do not usually involve sparse polynomials .however , for the particular applications of sparse gcd and sparse multivariate multiplication , there is considerable promise particularly in the case of bivariate polynomials with degree greater than 1000 or so and sparsity between and .an efficient implementation comparison in these situations would be useful and interesting , and we are working in that direction .there are also questions of theoretical interest . for one ,we would like to know how far off the bounds on the size of primes from lemmata [ lem : lambda - pq ] and [ lem : lambda ] are compared to what is really necessary to avoid collisions .an important question is whether our current results are optimal in any sense . in the bivariate case , when our result gives , which is optimal in terms of .that is because could be as large as , and therefore any monomial substitution exponent less than would by necessity have more than a constant fraction of collisions .however our result for gives each , which in terms of is off by a factor of from the optimal .it may be possible to improve these bounds simply with a better analysis , or with a different kind of randomized monomial substitution . in either case , it is clear that , at least for and in particular for , it should be possible to improve on the results here and achieve univariate reduced polynomials with even lower degree . another interesting question would be whether some of this randomization can be avoided .here we have two randomizations , the diversification and the ( multiple ) randomized kronecker substitutions . andthis is besides any randomization that might occur in the underlying univariate algorithm !it seems plausible that , for example in the application of multivariate multiplication , the known aspects of the monomial structure might be used to make some choices less random and more `` intelligent '' .however , we do not yet know any reasonable way to accomplish this .we wish to thank zeev dvir for pointing out the previous work of klivans and spielman , and the reviewers for their helpful comments .we also thank the organizers of the siam ag13 meeting for the opportunity to discuss preliminary work on this topic .the second author is supported by the national science foundation , award # 1319994 .michael ben - or and prasoon tiwari .a deterministic algorithm for sparse multivariate polynomial interpolation . in _ proceedings of the twentieth annual acm symposium on theory of computing _ , stoc 88 , pages 301309 , new york , ny , usa , 1988 .doi : 10.1145/62212.62241 .jrmy berthomieu and grgoire lecerf .reduction of bivariate polynomials from convex - dense to dense , with application to factorizations . _ math ._ , 81:0 17991821 , 2012 .doi : 10.1090/s0025 - 5718 - 2011 - 02562 - 7. james r. bunch and john e. hopcroft . triangular factorization and inversion by fast matrix multiplication ._ mathematics of computation _ , 280 ( 125):0 231236 , 1974 .url http://www.jstor.org/stable/2005828 .emmanuel j. cands , justin k. romberg , and terence tao .stable signal recovery from incomplete and inaccurate measurements ._ communications on pure and applied mathematics _ , 590 ( 8):0 12071223 , 2006 .doi : 10.1002/cpa.20124 .matthew t. comer , erich l. 
kaltofen , and clment pernet .sparse polynomial interpolation and berlekamp / massey algorithms that correct outlier errors in input values . in _ proceedings of the 37th international symposium on symbolic and algebraic computation _, issac 12 , pages 138145 , new york , ny , usa , 2012 .doi : 10.1145/2442829.2442852 .mark giesbrecht and daniel s. roche .diversification improves interpolation . in _ proceedings of the 36th international symposium on symbolic and algebraic computation _, issac 11 , pages 123130 , new york , ny , usa , 2011 .doi : 10.1145/1993886.1993909 .mark giesbrecht , george labahn , and wen - shin lee .symbolic - numeric sparse interpolation of multivariate polynomials ._ journal of symbolic computation _ , 440 ( 8):0 943 959 , 2009 .doi : 10.1016/j.jsc.2008.11.003 .dima yu .grigoriev , marek karpinski , and michael f. singer .fast parallel algorithms for sparse multivariate polynomial interpolation over finite fields ._ siam journal on computing _ , 190 ( 6):0 10591063 , 1990 .doi : 10.1137/0219073 .ming - deh a huang and ashwin j rao .interpolation of sparse multivariate polynomials over large finite fields with applications ._ journal of algorithms _ , 330 ( 2):0 204228 , 1999 .doi : 10.1006/jagm.1999.1045 .seyed mohammad mahdi javadi and michael monagan .on factorization of multivariate polynomials over algebraic number and function fields . in _ proceedings of the 2009 international symposium on symbolic and algebraic computation _ , issac 09 , pages 199206 , new york , ny , usa , 2009 .doi : 10.1145/1576702.1576731 .seyed mohammad mahdi javadi and michael monagan .parallel sparse polynomial interpolation over finite fields . in _ proceedings of the 4th international workshop on parallel and symbolic computation _ , pasco 10 , pages 160168 , new york , ny , usa , 2010 .doi : 10.1145/1837210.1837233 .erich l. kaltofen .fifteen years after dsc and wlss2 : what parallel computations i do today [ invited lecture at pasco 2010 ] . in _ proceedings of the 4th international workshop on parallel and symbolic computation _ ,pasco 10 , pages 1017 , new york , ny , usa , 2010 .doi : 10.1145/1837210.1837213 .adam r. klivans and daniel spielman .randomness efficient identity testing of multivariate polynomials . in _ proceedings of the thirty - third annual acm symposium on theory of computing _, stoc 01 , pages 216223 , new york , ny , usa , 2001 .doi : 10.1145/380752.380801 .arnold schnhage .asymptotically fast algorithms for the numerical multiplication and division of polynomials with complex coefficients . in jacques calmet , editor , _ computer algebra _ ,volume 144 of _ lecture notes in computer science _ , pages 315 .springer berlin heidelberg , 1982 .doi : 10.1007/3 - 540 - 11607 - 9_1 .amir shpilka and amir yehudayoff .arithmetic circuits : a survey of recent results and open questions . _ foundations and trends in theoretical computer science _ , 50 ( 3 - 4):0 207388 , 2010 .doi : 10.1561/0400000039 .richard zippel .probabilistic algorithms for sparse polynomials . in edwardng , editor , _ symbolic and algebraic computation _ , volume 72 of _ lecture notes in computer science _ , pages 216226 .springer berlin / heidelberg , 1979 .doi : 10.1007/3 - 540 - 09519 - 5_73 .richard zippel .interpolating polynomials from their values ._ journal of symbolic computation _ , 90 ( 3):0 375403 , 1990 .doi : 10.1016/s0747 - 7171(08)80018 - 1 .computational algebraic complexity editorial .
we present new techniques for reducing a multivariate sparse polynomial to a univariate polynomial . the reduction works similarly to the classical and widely - used kronecker substitution , except that we choose the degrees randomly based on the number of nonzero terms in the multivariate polynomial . the resulting univariate polynomial often has a significantly lower degree than the kronecker substitution polynomial , at the expense of a small number of term collisions . as an application , we give a new algorithm for multivariate interpolation which uses these new techniques along with any existing univariate interpolation algorithm .
the feedback - control method presented in this paper mimics the `` run - and - tumbling '' of _ escherichia coli _ but combined with active steering , thus it is a simple but more efficient method to transport microscopic swimmers under thermal fluctuations .the added ability of deterministic `` active '' reorientation achieves more efficient transportation of particle than the natural `` run - and - tumbling '' .it also has several advantages over other conventionally used micro - manipulation techniques : laser tweezers for example uses high power laser that could damage fragile samples and can be tricky to use as particles often jump out of the confining potential .+ we have also addressed the problem of optimizing the feedback and observed some interesting insights .the active reorientation decouples the abm and the reorientation process , in contrast to passive reorientation . due to this decoupling, we showed that the optimal acceptance angle is a function of .remarkably , since the timescale of passive reorientation is determined by , which scales in the cubic order of the radius , our method becomes particularly effective when the particles are relatively large .for instance in the case of a particle of diameter , in water whereas it can be as small as s in our experiments , as shown in fig .[ u0_omega ] , leading to more than 10 times enhancement of peclet number . for even smaller particles ,the gain of active reorientation becomes less significant as approaches , though the magnitude of is tunable by the applied electric field .+ although the optimal tolerance angle is determined by , our theory also predicts the robustness of the proposed algorithm .this is guaranteed by the fact that has a quite shallow shoulder towards large , at least for the range of parameters relevant to this experiment . in real world , individual particle may possess variable or experience an inhomogeneous from the environment .it may also be possible that the exact position of the target might not be known , or estimation of the orientation might not be precise , contributing to a poor resolution of .the final pclet number however , is weakly affected by these noises thanks to the broad tolerance of optimal . 
that allowed us to control the motion of particles in spite of the strong variability in their behavior .+ the key of our method lies in our finding of the peculiar rotational motion of janus particle that can be switched on and off by changing the parameters of the electric field .although the mechanism of this rotation is not yet fully understood , experimental measurements of showed that it was proportional to , implying that the torque as well as the force may originate from an asymmetric flow field around the particle generated by iceo .the other parameter , , has been poorly explored in the framework of icep .further experimental studies , as well as theoretical works , should be addressed .our results also suggested that geometrical factors such as chirality of the particle can play an critical role in determining the swimming behavior of the particle .an interesting challenge would be to find a way to artificially fabricate `` brahma particles '' , which as the hindu god would have four `` heads'' .these swimmers would have two well - designed axis of asymmetry , one used to propel the particle and the other to induce `` switchable '' rotations .our experiment provides the first proof - of - principle demonstration of such an idea .our results encourage further quest towards engineering functional artificial swimmers .+ w. f. paxton , p. t. baker , t. r. kline , y. wang , t. e. mallouk and a. sen _ catalytically induced electrokinetics for motors and micropumps ._ , journal of the american chemical society 128 ( 2006 ) , pp.148818 .w. paxton , k. kistler , c. olmeda , a. sen , s. st .angelo , y. cao , t. mallouk , p. lammert and v. crespi _catalytic nanomotors : autonomous movement of striped nanorods ._ , journal of the american chemical society 126 ( 2004 ) , pp.1342431 .b. qian , d. montiel , a. bregulla , f. cichos , and h. yang _ harnessing thermal fluctuations for purposeful activities : the manipulation of single micro - swimmers by adaptive photon nudging ._ , chemical science 4 ( 2013 ) , pp.1420 .f. kmmel , b. ten hagen , r. wittkowski , i. buttinoni , r. eichhorn , g. volpe , h. lwen and c. bechinger _circular motion of asymmetric self - propelling particles ._ , physical review letters 110 ( 2013 ) , pp.198302 .t. cordes , w. moerner , m. orrit , s. sekatskii , s. faez , p. borri , h. prabal goswami , a. clark , p. el - khoury , s. mayr , j. mika , g. lyu , d. cross , f. balzarotti , w. langbein , v. sandoghdar , j. michaelis , a. chowdhury , a. j. meixner , n. van hulst , b. lounis , f. stefani , f. cichos , m. dahan , l. novotny , m. leake and h. yang _plasmonics , tracking and manipulating , and living cells : general discussion ._ , faraday discussions 184 ( 2015 ) , pp.451473 ( see pp.464 - 465 ) . c. peng , i. lazo , s. v. shiyanovskii and o. d. lavrentovich _ induced - charge electro - osmosis around metal and janus spheres in water : patterns of flow and breaking symmetries _ , physical review e , 90 ( 2014 ) , pp.051002 .k. maeda , h. , onoe , m. , takinoue , and s. takeuchi _ controlled synthesis of 3d multi - compartmental particles with centrifuge - based microdroplet formation from a multi - barrelled capillary _ , advanced materials , 24 ( 2012 ) , pp.13401346 .we used polystyrene spheres of diameter m. 
a droplet of a solution of theses polystyrene spheres is then dragged on a glass slide by a linear motor at the appropriate speed to obtain a monolayer of particles .we do not need a perfect crystal in our case but it is important that there is no particle on top of each other . using thermal evaporation , one of their hemispheres is then coated by thin layers of chromium and gold with nm and nm .the other hemisphere facing the glass slide remains bare polystyrene so that the particles have two hemispheres with different polarizabilities .after the coating process , the particles are detached from their substrate using mild - sonification and suspended in ion - exchange water .the observation of the particles at high magnification shows that they are almost always chiral ( as shown in the inset of fig .[ scheme_janus ] ) , which is due to a fast slightly slanted evaporation process .note that the amount of janus particles exhibiting rotations at low field amplitude increases when the metal layers is quite thick .for example , particles with nm rarely exhibit rotations . making chiral janus particles thus requires to deposit large enough quantities of metal .a droplet of a suspension of janus particles is then put in between two ito electrodes sandwiching a spacer made of stretched parafilm of about m. using a function generator connected to the ito slides , we apply a vertical ac electric field to the solution such that .to prevent the particles from sticking to the bottom electrode , we apply a surface treatment to the ito glass slides : the slides are exposed to a strong plasma for several minutes and are then immersed in a 5 % solution of pluronic f-127 ( a non - ionic copolymer surfactant ) for more than one hour .they are then washed with water to remove the excess of pluronic . by coating the surface of the electrodes with this surfactant, we significantly reduce the risks of adhesion .however , this problem still remains one of the biggest experimental difficulties . throughout the experiment, particles were imaged using 10x , na=0.3 objective lens mounted on a standard inverted microscope .particles were illuminated with an incoherent light source , and the transmitted light was captured using ccd camera with 512x512 pixels at the frame rate of 100 fps .tracking and feedback was done by a home - built labview program .real - time tracking was initiated manually by feeding the program with the position of the particle of interest .then for subsequent frames captured by camera , the small roi around the target particle was extracted , then thresholded to obtain a binary image .the center of the mass was calculated from this binary image , and the updated particle coordinate was passed down to the next acquisition loop . at the same time , coordinate information was sent to the feedback loop , where it calculated the angle .feedback loop directly communicates with the function generator via usb connection , and updated the appropriate control parameters .the particles move at a constant speed , rotate at the frequency and are subjected to translational and rotational noises respectively . 
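a schematic python version of the acquisition-and-feedback cycle described above is given below; in the experiment this logic lives in a labview program talking to the camera and the function generator, so every device call, the threshold value and the use of frame-to-frame displacement as an orientation estimate are stand-ins rather than the actual implementation:

    import numpy as np

    def centroid(roi, threshold):
        """center of mass of the thresholded (binary) region of interest."""
        ys, xs = np.nonzero(roi > threshold)
        return np.array([xs.mean(), ys.mean()])

    def feedback_loop(camera, generator, target, pos0, run_freq, steer_freq,
                      tol_angle, roi_half=10, threshold=0.5, n_frames=1000):
        """switch the applied ac frequency between a self-propelling ('run')
        value and a rotating ('steer') value, depending on whether the
        estimated orientation points within tol_angle of the target."""
        pos = np.asarray(pos0, dtype=float)
        target = np.asarray(target, dtype=float)
        for _ in range(n_frames):
            frame = camera.grab_frame()                      # assumed interface
            x, y = np.rint(pos).astype(int)
            roi = frame[y - roi_half:y + roi_half, x - roi_half:x + roi_half]
            new_pos = centroid(roi, threshold) + [x - roi_half, y - roi_half]
            heading = new_pos - pos      # orientation proxy: frame-to-frame displacement
            to_target = target - new_pos
            ang = np.arctan2(heading[1], heading[0]) - np.arctan2(to_target[1], to_target[0])
            ang = abs((ang + np.pi) % (2 * np.pi) - np.pi)   # wrap to [0, pi]
            generator.set_frequency(run_freq if ang < tol_angle else steer_freq)
            pos = new_pos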
if we assume that their motion is overdamped, we can thus write the following system of langevin equations for the position $\mathbf{r}(t)$ and the orientation angle $\phi(t)$:

$$\dot{\mathbf{r}}(t) = u_0\,\boldsymbol{\hat{u}}(t) + \sqrt{2 D_T}\,\boldsymbol{\xi}_T(t), \qquad \dot{\phi}(t) = \omega + \sqrt{2 D_R}\,\xi_R(t), \label{lang_sys}$$

where $\boldsymbol{\hat{u}} = (\cos\phi, \sin\phi)$ is the orientation vector and $\boldsymbol{\xi}_T$, $\xi_R$ are independent unit-variance white noises. this model had already been studied to get an analytical expression for the mean-square displacement of l-shaped artificial swimmers, but here we will focus on the autocorrelation function instead. the second equation can easily be integrated to give

$$\phi(t) = \phi(0) + \omega t + \sqrt{2 D_R} \int_0^{t} \xi_R(t')\,\mathrm{d}t' .$$

$\xi_R$ being a white noise, we have according to eq. [[gaussian]] that $\phi(t)$ is a sum of gaussian variables and is therefore a gaussian itself, with mean $\phi(0) + \omega t$ and variance $2 D_R t$. as we just calculated its first and second moments, we can deduce the expression of the probability density and the green function.

[figure [correlations]: time evolution of the auto-correlation function for different frequencies; the symbols are experimental measurements and the lines of the same colors correspond to fits using expression [vcorr], with three fitting parameters.]

the velocity autocorrelation function of a particle at time $t$ is given by

$$\begin{aligned}
\langle \mathbf{v}(t)\cdot\mathbf{v}(t+\tau) \rangle
&= \left\langle \left[ u_0\,\boldsymbol{\hat{u}}(t) + \sqrt{2 D_T}\,\boldsymbol{\xi}_T(t) \right] \cdot \left[ u_0\,\boldsymbol{\hat{u}}(t+\tau) + \sqrt{2 D_T}\,\boldsymbol{\xi}_T(t+\tau) \right] \right\rangle \\
&= 4 D_T\,\delta(\tau) + u_0^2\, \langle \boldsymbol{\hat{u}}(t)\cdot\boldsymbol{\hat{u}}(t+\tau) \rangle \\
&= 4 D_T\,\delta(\tau) + u_0^2 \left\langle \left(\begin{array}{l} \cos\phi(t)\\ \sin\phi(t) \end{array}\right) \cdot \left(\begin{array}{l} \cos\phi(t+\tau)\\ \sin\phi(t+\tau) \end{array}\right) \right\rangle \\
&= 4 D_T\,\delta(\tau) + u_0^2 \left\langle \cos\phi(t)\cos\phi(t+\tau) + \sin\phi(t)\sin\phi(t+\tau) \right\rangle .
\end{aligned} \label{averages}$$

here, $\langle\cdot\rangle$ represents an ensemble average given, for an arbitrary function, by eq. [[ensemble]]. by injecting eqs. [[dens_prob]] and [[green]] into [[ensemble]], we can calculate the two averages of eq. [[averages]] and find, as a final expression for the velocity auto-correlation function,

$$\langle \mathbf{v}(t)\cdot\mathbf{v}(t+\tau) \rangle = 4 D_T\,\delta(\tau) + u_0^2\, \mathrm{e}^{-D_R \tau} \cos(\omega\tau) .$$

strictly speaking, this expression diverges at $\tau = 0$, but the system of langevin eqs. [[lang_sys]] is actually only valid at times greater than the typical collision time of the heat bath. we can thus neglect the first term. if we use the expressions of the average velocity of the particles and of the angular velocity, we recover eq. [[vcorr]]. the agreement with the experimental results is excellent, as can be seen in fig. [correlations].
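the model above is easy to check numerically: an euler-maruyama integration of the two langevin equations gives velocity samples whose time-averaged autocorrelation can be compared with the constant-speed contribution $u_0^2 \mathrm{e}^{-D_R\tau}\cos(\omega\tau)$. the parameter values in the sketch below are arbitrary and the analytic curve is quoted only as the expected functional form:

    import numpy as np

    def simulate_velocity(u0, omega, D_T, D_R, dt, n_steps, seed=0):
        """euler-maruyama samples of v(t) = u0*u_hat(t) + sqrt(2 D_T) xi(t),
        with the angle advanced as dphi = omega*dt + sqrt(2 D_R dt)*N(0,1)."""
        rng = np.random.default_rng(seed)
        phi = np.cumsum(omega * dt + np.sqrt(2 * D_R * dt) * rng.standard_normal(n_steps))
        heading = np.column_stack((np.cos(phi), np.sin(phi)))
        noise = np.sqrt(2 * D_T / dt) * rng.standard_normal((n_steps, 2))
        return u0 * heading + noise

    def velocity_autocorrelation(v, max_lag):
        """time-averaged estimate of <v(t) . v(t + lag*dt)>."""
        n = len(v)
        return np.array([(v[:n - lag] * v[lag:]).sum(axis=1).mean()
                         for lag in range(max_lag)])

    dt = 0.01
    v = simulate_velocity(u0=5.0, omega=2.0, D_T=0.05, D_R=0.2, dt=dt, n_steps=100_000)
    lags = np.arange(600)
    c_num = velocity_autocorrelation(v, 600)
    c_exp = 5.0**2 * np.exp(-0.2 * lags * dt) * np.cos(2.0 * lags * dt)
    # away from lag 0, where the delta-correlated translational noise
    # contributes, c_num should follow c_exp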
even though making artificial micrometric swimmers has been made possible by using various propulsion mechanisms , guiding their motion in the presence of thermal fluctuations still remains a great challenge . such a task is essential in biological systems , which present a number of intriguing solutions that are robust against noisy environmental conditions as well as variability in individual genetic makeup . using synthetic janus particles driven by an electric field , we present a feedback - based particle guiding method , quite analogous to the `` run - and - tumbling '' behavior of _ escherichia coli _ but with a deterministic steering in the tumbling phase : the particle is set to the `` run '' state when its orientation vector aligns with the target , while the transition to the `` steering '' state is triggered when it exceeds a tolerance angle . the active and deterministic reorientation of the particle is achieved by a characteristic rotational motion that can be switched on and off by modulating the ac frequency of the electric field , first reported in this work . relying on numerical simulations and analytical results , we show that this feedback algorithm can be optimized by tuning the tolerance angle . the optimal resetting angle depends on signal to noise ratio in the steering state , and it is demonstrated in the experiment . proposed method is simple and robust for targeting , despite variability in self - propelling speeds and angular velocities of individual particles . the physics of active suspensions made significant progress during the past decades and it is now possible to build artificial microscopic particles able to self - propel in a fluid . the range of possible applications of such swimmers is wide , with fascinating perspectives : targeted drug delivery , bottom - up assembly of very small structures , mixing or automatic pumping in microfluidic devices , design of new microsensors and microactuators in mems or artificial chemotactic systems to name a few . a lot of man - made microscopic swimmers fall into the category of `` janus '' particles which share the same property : an asymmetric structure inducing a breaking of symmetry of the interactions with the surrounding fluid resulting in a self - propelling force . several physical phenomena can be at the origin of this force : local temperature gradients induced by a defocused laser beam ( thermophoresis ) , enzimatic catalysis of chemical reactions by a coated surface or electrostatic interactions between surface charges and the ions of the solution ( induced - charge electrophoresis or icep ) . + if several methods are known to generate self - propelling forces for janus particles , guiding their motion remains a challenge . the biggest difficulty consists in controlling their orientation , a particularly delicate task when working with microscopic objects subjected to thermal fluctuations . swimmers need to resist rotational diffusion by fixing or steering their orientation to reach specified targets or follow given trajectories . experimental works showed that it was possible to lock the orientation of catalytic nanorods made of ferromagnetic materials using magnetic fields . another interesting method involves visualizing the orientation of the particle at every moment and turn on the self - propelling force only when it is directed to the right direction . 
in that approach , the reorientation process is `` passive '' in a sense that the experimentalist waits for rotational diffusion to correct the orientation of the particle . + in this paper , we use janus particles driven by icep and introduce a new method to control their trajectory with an `` active '' reorientation process . this new concept consists in switching between two distinct modes of motion exhibited by the particles : a self - propelling state and a regular rotation state . such rotations had already been observed experimentally with l - shaped self - propelling swimmers moving by thermophoresis but the origin and characteristics of these rotations are very different here . the janus particle under feedback control exhibits a motion quite similar to the `` run - and - tumbling '' behavior observed for the bacteria _ _ escherichia coli__ . however , the reorientation is not random but deterministic , which might be compared to the adaptive steering found in evolved organisms , e.g. phototaxis in _ _ volvox carteri__ . such a `` hybrid '' strategy enables a high efficiency while minimizing the complexity of the implementation . in the first part of this article , we will describe in detail the two different behaviors exhibited by our janus particles . based on these properties , the second part will be devoted to the experimental implementation of the proposed particle guiding method . finally in the third part , we will present numerical simulations and analytical calculations showing how it is possible to optimize the feedback process . [ cols="^,^ " , ] _ volvox _ on the other hand is a large multicellular organism which carries photoreceptors and thousands of flagella ( , , , ) . our theory gives implying that continuous steering is the optimal for phototaxis of _ volvox_. in fact , _ volvox _ coordinates thousands of flagella to make steering motion , and even has an adaptation mechanism . + finally , we consider two limiting cases . first , in the limit of , i.e. if steering accompanies no time cost , we have checked that monotonously decreases in ] so that .
the radial component of the position of a distant object is inferred from its cosmological redshift , induced by the expansion of the universe ; the light observed from a distant galaxy appears to us at longer wavelengths than in the rest frame of that galaxy .the most accurate determination of the exact redshift , , comes from directly observing the spectrum of an extragalactic source and measuring a consistent multiplicative shift , relative to the rest frame , of various emission ( or absorption ) features .the rest - frame wavelengths of these emission lines are known to a high degree of accuracy which can be conferred onto the measured spectroscopic redshifts , .however , the demand on telescope time to obtain spectra for every source in deep , wide surveys is prohibitively high , and only relatively small area spectroscopic campaigns can reach faint magnitudes ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) , or at the other extreme , relatively bright magnitudes over larger areas ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?this forces us towards the use of photometric observations to infer the redshift by other means . rather than individual spectra, the emission from a distant galaxy is observed in several broad filters , facilitating the characterization of the spectral energy distribution ( sed ) of fainter sources , at the expense of fine spectral resolution .photometric redshift methods largely fall into two categories , based on either sed template fitting or machine learning .template fitting software such as hyperz ; , zebra ; , eazy ; and le phare rely on a library of sed templates for a variety of different galaxy types , which ( given the transmission curves for the photometric filters being used ) can be redshifted to fit the photometry .this method can be refined in various ways , often with the use of simulated seds rather than only those observed at low redshift , composite seds , and through calibration using any available spectroscopic redshifts .machine learning methods such as artificial neural networks ( e.g. annz ; * ? ? ?* ; * ? ? ?* ) , nearest - neighbour ( nn ) , genetic algorithms ( e.g. * ? ? ?* ) , self - organized maps and random forest , to name but a few , rely on a significant fraction of sources in a photometric catalogue having spectroscopic redshifts .these ` true ' redshifts are used to train the algorithm .in addition to providing a point estimate , machine learning methods can provide the degree of uncertainty in their prediction .both methods have their strengths and weaknesses , with the best performance often depending on the available data and the intended science goals .as such , future surveys may well depend on contributions from both in tandem , but there has been extensive work on comparing the current state of the art in public software using a variety of techniques .artificial neural networks motivate the most commonly used machine learning software , however gaussian processes ( e.g. * ? ? ?* ) have not yet become well established in this area , despite comparison by suggesting that they may outperform the popular annz code , using the rms error as a metric . 
in this paper, we introduce a novel sparse kernel regression model that greatly reduces the number of basis ( kernel ) functions required to model the data considered in this paper .this is achieved by allowing each kernel to have its own hyper - parameters , governing its shape .this is in contrast to the standard kernel - based models in which a set of global hyper - parameters are optimized ( such as is typical in gaussian process ( gp ) methods ) .the complexity cost of such a kernel - based regression model is , where is the number of basis functions .this cubic time complexity arise from the cost of inverting an by covariance matrix . in a standard gaussian process model ,seen as a kernel regression algorithm , we may regard the basis functions , as located at the points in the training sample .this renders such an approach unusable for many large training data applications where scalability is a major concern .much of the work done to make gps more scalable is either to ( a ) make the inverse computation faster or ( b ) use a smaller representative set from the training sample to reduce the rank and ease the computation of the covariance matrix .examples of ( a ) include methods such as structuring the covariance matrix such that it is much easier to invert , using toeplitz or kronecker decomposition , or inverse approximation as an optimization problem .to reduce the number of representative points ( b ) , an subset of the training sample can be selected which maximizes the accuracy or the numerical stability of the inversion .alternatively , one may search for `` inducing '' points not necessarily present in the training sample , and not necessarily even lying within the data range , to use as the basis set such that the probability of the data being generated from the model is maximized .approaches such as relevance vector machines ( rvm ; * ? ? ?* ) and support vector machines ( svm ; * ? ? ?* ) are basis - function models .however , unlike sparse gps , they do not learn the basis functions locations but rather apply shrinkage to a set of kernels in the form of weight - decay on the linear weights that couple the kernels , located at training data points , to the regression . in this paper, we propose a non - stationary sparse gaussian model to target photometric redshift estimation .the key difference between the proposed approach and other basis function models , is that our model does not use shrinkage ( automatic relevance determination ) external to the kernel , but instead has a length - scale parameter in each kernel .this allows for parts of the input - output regression mapping to have different characteristic length - scales .we can see this as allowing for shrinkage and reducing the need for more basis functions , as well as allowing for non - stationary mappings .a regular gp , sparse gp or rvm does not do this , and we demonstrate that this is advantageous to photometric redshift estimation . furthermore , the model is presented within a framework with components that address other challenges in photometric redshift estimation such as incorporating a weighting scheme as an integral part of the process to remove , or introduce , any systematic bias , and a prior mean function to enhance the extrapolation performance of the model .the results are demonstrated on photometric redshift estimation for a simulated _ euclid_-like survey and on observational data from the 12th data release of the sloan digital sky survey ( sdss ) . 
in particular, we use the weighting scheme to remove any distribution bias and introduce a linear bias to directly target the mission s requirement .the paper is organised as follows , a brief introduction to gaussian processes for regression is presented in section [ sec - gaussian - process ] followed by an introduction to sparse gps in section [ sec - sparse - gaussian - processes ] .the proposed approach is described in section [ sec - proposed - approach ] followed by an application to photometric redshift estimation in section [ sec - application ] , where the details of the mock dataset are described .the experiments and results are discussed in section [ sec - experiments ] on the simulated survey , and in section [ sec - experiments - sdss ] we demonstrate the performance of the proposed model and compare it to annz on the sdss 12th data release .finally , we summarize and conclude in section [ sec - summary ] .in many modelling problems , we have little prior knowledge of the explicit functional form of the function that maps our observables onto the variable of interest .imposing , albeit sensible , parametric models , such as polynomials , makes a tacit bias . for this reason ,much of modern function modelling is performed using _ non - parametric _ techniques . for regression ,the most widely used approach is that of _ gaussian processes _ .a gaussian process is a supervised non - linear regression algorithm that makes few explicit _ parametric _ assumptions about the nature of the function fit . for this reason ,gaussian processes are seen as lying within the class of bayesian non - parametric models .the underlying assumption in a gp is that , given a set of input and a set of target outputs , where is the number of samples in the dataset and is the dimensionality of the input , the observed target is generated by a function of the input plus additive noise : the noise is taken to be normally distributed with a mean of zero and variance , or . to simplify the notation, it is assumed that ( this can readily be achieved without loss of generality , via a linear whitening process ) and univariate , although the derivation can be readily extended to multivariable problems .the conditional probability of the observed variable given the function is hence distributed as follows : a gp then proceeds by applying a _bayesian _ treatment to the problem to infer a probability distribution over the space of possible functions given the data : this requires us to define a prior , , over the function space . the function is normally distributed with a mean of zero , to match the mean of the normalized variable , with a covariance _ function _ , i.e. . the covariance function captures prior knowledge about the relationships between the observables .most widely used covariance functions assume that there is local similarity in the data , such that nearby inputs are mapped to similar outputs .the covariance can therefore be modelled as a function of the input , , where each element and is the covariance function . for to be a valid covariance it has to be symmetric and positive semi - definite matrix ; arbitrary functions for can not guarantee these constraints . 
a class of functions that guarantees these structural constraints are referred to as _ mercer kernelsa commonly used kernel function , which is the focus of this work , is the squared exponential kernel , defined as follows : where and are referred to as the height ( output , or variance ) and characteristic length ( input ) scale respectively , which correspond to tunable _ hyper - parameters _ of the model .the similarity between two input vectors , under the squared exponential kernel , is a non - linear function of the euclidean distance between them .we note that this choice of kernel function guarantees continuity and smoothness in the function and all its derivatives . for a more extensive discussion of covariances ,the reader is referred to or . with the likelihood and prior ,the marginal likelihood can be computed as follows : by multiplying the likelihood and the prior and completing the square over , we can express the integration as a normal distribution independent of multiplied by a another normal distribution over .the distribution independent of can then be taken out of the integral , and the integration of the second normal distribution with respect to will be equal to one .the resulting distribution of the marginal likelihood is distributed as follows : the marginal likelihood of the full data set can hence be computed as follows : the aim of a gp , is to maximize the probability of observing the target given the input , eq . .note that the only free parameters to optimize in the marginal likelihood are the parameters of the kernel and the noise variance , collectively referred to as the _ hyper - parameters _ of the model .it is more convenient however to maximize the log of the marginal likelihood , eq ., since the log function is a monotonically increasing function , maximizing the log of a function is equivalent to maximizing the original function .the log likelihood is given as : we search for the optimal set of hyper - parameters using a gradient - based optimization , hence we require the derivatives of the log marginal likelihood with respect to each hyper - parameter . 
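a compact numpy/scipy sketch of these quantities, with the squared-exponential kernel, the hyper-parameters kept positive by optimizing in log space, and numerical gradients used for brevity (the analytic derivatives discussed above would normally be supplied), is:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.spatial.distance import cdist

    def sq_exp_kernel(X1, X2, height, length):
        """k(x, x') = height^2 * exp(-|x - x'|^2 / (2 length^2))."""
        return height**2 * np.exp(-0.5 * cdist(X1, X2, "sqeuclidean") / length**2)

    def neg_log_marginal_likelihood(log_params, X, y):
        """-log p(y | X, theta) for a zero-mean gp with gaussian noise."""
        height, length, noise = np.exp(log_params)
        K = sq_exp_kernel(X, X, height, length) + noise**2 * np.eye(len(X))
        L = np.linalg.cholesky(K)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        return 0.5 * y @ alpha + np.log(np.diag(L)).sum() + 0.5 * len(X) * np.log(2 * np.pi)

    # toy data; gradients are approximated numerically here for brevity
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(80, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(80)
    res = minimize(neg_log_marginal_likelihood, x0=np.zeros(3), args=(X, y),
                   method="L-BFGS-B")
    height, length, noise = np.exp(res.x)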
in this paper ,the l - bfgs algorithm was used to optimize the objective which uses a quasi - newton method to compute the search direction in each step by approximating the inverse of the hessian matrix from the history of gradients in previous steps .it is worth mentioning that non - parametric models require the optimization of few hyper - parameters that do not grow with the size of the data and are less prone to overfitting .the distinction between parameters and hyper - parameters of a model is that the former directly influence the input - output mapping , for example the linear coupling weights in a basis function model , whereas the latter affect properties of distributions in the probabilistic model , for example the widths of kernels .although this distinction is somewhat semantic , we keep to this nomenclature as it is standard in the statistical machine learning literature .once the hyper - parameters have been inferred , the conditional distribution of future predictions for test cases given the training sample can be inferred from the joint distribution of and the observed targets .if we assume that the joint distribution is a multivariate gaussian , then the joint probability is distributed as follows : where we introduce the shorthand notations , , and .the conditional distribution is therefore distributed as follows : if we assume a non - zero prior mean over the function , , and an un - normalized with mean , the mean of the posterior distribution will be equal to : the main drawback of gps is the computational cost required to invert the matrix .the _ sparse gaussian process _ allows us to reduce this computational cost and is detailed in the following section .gaussian processes are often described as non - parametric regression models due to the lack of an explicit parametric form .indeed gp regression can also be viewed as a functional mapping parameterized by the data and the kernel function , followed by linear regression via optimization of the following objective : where are the set of coefficients for the linear regression model that maps the transformed features to the desired output .the feature transformation evaluate how `` similar '' a datum is to every point in the training sample , where the similarity measure is defined by the kernel function . if two points have a high kernel response via eq ., this will result in very correlated features , adding extra computational cost for very little or no added information . selecting a subset of the training sample that maximizesthe preserved information is a research question addressed in , whereas in the basis functions are treated as a search problem rather than a selection problem and their locations are treated as hyper - parameters which are optimized .these approaches result in a transformation , in which is the number of basis functions used .the transformation matrix will therefore be a rectangular by matrix and the solution for in eq .is calculated via standard linear algebra as : even though these models improve upon the computational cost of a standard gp , very little is done to compensate for the reduction in modelling power caused by the `` loss '' of basis functions .the selection method is always bounded by the full gp s accuracy , on the _ training _ sample , since the basis set is a subset of the full gp basis set . 
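the basis-function view just described fits in a few lines of numpy: a rectangular design matrix is built from m basis centres (here simply a random subset of the training inputs) and the weights follow from the regularized least-squares solution above. the ridge value and the toy data are assumptions for illustration:

    import numpy as np
    from scipy.spatial.distance import cdist

    def design_matrix(X, centres, length):
        """phi[i, j] = exp(-|x_i - c_j|^2 / (2 length^2)); one global length scale."""
        return np.exp(-0.5 * cdist(X, centres, "sqeuclidean") / length**2)

    def fit_fixed_basis(X, y, centres, length, ridge=1e-3):
        """w = (phi^T phi + ridge I)^(-1) phi^T y: an m x m solve, not n x n."""
        phi = design_matrix(X, centres, length)
        A = phi.T @ phi + ridge * np.eye(phi.shape[1])
        return np.linalg.solve(A, phi.T @ y)

    # usage: m = 50 basis functions taken as a random subset of the inputs
    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(2000, 1))
    y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(len(X))
    centres = X[rng.choice(len(X), size=50, replace=False)]
    w = fit_fixed_basis(X, y, centres, length=0.5)
    y_hat = design_matrix(X, centres, 0.5) @ w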
on the other hand ,the sparse gp s ability to place the basis set freely across the input space does go some way to compensate for this reduction , as the kernels can be optimized to describe the distribution of the data . in other words , instead of training a gp model with all data points as basis functions , or restricting it to a subset of the training sample which require some cost to select them , a set of inducing points is used in which their locations are treated as hyper - parameters of the model to be optimized . in both a full and a low rank approximation gp , a global set of hyper - parameters is used for all basis functions , therefore limiting the algorithm s local modelling capability . moreover, the objective in eq . minimizes the sum of squared errors , therefore for any non - uniformly distributed output , the optimization routine will bias the model towards the mean of the output distribution and will seek to fit preferentially the region of space where there are more data .hence , the model might allow for very poor predictions for few points in poorly represented regions , e.g. the high redshift range , in order to produce good predictions for well represented regions .therefore , the error distribution as a function of redshift is not uniform unless the training sample is well balanced , producing a model that is sensitive to how the target output is distributed . in the next section ,a method is proposed which addresses the above issues by parametrizing each basis function with bespoke hyper - parameters which account for variable density and/or patterns across the input space .this is particularly pertinent to determining photometric redshifts , where complete spectroscopic information may be restricted or biased to certain redshifts or galaxy types , depending on the target selection for spectroscopy of the training sample .this allows the algorithm to learn more complex models with fewer basis functions .in addition , a weighting mechanism to remove any distribution bias from the model is directly incorporated into the objective .in this paper , we extend the sparse gp approach by modelling each basis ( kernel ) function with its own set of hyper - parameters . the kernel function in eq .is hence redefined as follows : where are the set of basis coordinates and is the corresponding length scale for basis . the multivariate input is denoted as . throughout the rest of the paper , denotes the -th row of matrix , or for short , whereas denotes the -th column and refers to the element at row and column in matrix , and similarly for other matrices .note that the hyper - parameter has been dropped , as it interferes with the regularization objective .this can be seen from the final prediction equation , the weights are always multiplied by their associated . therefore , the optimization process will always compensate for decreasing by increasing .dropping the height variance ensures that the kernel functions do not grow beyond control and delegates learning the linear coefficients and regularization to the weights in .the derivatives with respect to each length scale and position are provided in equations eq . andrespectively : the symbol denotes the hadamard product , i.e. element - wise matrix multiplication and denotes a column vector of length with all elements set to 1 . 
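to make the redefined kernel concrete, the sketch below builds the design matrix with one length scale per basis function and solves for the weights; in the full method the centres and length scales would be packed into a single parameter vector and optimized (e.g. with l-bfgs) using the derivatives referred to above, which are omitted here, and the values shown are only initial guesses:

    import numpy as np

    def design_matrix_vl(X, centres, lengths):
        """phi[i, j] = exp(-|x_i - c_j|^2 / (2 lengths[j]^2));
        unlike the previous sketch, every basis function has its own scale."""
        sq = ((X[:, None, :] - centres[None, :, :])**2).sum(axis=2)
        return np.exp(-0.5 * sq / lengths[None, :]**2)

    def fit_vl(X, y, centres, lengths, ridge=1e-3):
        phi = design_matrix_vl(X, centres, lengths)
        A = phi.T @ phi + ridge * np.eye(phi.shape[1])
        return np.linalg.solve(A, phi.T @ y)

    # fixed initial centres and per-basis length scales, for illustration only
    rng = np.random.default_rng(2)
    X = rng.uniform(-3, 3, size=(500, 2))
    y = np.sin(X[:, 0]) * np.cos(X[:, 1])
    centres = X[rng.choice(len(X), size=20, replace=False)]
    lengths = np.full(20, 0.8)
    w = fit_vl(X, y, centres, lengths)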
finding the set of hyper - parameters that optimizes the solution , is in effect finding the set of radial basis functions defined by their positions and radii that jointly describe the patterns across the input space . by parametrizing them differently ,the model is more capable to accommodate different regions of the space more specifically .a global variance model assumes that the relationship between the input and output is global or equal across the input space , whereas a variable variance model , or non - stationary gp , makes no assumptions and learns the variable variances for each basis function which reduces the need for more basis functions to model the data .the kernel in eq . can be further extended to , not only model each basis function with its own radius , but also model each one with its own covariance .this enables the basis function to have any arbitrary shaped ellipses giving it more flexibility .the kernel in eq . can be extended as follows : furthermore , to make the optimization process faster and simpler , we define the additional variables : where is a local affine transformation matrix for basis function and is the application of the local transformation to the data . optimizing with respect to directly ensures that the covariance matrix is positive definite .this makes it faster from a computational perspective as the kernel functions for all the points with respect to a particular basis can be computed more efficiently as follows : the exponent in eq .basically computes the sum of squares in each row of .this allows for a more efficient computation of the kernel functions for all the points in a single matrix operation .the derivatives with respect to each and are shown in eq . andeq . respectively . setting up the problem in this manner allows the setting of matrix to be of any size by , where which can be considered as a low rank approximation to without affecting the gradient calculations .in addition , the inverse of the covariance can be set to in the low rank approximation case to ensure that the final covariance can model a diagonal covariance .this is referred to as _ factor analysis distance _ but previously used to model a global covariance as opposed to variable covariances as is the case here . in the absence of observations , all bayesian models ,gaussian processes included , rely on their priors to provide function estimation .for the case of gaussian processes this requires us to consider the prior over the function , especially the prior mean .for example , the first term in the mean prediction in eq . , , is our prior mean in which we learn the deviation from using a gp .similarly , we may consider a mean _ function _ that is itself a simple linear regression from the independent to dependent variable .the parameters of this function are then inferred and the gp infers non - linear deviations . in the absence of data ,e.g. 
in extrapolative regions , the gp will fall back to the linear regression prediction .we can incorporate this directly into the optimization objective instead of having it as a separate preprocessing step by redefining as a concatenation of the linear and non - linear features , or setting ] , where is the linear regression s coefficients and is the bias .the prediction can then be formulated as follows : furthermore , the regularization matrix , , in eq .can be modified so that it penalises the learning of high coefficients for the non - linear terms , , but small or no cost for learning high linear terms , and , by setting the corresponding elements in the diagonal of to 0 instead of , or the last elements .therefore , as goes to infinity , the model will approach a simple linear regression model instead of fallen back to zero . thus far in the discussion , we make the tacit assumption that the objective of the inference process is to minimize the sum of squared errors between the model and target function values .although this is a suitable objective for many applications , it is intrinsically biased by uneven distributions of training data , sacrificing accuracy in less represented regions of the space . ideally we would like to train a model with a balanced data distribution to avoid such bias .this however , is a luxury that we often do not have .for example , the lack of strong emission lines that are detectable with visible - wavelength spectrographs in the `` redshift - desert '' at means that this redshift range is often under - represented in spectroscopic samples .a common technique is to either over - sample or under - sample the data to achieve balance . in under - sampling, samples are removed from highly represented regions to achieve balance , over - sampling on the other hand duplicates under represented samples .both approaches come with a cost ; in the former good data are wasted and in the latter more computation is introduced due to the data size increase . in this paper , we perform cost - sensitive learning , which increases the intrinsic error function in under - represented regions . in regression tasks ,such as we consider here , the output can be either discretized and treated as classes for the purpose of cost assignment , or a specific bias is used such as . to mimic a balanced data set in our setup, the galaxies were grouped by their spectroscopic redshift using non - overlapping bins of width 0.1 .the weights are then assigned as follows for balanced training : where is the error cost for sample , is the frequency of samples in bin number number , is the number of bins and is the set of samples in set number .eq . assigns a weight to each training point which is the maximum bin frequency over the frequency of the bin in which the source belongs .this ensures that the error cost of source is inversely proportional to its spectroscopic redshift frequency in the training sample .the normalized weights are assigned as follows : after the weights have been assigned , they can be incorporated directly into the objective as follows : the difference between the objectives in eq . andis the introduction of the diagonal matrix , where each element is the corresponding cost for sample . the first term in eq .is a matrix operation form for a weighted sum of squares , where the solution can be found analytically as follows : the only modification to the gradient calculation is to set the matrix . 
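a minimal sketch combining the two ingredients just described: the feature matrix is augmented with the linear terms and a bias column so that the model reverts to linear regression far from the basis functions, the regularization leaves those extra columns unpenalized, and the per-sample costs enter the solve as a diagonal matrix. the redshift-bin weights follow the balancing rule above; all names here are illustrative assumptions:

    import numpy as np

    def balance_weights(z, bin_width=0.1):
        """per-sample cost: (largest bin count) / (count of the sample's bin),
        then normalized; bins are non-overlapping redshift bins of bin_width."""
        bins = np.floor(z / bin_width).astype(int)
        bins -= bins.min()
        counts = np.bincount(bins)
        w = counts.max() / counts[bins]
        return w / w.sum()

    def fit_weighted(phi_kernel, X, y, costs, ridge=1e-3):
        """weighted, regularized solve with linear-plus-bias prior features:
        phi = [phi_kernel, X, 1]; only the kernel columns are penalized."""
        n, m = phi_kernel.shape
        phi = np.hstack([phi_kernel, X, np.ones((n, 1))])
        penalty = np.zeros(phi.shape[1])
        penalty[:m] = ridge
        C = np.diag(costs)
        A = phi.T @ C @ phi + np.diag(penalty)
        return np.linalg.solve(A, phi.T @ C @ y), phi

    # usage with the per-basis design matrix from the previous sketch:
    # costs = balance_weights(z_spec)
    # w, phi = fit_weighted(design_matrix_vl(X, centres, lengths), X, z_spec, costs)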
in standard sum of squared errors , or the identity matrix .it is worth emphasising that this component of the framework does not attempt to weight the training sample in order to match the distribution of the test sample , or matching the spectroscopic distribution to the photometric distribution as proposed in , and applied to photometric redshift in , but rather gives the user of the framework the ability to control the cost per sample to serve different science goals depending on the application . in this paper ,the weighting scheme was used for two different purposes , the first was to virtually balance the data to mimic training on a uniform distribution , and the second was to directly target the weighted error of .in this section , we specifically target the photometric bands and depths planned for _ euclid_. _ euclid _ aims to provide imaging data in a broad band and the more standard near - infrared , and bands , while ground - based ancillary data are expected in the optical , , and bands .we use a mock dataset from , consisting of the , , , , , , and magnitudes ( to 10 depths of 24.6 , 24.2 , 24.4 , 23.8 , 25.0 for the former , and 5 depth of 24.0 for each of the latter three near - infrared filters ) for 185,253 simulated sources .we remove all sources with to simulate the target magnitudes set for _ euclid_. in addition , we remove any sources with missing measurements in any of their bands prior to training ( only 15 sources ) .no additional limits on any of the bands were used , however in section [ subsec - prior - mean ] we do explicitly impose limits on the riz band to test the extrapolation performance of the models .the distribution of the spectroscopic redshift is provided in figure [ fig - zspec - histogram ] .for all experiments on the simulated data , we ignore the uncertainties on the photometry in each band and train only on the magnitudes since in the simulated data set , unlike real datasets , the log of the associated errors are linearly correlated with the magnitudes , especially in the targeted range , therefore adding no information .however , they were fed as input to annz to satisfy the input format of the code .1 1 1 we preprocess the data using principle component analysis ( pca ; * ? ? ?* ) to de - correlate the features prior to learning , but retain all features with no dimensionality reduction .de - correlation accelerates the convergence rate of the optimization routine especially when using a logistic - type kernel machines such as neural networks . to understand this ,consider a simple linear regression example where we would like to solve for in , the solution for this is .note that if is de - correlated , therefore learning depends only on the -th column of and it is independent from learning , where . in an optimization approach , the convergence rate is a function of the condition number of the matrix , which is minimized in the case of de - correlated data .this represents a quadratic error surface which helps accelerate the search .this is particularly important in the application addressed in this paper because the magnitudes measured in each filter are strongly correlated with each other .five algorithms are considered to model the data ; artificial neural networks ( annz ; * ? ? ?* ) , a gp with low rank approximation ( stablegp ; * ? ? ?* ) , a sparse gp with global length scale ( gp - gl ) , a gp with variable length scale ( gp - vl ) and a gp with variable covariances ( gp - vc ) . 
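Before turning to the individual models, here is a compact sketch of two pieces of the pipeline described above: the PCA de-correlation applied to the magnitudes prior to training, and the per-basis-function covariance kernel that distinguishes gp-vc, written through a local transformation matrix so that positive semi-definiteness comes for free. Shapes, names and the 1/2 factor in the exponent are assumptions of this sketch.

```python
import numpy as np

def decorrelate(X):
    # Rotate the features onto the principal axes of their covariance; all
    # components are kept (no dimensionality reduction), only de-correlation.
    Xc = X - X.mean(axis=0)
    _, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    return Xc @ eigvecs, eigvecs

def gpvc_features(X, centers, L_list):
    # One column per basis function.  Basis i has centre p_i and a local
    # transformation L_i (d x k, k <= d); its precision matrix L_i L_i^T is
    # positive semi-definite by construction, so no explicit constraint is needed.
    n, m = X.shape[0], len(centers)
    Phi = np.empty((n, m))
    for i, (p_i, L_i) in enumerate(zip(centers, L_list)):
        V = (X - p_i) @ L_i                                  # local affine transformation
        Phi[:, i] = np.exp(-0.5 * np.sum(V * V, axis=1))     # row-wise sum of squares
    return Phi
```

Because the exponent only needs the row-wise sums of squares of (X - p_i) L_i, all kernel values for one basis function come out of a single matrix product, which is the efficiency argument made above; a low-rank choice of k < d gives the factor-analysis-style approximation.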
for annz , a single layer network is used , and to satisfy the input format for the code , the data were not de - correlated and the uncertainties on photometry for each band were used as part of the training input . for stablegp , we use the sr - vp method proposed in . in subsequent tests, the variable refers to the number of hidden units in annz , the rank in stablegp , and the number of basis functions in gp - gl , gp - vl and gp - vc .the time complexities for each algorithm are shown in table [ table - time - complexity ] .the data were split at random into 80 per cent for training , 10 per cent for validation and 10 per cent for testing .we note that we investigate the accuracy for various training sample sizes in section [ sec - sizetraining ] .all models were trained using the entire redshift range available , but we only report the performance on the redshift range of to target the parameter space set out in .we train each model for 500 iterations in each run and the validation sample was used for model selection and parameter tuning , but all the results here are reported on the test sample , which is not used in any way during the training process .table [ table - metrics ] shows the metrics used to report the performance of each algorithm .+ annz ( -layers ) & + stablegp & + gp - gl & + gp - vl & + gp - vc & + [ table - time - complexity ] [ cols="<,<,<",options="header " , ] [ table - final - results ] 0.35 0.35 0.35 0.35 0.35 0.35 0.35 0.35in this paper a sparse gaussian process framework is presented and applied to photometric redshift estimation .the framework is able to out perform annz , sparse gp parametrized by a set of global hyper - parameters and low rank approximation gp .the performance increase is attributed to the handling of distribution bias via a weighting scheme integrated as part of the optimization objective , parametrizing each basis function with bespoke covariances , and integrating the learning of the prior mean function to enhance the extrapolation performance of the model .the methods were applied to a simulated dataset and sdss dr12 where the proposed approach consistently outperforms the other models on the important metrics ( and ) .we find that the model scales linearly in time with respect to the size of the data , and has a better generalization performance compared to the other methods even when presented with a limited training set .results show that with only 30 per cent of the data , the model was able to reach accuracy close to that of using the full training sample . even when data were selectively removed based on magnitudes , the model was able to show the best recovery performance compared to the other models .the cost - sensitive learning component of the framework regularizes the predictions to limit the effect caused by the biased distribution of the output and allows for direct optimization of the survey objective ( e.g. ) .again , the algorithm consistently outperforms other approaches , including annz and stablegp , in all reported experiments .we also investigate how the size of the training sample and the basis set affects the accuracy of the photometric redshift prediction .we show that for the simulated set of galaxies , based on the work of , we are able to obtain a photometric redshift accuracy of and using 1600 basis functions which is a factor of seven improvement over the standard annz implementation . 
we find that gp - vc outperformed annz on the real data from sdss - dr12 , with an improvement in accuracy of per cent , even when restricted to have the same number of free parameters . in future work we will test the algorithm on a range of real data , and pursue investigations of how the algorithm performs over different redshift regimes and for different galaxy types . iaa acknowledges the support of king abdulaziz city for science and technology . mjj and snl acknowledge support from the uk space agency . the authors would like to thank the reviewers for their valuable comments .
accurate photometric redshifts are a lynchpin for many future experiments to pin down the cosmological model and for studies of galaxy evolution . in this study , a novel sparse regression framework for photometric redshift estimation is presented . a synthetic dataset simulating the _ euclid _ survey and real data from sdss dr12 are used to train and test the proposed models . we show that approaches which include careful data preparation and model design offer a significant improvement in comparison with several competing machine learning algorithms . standard implementations of most regression algorithms use the minimization of the sum of squared errors as the objective function . for redshift inference , this induces a bias in the posterior mean of the output distribution , which can be problematic . in this paper we directly minimize the target metric and address the bias problem via a distribution - based weighting scheme , incorporated as part of the optimization objective . the results are compared with other machine learning algorithms in the field such as artificial neural networks ( ann ) , gaussian processes ( gps ) and sparse gps . the proposed framework reaches a mean absolute , over the redshift range of on the simulated data , and over the entire redshift range on the sdss dr12 survey , outperforming the standard annz used in the literature . we also investigate how the relative size of the training sample affects the photometric redshift accuracy . we find that a training sample of > 30 per cent of the total sample size provides little additional constraint on the photometric redshifts , and note that our gp formalism strongly outperforms annz in the sparse data regime for the simulated data set . [ firstpage ] methods : data analysis galaxies : distances and redshifts
in order to introduce the problem we analyze in the paper , let us start with some motivating examples . in high frequency trading, an automatic agent decides the next action to be performed as sending or canceling a buy / sell order , on the basis of some market variables as well as private variables ( e.g. , stock price , traded volume , volatility , order books distributions as well as complex relations among these variables ) .for instance in the trading strategy is learned in the form of a discrete function , described as a table , that has to be evaluated whenever a new scenario is faced and an action ( sell / buy ) has to be taken .the rows of the table represent the possible scenarios of the market and the columns represent the variables taken into account by the agent to distinguish among the different scenarios . for each scenario, there is an associated action .every time an action need to be taken , the agent can identify the scenario by computing the value of each single variable and proceed with the associated action .however , recomputing all the variable every time might be very expensive . by taking into account the structure of the function / table together with information on the probability distribution on the scenarios of the market and also the fact that some variables are more expensive ( or time consuming ) to calculate than others , the algorithm could limit itself to recalculate only some variables whose values determine the action to be taken .such an approach can significantly speed up the evaluation of the function .since market conditions change on a millisecond basis , being able to react very quickly to a new scenario is the key to a profitable strategy . in a classical bayesian active learning problem ,the task is to select the right hypothesis from a possibly very large set each is a mapping from a set called the query / test space to the set ( of labels ) it is assumed that the functions in are unique , i.e. , for each pair of them there is at least one point in where they differ .there is one function which provides the correct labeling of the space and the task is to identify it through queries / tests .a query / test coincides with an element and the result is the value each test has an associated cost that must be paid in order to acquire the response since the process of labeling an example may be expensive either in terms of time or money ( e.g. annotating a document ) . the goal is to identify the correct hypothesis spending as little as possible .for instance , in automatic diagnosis , represents the set of possible diagnoses and the set of symptoms or medical tests , with being the exact diagnosis that has to be achieved by reducing the cost of the examinations . in , a more general variant of the problemwas considered where rather than the diagnosis it is important to identify the therapy ( e.g. , for cases of poisoning it is important to quickly understand which antidote to administer rather than identifying the exact poisoning ) .this problem can be modeled by defining a partition on with each class of representing the subset of diagnoses which requires the same therapy .the problem is then how to identify the class of the exact rather than itself .this model has also been studied by golovin et al . to tackle the problem of erroneous tests responses in bayesian active learning .the above examples can all be cast into the following general problem . 
+ * the discrete function evaluation problem * ( dfep ) .an instance of the problem is defined by a quintuple where is a set of objects , is a partition of into classes , is a set of tests , is a probability distribution on and is a cost function assigning to each test a cost a test , when applied to an object , incurs a cost and outputs a number in the set .it is assumed that the set of tests is complete , in the sense that for any distinct there exists a test such that the goal is to define a testing procedure which uses tests from and minimizes the testing cost ( in expectation and/or in the worst case ) for identifying the class of an unknown object chosen according to the distribution the dfep can be rephrased in terms of minimizing the cost of evaluating a discrete function that maps points ( corresponding to objects ) from some finite subset of into values ( corresponding to classes ) , where an object corresponds to the point obtained by applying each test of to .this perspective motivates the name we chose for the problem .however , for the sake of uniformity with more recent work we employ the definition of the problem in terms of objects / tests / classes . +* decision tree optimization . * any testing procedure can be represented by a _ decision tree _ , which is a tree where every internal node is associated with a test and every leaf is associated with a set of objects that belong to the same class .more formally , a decision tree for is a leaf associated with class if every object of belongs to the same class .otherwise , the root of is associated with some test and the children of are decision trees for the sets , where , for , is the subset of that outputs for test . given a decision tree , rooted at , we can identify the class of an unknown object by following a path from to a leaf as follows : first , we ask for the result of the test associated with when performed on ; then , we follow the branch of associated with the result of the test to reach a child of ; next , we apply the same steps recursively for the decision tree rooted at .the procedure ends when a leaf is reached , which determines the class of .we define as the sum of the tests cost on the root - to - leaf path from the root of to the leaf associated with object .then , the _ worst testing cost _ and the _ expected testing cost _ of are , respectively , defined as figure [ fig : decisiontree0 ] shows an instance of the dfep and a decision tree for it .the tree has worst testing cost and expected testing cost . , and .letters and numbers in the leaves indicate , respectively , classes and objects . , title="fig:",width=415 ] [ fig : decisiontree0 ] -0.3 in * our results . 
*our main result is an algorithm that builds a decision tree whose expected testing cost and worst testing cost are at most times the minimum possible expected testing cost and the minimum possible worst testing cost , respectively .in other words , the decision tree built by our algorithm achieves simultaneously the best possible approximation achievable with respect to both the expected testing cost and the worst testing cost .in fact , for the special case where each object defines a distinct class known as the _ identification problem_ both the minimization of the expected testing cost and the minimization of the worst testing cost do not admit a sub - logarithmic approximation unless as shown in and in , respectively .in addition , in section [ sec : inapprox ] , we show that the same inapproximability results holds in general for the case of exactly classes for any it should be noted that in general there are instances for which the decision tree that minimizes the expected testing cost has worst testing cost much larger than that achieved by the decision tree with minimum worst testing cost . also there are instances where the converse happens .therefore , it is reasonable to ask whether it is possible to construct decision trees that are efficient with respect to both performance criteria .this might be important in practical applications where only an estimate of the probability distribution is available which is not very accurate .also , in medical applications like the one depicted in , very high cost ( or equivalently significantly time consuming therapy identification ) might have disastrous / deadly consequences . in such cases , besides being able to minimize the expected testing cost , it is important to guarantee that the worst testing cost also is not large ( compared with the optimal worst testing cost ) . with respect to the minimization of the expected testing cost , our result improves upon the previous approximation shown in and , where is the minimum positive probability among the objects in . from the result in these papersan approximation could be attained only for the particular case of uniform costs via a technique used in . from a high - level perspective, our method closely follows the one used by gupta _ for obtaining the approximation for the expected testing cost in the identification problem .both constructions of the decision tree consist of building a path ( backbone ) that splits the input instance into smaller ones , for which decision trees are recursively constructed and attached as children of the nodes in the path . a closer look , however , reveals that our algorithm is much simpler than the one presented in .first , it is more transparently linked to the structure of the problem , which remained somehow hidden in where the result was obtained via an involved mapping from adaptive tsp .second , our algorithm avoids expensive computational steps as the sviridenko procedure and some non - intuitive / redundant steps that are used to select the tests for the backbone of the tree .in fact , we believe that providing an algorithm that is much simpler to implement and an alternative proof of the result in is an additional contribution of this paper .* state of the art . 
*the dfep has been recently studied under the names of class equivalence problem and group identification problem and long before it had been described in the excellent survey by moret .both and give approximation algorithms for the version of the dfep where the expected testing cost has to be minimized and both the probabilities and the testing costs are non - uniform .in addition , when the testing costs are uniform both algorithms can be converted into a approximation algorithm via kosaraju approach .the algorithm in is more general because it addresses multiway tests rather than binary ones .for the minimization of the worst testing cost , moshkov has studied the problem in the general case of multiway tests and non - uniform costs and provided an -approximation in . in the same paperit is also proved that no -approximation algorithm is possible under standard the complexity assumption the minimization of the worst testing cost is also investigated in under the framework of covering and learning . the particular case of the dfep where each object belongs to a different class known as the _ identification problem_has been more extensively investigated . both the minimization of the worst and the expected testing cost do not admit a sublogarithmic approximation unless as proved by and . for the expected testing cost , in the variant with multiway tests , non uniform probabilities and non uniform testing costs , an approximation is given by guillory and blimes in . improved this result to employing new techniques not relying on the generalized binary search ( gbs)the basis of all the previous strategies .an approximation algorithm for the minimization of the worst testing cost for the identification problem has been given by arkin et . for binary tests and uniform cost and by hanneke for case with mutiway tests and non - uniform testing costs . in the case of boolean functions, the dfep is also known as stochastic boolean function evaluation ( sbfe ) , where the distribution over the possible assignments is a product distribution defined by assuming that variable has a given probability of being one independently of the value of the other variables .another difference with respect to the dfep as it is presented here , is that in stochastic boolean function evaluation the common assumption is that the complete set of associations between the assignments of the variables and the value of the function is provided , directly or via a representation of the function , e.g. , in terms of its dnf or cnf .the present definition of dfep considers the more general problem where only a sample of the boolean function is given and from this we want to construct a decision tree with minimum expected costs and that exactly fits the sample . results on the exact solution of the sbfe for different classes of boolean functions can be found in the survey paper . in a recent paper deshpande et al . , provide a -approximation algorithm for evaluating boolean linear threshold formulas and an approximation algorithm for the evaluation of cdnf formulas , where and is the number of clauses of the input cnf and is the number of terms of the input dnf .the same result had been previously obtained by kaplan et al . 
for the case of monotone formulas and uniform distribution ( in a slightly different setting ) .both algorithms of are based on reducing the problem to stochastic submodular set cover introduced by golovin and krause and providing a new algorithm for this latter problem .other special cases of the dfep like the evaluation of and/or trees ( a.k.a .read - once formulas ) and the evaluation of game trees ( a central task in the design of game procedures ) are discussed in . in ,charikar _ et al ._ considered discrete function evaluation from the perspective of competitive analysis ; results in this alternative setting are also given in .given an instance of the dfep , we will denote by ( ) the expected testing cost ( worst testing cost ) of a decision tree with minimum possible expected testing cost ( worst testing cost ) over the instance when the instance is clear from the context , we will also use the notation ( ) for the above quantity , referring only to the set of objects involved .we use to denote the smallest non - zero probability among the objects in .let be an instance of dfep and let be a subset of .in addition , let and be , respectively , the restrictions of and to the set .our first observation is that every decision tree for is also a decision tree for the instance .the following proposition immediately follows .[ prop : subadditivity ] let be an instance of the dfep and let be a subset of . then , and where is the restriction of to .one of the measures of progress of our strategy is expressed in terms of the number of pairs of objects belonging to different classes which are present in the set of objects satisfying the tests already performed .the following definition formalizes this concept of pairs for a given set of objects .let be an instance of the dfep and we say that two objects constitute a pair of if they both belong to but come from different classes .we denote by the number of pairs of in formulae , we have where for and denotes the number of objects in belonging to class as an example , for the set of objects in figure [ fig : decisiontree0 ] we have and the following set of pairs we will use to denote the initially unknown object whose class we want to identify .let be a sequence of tests applied to identify the class of ( it corresponds to a path in the decision tree ) and let be the set of objects that agree with the outcomes of all tests in .if , then all objects in belong to the same class , which must coincide with the class of the selected object .hence , indicates the identification of the class of the object notice that might still be unknown when the condition is reached . for each test and for each , let be the set of objects for which the outcome of test is for a test the outcome resulting in the largest number of pairs is of special interest for our strategy .we denote with the set among such that ( ties are broken arbitrarily ) .we denote with the set of objects not included in i.e. , we define . whenever is clear from the context we use instead of .given a set of objects , each test produces a tripartition of the pairs in : the ones with both objects in those with both objects in and those with one object in and one object in we say that the pairs in are _ kept _ by and the pairs with one object from and one object from are _ separated _ by we also say that a pair is _ covered _ by the test if it is either kept or separated by analogously , we say that a test covers an object if . 
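To make the preliminaries concrete, the following is a small sketch of how the pair counts, the covering of pairs by a test, and the two testing costs of a decision tree can be evaluated. The tree representation and all names are illustrative assumptions: `outcomes[s][t]` is taken to give the result of test t on object s, `costs[t]` its cost, `prob[s]` the probability of s, and `cls[s]` its class.

```python
from collections import Counter
from dataclasses import dataclass, field
from typing import Dict, Optional

def num_pairs(objects, cls):
    # Pairs of objects that belong to different classes: (|S|^2 - sum_i |S_i|^2) / 2.
    counts = Counter(cls[s] for s in objects)
    n = len(objects)
    return (n * n - sum(c * c for c in counts.values())) // 2

def pairs_covered(objects, cls, outcome_of):
    # Pairs kept or separated by a test, i.e. all pairs except those with both
    # objects inside the outcome set holding the largest number of pairs
    # (ties broken arbitrarily).
    groups = {}
    for s in objects:
        groups.setdefault(outcome_of[s], []).append(s)
    sigma = max(groups.values(), key=lambda g: num_pairs(g, cls))
    return num_pairs(objects, cls) - num_pairs(sigma, cls)

@dataclass
class Node:
    test: Optional[str] = None                                  # internal node: test applied
    children: Dict[int, "Node"] = field(default_factory=dict)   # outcome -> subtree
    leaf_class: Optional[int] = None                            # leaf: class label

def path_cost(tree, s, outcomes, costs):
    # Sum of the test costs on the root-to-leaf path followed by object s.
    node, total = tree, 0.0
    while node.leaf_class is None:
        total += costs[node.test]
        node = node.children[outcomes[s][node.test]]
    return total

def expected_cost(tree, objects, prob, outcomes, costs):
    return sum(prob[s] * path_cost(tree, s, outcomes, costs) for s in objects)

def worst_cost(tree, objects, outcomes, costs):
    return max(path_cost(tree, s, outcomes, costs) for s in objects)
```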
for any set of objects probability of is this section , we describe our algorithm dectree and analyze its performance. the concept of the separation cost of a sequence of tests will turn useful for defining and analyzing our algorithm . *the separation cost of a sequence of tests . * given an instance of the dfep , for a sequence of tests we define the separation cost of in the instance denoted by as follows : fix an object if there exists such that then we set if for each we set let denote the _ cost of separating in the instance by means of the sequence _ then , the _ separation cost of _ ( in the instance ) is defined by in addition , we define as the total cost of the sequence , i.e. , * lower bounds on the cost of an optimal decision tree for the dfep . *we denote by the minimum separation cost in attainable by a sequence of tests in which covers all the pairs in and as the minimum total cost attainable by a sequence of tests in which covers all the pairs in the following theorem shows lower bounds on both the expected testing cost and the worst case testing cost of any instance of the dfep .[ theo : lowerbound ] for any instance of the defp , it holds that and let be a decision tree for the instance .let be the nodes in the root - to - leaf path in such that for each the node is on the branch stemming from which is associated with , and the leaf node is the child of associated with the objects in let .abusing notation let us now denote with the test associated with the node so that is a sequence of tests .in particular , is the sequence of tests performed according to the strategy defined by when the object whose class we want to identify , is such that holds for each test performed in the sequence .notice that , by construction , is a sequence of tests covering all pairs of . _ claim ._ for each object it holds that if for each we have that then it holds that conversely , let be the first test in for which therefore , we have that is a prefix of the root to leaf path followed when is the object chosen .it follows that the claim is proved . in order to prove the first statement of the theorem , we let be a decision tree which achieves the minimum possible expected cost , i.e. , then, we have in order to prove the second statement of the theorem , we let be a decision tree which achieves the minimum possible worst testing cost , i.e. , let be such that , for each it holds that then , by the above claim it follows that using ( [ eq : totcost - worstcase ] ) , we have the proof is complete. the following subadditivity property will be useful .[ prop : subadditivity ] let be a partition of the object set we have and , where and are , respectively , the minimum expected testing cost and the worst case testing cost when the set of objects is * the optimization of submodular functions of sets of tests . *[ sec : submodularotimization ] let be an instance of the dfep .a set function is submodular non - decreasing if for every and every , it holds that ( submodularity ) and ( non - decreasing ) .it is easy to verify that the functions are non - negative non - decreasing submodular set functions . 
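A short sketch of the separation cost of a sequence of tests as it is used in the lower bound above: each object is charged the cost of the prefix up to and including the first test that covers it, and the full cost of the sequence if no test in the sequence covers it. The dictionary interface (`covers[t]` as the set of objects covered by test t) is an assumption of this sketch.

```python
def separation_cost(sequence, objects, prob, covers, costs):
    # Expected separation cost of a sequence of tests.
    prefix = [0.0]
    for t in sequence:
        prefix.append(prefix[-1] + costs[t])
    total = 0.0
    for s in objects:
        charge = prefix[-1]                       # never covered: pay the whole sequence
        for j, t in enumerate(sequence, start=1):
            if s in covers[t]:
                charge = prefix[j]                # first covering test reached
                break
        total += prob[s] * charge
    return total

def total_cost(sequence, costs):
    return sum(costs[t] for t in sequence)
```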
in words, is the function mapping a set of tests into the number of pairs covered by the tests in .the function , instead , maps a set of tests into the probability of the set of objects covered by the tests in .let be a positive integer .consider the following optimization problem defined over a non - negative , non - decreasing , sub modular function : in , wolsey studied the solution to the problem provided by algorithm [ greedy - wolsey ] below , called the adapted greedy heuristic .( ) [ line : one ] [ line : two ] [ line : three ] [ line : four ] [ line : five ] [ line : six ] the following theorem summarizes results from [theorems 2 and 3 ] . [ theo : wolsey ] let be the solution of the problem and be the set returned by algorithm [ greedy - wolsey ] .moreover , let be the base of the natural logarithm and be the solution of then we have that .moreover , if there exists such for each and divides then we have [ cor : wolsey ] let be the sequence of all the tests selected by adapted - greedy , i.e. , the concatenation of the two possible outputs in line 7 .then , we have that the total cost of the tests in is at most and our algorithm for building a decision tree will employ this greedy heuristic for finding approximate solutions to the optimization problem over the submodular set functions and defined in ( [ eq : problemp ] ) .we will show that algorithm [ algo : main ] attains a logarithmic approximation for dfep .the algorithm consists of 4 blocks .the first block ( lines 1 - 2 ) is the basis of the recursion , which returns a leaf if all objects belong to the same class . if , we have that and the algorithm returns a tree that consists of a root and two leaves , one for each object , where the root is associated with the cheapest test that separates these two objects .clearly , this tree is optimal for both the expected testing cost and the worst testing cost . the second block ( line 3 ) calls procedure findbudget to define the budget allowed for the tests selected in the third and fourth blocks .findbudget finds the smallest such that adapted - greedy( ) returns a set of tests covering at least pairs .( ) [ b1start ] [ b1end ] [ line : budget ] [ b2start ] [ b3-start ] [ line : main - opt1 - 1 ] [ line : main - ui1 ] [ line : main - call-1 ] [ b3-end ] [ b4-start ] [ line : main - greedy2 ] [ line : main - ui2 ] [ line : main - call-2 ] [ b4-end ] [ line : main - call-3 ] ( ) the third ( lines 4 - 10 ) and the fourth ( lines 11 - 17 ) blocks are responsible for the construction of the backbone of the decision tree ( see fig .2 ) as well as to call dectree recursively to construct the decision trees that are children of the nodes in the backbone .the third block ( the * while * loop in lines 4 - 10 ) constructs the first part of the backbone ( sequence in fig .2 ) by iteratively selecting the test that covers the maximum uncovered mass probability per unit of testing cost ( line 5 ) .the selected test induces a partition on the set of objects , which contains the objects that have not been covered yet . in lines 7 and 8 ,the procedure is recursively called for each set of this partition but for the one that is contained in the subset . 
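Both findbudget and the backbone construction lean on the adapted greedy heuristic of algorithm 1; a compact sketch of the two is given below. The exact tie-breaking and the coverage target used by findbudget are left as parameters, since they are not fully spelled out here; `f` is assumed to be a monotone submodular set function taking a list of tests (with f of the empty list equal to 0) and `cost[t]` a positive cost.

```python
def adapted_greedy(tests, f, cost, budget):
    # Greedily add the test with the largest marginal gain per unit cost until
    # the budget is exceeded, then return the better of the last feasible
    # prefix and the best single test (Wolsey-style adapted greedy).
    chosen, spent, remaining = [], 0.0, set(tests)
    while remaining:
        best = max(remaining, key=lambda t: (f(chosen + [t]) - f(chosen)) / cost[t])
        chosen.append(best)
        remaining.discard(best)
        spent += cost[best]
        if spent > budget:
            break
    feasible = chosen[:-1] if spent > budget else chosen
    single = max(tests, key=lambda t: f([t]))
    return feasible if f(feasible) >= f([single]) else [single]

def find_budget(tests, f_pairs, cost, target, upper):
    # Smallest integer budget for which adapted_greedy covers at least `target`
    # pairs; a plain linear scan keeps the "smallest such budget" guarantee.
    for B in range(1, upper + 1):
        if f_pairs(adapted_greedy(tests, f_pairs, cost, B)) >= target:
            return B
    return upper
```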
with reference to figure 2, these calls will build the subtrees rooted at nodes not in which are children of some node in .similarly , the fourth block ( the * repeat - until * loop ) constructs the second part of the backbone ( sequence in fig .2 ) by iteratively selecting the test that covers the maximum number of uncovered pairs per unit of testing cost ( line 12 ) . the line [ line : main - call-3 ] is responsible for building a decision tree for the objects that are not covered by the tests in the backbone .we shall note that both the third and the fourth block of the algorithm are based on the adapted greedy heuristic of algorithm [ greedy - wolsey ] .in fact , in line [ line : main - opt1 - 1 ] ( third block ) corresponds to in algorithm [ greedy - wolsey ] because , right before the selection of the -th test , is the set of tests and .thus , and so that a similar argument shows that in line [ line : main - greedy2 ] ( fourth block ) corresponds to in algorithm [ greedy - wolsey ] .these connections will allow us to apply both theorem [ theo : wolsey ] and corollary [ cor : wolsey ] to analyze the cost and the coverage of these sequences .while in the lowest - right gray subtree it is at most ( see the proof of theorem [ theo : main]).,title="fig:",width=302 ] [ fig : decisiontree ] -0.1 in let denote the sequence of tests obtained by concatenating the tests selected in the * while * loop and in the * repeat - until * loop of the execution of dectree over instance we delay to the next section the proof of the following key result .[ theo : key ] let be the solution of and there exists a constant such that for any instance of the dfep , the sequence covers at least pairs , and it holds that and applying theorem [ theo : key ] to each recursive call of dectree we can prove the following theorem about the approximation guaranteed by our algorithm both in terms of worst testing cost and expected testing cost .[ theo : main ] for any instance of the dfep , the algorithm dectree outputs a decision tree with expected testing cost at most and with worst testing cost at most .for any instance let be the decision tree produced by the algorithm dectree .first , we prove an approximation for the expected testing cost .let be such that , where is the constant given in the statement of theorem [ theo : key ] .let us assume by induction that the algorithm guarantees approximation , for the expected testing cost , for every instance on a set of objects with let be the set of instances on which the algorithm is recursively called in lines 8,15 and 18 .we have that the first equality follows by the recursive way the algorithm dectree builds the decision tree .inequality ( [ 3 ] ) follows from ( [ 1 ] ) by the subadditivity property ( proposition [ prop : subadditivity ] ) and simple algebraic manipulations .the inequality in ( [ 3 - 1 ] ) follows by theorem [ theo : key ] together with theorem [ theo : lowerbound ] yielding the inequality ( [ 4 ] ) follows by induction ( we are using to denote the number of pairs of instance ) . 
to prove that the inequality in ( [ 6 ] ) holds we have to argue that every instance has at most pairs .let as in the lines 8 and 15 .first we show that the number of pairs of is at most .we have and is the set with the maximum number of pairs in the partition , induced by on the set .it follows that now it remains to show that the instance , recursively called , in line 18 has at most pairs .this is true because the number of pairs of is equal to the number of pairs not covered by which is bounded by by theorem [ theo : key ] .now , we prove an approximation for the worst testing cost of the tree .let be such that .let us assume by induction that the worst testing cost of is at most for every instance on a set of objects with we have that inequality ( [ 8 ] ) follows from the subadditivity property ( proposition [ prop : subadditivity ] ) for the worst testing cost .the inequality ( [ 9 ] ) follows by theorem [ theo : lowerbound ] .the inequality ( [ 10 ] ) follows from theorem [ theo : key ] , the induction hypothesis ( we are using to denote the number of pairs of instance ) and from the fact mentioned above that every instance in has at most pairs . since it follows that the algorithm provides an approximation for both the expected testing cost and the worst testing cost .the previous theorem shows that algorithm dectree provides simultaneously logarithmic approximation for the minimization of expected testing cost and worst testing cost .we would like to remark that this is an interesting feature of our algorithm . in this respect ,let us consider the following instance of the dfep : let ; , for and ; the set of tests is in one to one correspondence with the set of all binary strings of length so that the test corresponding to a binary string outputs for object if and only if the bit of is 0(1 ) .moreover , all tests have unitary costs .this instance is also an instance of the problem of constructing an optimal prefix coding binary tree , which can be solved by the huffman salgorithm .let and be , respectively , the decision trees with minimum expected cost and minimum worst testing cost for this example . using huffman s algorithm , it is not difficult to verify that and .in addition , we have that .this example shows that the minimization of the expected testing cost may result in high worst testing cost and vice versa the minimization of the worst testing cost may result in high expected testing cost .clearly , in real situations presenting such a dichotomy , the ability of our algorithm to optimize simultaneously both measures of cost might provide a significant gain over strategies only guaranteeing competitiveness with respect to one measure .we now return to the proof of theorem [ theo : key ] for which will go through three lemmas .[ lemma : length ] for any instance of the dfep , the value returned by the procedure findbudget satisfies .let us consider the problem in equation ( [ eq : problemp ] ) with the function that measures the number of pairs covered by a set of tests .let be the number of pairs covered by the solution constructed with adapted - greedy when the budget the righthand side of equation ( [ eq : problemp])is . 
by construction, findbudget finds the smallest such that .let be a sequence that covers all pairs in and that satisfies .arguing by contradiction we can show that suppose that this was not the case , then would be the sequence which covers pairs using a sequence of tests of total cost not larger than some by theorem 2 , the procedure adapted - greedy provides an -approximation of the maximum number of pairs covered with a given budget .therefore , when run with budget adapted - greedy is guaranteed to produce a sequence of total cost which covers at least pairs . however , by the minimality of it follows that such a sequence does not exist .since this contradiction follows by the hypothesis it must hold that as desired .given an instance for a sequence of tests and a real , let be the separation cost of when every non - covered object is charged , that is , the proofs of the following technical lemma is deferred to the appendix .[ lemma : key2 ] let be the sequence obtained by concatenating the tests selected in the * while * loop of algorithm [ algo : main ] . then, and where is a positive constant and is the budget calculated at line 3 .[ lemma : key3 ] the sequence covers at least pairs and it holds that the sequence can be decomposed into the sequences and , that are constructed , respectively , in the * while * and * repeat - until * loop of the algorithm dectree ( see also fig .it follows from the definition of that there is a sequence of tests , say , of total cost not larger than that covers at least pairs for instance .let be the number of pairs of instance covered by the sequence .thus , the tests in , that do not belong to , cover at least pairs in the set of objects not covered by .the sequence coincides with the concatenation of the two possible outputs of the procedure adapted - greedy( ( algorithm 1 ) , when it is executed on the instance defined by : the objects in ( those not covered by ) ; the tests that are not in the submodular set function and bound by corollary [ cor : wolsey ] , we have that and covers at least uncovered pairs . therefore , since altogether, we have that covers at least pairs and the proof of theorem [ theo : key ] will now follow by combining the previous three lemmas .* proof of theorem [ theo : key ] .* first , it follows from lemma [ lemma : key3 ] that covers at least pairs . to prove that , we decompose into and , the sequences of tests selected in the * while * and in the * repeat - until * loop of algorithm [ algo : main ] , respectively . 
for ,let .in addition , let be the set of objects which are not covered by the tests in thus , where the last inequality follows from lemma [ lemma : key2 ] .it remains to show that .this inequality holds because lemma [ lemma : key3 ] assures that and lemma [ lemma : length ] assures that .the proof is complete .let be a set of elements and be a family of subsets of .the minimum set cover problem asks for a family of minimum cardinality such that .it is known that no sub logarithmic approximation is achievable for the minimum set cover problem under the standard assumption that more precisely , by the result of raz and safra it follows that there exists a constant such that no -approximation algorithm for the minimum set cover problem exists unless .we will show that an approximation algorithm for the minimization of the expected testing cost dfep with exact classes for , implies that the same approximation can be achieved for the minimum set cover problem .this implies that one can not expect to obtain a sublogarithmic approximation for the dfep unless .the reduction we present can also be used to show the same inapproximability result for the minimization of the worst testing cost version of the dfep .given an instance for the minimum set cover problem as defined above , we construct an instance for the dfep as follows : the set of objects is .the family of classes is defined as follows : all the objects of belong to class while the object , for , belongs to class . notice that and the objects of belong to the same class . in order to define the set of tests proceed as follows : for each set we create a test such that has value for the objects in and value 1 for the remaining objects .in addition , we create a test which has value for objects in and value for object ( ) . for our later purposes ,we notice here that the test can not distinguish between and the elements in .each test has cost 1 , i.e. , the cost assignment * c * is given by for each finally , we set the probability of to be equal to and the probability of the other objects equal to , for some fixed let be the decision tree with minimum expected testing cost for and let be a minimum set cover for instance , where .we first argue that .in fact , we can construct a decision tree by putting the test associated with in the root of the tree , then the test associated with as the child of and so on . notice that , for we have that has two children , one is and the other is a leaf mapping to the class as for one of its children in again a leaf mapping to , the other child is set to the test , whose children are all leaves .the expected testing cost of can be upper bounded by since we have for any and for any . 
on the other hand ,let be a decision tree for and let be the path from the root of to the leaf where the object lies .it is easy to realize that the subsets associated with the tests on this path cover all the elements in fact these tests separate from all the other objects from let be the solution to the set cover problem provided by the sets associated with the tests on the path we have that now assume that there is an algorithm that for any instance of the dfep can guarantee a solution with approximation for some therefore , given an instance for set cover we can use this algorithm on the transformed instance defined above , where we obtain a decision tree for such that where we upper bound and from , as seen above we can construct a solution for the set cover problem such that hence , it would follow that is an approximate solution for the set cover instance satisfying : which by the result of is not possible unless the same construction can be used for analyzing the case of the worst testing cost , in which case we have that ( [ inapprox:1 ] ) becomes and ( [ inapprox:2 ] ) becomes leading to the inapproximability of the dfep w.r.t .the worst testing cost within a factor of for any notice that an analogous result regarding the worst testing cost had been previously shown by moshkov based on the result of feige .thus , we have the following theorem [ theo : inappr ] both the minimization of the worst case and the expected case of the dfep do nt admit an approximation unless presented a new algorithm for the discrete function evaluation problem , a generalization of the classical bayesian active learning also studied under the names of equivalence class determination problem and group based active query selection problem of .our algorithm builds a decision tree which asymptotically matches simultaneously for the expected and the worst testing cost the best possible approximation achievable under standard complexity assumptions this way , we close the gap left open by the previous approximation for the expected cost shown in and , where is the minimum positive probability among the objects in and in addition we show that this can be done with an algorithm that guarantees the best possible approximation also with respect to the worst testing cost . with regards to the broader context of learning , given a set of samples labeled according to an unknown function , a standard task in machine learning is to find a good approximation of the labeling function ( hypothesis ) . in order to guarantee that the hypothesis chosen has some generalization power w.r.t . to the set of samples, we should avoid overfitting .when the learning is performed via decision tree induction this implies that we shall not have leaves associated with a small number of samples so that we end up with a decision tree that have leaves associated with more than one label .there are many strategies available in the literature to induce such a tree . in the problem considered in this paperour aim is to over - fit the data because the function is known a priori and we are interested in obtaining a decision tree that allows us to identify the label of a new sample with the minimum possible cost ( time / money ) .the theoretical results we obtain for the `` fitting '' problem should be generalizable to the problem of approximating the function . 
to this aimwe could employ the framework of covering and learning from along the following lines : we would interrupt the recursive process in algorithm 2 through which we construct the tree as soon as we reach a certain level of learning ( fitting ) w.r.t .the set of labeled samples .then , it remains to show that our decision tree is at logarithmic factor of the optimal one for that level of learning .this is an interesting direction for future research .30 adler , m. and heeringa , b. approximating optimal binary decision trees .approx / random 08 , pp . 19 , 2008 .allen , s.r . , hellerstein , l. , kletenik , d , and nlyurt , t. evaluation of dnf formulas .in _ proc . of isaim 2014_. arkin , e. m. , meijer , h. , mitchell , j.s.b . , rappaport , d. and skiena , s.s .decision trees for geometric models . in _ proc . of scg 93_ , pp . 369378 , 1993 .bellala , g. , bhavnani , s. k. , and scott , c. group - based active query selection for rapid diagnosis in time - critical situations ._ ieee trs .theor . _ , 580 ( 1):0 459478 , 2012 .chakaravarthy , v. t. , pandit , v. , roy , s. , awasthi , p. , and mohania , m. decision trees for entity identification : approximation algorithms and hardness results . in _ proc .pods 07 _ , pp . 5362 , 2007 .chakaravarthy , v. t. , pandit , v. , roy , s. , and sabharwal , y. approximating decision trees with multiway branches . in _ proc .icalp 09 _ , pp . 210221 , 2009 .charikar , m. , fagin , r. , guruswami , v. , kleinberg , j. m. , raghavan , p. , and sahai , a. query strategies for priced information . _ journal of computer and system sciences _ , 640 ( 4):0 785819 , 2002 .cicalese , f. and laber , e. s. on the competitive ratio of evaluating priced functions ._ j. acm _ , 580 ( 3):0 9 , 2011 . cicalese , f. , jacobs , t. , laber , e. , and molinaro , m. on greedy algorithms for decision trees . in _ proc . of isaac10 _ , 2010 .cormen , t. h. , leiserson , c. e. , rivest , r. l. , and stein , c. _ introduction to algorithms_. mit press , cambridge , ma , 2001 .dasgupta , s. analysis of a greedy active learning strategy . in _ nips04 _ , 2004 .deshpande , a. , hellerstein , l. , and kletenik , d .. approximation algorithms for stochastic boolean function evaluation and stochastic submodular set cover . in _ proc . of soda2014 _ , pp . 14531467 , 2014 . a threshold of for approximating set cover_ journal of acm _ 45 ( 1998 ) 634652 .garey , m. optimal binary identification procedures ._ siam journal on applied mathematics _, 23(2):173186 , 1972 .golovin , d. , krause , a. , and ray , d. near - optimal bayesian active learning with noisy observations . in _ proc . of nips10_ , pp . 766774 , 2010 .golovin , d. , krause , a .. adaptive submodularity : theory and applications in active learning and stochastic optimization. _ journal of artificial intelligence research _ , vol .42 , pages 427 - 486 , 2011 .greiner , r. , howard , r. , jankowska , m. , and malloy , m. finding optimal satisfiscing strategies for and - or trees ._ artificial intelligence _ , 1700 ( 1):0 1958 , 2005 .guillory , a. and bilmes , j. average - case active learning with costs . in _ proc . of alt09_ , pp .141155 , 2009 . guillory , a. and bilmes , j. interactive submodular set cover . in _ proc .icml10 _ , pp . 415422 .guillory , a. and bilmes , j. simultaneous learning and covering with adversarial noise ._ icml11 _ , pp .gupta , a. , nagarajan , v. , and ravi , r. approximation algorithms for optimal decision trees and adaptive tsp problems . in _ proc .icalp10 _ , pp . 
690701 , 2010 .hanneke , s. the cost complexity of interactive learning ._ unpublished _ , 2006 .hyafil , l. and rivest , r. l. constructing optimal binary decision trees is np - complete . _inf . process ._ , 50 ( 1):0 1517 , 1976 .kaplan , h. , kushilevitz , e. , and mansour , y .. learning with attribute costs . in _ proc .of stoc 2005 _ , pp . 356365 , 2005 .kosaraju , s. rao , przytycka , teresa m. , and borgstrom , ryan s. on an optimal split tree problem . in _ proc . of wads99 _ , pp . 157168 , 1999 .laber , e. s. and nogueira , l. t. on the hardness of the minimum height decision tree problem . _discrete appl ._ , 144:0 209212 , 2004 .larmore , l. l. and hirschberg , d. s. a fast algorithm for optimal length - limited huffman codes ._ jacm : journal of the acm _ , 37 , 1990 .ecision trees and diagrams ._ acm computing surveys _ , pp .593623 , 1982 .moshkov , j.m .approximate algorithm for minimization of decision tree depth . in _ proc . of 9th intl .conf . on rough sets , fuzzy sets , data mining , and granular computing _ , pp . 611 - 614 , 2003 .moshkov , j.m .greedy algorithm with weights for decision tree construction ., 104 : 285292 , 2010 .nemhauser , g. , wolsey , l. , and fisher , m. l. an analysis of approximations for maximizing submodular set functions - i ._ math . programming _ , 14:0 265294 , 1978 .nevmyvaka , y. , feng , y. , and kearns , m .. reinforcement learning for optimized trade execution . in _ proc . of icml06_ , pp .673680 , 2006 . andsafra , s .. a sub - constant error - probability low - degree test , and sub - constant error - probability pcp characterization of np . ,acm , 1997 , pp .475484 .saks , m. e. and wigderson , a. probabilistic boolean decision trees and the complexity of evaluating game trees . in _ proc . of focs86_ , pp . 2938 , 1986 .sviridenko , m. a note on maximizing a submodular set function subject to a knapsack constraint ._ operations research letters _ , 320 ( 1):0 41 43 , 2004 .tarsi , m. optimal search on some game trees ._ journal of the acm _ , 300 ( 3):0 389396 , july 1983 .nlyurt , t. sequential testing of complex systems : a review ., 142(1 - 3):189205 , 2004 .wolsey , l. maximising real - valued submodular functions ._ math . of operation research _ , 7(3):410425 , 1982 .* lemma [ lemma : key2 ] . * _ let be the sequence obtained by concatenating the tests selected in the * while * loop of algorithm 2 .then , and where is a positive constant and is the budget calculated at line [ line : budget ] . 
_clearly , the algorithm 2 in the * while * loop constructs a sequence such that in order to prove the second inequality in the statement of the lemma , it will be convenient to perform the analysis in terms of a variant of our problem which is explicitly defined with respect to the separation cost of a sequence of tests .we call this new problem the _ pair separation problem _ ( psp ) : the input to the psp , as in the dfep , is a 5-uple , where is a set of objects , is a partition of into classes , is a family of subsets of , is a probability distribution on and is a cost function assigning to each a cost the only difference between the input of these problems is that the set of tests in the input of dfep is replaced with a family of subsets of .we say that covers an object iff .moreover , we say that covers a pair of objects if at least one of the conditions hold : ( i ) or ( ii ) .we say that a pair is covered by a sequence of tests if some test in the sequence covers .the separation cost of a sequence in the instance of psp is given by : the _ pair separation problem _ consists of finding a sequence of subsets of with minimum separation cost , , among those sequences that cover all pairs in .an instance of the dfep induces an instance of the psp where and for every test we have a corresponding subset such that .thus , in our discussion we will use the term test to refer to a subset . in the body of this paperwe implicitly work with the instance of the psp induced by the input instance of the dfep .it is easy to realize that .in addition , where is the sequence obtained from when every is replaced with .thus , in order to establish the lemma it suffices to prove that it is useful to observe that is equal to the sequence returned by procedure greedypsp in algorithm [ alg : appendix ] when it is executed on the instance .this algorithm corresponds to lines 4,5,9 and 10 of the while loop of algorithm [ algo : main ] . in algorithm[ alg : appendix ] , the greedy criteria consists of choosing the test that maximizes the ratio this is equivalent to the maximization of defining the greedy choice in algorithm [ algo : main ] .( : instance of psp , :budget ) ) ( * ) [ alg : appendix ] the proof consists of the following steps : 1 .we construct an instance of the psp from 2 .we prove that the optimal separation cost for is no larger than the optimal one for , that is , .we prove that separation cost of any sequence of tests returned by the above pseudo - code on the instance is at a constant factor of , that is , is .we prove that there exists a sequence of tests possibly returned by greedypsp when executed on the instance such that . by chaining these inequalities ,we conclude that is the steps ( ii ) , ( iii ) and ( iv ) are proved in claims 1,2 and 3 , respectively .we start with the construction of instance ._ construction of instance ._ for every test , we define . the instance is constructed from as follows .let . for each add objects to , each of them with probability and with class equal to that of .if an object is added to set due to , we say that is generated from . for every test add tests to the set , each of them with cost .if a test is added to set due to , we say that is generated from .it remains to define to which subset of each test corresponds to . 
if then for every generated from and every generated from .let be the set of tests that contains the object .note that the number of tuples , where is a test generated from is .thus , we create a one to one correspondence between these tuples and the numbers in the set . for a test , generated from , let be the set of numbers that correspond to the tuples that includes .note that in addition , we associate each object , generated from , with a number in a balanced way so that each number in is associated with objects .thus , a test , generated from , covers an object generated from if and only if .for the instance we have the following useful properties : * if covers object then each test , generated from , covers exactly objects generated from .moreover , each object generated from is covered by exactly one test generated from .* if a set of tests covers all pairs of then the set all tests generated from belong to covers all pairs of .property ( a ) holds because a test generated by is associated with numbers in and to each number in we have objects associated with . to see that property ( b )holds , let us assume that covers all pairs of the instance and does not cover a pair .let and be the set of tests that covers and , respectively .the fact that does not cover implies that so that for each , there is a test , generated from , that does not belong to .similarly , for each , there is a test , generated from , that does not belong to .let be an object , generated from , that is mapped , via function , into the number in that corresponds to the tuple .moreover , let be an object , generated from , that is mapped , via function , into the number in that corresponds to the tuple .the pair is not covered by , which is a contradiction . * claim 1 . * the optimal separation cost for is no larger than the optimal separation cost for i.e. , given a sequence for that covers all pairs we can obtain a sequence for by replacing each test with the tests in that were generated from , each of which has cost it is easy to see that covers all the pairs in and the separation cost of is not larger than that of .this establishes our claim .now let be a sequence of tests returned by procedure greedypsp in algorithm [ alg : appendix ] when it is executed on the instance .* claim 2 . *the separation cost of the sequence is at most a constant factor of that of , which is the sequence of tests with minimum separation cost among all sequences of tests covering all the pairs , for the instance i.e. , for some constant let ( resp . ) be the sum of the probabilities of the objects covered by the first tests in ( resp . ) .in particular , we have in addition , let be the sum of the probabilities of all objects in . notice that , with the above notation , we can rewrite the separation cost of the sequence as let be such that , where is the budget in the statement of the lemma . 
for , let and .furthermore , let } ] analogously , let }_* ] for the sake of definiteness , we set and } = p^{[-1]}_*=0 ] for .we now devise a lower bound on the separation cost of for this , we first note that the length of is at least , for otherwise the property ( b ) of instance would guarantee the existence of a sequence of tests of total cost smaller than that covers all pairs for instance ( and for the instance of the dfep as well ) , which contradicts lemma [ lemma : length ] .therefore , we can lower bound the the separation cost of the sequence as follows : } ) \geq \frac{3q}{4 } + \frac{1}{2 } \sum_{j=1}^{\ell-1 } 2^{j } ( q - p_*^{[j ] } ) \label{eq:27may2}\end{aligned}\ ] ] the inequality in ( [ eq:27may1 ] ) follows from ( [ eq:27may0 ] ) by considering in the summation on the right hand side of ( [ eq:27may1 ] ) only the first tests . the term in the first inequality ( [ eq:27may2 ] ) is the contribution of the first two tests of the sequence to the separation cost . to prove that in the last inequality , we note that that because the probability covered by the first test of sequence is , where is the test that generates . in the last inequality we used the fact that for all be the set of objects covered by the sequence of tests , which is the prefix of length of the sequence of tests .we shall note that for , the subsequence of coincides with the sequence of tests constructed through the execution of adapted - greedy over the instance ( ) , where * is a set of tests , all of them with cost * the function maps a set of tests into the probability of the objects in that are covered by the tests in the set ; * since the set is a feasible solution for this instance , it follows from theorem 2 that , where by setting and we get that } - p^{[j-1 ] } \geq \hat{\alpha } ( p_*^{[j-1 ] } - p^{[j-1]}).\ ] ] it follows that } \leq \hat{\alpha}(q - p_*^{[j-1 ] } ) + ( 1-\hat{\alpha } ) ( q - p^{[j-1]}).\ ] ] thus , setting }),\ ] ] which is the upper bound we derived on the separation cost of the sequence , we have })\\ & \leq & q + \hat{\alpha } \sum_{j=0}^{\ell-2 } 2^{j+1 } ( q - p_*^{[j-1 ] } ) + ( 1-\hat{\alpha } ) \sum_{j=0}^{\ell-2 } 2^{j+1 } ( q - p^{[j-1]})\\ & = & q + 2\hat{\alpha}q + \hat{\alpha}\sum_{j=1}^{\ell-2 } 2^{j+1 } ( q - p_*^{[j-1 ] } ) + 2(1-\hat{\alpha})q + ( 1-\hat{\alpha } ) \sum_{j=1}^{\ell-2 } 2^{j+1 } ( q - p^{[j-1]})\\ & = & q + 2 q + 2\hat{\alpha } \sum_{j=0}^{\ell-3 } 2^{j+1 } ( q - p_*^{[j ] } ) + 2 ( 1-\hat{\alpha } ) \sum_{j=0}^{\ell-3 } 2^{j+1 } ( q - p^{[j]})\\ & \leq & q + 2 q + 4 \hat{\alpha } q + 2\hat{\alpha } \sum_{j=1}^{\ell-3 } 2^{j+1 } ( q - p_*^{[j ] } ) + 2 ( 1-\hat{\alpha } ) \sum_{j=0}^{\ell-3 } 2^{j+1 } ( q - p^{[j]})\\ & = & ( 1 - 2(1-\hat{\alpha})+2 + 4\hat{\alpha } ) q + 4\hat{\alpha } \sum_{j=1}^{\ell-3 } 2^{j } ( q - p_*^{[j ] } ) + 2(1-\hat{\alpha})\left(q + \sum_{j=0}^{\ell-3 } 2^{j+1 } ( q - p^{[j]})\right)\\ & \leq & ( 1 + 6 \hat{\alpha } ) q + 4\hat{\alpha } \sum_{j=1}^{\ell-1 } 2^{j } ( q - p_*^{[j ] } ) + 2(1-\hat{\alpha})u\\ & \leq & ( 8\hat{\alpha}+4/3 ) \ , sepcost(i ' , { \bf x}^ * ) + 2(1-\hat{\alpha } ) u,\end{aligned}\ ] ] where the last inequality follows from equation [ eq:27may2 ] .thus , we obtain for the last claim let be the sequence obtained by greedypsp ( algorithm [ alg : appendix ] ) when it is executed on instance .* claim 3 .* there exists an execution of procedure greedypsp ( algorithm [ alg : appendix ] ) on instance which returns a sequence satisfying .let be the -th test of sequence and let be the first 
test of that is not the test which maximizes among all the tests in in line ( * ) of algorithm [ alg : appendix ] .note that is chosen by algorithm [ alg : appendix ] rather than , the test which maximizes , because has cost larger than remaining budget .the case where does not exist is easier to handle and will be discussed at the end of the proof . because is a prefix of have thus , to establish the claim it suffices to show that where is a possible output of greedypsp ( algorithm [ alg : appendix ] ) on instance for , let } = \langle z^{[j]}_1 , \dots , z^{[j]}_{n(x_j^a)}\rangle ] be a sequence of of the tests in , generated from the proof of the following proposition is deferred to section [ app : prop ] . [ prop : claim3-apendix ] let } \ , { \bf z}^{[2]}\ , \dots \ , { \bf z}^{[r ] } \ , { \bf z}^{[r+1]} ] then , for the following conditions hold : * for each and , it holds that }_{\kappa } \cap z^{[j]}_{\kappa ' } = \emptyset\ ] ] * for each and each test in with being the test in from which is generated , it holds that }_{\kappa ' } ) } { c(q ) } = \frac{p(x - \bigcup_{i=1}^{j-1 } x^a_{i})}{c(x)}\ ] ] * is a feasible output for greedypsp ( algorithm [ alg : appendix ] ) on instance first note that the sequence has length and total cost .this is easily verified by recalling that : ( i ) each test in the sequence has cost ; ( ii ) for each the subsequence } ] ; ( iii ) the subsequence } ] hence } ) = z = b - \sum_{j=1}^r totcost(i ' , { \bf z}^{[j]}) ] by grouping objects which incur the same cost in we can write as follows now , we notice that for each and the set of objects covered by }_{\kappa} ] moreover , by proposition [ prop : claim3-apendix ] ( ii ) we have that } - \left(\bigcup_{i=1}^{j-1 } \bigcup_{\kappa'=1}^{n(x^a_i ) } z^{[i]}_{\kappa ' } \right)\right ) = \frac{p\left(x^a_j - \bigcup_{i=1}^{j-1 } x^a_i \right)}{n(x^a_j)}.\ ] ] finally , the set }_{\kappa ' } \right ) - \left ( \bigcup_{\kappa=1}^{2z } z^{[r+1]}_{\kappa } \right)\ ] ] that appears in the third term of the righthand side of ( [ eq:24 ] ) can be spilt into the objects covered by the tests generated from and the remaining ones . by using arguments similar to those employed above one can realize that the objects covered by the tests generated from are exactly those generated by the objects in that are not covered by the tests in }_{\kappa } \right) ] the lemma follows from the correctness of the three claims .* proposition [ prop : claim3-apendix ] . * _ let } \ , { \bf z}^{[2]}\ , \dots \ , { \bf z}^{[r ] } \ , { \bf z}^{[r+1]} ] then , for the following conditions hold : _ * for each and , it holds that }_{\kappa } \cap z^{[j]}_{\kappa ' } = \emptyset\ ] ] * for each and each test in with being the test in from which is generated , it holds that }_{\kappa ' } ) } { c(q ) } = \frac{p(x - \bigcup_{i=1}^{j-1 } x^a_{i})}{c(x)}\ ] ] * is a feasible output for greedypsp ( algorithm [ alg : appendix ] ) on instance in order to prove ( ii ) , we observe that , from the definition of the sequences } ] are all the elements in which are generated from therefore , the elements of are precisely the elements of which are generated from for each there are precisely elements in that are generated from , and each one of them has probability hence we have }_{\kappa ' } \right ) } { c(q ) } = \frac{1}{c(q)}\sum_{s \in x - \bigcup_{i=1}^{j-1 } x^a_i } \frac{n}{n(x)}\frac{p(s)}{n } = \frac{1}{c(q ) n(x ) } \sum_{s \in x - \bigcup_{i=1}^{j-1 } x^a_i } p(s),\ ] ] from which we have ( ii ) , since _ claim . 
_ for each and we have that } - \left(\bigcup_{i=1}^{j-1 } \bigcup_{\kappa'=1}^{n(x^a_i ) } z^{[i]}_{\kappa ' } \right ) - \left ( \bigcup_{\kappa'=1}^{\kappa-1 } z^{[j]}_{\kappa ' } \right)\right)}{c(z_{\kappa}^{[j ] } ) } \geq \frac{p\left(q - \left(\bigcup_{i=1}^{j-1 } \bigcup_{\kappa'=1}^{n(x^a_i ) } z^{[i]}_{\kappa ' } \right ) - \left ( \bigcup_{\kappa'=1}^{\kappa-1 } z^{[j]}_{\kappa ' } \right)\right)}{c(q)}\ ] ] for any this claim says that , for each and if has been constructed up to the test preceding } ] satisfies the greedy criterium of procedure greedypsp .this implies that is a feasible output for greedypsp , as desired ._ proof of the claim ._ let be the quantity on the right hand side of ( [ eq : z - greedy ] ) , and be the test in from which is generated .then we have }_{\kappa ' } \right)\right ) } { c(q ) } \label{ineq : greedy - z-1}\\ & = & \frac{p(x - \bigcup_{i=1}^{j-1 } x^a_{i})}{c(x ) } \label{ineq : greedy - z-2 } \\ &\leq & \frac{p\left(z_{\kappa}^{[j ] } - \left(\bigcup_{i=1}^{j-1 } \bigcup_{\kappa'=1}^{n(x^a_i ) } z^{[i]}_{\kappa ' } \right ) \right)}{c(z_{\kappa}^{[j ] } ) } \label{ineq : greedy - z-3}\\ & = & \frac{p\left(z_{\kappa}^{[j ] } - \left(\bigcup_{i=1}^{j-1 } \bigcup_{\kappa'=1}^{n(x^a_i ) } z^{[i]}_{\kappa ' } \right ) - \left ( \bigcup_{\kappa'=1}^{\kappa-1 } z^{[j]}_{\kappa ' } \right)\right)}{c(z_{\kappa}^{[j ] } ) } \label{ineq : greedy - z-4}\end{aligned}\ ] ] inequality ( [ ineq : greedy - z-1 ] ) holds since the set whose probability is considered at the numerator of the right hand side of ( [ ineq : greedy - z-1 ] ) is a superset of the set whose probability is considered at the numerator of the right hand side of ( [ eq : z - greedy ] ) . } - \left(\bigcup_{i=1}^{j-1 } \bigcup_{\kappa'=1}^{n(x^a_i ) } z^{[i]}_{\kappa ' } \right ) \right)}{c(z_{\kappa}^{[j]})}\ ] ] and the last equality follows from property ( ii ) of the proposition under analysis . if we have that , by definition is the test in which maximizes the greedy criterium , but is not chosen because it exceeds the available budget ] of and the sequence }, ] hence , + } - \left(\bigcup_{i=1}^{j-1 } \bigcup_{\kappa'=1}^{n(x^a_i ) } z^{[i]}_{\kappa ' } \right ) - \left ( \bigcup_{\kappa'=1}^{\kappa-1 } z^{[j]}_{\kappa ' } \right)\right ) = p\left(z_{\kappa}^{[j ] } - \left(\bigcup_{i=1}^{j-1 } \bigcup_{\kappa'=1}^{n(x^a_i ) } z^{[i]}_{\kappa ' } \right)\right).$ ]
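to fix ideas , the selection rule analyzed throughout this appendix can be sketched in a few lines : among the tests whose cost still fits into the remaining budget , greedypsp picks the one maximizing the ratio of the probability mass of not - yet - covered objects to the test cost . the data layout , the toy instance and the simple stopping rule below are illustrative assumptions ; the pair bookkeeping and the exact budget handling of algorithm [ alg : appendix ] are omitted .

```python
def greedy_psp(p, tests, budget):
    """Greedy sketch: 'tests' maps a name to (cost, set of covered objects),
    'p' maps an object to its probability.  At every step pick the affordable
    test maximizing  p(newly covered objects) / cost."""
    covered, remaining, sequence = set(), budget, []
    while True:
        best, best_ratio = None, 0.0
        for name, (cost, covers) in tests.items():
            if name in sequence or cost > remaining or cost <= 0:
                continue
            gain = sum(p[s] for s in covers - covered)
            if gain / cost > best_ratio:
                best, best_ratio = name, gain / cost
        if best is None:
            return sequence, covered
        cost, covers = tests[best]
        sequence.append(best)
        covered |= covers
        remaining -= cost

# toy instance (hypothetical numbers, for illustration only)
p = {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}
tests = {"q1": (2.0, {"a", "b"}), "q2": (1.0, {"c"}), "q3": (3.0, {"b", "c", "d"})}
print(greedy_psp(p, tests, budget=4.0))
```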
in several applications of automatic diagnosis and active learning a central problem is the evaluation of a discrete function by adaptively querying the values of its variables until the values read uniquely determine the value of the function . in general , the process of reading the value of a variable might involve some cost , computational or even a fee to be paid for the experiment required for obtaining the value . this cost should be taken into account when deciding the next variable to read . the goal is to design a strategy for evaluating the function that incurs little cost ( in the worst case or in expectation according to a prior distribution on the possible variable assignments ) . our algorithm builds a strategy ( decision tree ) which attains a logarithmic approximation simultaneously for the expected and worst cost spent . this is best possible under the assumption that .
from the conceptual point of view different degrees of synchronization can be distinguished : complete synchronization , generalized synchronization , lag synchronization , phase synchronization , and burst ( or train ) synchronization . in the following ,we focus our attention on phase synchronization in stochastic systems that has attracted recent interest for the following reason : in many practical applications the dynamics of a system , though not perfectly periodic , can still be understood as the manifestation of a stochastically modulated limit cycle . as examples , we mention neuronal activity , the cardiorespiratory system , or population dynamics .given a data set or some model dynamics there exists a variety of methods to define an instantaneous phase of a signal or a dynamics . for a clear cut separation of deterministic and noise - induced effects it is essential to assess the robustness of each of these different phase definitions with respect to noise .section [ sec:1 ] is devoted to this issue .since we do not distinguish between dynamical and measurement noise our treatment is also tied to the question how synchronization can be detected within any realistic experimental data .the synchronization properties of a noisy system can be classified in a hierarchical manner : stochastic phase locking always implies frequency locking while the converse is not true in general . on the other hand ,small phase diffusivity is necessary but not sufficient for phase synchronization .this will become clear in sec .[ sec:2 ] when we review an analytic approach to stochastic phase synchronization developed for a thermal two - state system with transitions described by noise - controlled rates .a recently proposed method to measure the average phase velocity or frequency in stochastic oscillatory systems based on rice s rate formula for threshold crossings will be presented and discussed in sec .[ sec:3 ] .the rice frequency proves to be useful especially in underdamped situations whereas the overdamped limit yields only finite values for coloured noise .its relation to the frequency based on the widely used hilbert phase ( cf .[ sec:1:phih ] ) is discussed and illustrated . in the final sec .[ sec:4 ] we connect the topic of stochastic resonance ( sr ) with results on noise - enhanced phase coherence . to this endwe study the synchronization properties of the bistable kramers oscillator driven externally by a periodic signal . as a complement to the frequently investigated overdamped limit, we consider here the underdamped case employing the methods presented in sec .[ sec:3 ] .a phase occurs in a quite natural way when describing the cyclic motion of an oscillator in phase space .self - sustained oscillators are nonlinear systems that asymptotically move on a limit cycle .the instantaneous position in phase space can be represented through instantaneous amplitude and phase .a systematic approach to relate the amplitude and phase dynamics to the dynamics formulated in original phase space was developed by bogoliubov and mitropolski .their method starts from the following decomposition of the dynamics where the function comprises all terms of higher than first order in ( nonlinearities ) , velocity dependent terms ( friction ) , and noise . 
in their work bogoliubov and mitropolskiconsidered the function to be a small perturbation of order which means that the system is weakly nonlinear and the noise or the external forces are comparatively small as not to distort the harmonic signal too much .the definition of an instantaneous phase proceeds by expressing the position and the velocity in polar coordinates and \,,\\ \label{eq : bogmist2b } v(t ) & = & -\omega_0\ ; a^n(t ) \sin\left[\phi^n(t)\right]\end{aligned}\ ] ] which yields by inversion ^ 2 } \,,\\ \label{eq : bogmist3b } \phi^n(t ) & = & \arctan \left [ -\frac{v(t)/{\omega}_0}{x(t ) } \right ] .\end{aligned}\ ] ]it should be noted that a meaningful clockwise rotation in the -plane determines angles to be measured in a specific way depending on the sign of .using eqs .( [ eq : bogmist2a ] ) , ( [ eq : bogmist2b ] ) , ( [ eq : bogmist3a ] ) and ( [ eq : bogmist3b ] ) it is straightforward to transform the dynamics in and , eqs .( [ eq : bogmist1a ] ) and ( [ eq : bogmist1b ] ) , into the following dynamics for and the line corresponds to angles . as can be read off from eq .( [ eq : phidot ] ) , the phase velocity always assumes a specific value for , i.e. , this has the following remarkable consequence .we see that even in the presence of noise passages through zero in the upper half plane are only possible from to , in the lower half plane only from to .this insight becomes even more obvious from a geometrical interpretation : as the noise exclusively acts on the velocity , cf .( [ eq : bogmist1b ] ) , it can only effect changes in the vertical direction ( in -space ) . along the vertical line , however , the angular motion possesses no vertical component while radial motion is solely in the vertical direction and , therefore , only affected by the noise . from thiswe conclude that between subsequent zero crossings of the coordinate with positive velocity the phase has increased by an amount of .this finding establishes a simple operational instruction how to measure the average phase velocity of stochastic systems .we will come back to this point in sec .[ sec:3 ] as we have just seen zero crossings can be utilized to mark the completion of a cycle .this can be generalized to the crossings of an arbitrary threshold with positive velocity or even to the crossing of some separatrix . in this connectionthe concept of isochrones of a limit cycle has to be mentioned .all of these extensions of the natural phase require a thorough knowledge of the dynamics and the phase space structure .in many practical applications , however , the detailed phase portrait is not known . instead, one is given a data series exhibiting a repetition of characteristic marker events , e.g. the spiky peaks of neural activity , the r - peaks of an electrocardiogram , or pronounced maxima as found in population dynamics .these marker events can be used to pinpoint the completion of a cycle , , and the beginning of a subsequent one , .it is then possible to define an instantaneous phase by linear interpolation , i.e. , where the times are fixed by the marker events . 
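the linear - interpolation phase just described is easy to compute once the marker events have been extracted . the sketch below uses positive - going zero crossings of the coordinate as markers ( any other marker detection could be substituted ) ; the test signal and its parameters are purely illustrative .

```python
import numpy as np

def positive_zero_crossings(t, x):
    """Times where x(t) crosses zero with positive slope, linearly
    interpolated inside the sampling interval."""
    idx = np.where((x[:-1] < 0.0) & (x[1:] >= 0.0))[0]
    frac = -x[idx] / (x[idx + 1] - x[idx])
    return t[idx] + frac * (t[idx + 1] - t[idx])

def linear_phase(t, markers):
    """phi(t) = 2*pi*(k + (t - t_k)/(t_{k+1} - t_k))  for  t_k <= t < t_{k+1},
    i.e. the phase grows by 2*pi between successive marker events."""
    k = np.searchsorted(markers, t, side="right") - 1
    phi = np.full(t.shape, np.nan)
    ok = (k >= 0) & (k < len(markers) - 1)
    tk, tk1 = markers[k[ok]], markers[k[ok] + 1]
    phi[ok] = 2.0 * np.pi * (k[ok] + (t[ok] - tk) / (tk1 - tk))
    return phi

# example: a noisy oscillation (illustrative parameters)
t = np.linspace(0.0, 50.0, 5000)
x = np.cos(np.pi * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
phi = linear_phase(t, positive_zero_crossings(t, x))
```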
reexpressing the time series of the system as ,\ ] ]then defines an instantaneous amplitude .the benefit of such a treatment is to reveal a synchronization of two or more such signals : whereas the instantaneous amplitudes and , therefore , the time series might look rather different , the phase evolution can display quite some similarity .if the average growth rates of phases match ( notwithstanding the fact that phases may diffuse rapidly ) the result is termed frequency locking .small phase diffusion , in addition to frequency locking , means that phases are practically locked during long episodes that occasionally are disrupted by phase slips caused by sufficiently large fluctuations .this elucidates the meaning of effective phase synchronization in stochastic systems .as should be clear from its definition the linear phase relies on the clear identification of marker events . with increasing noise intensitythis identification will fail since sufficiently large fluctuations may either mask true or imitate spurious marker events . on the other hand , in some cases , e.g. for excitable systems, fluctuations can be essential for the generation of marker events , i.e. , marker events may be noise - induced .as a final remark , let us mention that relative maxima of a differentiable signal correspond to positive - going zeros of its derivative .however , this seemingly trivial connection is overshadowed by complications if the derivative itself is a non - smooth function that does not allow to easily extract the number of zero crossings ( cf .[ sec:3 ] and fig .[ f_oscillator ] ) . in situationswhere a measured signal exhibits a lot of irregularity it is not quite clear how to define a phase the signal might look far from a perturbed harmonic or even periodic one and marker events can not be identified unambiguously .the concept of the analytic signal as introduced by gabor offers a way to relate the signal to an instantaneous amplitude and a phase .the physical relevance of a such constructed phase is a question of its own ; for narrow - band signals or harmonic noise it has a clear physical meaning whereas the general case requires further considerations ( cf .appendix a2 in ) .the analytic signal approach extends the real signal to a complex one ] . in fig .[ fig:3phases ] we show how the three alternative phases and agree in the description of a dichotomous switching process . note that the natural phase is related to the underlying process in real phase space and , hence , can not be deduced from the two - state signal .the advantage of the discrete phase is that it allows an analytic treatment of effective phase synchronization in stochastic bistable systems .this will be addressed in sec .[ sec:2 ] .the robustness of the discrete phase with respect to noise comes into play not when making the `` transition '' from the switching process to the instantaneous phase but when constructing the switching events from the continuous stochastic trajectory . at the level of a markovian switching process noiseenters only via its intensity that changes transition rates . 
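the analytic - signal phase introduced above is equally straightforward to obtain numerically , e.g. with scipy ; removing the mean of the record before the transform is a practical choice made here , not part of the construction itself .

```python
import numpy as np
from scipy.signal import hilbert

def hilbert_phase(x):
    """Unwrapped Hilbert phase: the argument of the analytic signal
    zeta(t) = x(t) + i*H[x](t), with H the Hilbert transform."""
    zeta = hilbert(x - np.mean(x))
    return np.unwrap(np.angle(zeta))

def hilbert_frequency(t, x):
    """Average phase velocity of the Hilbert phase over the record."""
    phi = hilbert_phase(x)
    return (phi[-1] - phi[0]) / (t[-1] - t[0])
```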
in systems of dimension are various ways how to define one or even more instantaneous phases : projecting the -dimensional phase space onto 2-dimensional surfaces , choosing poincar sections or computing the hilbert phase for each of the coordinates .many of the methods can only be done numerically and always require to consider the dynamics in detail in order to check whether a made choice is appropriate .we will not elaborate these details here but refer to ( especially chap .10 ) and references therein .in this section we consider a stochastic bistable system , for instance a noisy schmitt trigger , which is driven externally either by a dichotomous periodic process ( dpp ) or a dichotomous markovian process ( dmp ) .the dichotomous character of the input shall be either due to a two - state filtering or be rooted in the generation mechanism of the signal .the bistable system generates a dichotomous output signal . for conveniencewe choose to label input and output states with values and respectively .the dpp is completely specified by its angular frequency where is the half - period .accordingly , the dmp is fully characterized by its average switching rate . since both input and output are two - state variables it is possible to study phase synchronization in terms of discrete indicating the discrete phase considered throughout this section ] input and output phases and respectively .consequently , also the phase difference is a discrete quantity that can assume positive and negative multiples of . from the definition of the phase difference it follows that each transition between the output states _ increases _ by whereas each transition between the input states _ reduces_ by .transitions between the input states are governed by the rates where a single realization of the dpp is characterized by deterministic switching times . here, is the initial phase of the input signal rendering the periodic process non - stationary ( cyclo - stationary ) . to achieve strict stationarity we average the periodic dynamics with respect to which is equidistributed over the interval . in the absence of an input signalthe two states are supposed to be symmetric and the hopping rates for both directions are identical and completely determined by a prefactor , the energy barrier , and the noise intensity .the central assumption of our analysis is that the input signal modifies the transition rates of the output solely through the phase difference in the following way \,,\ ] ] where the function and the amplitude to keep the signal subthreshold .this definition introduces two noise - dependent time scales with . the function favours phase differences with even multiples of , i.e. , in - phase configurations .a description of the stochastic evolution of the phase difference is based on the probabilities to experience a phase difference at time conditioned by a phase difference at time . due to the discrete character of ( allowing only for multiples of ) we briefly denote . then the probabilistic evolution operator reads with from ( [ kram1 ] ) while the last two terms on the right hand side of eq .( [ ph1 ] ) account for the change of due to transitions of the output the operator reflects switches of the input with the related input switching rates given by eq .( [ inputrates ] ) . as mentioned above the non - stationary ( cyclo - stationary ) character of the dpp can be cured by averaging over the initial phase . 
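before turning to the averaged dynamics , the bookkeeping of the discrete phase difference introduced above can be made explicit : every output switch raises it and every input switch lowers it . in the sketch the increment per switching event is taken to be pi , so that two switches complete one 2 pi cycle ; this reading of the increments is an assumption made for the illustration .

```python
import numpy as np

def discrete_phase_difference(input_switches, output_switches):
    """Piecewise-constant Delta_phi(t) = phi_out(t) - phi_in(t) built from the
    switching epochs of two dichotomous signals: +pi per output switch,
    -pi per input switch (assumed increment, see lead-in)."""
    events = sorted([(s, +np.pi) for s in output_switches] +
                    [(s, -np.pi) for s in input_switches])
    times = np.array([s for s, _ in events])
    dphi = np.cumsum([step for _, step in events])
    return times, dphi   # Delta_phi(t) = dphi[k]  for  times[k] <= t < times[k+1]
```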
since `` temporal '' and `` spatial '' contributions in eq .( [ l ] ) are separable we can perform this average prior to the calculation of any moment of yielding from eq .( [ average ] ) we see that the -averaged dpp formally looks equivalent to a dmp with the transition rate .of course , initial phase averaging does not really turn a dpp into a dmp .the subtle difference is that while -moments of the dmp continuously change in time related -averages of the dpp ( before the -average ) are still discontinuous , hence , temporal derivatives of functions of have to be computed with care before initial phase averaging . using standard techniques we can derive the evolution equation for the mean phase difference from eq .( [ ph1 ] ) here , denotes the average frequency of the input phase and equals for the dmp and for the dpp . assuming higher moments uncoupled , i.e. , , eq .( [ ph2b ] ) is adler s equation arising in the context of phase locking . note that here both the frequency mismatch and the synchronization bandwidth are noise dependent .this elucidates the opportunity to achieve _ noise - induced _ frequency and effective phase locking . for the short - time evolutiona necessary condition for locking is which defines `` arnold tongues '' of synchronization in the vs. plane .the kinetic equation for can be evaluated explicitly yielding \mean{\sigma } + a_2-a_1\,.\ ] ] from eq .( [ sigdot ] ) we see that approaches a stationary value that exactly coincides with the stationary correlation coefficient between the input and output .the relaxation time is given by ^{-1} ] . performing the calculation for both the dmp and dpp yields \label{d*p}\ ] ] with for the dmp and for the dpp .the stationary correlator , i.e. , the asymptotic limit of , can be computed from the corresponding kinetic equation .inserting into eq .( [ d*p ] ) we thus find for the dmp ( cf .[ fig : epl2 ] top ) \label{dmp2}\end{aligned}\ ] ] and for the dpp ( cf . fig .[ fig : epl2 ] bottom ) \label{dpp2}\end{aligned}\ ] ] with given by eq .( [ ph4 ] ) and by eq .( [ ph5 ] ) . both eqs .( [ dmp2 ] ) and ( [ dpp2 ] ) possess the same structure with .since is never decreasing ( with increasing noise intensity ) the same is true for the sum of the first two terms .the possibility of synchronized input - output jumps is rooted in . sincethis term comprises only contributions scaling with powers of , which itself rapidly vanishes for small ( cf .inset of fig .[ fig : epl2 ] bottom ) , we first observe an increase of .an increase of signals the coherent behaviour of input and output and , consequently , endows with considerable weight to outbalance the increase of . as can be seen from the inset of fig .[ fig : epl2 ] bottom a negative slope is initiated only for a sufficiently large .however , the range of high input - output correlation does not determine the range of low diffusion coefficients since at rather high noise intensities the output switches with a large variance and thus , finally dominates over the ordering effect of .plotting the boundaries of the region where defines the tongues depicted in the inset of fig .[ fig : epl2 ] top . 
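in simulations or experiments the two quantities entering these synchronization diagrams can be estimated directly from phase - difference records ; a minimal sketch ( restricting the fit to the late half of the record is an arbitrary choice ) :

```python
import numpy as np

def locking_diagnostics(t, dphi_ensemble):
    """Mean growth rate of the phase difference and its effective diffusion
    coefficient, from an ensemble dphi_ensemble[k, i] = Delta_phi_k(t_i);
    D_eff is taken as half the late-time slope of the ensemble variance."""
    tail = t > 0.5 * t[-1]
    drift = np.polyfit(t[tail], dphi_ensemble.mean(axis=0)[tail], 1)[0]
    d_eff = 0.5 * np.polyfit(t[tail], dphi_ensemble.var(axis=0)[tail], 1)[0]
    return drift, d_eff

# frequency locking: drift ~ 0; effective phase synchronization: in addition
# d_eff passes through a minimum as a function of the noise intensity.
```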
as for the `` arnold tongues '' in fig .[ fig : epl1 ] a minimal amplitude varies with the mean input switching rate and shifts to lower values when considering slower signals .it is worth mentioning that the addition of an independent dichotomous noise that modulates the barrier can drastically reduce this minimal amplitude if this second dichotomous noise switches faster than the external signal .the minimum of observed in the region of frequency locking can be equivalently expressed as a pronounced maximum of the average duration of locking episodes . to show this we note that a locking episode is ended by a phase slip whenever the phase difference has changed , i.e. , increased or decreased , by the order of , or this quadratic equationcan be solved for and by inserting the noise - dependent expressions for and we can compute as a function of noise intensity where is either for the dmp or for the dpp respectively .the results for both the dmp and the dpp are plotted in fig .[ fig : tlock ] .a pronounced maximum for intermediate values of noise intensity clearly proves that noise - induced frequency synchronization is accompanied by noise - induced phase synchronization .as mentioned in sec . [ sec:1:phin ] positive - going zero crossings can be used to count completions of a cycle in oscillatory systems . in this view the average frequency ,i.e. , the average phase velocity , turns out to be the average rate of zero crossings which is captured by a formula put forward by rice .this elementary observation yields a novel way to quantify the average frequency of a phase evolution , henceforth termed the `` rice frequency '' , and to prove frequency locking in stochastic systems . to detail our derivation of the rice frequency in this section , we start from the following one - dimensional potential system subjected to gaussian white noise of intensity , i.e. , and being driven by the external harmonic force . in fig .[ f_oscillator ] we show a sample path for the harmonic oscillator where we used the friction coefficient , the natural frequency , and a vanishing amplitude of the external drive . as can be read off from fig .[ f_oscillator ] , the velocity basically undergoes a brownian motion and , therefore , constitutes a rather jerky continuous , but generally not differentiable signal . in particular ,near a zero crossing of there are many other zero crossings .in contrast to that , the coordinate is a much smoother signal since it is determined by an integral over a continuous function and , therefore , differentiable . in particular , near a zero crossing of there are no other zero crossings . in the following , we will take advantage of this remarkable smoothness property of that is an intrinsic property of the full oscillatory system ( [ e_osc ] ) and disappears when we perform the overdamped limit . in 1944, rice deduced a formula for the average number of zero crossings of a smooth signal like in the oscillator equation ( [ e_osc ] ) . in this rate formulaenters the probability density of and its time derivative , , at a given instant . the rice rate for passages through zero with positive slope ( velocity )is determined by this time - dependent rate is to be understood as an ensemble average . if the dynamical system is ergodic and mixing the asymptotic stationary rate can likewise be achieved by the temporal average of a single realization .let ) ] .using ergodicity , the relation \right ) } { t}\ ] ] is fulfilled for the process characterized by the stationary density . 
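the ergodic characterization above translates directly into an estimator : count the positive - going zero crossings of the smooth coordinate over a long record and multiply the rate by 2 pi .

```python
import numpy as np

def rice_frequency(t, x):
    """Time-averaged Rice frequency: 2*pi times the rate of positive-going
    zero crossings of x(t) over the observation window."""
    n_plus = np.count_nonzero((x[:-1] < 0.0) & (x[1:] >= 0.0))
    return 2.0 * np.pi * n_plus / (t[-1] - t[0])
```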
in the following we always consider stationary quantities . as explained in sec .[ sec:1:phin ] , the zero crossings can be used as marker events to define an instantaneous phase by linear interpolation , cf .( [ linphase ] ) .the related average phase velocity is the product of the ( stationary ) rice rate and and , hence , called the ( stationary ) rice frequency for a dynamics described by a potential in the absence of an external driving , i.e. , ( [ e_osc ] ) with , the stationary density can be calculated explicitly yielding \ ] ] where is the normalization constant . from this and the application of eq .( [ eq : ricefreq ] ) , it is straightforward to derive the exact result } { \int\limits_{-\infty}^{\infty}\exp\left[-{u(x)\over d}\right]\,dx}\,.\ ] ] without loss of generality we can set . in the limit , we can perform a saddlepoint approximation around the deepest minima ( e.g. for symmetric potentials ) . in this way we find the following expression valid for , i.e. , the small noise approximation , \over\sqrt{u^{''}(x_i ) } } \right]^{-1}\,.\ ] ] in the limit , we have to consider the asymptotic behaviour of the potential , , to estimate the integral in eq .( [ eq : potsys ] ) . for potentials that can be expanded in a taylor series about zero and that , therefore , result in a power series of order , i.e. , , we can rescale the integration variable by . for sufficiently large , the integralis dominated by the power term . in this waywe find the large noise scaling applying eqs .( [ eq : potsys ] ) and ( [ eq : smallnoise ] ) to the harmonic oscillator ( [ e_harmonicosc ] ) we immediately find that , independent of and for all values of .this is also in agreement with eq .( [ eq : largenoise ] ) .it follows because implies that , for large noise , the rice frequency does not depend on at all .note , however , that in the deterministic limit , i.e. , for , we have the standard result which explicitly does depend on the friction strength .therefore , the limit is discontinuous except in the undamped situation .the similarity of eqs .( [ eq : potsys ] ) and ( [ eq : smallnoise ] ) with rates from transition state theory will be addressed below when we discuss the bistable potential .it is well known that the rice frequency can not be defined for stochastic variables that integrate increments of the wiener process ( white noise ) . from eq .( [ e_osc ] ) this holds true for the velocity .this is so , because the stochastic trajectories of degrees of freedom being subjected to gaussian white noise forces are continuous but are of _ unbounded _ variation and nowhere differentiable .this fact implies that such stochastic realizations cross a given threshold within a fixed time interval infinitely often if only the numerical resolution is increased _ad infinitum_. this drawback , which is rooted in the mathematical peculiarities of idealized gaussian white noise , can be overcome if we consider instead a noise source possessing a finite correlation time , i.e. , coloured noise , see ref . . to this end, we consider here an oscillatory noisy harmonic dynamics driven by gaussian exponentially correlated noise , i.e. 
, with obeying and following the same reasoning as before we find for the rice frequency of as before likewise , upon noting that within a time interval , or , respectively , the rice frequency of the zero crossings with positive slope of the process is given by which is evaluated to read the result in ( [ eq : wx ] ) shows that for small noise colour the rice frequency for assumes a correction , as . in clear contrast, the finite rice frequency for the velocity process ( [ eq : col1b ] ) diverges in the limit of vanishing noise colour proportional to . to exemplify the relation between the rice frequency and the hilbert frequency ,again we consider the damped harmonic oscillator eq .( [ e_harmonicosc ] ) agitated by noise alone . in fig .[ f_hilbertoscillator ] we show a numerically evaluated sample path and the corresponding hilbert phase ( normalized to and modulo ) using the parameters .an important point to observe here is that around and the hilbert phase does not increase by after two successive passages through zero with positive slope .this shall illustrate the difference between the hilbert phase and the natural phase .in subsec .[ sec:1:phih ] this observation was already mentioned as a consequence of the nonlocal character of the hilbert transform . in particular , short and very smallamplitude crossings to positive are not properly taken into account by the hilbert phase since they only result in a small reduction of .this leads us to conjecture that quite generally holds .in fact , for the case of the harmonic oscillator that generates a stationary gaussian process one even can prove this conjecture by deriving explicit expressions for and . as usual , let denote the spectrum of the stationary gaussian process .then the rice frequency can be recast in the form of ^{1/2}\ , .\label{e_srice}\ ] ] a similar expression ( additionally involving an arrhenius - like exponential ) exists when considering not zero crossings , as in eq .( [ e_srice ] ) , but crossings of an arbitrary threshold . in was shown that the hilbert frequency of the same process is given by a similar expression , namely \ , .\label{e_shil}\ ] ] interpreting the quantity as a probability density , , we can use the property that the related variance is positive , i.e. , ^2 \,.\ ] ] taking the square - root on both sides of the last inequality immediately proves eq .( [ e_ungleich ] ) . 
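the inequality just proven can be checked numerically for a concrete spectrum . the sketch below reads eqs . ( [ e_srice ] ) and ( [ e_shil ] ) as the root - mean - square and the mean of the normalized spectral density , respectively , and uses a lorentzian - type spectrum of the undriven noisy harmonic oscillator , proportional to 1/[ ( omega_0 ^ 2 - omega ^ 2 ) ^ 2 + gamma ^ 2 omega ^ 2 ] , as an assumed example ; prefactors cancel in the ratios .

```python
import numpy as np
from scipy.integrate import quad

def spectral_frequencies(S, w_max=500.0):
    """Rice and Hilbert frequencies from a one-sided spectrum S(omega):
    omega_R = sqrt(<omega**2>) and omega_H = <omega>, the averages taken
    with respect to the normalized spectral density (truncated at w_max)."""
    norm = quad(S, 0.0, w_max)[0]
    m1 = quad(lambda w: w * S(w), 0.0, w_max)[0]
    m2 = quad(lambda w: w * w * S(w), 0.0, w_max)[0]
    return np.sqrt(m2 / norm), m1 / norm

omega0, gamma = 1.0, 0.5      # illustrative parameters
S = lambda w: 1.0 / ((omega0**2 - w**2) ** 2 + (gamma * w) ** 2)
w_R, w_H = spectral_frequencies(S)
print(w_R, w_H)               # w_R is close to omega0 and w_H < w_R
```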
using the spectrum of the undriven noisy harmonic oscillator and employing eqs .( [ e_srice ] ) and ( [ e_shil ] ) , it is easy to see that both and do not vary with .we have already shown above that .in contrast to this , is a monotonically decreasing function of that approaches from below in the limit .the probability density of the periodically driven noisy harmonic oscillator can be determined analytically by taking advantage of the linearity of the problem .introducing the mean values of the coordinate and the velocity , and , the variables obey the differential equation of the undriven noisy harmonic oscillator .in the asymptotic limit the mean values converge to the well known deterministic solution \end{aligned}\ ] ] with the common phase lag .therefore , after deterministic transients have settled the _ cyclo - stationary _ probability density of the driven oscillator reads with the gaussian density } .\ ] ] using eq .( [ eq : ricefreq ] ) the cyclo - stationary probability density ( [ e_pharm ] ) yields an oscillating expression for the rice frequency .the time dependence of this stochastic average can be removed by an initial phase average , i.e. , a subsequent average over one external driving period , the resulting analytical and numerically achieved values of the rice frequency as a function of the noise intensity are shown in fig .[ f_oscillatordriven ] for fixed and various values of . for small noise intensities the rice frequency is identical to the external driving frequency , whereas for large noise intensities the external drive becomes inessential and the rice frequency approaches .further insight into the analytic expression ( [ e_drivenosc ] ) is gained from performing the following scale transformations from which we immediately find the rescaled velocity inserting these dimensionless quantities into eq .( [ e_drivenosc ] ) yields i(\tilde a,\tilde{\omega}_0)&=&{1\over\pi } \int\limits_{-\delta}^{2\pi-\delta}\;\int\limits_0^{\infty } \tilde v\,\exp\big[\!\ ! -(\tilde v + \tilde a \sin\tilde t)^2 \nonumber\\[-.3 cm ] & & \qquad\qquad\qquad\qquad -(\tilde{\omega}_0\,\tilde a \cos\tilde t)^2 \big]\,d\tilde v\;d\tilde t \label{eq : i}\end{aligned}\ ] ] where we have defined further dimensionless quantities \tilde{\omega}_0 & = & \frac{{\omega}_0}{\omega } .\end{aligned}\ ] ] due to the periodicity of the trigonometric functions , the integral ( [ eq : i ] ) does not change when shifting the interval for the integration with respect to back to $ ] .hence , is only a function of and .an expansion for small yields \ ] ] which implies for large the opposite extreme , or , can be extracted from a saddlepoint approximation around and . following this procedure ,the integral ( [ eq : i ] ) gives the constant .this directly implies .the crossover between these two extremes occurs when the first correction term in ( [ eq : a20 ] ) is no longer negligible , i.e. , for when solved for the crossover noise intensity , this yields }\,,\ ] ] which , for the parameters used in fig .[ f_oscillatordriven ] , correctly gives values between and .in fig . [ f_oscillatordriven ] the parameters , and and , hence , are identical for all curves . solving with respectto shows that the curves become shifted horizontally as in the log - linear plot in fig .[ f_oscillatordriven ] . another way to explainthis shift is by noting that .the bistable kramers oscillator , i.e. , eq .( [ e_osc ] ) with the double well potential is often used as a paradigm for nonlinear systems . 
with reference to eq .( [ e_osc ] ) the corresponding langevin equation is given by which , in the absence of the external signal , , generates the stationary probability distribution with the normalization constant .using this stationary probability density and eq .( [ eq : ricefreq ] ) we can determine the rice frequency analytically . in fig .[ f_bioscstatic ] we depict this analytic result together with numerical simulation data including error bars .the simulation points perfectly match the analytically determined curve . as expected for the asymptotically dominant quartic term , i.e. , ( cf .sec.[sec:3:general ] , especially eq .( [ eq : largenoise ] ) ) , the rice frequency scales as for large values of . comparing the rice frequency formula , eq .( [ eq : ricefreq ] ) , with the forward jumping rate from the transition state theory , \label{eq : tstint}\ ] ] where \ , , \label{eq : semipf}\ ] ] and represents the corresponding hamiltonian , one can see that the difference between both solely rests upon normalizing prefactors .whereas the rate is determined by the division of the integral eq .( [ eq : tstint ] ) by the `` semipartition '' function , the rate is established by dividing the same integral eq .( [ eq : tstint ] ) by the complete partition function \ , .\label{eq : fullpf}\ ] ] particularly for symmetric ( unbiased ) potentials , i.e. , , this amounts to the relation , hence , at weak noise , , this relation simplifies to \ ; , \label{eq : tstraprox}\ ] ] wherein denotes the barrier height and the angular frequency inside the well ( ) .indeed , in the small - to - moderate regime of weak noise this estimate nicely predicts the exact rice frequency ( cf .[ f_bioscstatic ] ) .the periodically driven bistable kramers oscillator was the first model considered to explain the phenomenon of sr and it still serves as one of the major paradigms of sr . in its overdamped formit was used to support experimental data ( from the schmitt trigger ) displaying the effect of stochastic frequency locking observed for sufficiently large , albeit subthreshold signal amplitudes , i.e. , for . from a numerical simulation of the overdamped kramers oscillator and computing the hilbert phase it was also found that noise - induced frequency locking for large signal amplitudes was accompanied by noise - induced phase coherence, the latter implies a pronounced minimum of the effective phase diffusion coefficient \label{eq : deff1}\ ] ] occurring for optimal noise intensity .based on a discrete model , analytic expressions for the frequency and phase diffusion coefficient were derived that correctly reflect the conditions for noise - induced phase synchronization for both periodic and aperiodic input signals . to link the mentioned results to the rice frequency introduced above we next investigate the behaviour of the kramers oscillator with non - vanishing inertia .we show numerical simulations for eq .( [ e_biosc ] ) with the parameters and diverse values of in fig .[ f_nonlinearf ]. for larger values of , a region around appears where the rice frequency is locked to the external driving frequency . since for larger values of the external driving values of the noise parameter are needed to obtain the same rate for switching events , the entry into the locking region shifts to smaller values of for increasing . in fig .[ f_nonlinearrice1 ] we present numerical simulations for fixed and different values of the damping coefficient .note that the value of is slightly smaller than the critical value . 
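comparisons of the kind shown in fig . [ f_bioscstatic ] and the locking curves of fig . [ f_nonlinearf ] can be reproduced along the following lines . the sketch assumes the standard double well u(x) = x^4/4 - x^2/2 ( barrier height 1/4 ) , the fluctuation - dissipation scaling sqrt(2 gamma d) of the noise term , and the equilibrium density p(x , v) proportional to exp[-(v^2/2+u(x))/d ] for the undriven analytic value ; all parameter values are illustrative and not those of the figures , and at small d much longer runs are needed for reliable crossing statistics .

```python
import numpy as np
from scipy.integrate import quad

U = lambda x: 0.25 * x**4 - 0.5 * x**2            # double well, barrier 1/4

def rice_frequency_analytic(D, x_inf=30.0):
    """Undriven case: omega_R = sqrt(2*pi*D) * exp(-U(0)/D) / int exp(-U(x)/D) dx,
    obtained from p(x,v) ~ exp(-(v**2/2 + U(x))/D) and Rice's rate formula."""
    Z = quad(lambda x: np.exp(-(U(x) - U(0.0)) / D), -x_inf, x_inf)[0]
    return np.sqrt(2.0 * np.pi * D) / Z

def rice_frequency_simulated(D, gamma=0.25, A=0.0, Omega=0.1,
                             dt=5e-3, T=4000.0, seed=1):
    """Euler-Maruyama run of  x'' + gamma*x' = x - x**3 + A*sin(Omega*t) + noise,
    counting positive-going zero crossings of the coordinate x."""
    rng = np.random.default_rng(seed)
    x, v, crossings = 1.0, 0.0, 0
    amp = np.sqrt(2.0 * gamma * D * dt)
    for i in range(int(T / dt)):
        x_old = x
        v += dt * (-gamma * v + x - x**3 + A * np.sin(Omega * i * dt)) \
             + amp * rng.standard_normal()
        x += dt * v
        crossings += x_old < 0.0 <= x
    return 2.0 * np.pi * crossings / T

for D in (0.1, 0.2, 0.4):
    print(D, rice_frequency_analytic(D), rice_frequency_simulated(D))
# with A > 0 and Omega fixed, scanning D should reveal the locking plateau where
# the output Rice frequency stays close to Omega (cf. fig. [f_nonlinearf]).
```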
for smaller values of coupling regions appear since it is easier for the particle to follow the external driving for smaller damping . to check whether frequency synchronization is accompanied by effective phase synchronization we have also computed the averaged effective phase diffusion coefficient , this time defined by the following asymptotic expression ^ 2 \right\rangle\,.\ ] ] it should be clear that the instantaneous `` rice '' phase was determined via zero crossings . the connection with the instantaneous diffusion coefficient defined in ( [ eq : deff1 ] ) is established by applying the limit in fig .[ f_nonlineardiff1 ] we show numerical simulations of the effective phase diffusion coefficient as function of noise intensity .the phase diffusion coefficient displays a local minimum that gets more pronounced if the damping coefficient is decreased .indeed , phase synchronization reveals itself through this local minimum of the average phase diffusion coefficient in the very region of the noise intensity where we also observe frequency synchronization , cf .[ f_nonlinearf ] .the qualitative behaviour of the diffusion coefficient agrees also with a recently found result related to diffusion of brownian particles in biased periodic potentials .a necessary condition for the occurrence of a minimum was an anharmonic potential in which the motion takes place . in this biased anharmonic potentialthe motion over one period consists of a sequence of two events .every escape over a barrier ( arrhenius - like activation ) is followed by a time scale induced by the bias and describing the relaxation to the next minimum .the second step is weakly dependent on the noise intensity and the relaxation time may be even larger then the escape time as a result of the anharmonicity . for such potentialsthe diffusion coefficient exhibits a minimum for optimal noise , similar to the one presented in figs .[ f_nonlineardiff1 ] and [ f_nonlineardiff1a ] .the average duration of locking episodes can be computed by equating the second moment of the phase difference ( between the driving signal and the oscillator ) to . a rough estimate , valid for the regions where frequency synchronization occurs, i.e. , where the dynamics of the phase difference is dominated by diffusion , thus reads or , when expressed by the number of driving periods in this way we estimate from figs .[ f_nonlineardiff1 ] and [ f_nonlineardiff1a ] for and relevant varying between . in the previous exampleswe have shown how frequency synchronization , revealing itself through a plateau of the output frequency matching the harmonic input frequency , and reduced phase diffusivity together mark the occurrence of noise - enhanced phase coherence .optimal noise intensities were found in the range where one also observes sr ( in the overdamped system ) . in order to underline that under certain conditions sr exists but may not be accompanied by effective phase synchronization we present simulation results for the rice frequency and the diffusion coefficient in fig .[ fig : srnosync ] obtained for the bistable kramers oscillator with a friction coefficient and external frequency . for noise intensities the output frequency matches and nearby the overdamped kramers oscillator exhibits the phenomenon of sr , i.e. 
, one finds a maximum of the spectral power amplification .in contrast , we neither can find a minimum in the diffusion coefficient nor a plateau around meaning that no phase coherence and not even frequency synchronization can be observed .the reason is that the external signal switches much too fast for the bistable system to follow ; note that in the two - state description with arrhenius rates the prefactor ( cf .eq . [ a1a2 ] ) restricts the switching frequency from above .noise - induced phase coherence requires a device with a faster internal dynamics , i.e. , .we underline that the noise - induced phase synchronization is a much more stringent effect than stochastic resonance .this statement becomes most obvious when recalling that the spectral power amplification attains a maximum at an optimal noise intensity for arbitrarily small signal amplitudes and any frequency of the external signal .in contrast , noise - induced phase synchronization and even frequency locking are nonlinear effects and as such require amplitude and frequency to obey certain bounds ( see the `` arnold tongues '' in sec .[ sec:2 ] ) .we expect that the functioning of important natural devices , e.g. communication and information processing in neural systems or sub - threshold signal detection in biological receptors , rely on phase synchronization rather than stochastic resonance .the authors acknowledge the support of this work by the deutsche forschungsgemeinschaft , sfb 555 `` komplexe nichtlineare prozesse '' , project a 4 ( l.s .- g . and j.f .) , sfb 486 `` manipulation of matter on the nanoscale '' , project a 10 ( p.h . ) and deutsche forschungsgemeinschaft project ha 1517/13 - 4 ( p.h . ) .10 v. i. arnold , trans . of the am .soc . * 42 * , 213 ( 1965 ) ; e. ott , _ chaos in dynamical systems _ , ( cambridge university press , cambridge , 1993 ) .stratonovich , _ topics in the theory of random noise _ , ( gordon and breach , new york , 1967 ). h. fujisaka and t. yamada , prog .phys.*69 * , 32 ( 1983 ) ; a.s .pikovsky , z. phys .b * 55 * , 149 ( 1984 ) ; l.m .pecora and t.l .carroll , phys .rev . lett . * 64 * , 821 ( 1990 ) .rulkov , m.m .sushchik , l.s .tsimring , and h.d.i .abarbanel , phys .e * 51 * , 980 ( 1995 ) ; l. kocarev and u. parlitz , phys .lett * 76 * , 1816 ( 1996 ) .rosenblum , a.s .pikovsky , and j. kurths , phys .lett . * 78 * , 4193 ( 1997 ) ; s. taherion and y.c .lai , phys .e * 59 * , r6247 ( 1999 ) ; m.g .rosenblum , a.s .pikovsky , and j. kurths , phys .lett . * 76 * , 1804 ( 1996 ) .han , t.g .yim , d.e .postnov , and o.v .sosnovtseva , phys .83 * , 1771 ( 1999 ) .izhikevich , int .chaos * 10 * , 1171 ( 2000 ) ; bambi hu and changsong zhou , phys .e * 63 * , 026201 ( 2001 ) .v. anishchenko , a. neiman , a. astakhov , t. vadiavasova , and l. schimansky - geier , _ chaotic and stochastic processes in dynamic systems _ , ( springer , berlin , 2002 ) .l. schimansky - geier , v. anishchenko , and a. neiman , in _ neuro - informatics _ , s. gielen and f. moss ( eds . ) , _ handbook of biological physics _ , vol .4 , series editor a.j . hoff , ( elsevier science , 2001 ) .a. neiman , x. pei , d. russell , w. wojtenek , l. wilkens , f. moss , h.a .braun , m.t .huber , and k. voigt , phys .* 82 * , 660 ( 1999 ) , s. coombes and p.c .bressloff , phys .e * 60 * , 2086 ( 1999 ) and phys .e * 63 * , 059901 ( 2001 ) ; w. singer , neuron * 24 * , 49 ( 1999 ) ; r.c .elson , a.i .selverston , r. huerta , n.f .rulkov , m.i .rabinovich , and h.d.i .abarbanel , phys .lett . 
* 81 * , 5692 ( 1998 ) ; p. tass , m.g .rosenblum , j. weule , j. kurths , a. pikovsky , j. volkmann , a. schnitzler , and h .- j .freund , phys .* 81 * , 3291 ( 1998 ) ; r. ritz and t.j .sejnowski , current opinion in neurobiology * 7 * , 536 ( 1997 ) .b. schfer , m.g .rosenblum , and j. kurths , nature ( london ) * 392 * , 239 ( 1998 ) .b. blasius , a. huppert , and l. stone , nature ( london ) * 399 * , 354 ( 1999 ) .j.p.m . heald and j. stark , phys .lett . * 84 * , 2366 ( 2000 ) .freund , a.b .neiman , and l. schimansky - geier , europhys . lett . *50 * , 8 ( 2000 ) .l. callenbach , p. hnggi , s.j .linz , j.a .freund , l. schimansky - geier , phys .e ( in press ) .s.o . rice , bell system technical j. , * 23 /24 * , 1 - 162 ( 1944/1945 ) ; note sec .3.3 on p. 57 - 63 in this double volume therein .s.o . rice , in _ selected papers on noise and stochastic processes _ , n. wax , ed .189 - 195 therein , ( dover , new york , 1954 ) l. gammaitoni , p. hnggi , p. jung , and f. marchesoni , rev .mod . phys . * 70 * , 223 ( 1998 ) .anishchenko , a.b .neiman , f. moss , and l. schimansky - geier , phys .usp . * 42 * , 7 ( 1999 ) .a. neiman , a. silchenko , v. anishchenko , and l. schimansky - geier , phys .e * 58 * , 7118 ( 1998 ) .p. jung , phys .rep . * 234 * , 175 ( 1995 ) .n.n . bogoliubov and y.a .mitropolski , _ asymptotic methods in the theory of non - linear oscillations _ , ( gordon and breach sciences publishers , new york , 1961 ) .the explicit expression for the phase involving the has to be understood in the sense of adding multiples of to make it a continuously growing function of time .p. hnggi and p. riseborough , am .* 51 * , 347 ( 1983 ) .the statement requires the function defined in eq .( [ eq : bogmist1b ] ) to remain finite for ; this , however , is no severe restriction .winfree , j. theor .biol . , * 16 * 15 ( 1967 ) ; j. guckenheimer , j. math . biol . * 1 * , 259 ( 1975 ) .d. gabor , j. iee ( london ) * 93 * ( iii ) , 429 ( 1946 ) .a. pikovsky , m. rosenblum , and j. kurths , _ synchronization : a universal concept in nonlinear sciences _ , ( cambridge university press , cambridge , 2001 ) .vainstein and d.e .vakman , _ frequency analysis in the theory of oscillations and waves _ ( in russian ) , ( nauka , moscow , 1983 ) .r. carmona , w .- l .hwang , b. torresani , _ practical time - frequency analysis _ , ( academic press , san diego , 1998 ) .lachaux , e. rodriguez , m. le van quen , a. lutz , j. martinerie , and f. varela , int .chaos * 10 * , 2429 ( 2000 ) ; m. le van quen , j. foucher , j .-lachaux , e. rodriguez , a. lutz , j. martinerie , and f. varela , j. neurosci . meth . *111 * , 83 ( 2001 ) .deshazer , r. breban , e. ott , and r. roy , phys .* 87 * , 044101 ( 2001 ) .b. mcnamara and k. wiesenfeld , phys .a * 39 * , 4854 ( 1989 ) .cox , _ renewal theory _ ,( chapman & hall , london , 1967 ) . j.a .freund , a.b .neiman , and l. schimansky - geier , in p. imkeller and j. von storch ( eds . ) , stochastic climate models .progress in probability , ( birkhuser , boston , basel , 2001 ) .n. g. van kampen , _stochastic processes in physics and chemistry _ rev . and( north - holland , amsterdam , 1992 ) .r. adler , proc .ire * 34 * , 351 ( 1946 ) ; p. hnggi and p. riseborough , am . j. phys .* 51 * , 347 ( 1983 ) . c. van den broeck , physe * 47 * , 4579 ( 1993 ) ; u. zrcher and c. r. doering , phys .e * 47 * , 3862 ( 1993 ) .f. marchesoni , f. apostolico , and s. santucci , phys .e * 59 * , 3958 ( 1999 ) .a. neiman , l. schimansky - geier , f. moss , b. 
shulgin , and j. j. collins , phys .e * 60 * , 284 ( 1999 ) .b. shulgin , a. neiman , and v. anishchenko , phys .* 75 * , 4157 ( 1995 ) .r. rozenfeld , j.a .freund , a. neiman , l. schimansky - geier , phys .e 64 , 051107 ( 2001 ) .p. hnggi , p. talkner , and m. borkovec , rev .* 62 * , 251 ( 1990 ) .p. hnggi and h. thomas , phys. rep . * 88 * , 207 ( 1982 ) ; see sec . 2.4 therein .p. hnggi and p. jung , adv .* 89 * , 239 ( 1995 ) ; p. hnggi , f. marchesoni , and p. grigolini , z. physik b*56 * , 333 ( 1984 ) .r. benzi , g. parisi , and a. vulpiani , j. phys .a : math gen . * 14 * , l453 ( 1981 ) ; c. nicolis , solar physics * 74 * , 473 ( 1981 ) .a. neiman , l. schimansky - geier , f. moss , b. shulgin , and j. j. collins , phys .e * 60 * , 284 ( 1999 ) .b. lindner , m. kostur and l. schimansky - geier , fluct . andnoise letters * 1 * , r25 ( 2001 ) .freund , j. kienert , l. schimansky - geier , b. beisner , a. neiman , d.f .russell , t. yakusheva , and f. moss , phys .e * 63 * , 031910 ( 2001 ) .
the phenomenon of frequency and phase synchronization in stochastic systems requires a revision of concepts originally phrased in the context of purely deterministic systems . various definitions of an instantaneous phase are presented and compared with each other with special attention paid to their robustness with respect to noise . we review the results of an analytic approach describing noise - induced phase synchronization in a thermal two - state system . in this context exact expressions for the mean frequency and the phase diffusivity are obtained that together determine the average length of locking episodes . a recently proposed method to quantify frequency synchronization in noisy potential systems is presented and exemplified by applying it to the periodically driven noisy harmonic oscillator . since this method is based on a threshold crossing rate pioneered by s.o . rice , the related phase velocity is termed the rice frequency . finally , we discuss the relation between the phenomenon of stochastic resonance and noise - enhanced phase coherence by applying the developed concepts to the periodically driven bistable kramers oscillator . * studying synchronization phenomena in stochastic systems necessitates a revision of concepts originally developed for deterministic dynamics . this statement becomes obvious when considering the famous phase - locking effect : unbounded fluctuations that occur , for instance , in gaussian noise will always prevent the existence of a strict bound for the asymptotic phase difference of two systems . nevertheless , a reformulation of the synchronization phenomenon in the presence of noise is possible by quantifying the average duration of locking epochs that are disrupted by phase slips . in case that , where is some characteristic time of the dynamics , e.g. the period of an external drive or the inverse of some intrinsic natural frequency , it is justified to speak about effective phase synchronization . *
classic optimization settings assume that the problem data are known exactly . robust optimization , like stochastic optimization , instead assumes some degree of uncertainty in the problem formulation . most approaches in robust optimization formalize this uncertainty by assuming that all uncertain parameters are described by a set of possible outcomes , the uncertainty set . for general overviews on robust optimization , we refer to . while the discussion of properties of the robust problem for different types of uncertainty sets has always played a major role in the research community , only recently has the data - driven design of useful sets become a focus of research . in , the authors discuss the design of taking problem tractability and probabilistic guarantees of feasibility into account . the paper discusses the relationship between risk measures and uncertainty sets . in distributionally robust optimization , one assumes that a probability distribution on the data is roughly known ; however , this distribution itself is subject to an uncertainty set of possible outcomes ( see ) . another related approach is the globalized robust counterpart , see . the idea of this approach is that a relaxed feasibility should be maintained , even if a scenario occurs that is not specified in the uncertainty set . the larger the distance of to , the further relaxed becomes the feasibility requirement of the robust solution . in this work we present an alternative to constructing a specific uncertainty set . instead , we only assume knowledge of a nominal ( undisturbed ) scenario , and consider a set of possible uncertainty sets of varying size based on this scenario . that is , a decision maker does not need to determine the size of uncertainty ( a task that is usually outside his expertise ) . our approach constructs a solution for which the worst - case objective with respect to any possible uncertainty set performs well on average over all uncertainty sizes . the basic idea of variable - sized uncertainty was recently introduced in . there , the aim is to construct a set of robust candidate solutions that requires the decision maker to choose one that suits him best . in our setting , we consider all uncertainty sizes simultaneously , and generate a single solution as a compromise approach to the unknown uncertainty . we call this setting the _ compromise approach to variable - sized uncertainty_. we focus on combinatorial optimization problems with uncertainty in the objective function , and consider both min - max and min - max regret robustness ( see ) . this work is structured as follows . in section [ sec : var ] , we briefly formalize the setting of variable - sized uncertainty . we then introduce our new compromise approach for min - max robustness in section [ sec : comp ] , and for the more involved case of min - max regret robustness in section [ sec : comp2 ] . we present complexity results for the selection problem , the minimum spanning tree problem , and the shortest path problem in section [ sec : probs ] . in section [ sec : exp ] , we evaluate our approach in a computational experiment , before concluding this paper in section [ sec : conc ] . we briefly summarize the setting of . consider an uncertain combinatorial problem of the form with , and an uncertainty set that is parameterized by some size .
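to make the parameterized setting concrete before the examples are listed , the toy sketch below evaluates , for a small selection problem ( choose p out of n items ) , the worst - case objective under an interval - type set of size lambda together with its weighted average over all sizes ; the instance data and the uniform weight are illustrative assumptions . enumerating the toy instance already confirms that a nominal solution minimizes the averaged objective , anticipating the result derived in section [ sec : comp ] .

```python
import numpy as np
from itertools import combinations

c_hat = np.array([4.0, 7.0, 1.0, 3.0, 6.0])   # nominal costs (illustrative)
n, p, Lam = len(c_hat), 2, 1.0                # select p items; sizes in [0, Lam]
lams = np.linspace(0.0, Lam, 201)
w = np.full_like(lams, 1.0 / Lam)             # uniform weight over the sizes

def worst_case(x, lam):
    """max over c in the interval set of c.x: every selected item attains its
    upper bound (1 + lam) * c_hat_i."""
    return (1.0 + lam) * float(c_hat @ x)

def compromise(x):
    """Weighted average of the worst case over all uncertainty sizes."""
    vals = np.array([worst_case(x, lam) for lam in lams])
    return float(np.sum(w * vals) * (lams[1] - lams[0]))

def as_vector(sel):
    x = np.zeros(n)
    x[list(sel)] = 1.0
    return x

best = min(combinations(range(n), p), key=lambda s: compromise(as_vector(s)))
nominal = sorted(int(i) for i in np.argsort(c_hat)[:p])
print(sorted(best), nominal)                  # the same selection in both cases
```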
for example , * interval - based uncertainty } [ ( 1-{\lambda})\hat{c}_i,(1+{\lambda})\hat{c}_i] ] , or * ellipsoidal uncertainty with .we call the _ nominal scenario _ , and any that is a minimizer of p( ) a _ nominal solution_. in variable - sized uncertainty , we want to find a set of solutions that contains an optimal solution to each robust problems over all . here ,the robust problem is either given by the min - max counterpart or the min - max regret counterpart in the case of min - max robustness , such a set can be found through methods from multi - objective optimization in , where denotes the complexity of the nominal problem , for many reasonable uncertainty sets. however , may be exponentially large . furthermore , in some settings , a set of solutions that would require a decision maker to make the final choice may not be desirable , but instead a single solution that represents a good trade - off over all values for is sought .in this paper we are interested in finding one single solution that performs well over all possible uncertainty sizes . to this end , we consider the problem for some weight function , such that is well - defined .we call this problem the compromise approach to variable - sized uncertainty .the weight function can be used to include decision maker preferences ; e.g. , it is possible to give more weight to smaller disturbances and less to large disturbances . if a probability distribution over the uncertainty size were known , it could be used to determine . in the following , we consider ( c ) for different shapes of .[ th1 ] let } [ ( 1-{\lambda})\hat{c}_i,(1+{\lambda})\hat{c}_i] ] . then , a nominal solution is an optimal solution of ( c ) . as , we get therefore , a minimizer of the nominal problem with costs is also a minimizer of ( c ) .[ lem1 ] for an ellipsoidal uncertainty set with , it holds that this result has been shown in for .the proof holds analogously .[ th2 ] let be an ellipsoidal uncertainty set with .then , an optimal solution to ( c ) can be found by solving a single robust problem with ellipsoidal uncertainty .using lemma [ lem1 ] , we find to find a minimizer of , we can therefore solve the robust counterpart of ( p ) using an uncertainty set with .note that , if , i.e. , the compromise solution simply hedges against the average size of the uncertainty .in general , recall that this formula gives the centroid of the curve defined by . the results of theorems [ th1 ] and [ th2 ] show that compromise solutions are easy to compute , as the resulting problems have a simple structure .this is due to the linearity of the robust objective value in the uncertainty size .such linearity does not exist for min - max regret , as is discussed in the following section .we now consider the compromise approach in the min - max regret setting . in classic min - max regret ,one considers the problem with . in the following ,we restrict the analysis to the better - researched interval uncertainty sets } [ ( 1-{\lambda})\hat{c}_i,(1+{\lambda})\hat{c}_i] ] and for all in the following .all results can be directly extended to piecewise linear functions with polynomially many changepoints .we first discuss the objective function for some fixed .note that } \hat{c}_i ( 1-{\lambda}+ 2{\lambda}x_i ) y_i.\ ] ] hence , is a piecewise linear function in , where every possible regret solution defines an affine linear regret function , with .,scaledwidth=60.0% ] figure [ fig1 ] illustrates the objective function . 
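as a small numerical illustration of this piecewise linear structure , the following sketch ( not part of the original paper ; the item costs , the uniform weight function and the grid discretisation are illustrative assumptions ) evaluates the regret of a fixed selection on a grid of uncertainty sizes and approximates the compromise objective for the selection problem :

# A minimal numerical sketch (not the paper's code) of the compromise
# min-max regret objective for the selection problem under the interval
# sets U(lam) = prod_i [(1-lam)*c_i, (1+lam)*c_i].  The weight function w
# is taken to be uniform on [0,1]; the problem data are made up.
import numpy as np

c_hat = np.array([4.0, 7.0, 3.0, 9.0, 5.0])   # nominal costs (assumed)
p = 2                                          # number of items to select

def regret(x, lam):
    """Max regret of selection x for uncertainty size lam (interval case)."""
    # worst case for x: chosen items at (1+lam)*c, the others at (1-lam)*c
    worst_cost_x = (1.0 + lam) * np.dot(c_hat, x)
    # best response y: pick the p cheapest items under costs c_i*(1-lam+2*lam*x_i)
    adjusted = c_hat * (1.0 - lam + 2.0 * lam * x)
    best_response = np.sort(adjusted)[:p].sum()
    return worst_cost_x - best_response

def compromise_objective(x, grid=np.linspace(0.0, 1.0, 201)):
    """Approximate int_0^1 Reg(x, lam) d lam on a grid (uniform weight w)."""
    vals = np.array([regret(x, lam) for lam in grid])
    return np.trapz(vals, grid)

# compare the nominal selection with an alternative one
x_nominal = np.zeros_like(c_hat)
x_nominal[np.argsort(c_hat)[:p]] = 1.0
x_other = np.array([1.0, 0.0, 0.0, 0.0, 1.0])
print(compromise_objective(x_nominal), compromise_objective(x_other))

in an exact implementation the integral would instead be evaluated piecewise over the breakpoint set discussed next , since between two consecutive breakpoints the regret is affine in the uncertainty size .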
in red is the maximum over all regret functions , which defines . on the interval ] , and defines the regret on ] for a fixed .then is an optimal solution to the min - max regret selection problem .we assume that items are sorted with respect to .let be an optimal solution with for an item .we assume is the smallest such item .then there exists some with .consider the solution with for and , .let be a regret solution for .we can assume that , as must be one of the cheapest items .we can also assume , as is not among the cheapest items .let be the regret solution for .the solutions and differ only on the two items and .hence , the following cases are possible : * case and , i.e. , . we have * case and , for some with .note that this means , as otherwise , the regret solution of could be improved .hence , * case and for some with . as the costs of item have increased by using solution instead of ,the resulting two cases are analogue to the two cases above .overall , solution has regret less or equal to the regret of . note that this result does not hold for general interval uncertainty sets , where the problem is np - hard .it also does not necessarily hold for other combinatorial optimization problems ; e.g. , a counter - example for the assignment problem can be found in . finally , it remains to show that can also be computed in polynomial time .for the compromise min - max regret problem of minimum selection it holds that for any fixed , and there is a set with . if is fixed , then there are items with costs , and items with costs . the regret solution is determined by the smallest items .accordingly , when increases , the regret solution only changes if an item with , that used to be among the smallest items , moves to the largest items , and another item with becomes part of the smallest items .there are at most values for where this is the case .we define to consist of all ] , as only for such values of an optimal regret solution may change . hence , . as the size of polynomially bounded , can be computed in polynomial time , and we get the following conclusion .the compromise min - max regret problem of minimum selection can be solved in polynomial time .the min - max regret spanning tree problem in a graph has previously been considered , e.g. , in .the regret of a fixed solution can be computed in polynomial time , but it is np - hard to find an optimal solution .we now consider the compromise min - max regret counterpart ( c ) .let any spanning tree be fixed . to compute , we begin with and calculate a regret spanning tree by solving a nominal problem with costs . recall that this can be done using kruskal s algorithm that considers edges successively according to an increasing sorting with respect to costs .if increases , edges that are included in have costs ( i.e. , their costs increase ) and edges not in have costs ( i.e. 
, their costs decrease ) .kruskal s algorithm will only find a different solution if the sorting of edges change .as there are edges with increasing costs , and edges with increasing costs , the sorting can change at most times ( note that two edges with increasing costs or two edges with decreasing costs can not change relative positions ) .we have therefore shown : a solution to the compromise min - max regret problem of minimum spanning tree can be evaluated in polynomial time .if the solution is not known , we can still construct a set with size that contains all possible changepoints along the same principle .we can conclude : there exists a compact mixed - integer programming formulation for the compromise min - max regret problem of minimum spanning tree .however , we show in the following that solving the compromise problem is np - hard . to this end, we use the following result : the min - max regret spanning tree problem is np - hard , even if all intervals of uncertainty are equal to ] , then therefore , the min - max regret problem with costs ] , in the sense that objective values only differ by a constant factor and both problems have the same set of optimal solutions . in particular , a solution that maximizes the regret of with respect to cost intervals ] .we can conclude : the compromise problem of min - max regret minimum spanning tree is np - hard , even if for all ] be given .consider an instance of the compromise problem with for all , and .then hence , any minimizer of is also an optimal solution to the min - max regret spanning tree problem .for the shortest path problem , we consider as the set of all simple paths in a graph ( for the min - max regret problem , see , e.g. , ) . as for the minimum spanning tree problem ,the regret of a fixed solution can be computed in polynomial time , but it is np - hard to find an optimal solution . for the compromise problem ( c ) , we have : we can interpret the minimization problem as a weighted sum of the bicriteria problem the number of solutions we need to generate to compute can therefore be bounded by the number of solutions we can find through such weighted sum computations ( the set of extreme efficient solutions ) . for the compromise shortest path problem , it holds that . depending on the graph , the following bounds on the number of extreme efficient solutions ( see , e.g. , ) can be taken from the literature : * for series - parallel graphs , * for layered graphs with width and length , * for acyclic graphs , * for general graphs we can conclude : a solution to the compromise min - max regret problem of shortest path can be evaluated in polynomial time on series - parallel graphs and layered graphs with fixed width or length . note that the number of extreme efficient solutions is only an upper bound on . unfortunately , we can not hope to find a better performance than this bound , as the following result demonstrates . for any bicriteria shortest path instance with costs , for all , there is an instance of ( c ) and a solution where .let an instance of the bicriteria shortest path problem be given , i.e. , a directed graph with arc costs for all .as for all , we can assume w.l.o.g . 
that for all .we create the following instance of ( c ) .every arc is substituted by three arcs , and .we set , and ( see figure [ fignp ] for an example of such a transformation ) .let , and contain all edges of the respective type .additionally , we choose an arbitrary order of edges , and create arcs .we set costs of these arcs to be a sufficiently large constant .finally , let be the path that follows all edges in and .note that edges in can be contracted , but are shown for better readability .+ note that is sufficiently large so that no regret path will use any edge in .hence , if uses an edge in , it will also have to use the following edges in and , i.e. , corresponds to a path in the original graph .the regret of is therefore , if goes from to , all extreme efficient paths in the original graph are used to calculate .we now consider the complexity of finding a solution that minimizes .note that the reduction in uses interval costs of the form ] , which does not fit into our cost framework ] . note that for layered graphs , all paths have the same cardinality .hence , ( see section [ sec : mst ] ) , and the problem with costs ] for any .analog to the last section , we can therefore conclude : finding an optimal solution to the compromise shortest path problem is np - hard on layered graphs , even if for all ] . for type b costs ,we generate nominal costs in \cup[70,100] ] . for diagonal edges ,we determine their length by sampling from the same interval times , and adding these values , i.e. , all paths have the same expected length .we generate instances with length from 50 to 850 in steps of 100 , and set the number of diagonal edges to be for .the smallest instances therefore contain 102 nodes and 105 edges ; the largest instances contain 1,702 nodes and 1,830 edges . for each parameter combination ,we generate 20 instances , i.e. , a total of instances .the classic min - max regret shortest path problem on instances of both types is known to be np - hard , see .we investigate both types , as we expect the nominal solution to show a different performance : for layered graphs , the nominal solution is also optimal for , as for every path there also exists a disjoint path .therefore , the regret of a path with respect to is . for the second type of instances , a good solution with respect to min - max regretcan be expected to intersect with as many other paths as possible .we can therefore expect the nominal solution to be different to the optimal solution of .all experiments were conducted using one core of a computer with an intel xeon e5 - 2670 processor , running at 2.60 ghz with 20 mb cache , with ubuntu 12.04 and cplex v.12.6 .we solve the compromise approach to variable - sized uncertainty for each instance using the algorithms described in section [ sec : algo ] and record the computation times .average computation times in seconds are presented in table [ tab1 ] . in each columnwe average over all instances for which this parameter is fixed ; i.e. , in column width 5 we show the results over all 440 instances that have a width of 5 , broken down into classes of different length .the results indicate that computation times are still reasonable given the complexity of the problem , and mostly depend on the size of the instance ( width parameter ) and the density of the graph , while the cost structure has no significant impact on computation times ..average computation times to solve ( c ) in seconds . 
[ cols= " > ,> , > , > , > , > , > , > " , ] in our second experiment , we compare the compromise solution to the nominal solution ( which is also the min - max regret solution with respect to the uncertainty sets and ) , and to the min - max regret solutions with respect to , and .to compare solutions , we calculate the regret of the compromise solution for values of in $ ] .we take this regret as the baseline . for all other solutions, we also calculate the regret depending on , and compute the difference to the baseline .we then compute the average differences for fixed over all instances of the same size .the resulting average differences are shown in figure [ fig : exp ] for four instance sizes . to set the differences in perspective , the average regret ranges from to of the compromise solutionsare shown in the captions .+ by construction , a min - max regret solution with respect to has the smallest regret for this .generally , all presented solutions have higher regret than the nominal solution for small and for large values of , and perform better in between . by construction, the compromise solution has the smallest integral under the shown curve .it can be seen that it presents an interesting alternative to the other solutions by having a relatively small regret for small and large values of , but also a relatively good performance in between .we generate the same plots as in section [ sec : plots ] using the two - path instances .recall that in this case , the nominal solution is not necessarily an optimal solution with respect to .we therefore include an additional line for in figure [ fig : exp2 ] .+ it can be seen that the nominal solution performs different to the last experiment ; the regret increases with in a rate that part of the line needed to be cut off from the plot for better readability .the solution to performs very close to the compromise solution overall .additionally , the scale of the plots show that differences in regret are much larger than in the previous experiment .overall , it can be seen that using a robust solution plays a more significant role than in the previous experiment , as the nominal solution shows poor performance .the solutions that hedge against large uncertainty sets ( and ) are relatively expensive for small uncertainty sets and vice versa .the compromise solution ( as , in this case ) presents a reasonable trade - off over all uncertainty sizes .classic robust optimization approaches assume that the uncertainty set is part of the input , i.e. , it is produced using some expert knowledge in a previous step .if the modeler has access to a large set of data , it is possible to follow recently developed data - driven approaches to design a suitable set . in our approach , we remove the necessity of defining by using a single nominal scenario , and considering all uncertainty sets generated by deviating coefficients of different size simultaneously .the aim of the compromise approach is to find a single solution that performs well on average in the robust sense over all possible uncertainty set sizes . 
for min - max combinatorial problems, we showed that our approach can be reduced to solving a classic robust problem of particular size .the setting is more involved for min - max regret problems , where the regret objective is a piecewise linear function in the uncertainty size .we presented a general solution algorithm for this problem , which is based on a reduced master problem , and the iterative solution of subproblems of nominal structure . for specific problems ,positive and negative complexity results were demonstrated .the compromise selection problem can be solved in polynomial time .solutions to the compromise minimum spanning tree problem can be evaluated in polynomial time , but it is np - hard to find an optimal solution .for compromise shortest path problems , the same results hold in case of layered graphs ; however , for general graphs , it is still an open problem if there exist instances where exponentially many regret solutions are involved in the evaluation problem . in computational experiments we highlighted the value of our approach in comparison with different min - max regret solutions , and showed that computation times can be within few minutes for instances with up to 22,000 edges .m. goerigk and a. schbel .algorithm engineering in robust optimization . in l.kliemann and p. sanders , editors , _ algorithm engineering : selected results and surveys _ , volume 9220 of _ lncs state of the art_. springer , 2016 .final volume for dfg priority program 1307 .a. kasperski and p. zieliski .robust discrete optimization under discrete and interval uncertainty : a survey . in _ robustness analysis in decision aiding , optimization , and analytics _ , pages 113143 .springer , 2016 .
in classic robust optimization , it is assumed that a set of possible parameter realizations , the uncertainty set , is modeled in a previous step and is part of the input . as recent work has shown , finding the most suitable uncertainty set is in itself already a difficult task . we consider robust problems where the uncertainty set is not completely defined : only its shape is known , but not its size . such a setting is known as variable - sized uncertainty . in this work we present an approach for finding a single robust solution that performs well on average over all possible uncertainty set sizes . we demonstrate that the resulting problem can be solved efficiently for min - max robust optimization , but is more involved in the case of min - max regret , for which we provide positive and negative complexity results for the selection problem , the minimum spanning tree problem , and the shortest path problem . we introduce an iterative solution procedure , and evaluate its performance in an experimental comparison . * keywords : * robust combinatorial optimization ; min - max regret ; variable - sized uncertainty
[ [ section ] ] five books have so far been published in george r. r. martin s popular _ song of ice and fire _ series , , , , .it is widely anticipated that two more will be published , the first of which ( * ? ? ? * foreword ) will be entitled _ the winds of winter _each chapter of the existing books is told from the point of view of one of the characters .so far , ignoring prologue and epilogue chapters in some of the books , characters have had chapters told from their point of view . a chapter told from the point of view of a particular character be called a _pov chapter for _ .a character who has at least one pov chapter in the series will be called a _ pov character . _ [ [ section-1 ] ] the goal is to predict how many pov chapters each of the existing pov characters will have in the remaining two books ( especially . )this varies from character to character , since some major characters have been killed off and are unlikely to appear in future novels , whereas other characters are of minor importance and may or may not have chapters told from their point of view .[ [ section-2 ] ] no attempt is made to deal with characters who have not yet appeared as pov characters .this issue is discussed further in section [ newcharacters ] .the data consist of a matrix obtained from http://www.lagardedenuit.com/ , a french fansite .the rows of correspond to pov characters and the columns to the existing books in order of publication .the of is the number of pov chapters for character in book .the data are displayed in table [ m ] ..the data .novel titles are abbreviated using their initials , so for example agot = _ a game of thrones _ etc .obtained from http://www.lagardedenuit.com/wiki/index.php?title=personnages_pov .a similar table appears at http://en.wikipedia.org/wiki/a_song_of_ice_and_fire . [ cols="<,^,^,^,^,^",options="header " , ] 2ex it is easy to predict how many pov chapters certain characters will have in the next book . for example , if character was beheaded in book and has not appeared in subsequent books , most readers would predict that will have pov chapters in book .if we denote by the number of pov chapters for in book , we predict .the prediction is a _ point prediction .on the other hand , suppose it is believed that a certain character will have pov chapters in book .it is quite plausible that might have or pov chapters instead . on the other hand, it may be thought unlikely that will appear in ( too few ) or ( too many ) pov chapters . instead of the point prediction , it would be better to give a range of likely values for this is an _ interval prediction . _ for example, to say that the interval ] . the overall model is : for , and , with \\ \beta_i & \sim n(\mu_\beta , \sigma_\beta^2 ) \text { truncated to } [ 0,7]\end{aligned}\ ] ] where and . for fixed , the assumed to be conditionally independent given and .for fixed and , the and are assumed to be conditionally independent given the values of and and .[ [ section-7 ] ] to be explicit , let be the data . for define by the likelihood is proportional to where the integral is over the dimensions and the symbol stands for if is true and if is false .[ [ section-8 ] ] a model like [ model ] is often called a _hierarchical _ model .the and are called _ hyperparameters _ to distinguish them from the individual , and .the model is fitted using bayesian inference with non - informative priors on the location parameters , and and inverse gamma priors on the scale parameters and . 
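to make the structure of the model concrete , the following sketch ( an assumed simplification , not the authors ' r implementation ; the additional per - character parameters of the full model are not fully legible in the extracted text , so a single chapter rate per character is used instead ) draws a synthetic chapter - count matrix from a hierarchical poisson model with a truncated - normal random effect per character . data sets of this kind are what the correctness checks described below are run on .

# A minimal sketch (assumed simplification, not the authors' R code):
# each character i gets a chapter rate drawn from a normal distribution
# truncated to [0, 7]; the chapter counts per book are conditionally
# independent Poisson draws given that rate.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)
n_characters, n_books = 24, 5
mu, sigma = 2.0, 1.5          # hyperparameters (illustrative values)

# truncnorm is parameterised by (lo - mu)/sigma and (hi - mu)/sigma
a, b = (0.0 - mu) / sigma, (7.0 - mu) / sigma
rates = truncnorm.rvs(a, b, loc=mu, scale=sigma,
                      size=n_characters, random_state=rng)

# chapter-count matrix: rows = characters, columns = books
chapters = rng.poisson(lam=np.tile(rates[:, None], (1, n_books)))
print(chapters[:5])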
because intractable - looking integrals appear in the likelihood ( [ likelihood ] ) , the model is fitted using gibbs sampling . for the and ,samples are drawn from the marginal distribution using a histogram approximation .this is slow but easier to code than alternatives .[ [ algorithm ] ] at each iteration of the algorithm , a value of is sampled using the theory of the normal distribution and then , for each character , the values of , and are sampled in that order. then predictions for and are sampled using the definition of model [ model ] .after all iterations are complete , a burn - in is discarded and the output is thinned to make the resulting samples as uncorrelated as possible .this is useful for some purposes , such as drawing figure [ prob0 ] .[ [ section-9 ] ] the output of the algorithm is a collection of samples of and predictions and where .the algorithm is written in and uses the ` truncnorm ` package .model [ model ] does not fit the training data very well since there are so many zeroes in column affc in table [ m ] .it is known ( * ? ? ?* afterword ) , ( * ? ? ?* foreword ) that and were originally planned to be a single book , but it was later split into two volumes , each of which concentrates on a different subset of the characters .[ [ preprocess ] ] this problem can be approached either by ignoring it , by modelling , or by pre - processing the data .the model already has a lot of parameters and making it more complex is unlikely to be a good idea .ignoring the problem and fitting the model to table [ m ] is not too bad , but it was decided to pre - process the data by replacing by where where and are the number of chapters in books and respectively .this preserves the total number of chapters in each book , which may be of interest ( see section [ newcharacters ] . )[ [ section-10 ] ] another possible approach would be to treat books and as one giant book when fitting the model .the main disadvantage of this approach is that it decreases the amount of data available even further , although the resulting matrix would probably provide a better fit than to the chosen model .the gibbs sampler of section [ algorithm ] was run for iterations with a burn - in of and thinned by taking every sample , resulting in posterior samples of size .the algorithm was applied to the smoothed data of section [ datasmoothing ] .it was run several times with random starting points to check that the results are stable .only one run is recorded here .[ [ section-11 ] ] table [ posteriors1 ] gives the posterior predictive distribution of pov chapters for each pov character in book .table [ posteriors2 ] gives a similar distribution for book .( the results for book are of less interest , as new predictions for book should be made following the appearance of book . )graphs of the posterior distributions for book are given in figures [ posteriors_1 ] and [ posteriors_2 ] .many of the distributions are bimodal and are not well - summarised by a single credible interval .the distribution for tyrion has the highest variance , followed by jon snow ( see figure [ posteriors_1 ] . )aeron , areo , jon connington , arianne , , victarion , asha , barristan and , melisandre the distributions are identical ; apparent differences are due to sampling variation . 
]one of the most compelling aspects of the _ song of ice and fire _ series is that major characters are frequently and unexpectedly killed off .the probability of a character having zero pov chapters ( which is the first column of table [ posteriors1 ] divided by ) is therefore of interest .these values are plotted in figure [ prob0 ] . treating the posterior samples as independent, the error bars in figure [ prob0 ] indicate approximate confidence intervals of .note that although a character who has been killed off will have zero pov chapters , the converse is not necessarily true .the probabilities in figure [ prob0 ] are not based on events in the books , but solely reflect what the model can glean from table [ m ] . .the characters are ordered on the by the value of the posterior probability for book ( the black dots . )the blue circles are the posterior probabilities of having zero pov chapters in book .the error bars extend to and are intended to indicate when two posterior probabilities are roughly equal .the figure is discussed in sections [ zeroprobabilities ] to [ isjonsnowdead ] . ][ [ section-12 ] ] the characters in figure [ prob0 ] are arranged on the in order of their probability of having zero pov chapters in book .eddard and catelyn were already killed off in and ( arguably ) respectively .arys , who has the third highest posterior probability of having zero pov chapters , was killed off in , but it is misleading to be impressed by this . the reason why arys has a high posterior probability of having zero pov chapters is that he only ever appeared in one pov chapter , so is small and so , even if is large , there is a high probability that .in fact , in the smoothed data , the rows for melisandre and arys are exactly the same .the difference between the posterior distributions for these two characters is due to sampling variation ( and varies from one model run to another . 
)[ [ section-13 ] ] the characters between tyrion and aeron in figure [ prob0 ] all have roughly the same posterior probability of having zero pov chapters .they are mostly characters who have featured prominently in all the books since the beginning , together with the newer characters aeron , areo , arianne and jon connington whose posterior predictive distributions are identical and who have lower probabilities of non - appearance in book .[ [ section-14 ] ] the next group of characters are those who have had relatively few pov chapters , including quentyn who , despite having been killed off in , is assigned a low posterior probability of of having zero pov chapters in book since he has the same posterior predictive distribution as asha , barristan and victarion .[ [ section-15 ] ] finally , there are the characters cersei , brienne , jaime and samwell who have only recently become pov characters and have had a large number of pov chapters in the books in which they have appeared .the model suggests that the probability of jon snow _ not _ being dead is at least since this is less than the posterior probability of his having at least one pov chapter in book .given the events of , many readers would assess his probability of not being dead as being much lower than , but we must again point out that the model is unaware of the events in the books .the model can only say that , based on the number of pov chapters observed so far , he has about as much chance of survival as the other major characters .it is desirable to check that the model has been coded correctly .a way to check this is to generate a data set according to the model and then see whether the chosen method of inference can recover the parameters which were used to generate the data set .[ [ section-16 ] ] if the model had been fitted by frequentist methods , it would be possible to generate a large number of data sets , fit the model to each one , calculate confidence intervals for the hyperparameters , and check that the confidence intervals have the correct coverage .since the model has been fitted by bayesian methods , it can only be used to produce credible intervals . however , as flat priors were used , the posterior distributions for the hyperparameters should be close to their likelihoods and so bayesian credible intervals should roughly coincide with frequentist confidence intervals when the posterior distributions of the hyperparameters are symmetric and unimodal , which they are .[ [ inferencetest ] ] to test the method of inference , the model was fitted to data sets .each data set was generated from hyperparameters which were a perturbation of .these values were chosen because they were approximately the posterior medians for the hyperparameters obtained from one of the fits of model [ model ] to the smoothed data .the location parameters and were perturbed by adding noise and the scale parameters and were perturbed by multiplying by where . for each of the data sets , intervals were calculated by taking the middle of the posterior distributions for each of the hyperparameters , yielding credible intervals per .the results , plotted in figure [ inference_test ] ( left panel ) indicate that the credible intervals have roughly the correct coverage .[ inference_test ] model fits as described in section [ inferencetest ] and section [ inferencetest_pred ] . on the left , a comparison of the nominal and actual coverage of credible intervals for the hyperparameters . 
on the right , a comparison of the nominal and actual coverage of credible intervals for the predicted number of pov chapters.,title="fig : " ] [ [ section-17 ] ] note that there are choices of the hyperparameters which the chosen method of inference will not be able to recover .for example , data generated with will be practically indistinguishable from data generated with , so there is no hope of inferring the hyperparameters in this case .this is no great drawback as it should not affect the model s predictions , which are the topic of interest .[ [ inferencetest_pred ] ] the procedure of section [ inferencetest ] was carried out for the one - step - ahead predictions for book .the result , shown in figure [ inference_test ] ( right panel ) shows that the credible intervals have greater coverage than they should .this is because the number of pov chapters can only take integer values and so the closed interval $ ] obtained by taking the and quantiles of the posterior distribution will in general cover more than of the posterior samples .[ [ section-18 ] ] note once again that the purpose of these checks and experiments is to make sure that the model has been correctly coded .we now discuss how to evaluate its predictions .every predictive model should be applied to unseen test data to see how accurate its predictions really are .it will not be possible to test model [ model ] before the publication of but an attempt at validation can be made by fitting the model to earlier books and seeing what it would have predicted for the next book .[ [ validation_12 ] ] the model was tested by fitting it to books and in the series .only pov characters appear in these books , so the data consist of the upper - left submatrix of table [ m ] .figure [ validation ] shows the result of fitting the model to this matrix and comparing with the true values from the third column of table [ m ] .the intervals displayed are central ( solid lines ) and ( dotted lines ) credible intervals .the coverage is satisfactory but the intervals are much too wide to be of interest .[ validation ] ( plotted as dots ) from table [ m ] compared with central ( solid lines ) and ( broken lines ) credible intervals obtained from fitting the model to as described in section [ validation_12 ] .the characters have been sorted in increasing order of the posterior median.,title="fig : " ] [ [ section-19 ] ] fitting the model to the upper - left submatrix of table [ m ] consisting of pov chapters from the first three books gives more interesting output , but it is not clear how to evaluate the results because of the splitting of books and discussed above in section [ datasmoothing ] .[ [ section-20 ] ] we can also compare the model s predictions with preview chapters from which are said to have been released featuring the points of view of arya , arianne , victarion and barristan .given that there is at least one arya chapter , table [ posteriors1 ] indicates that there will probably be at least arya pov chapters and perhaps more .[ [ section-21 ] ] the model can generate data containing a row of zeroes , but there are no zero rows in the data to which the model is fitted , because by definition this would correspond to a character who has never been a pov character in the books .this is a source of bias in the model but it is not obvious how it can be avoided .the effect of the bias can be tested by repeating the simulations of section [ inferencetest ] , but deleting zero rows before fitting the model . 
for ,the coverage of a credible interval for a hyperparameter tends to be roughly .there is little to support the choice of the poisson distribution in model [ model ] other than that it has the smallest possible number of parameters .it is more common to use the negative binomial distribution for count data , but this would introduce extra complexity into the model , which is undesirable . with only values of , and available for finding the corresponding hyperparameters , it may not be possible to fit a ( truncated ) normal distribution in a meaningful way .consideration of the posterior samples of the , and suggest that they more - or - less follow the pattern which is evident in the data and that the shrinkage of these parameters towards a common mean , which is one of the benefits of using a hierarchical model , can not really be attained with so little data .for example , when the model is fitted to a data set containing a row in which the most recent entry is , the vast majority of posterior samples for the next entry in that row are always .this is one reason for smoothing the data in section [ datasmoothing ] before fitting the model , in preference to fitting the model directly to table [ m ] .if the number of pov characters was much larger , this might not be such a big problem .given , the model treats and as independent for .this is not a realistic assumption because if one character has more pov chapters , then the other characters will necessarily have fewer .again , addressing this would seem to over - complicate the model .the model ignores the introduction of new pov characters , although every book in the series has featured some new pov characters .we can , however , use the output from the fitted model to make guesses about new characters .the posterior distribution of the number of chapters in book told from the points of view of existing pov characters is unimodal with a mean of chapters , but typical books in the series so far have had about chapters .so we could estimate that there will be about chapters in told from the point of view of new pov characters . in the previous books , according to table [ m ] , the number of chapters told from the point of view of new pov characters has been and , so does not seem like an unreasonable guess .we could continue to make predictions in the hope of getting one right , but there is no merit in this .we hope that it will be possible to review the model s performance following the publication of .
predictions are made for the number of chapters told from the point of view of each character in the next two novels of george r. r. martin 's _ a song of ice and fire _ series , by fitting a random - effects model to a matrix of point - of - view chapters in the earlier novels using bayesian methods . * spoiler warning : readers who have not read all five existing novels in the series should not read further , as major plot points will be spoiled , starting with table [ m ] .
as in the previous paper we define the system as an aggregation of particles capable for autocatalytic reactions .symbols and are used for notations of _ autocatalytic and substrate particles _ , respectively .we denote by _ the particles of end - product _ which do not take part in the reactions .we assume that the system is `` open '' for particles , i.e. the number of particles is kept constant by a suitable reservoir . however , the system is strictly closed for the autocatalytic particles . in this paperwe investigate systems which are governed by reactions where and are the rate constants .these reactions became interesting since convincing speculations were published about the `` natural selection '' based on their properties .the organization of the paper as follows .after brief discussion of the kinetic rate equation , in section i we derive and formally solve the forward kolmogorov equation for the probability of finding particles in a system of volume at time moment . defining the conditions of the stationarity we determine the stationary probability , and analyze its properties . in sectionii we present a modified stochastic model of the autocatalytic reactions ( [ a1 ] ) which is capable of reproducing the solution of the rate equation , however , brings about such a large fluctuation in the stationary number of particles , that the mean value loses practically all of its information content . in sectioniii we define the lifetime of a system controlled by reaction ( [ a1 ] ) , and calculate exactly the extinction probability , as well as the mean value of the system lifetime .finally , in section iv we summarize the main conclusions .in order to make comparison , first the well - known results of the kinetic rate equation are briefly revisited , and then the stochastic model of the reactions ( [ a1 ] ) will be thoroughly analyzed .let be the number of particles in the volume of the system at the time moment , and denote by the number - density of the particles and by that of particles which is kept strongly constant by suitable reservoir . according to the kinetic law of mass actionwe can write ,\ ] ] where is the _ critical parameter _ of the reaction . taking into account the initial condition we obtain immediately the solution of ( [ z1 ] ) in the form : clearly we have two stationary solutions , namely it is easy to show that the solution of is stable , if , i.e. if , while is stable , if , i.e. if .let a small disturbance in .it follows from eq .( [ z1 ] ) that \;\delta c,\ ] ] and so , we see immediately , if , then , i.e. is stable , when . similarly , if , then , i.e. is stable , when .the stationary density of particles versus can be seen in fig .it is clear that is a `` bifurcation point '' , since if then there are two stationary solutions but among them only one , namely the solution is stable . by decreasing the density _ adiabatically _ below the critical value , the autocatalytic particles are dying out completely , and it is impossible to start again the process by increasing the density above the critical .( 5,6)(0,0)(0,1)5 ( 0,0)(1,0)6 ( 2,0)(3,4)3 ( 0,0)(1,0)2 ( 0,5.5)(0,0)[t] ( 6.5,0)(0,0)[t] ( 2,-0.5)(0,0) ( 1 , 2.5)(0,-1)2.5 ( 5 , 2.5)(-1,0)1.1 ( 1.1,3)(0,0)[t] ( 5.4,2.8)(0,0)[t] fig. : dependence of the stationary number - density of particles on the number - density .the thick lines refer to stable stationary values . 
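the following sketch integrates a concrete version of the rate equation numerically ; since the exact form of eq . ( [ z1 ] ) is not legible in the extracted text , the logistic - with - decay form dc/dt = alpha * c * ( c_a - c ) - beta * c is assumed , which reproduces the two stationary states and the bifurcation at the critical substrate density described above :

# A small numerical sketch of the deterministic rate-equation picture.
# The assumed form dc/dt = alpha*c*(c_A - c) - beta*c has the stationary
# states c = 0 and c = c_A - beta/alpha and a bifurcation at c_A = beta/alpha.
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta = 1.0, 1.0          # rate constants (illustrative)
c_crit = beta / alpha           # bifurcation value of the substrate density

def rate(t, c, c_A):
    return [alpha * c[0] * (c_A - c[0]) - beta * c[0]]

for c_A in (0.5 * c_crit, 2.0 * c_crit):
    sol = solve_ivp(rate, (0.0, 50.0), [0.1], args=(c_A,), rtol=1e-8)
    print(f"c_A = {c_A:.2f}: c(t->inf) ~ {sol.y[0, -1]:.4f}, "
          f"predicted stable state = {max(0.0, c_A - c_crit):.4f}")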
in the next subsectionwe will analyze the stochastic model of reversible reactions ( [ a1 ] ) .it will be shown that in the stochastic model the long time behavior of the number of autocatalytic particles is completely different from that we obtained by using the rate equation ( [ z1 ] ) .it is to mention that interesting and seemingly convincing speculations were published about the possibility of `` natural selection '' based on autocatalytic reactions in sets of prebiotic organic molecules and about the `` origin of life '' beginning with a set of simple organic molecules capable of self - reproduction .the essence of these speculations can be summarized as follows : let us consider different and independent autocatalytic particles , and denote by the corresponding bifurcation points . if , then the stationary density of all autocatalytic particles is larger than zero .when the density decreases _ adiabatically _ to a value lying in the interval , then the autocatalytic particles disappear from the system , and by increasing again the density of particles above , there is no possibility to recreate those particles which were lost . in this way some form of selection can be realized .when we take into account the stochastic nature of the autocatalytic reaction , then we will see that speculations of this kind can not be accepted .let the random function be the number of autocatalytic particles at the time moment .the task is to determine the probability of finding exactly autocatalytic particles in the system at time instant provided that at the number of particles was .assume that the number of the substrate particles is kept constant during the whole process , we can write for the equation : + \ ] ] {n+1}(t)\ ; \delta t + o(\delta t),\ ] ] and for the equation : where from these equations it follows immediately that where .if , then for the sake of completeness we derive the generating function equations and by using the equations ( [ z7 ] ) and ( [ z8 ] ) it is easy to show that satisfies the partial differential equation while the equation \ ; \frac{\partial g_{exp}(t , y)}{\partial y } - \beta ( 1-e^{-y})\ ; \frac{\partial^{2 } g_{exp}(t , y)}{\partial y^{2}}.\ ] ] the initial conditions are and , respectively , and in addition it is to note that . for many purposesit is convenient to use the logarithm of the exponential generating function ) .therefore , define the function the derivatives of which at , i.e. {y=0 } = \kappa_{j}(t ) , \;\;\;\;\;\ ; j = 1 , 2 , \ldots\ ] ] are the _ cumulants _ of .one can immediately obtain the equation \;\frac{\partial k(t , y)}{\partial y } - \beta ( 1-e^{-y})\ ; \left\{\frac{\partial^{2 } k(t , y)}{\partial y^{2 } } + \left[\frac{\partial k(t , y)}{\partial y}\right]^{2}\right\}.\ ] ] now , the initial condition is in order to simplify the notations define the vector and the matrix where and write the eqs .( [ z7 ] ) and ( [ z8 ] ) in the following concise form : the formal solution of this equation is where the components of are and the matrix is a _normal jacobi matrix_. as known , the eigenvalues of a normal jacobi matrix are different and real . 
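the formal solution can be made concrete on a truncated state space . in the sketch below the birth and death rates are an assumed reconstruction ( birth proportional to n , death proportional to n ( n - 1 ) plus a decay term ) consistent with the stationary equations quoted later , not a verbatim transcription of the paper 's rates ; the generator is the tridiagonal ( normal jacobi ) matrix used in the argument above , with vanishing column sums , a largest eigenvalue equal to zero and all other eigenvalues negative .

# A sketch of the formal solution p(t) = exp(Qt) p(0) of the forward
# Kolmogorov equation on a truncated state space, with assumed rates
# lam_n = a*n (birth) and mu_n = n*(n-1+b) (death), a and b dimensionless.
import numpy as np
from scipy.linalg import expm

a, b, n_max = 4.0, 0.5, 60            # illustrative parameters / truncation
Q = np.zeros((n_max + 1, n_max + 1))
for n in range(n_max + 1):
    birth = a * n if n < n_max else 0.0   # reflect at the truncation boundary
    death = n * (n - 1 + b)
    if n < n_max:
        Q[n + 1, n] = birth               # transition n -> n+1
    if n > 0:
        Q[n - 1, n] = death               # transition n -> n-1
    Q[n, n] = -(birth + death)            # column sums are zero

p0 = np.zeros(n_max + 1)
p0[5] = 1.0                               # start with 5 autocatalytic particles
p_t = expm(Q * 2.0) @ p0                  # distribution at t = 2 (arbitrary units)

eigvals = np.linalg.eigvals(Q)
print("largest eigenvalue ~", max(eigvals.real))   # ~ 0, all others negative
print("P(extinct by t = 2) =", p_t[0])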
if are the eigenvalues of , then the component of the vector can be written in the form for any .since to find the eigenvalues and the coefficients is not a simple task , we concentrate our efforts only on the determination of stationary solutions of eqs .( [ z7 ] ) and ( [ z8 ] ) .in order to show that the limit relations exist , we need a _theorem for eigenvalues of the matrix _ stating that and for every .the proof of the theorem can be found in * appendix a*. by using this theorem it follows from eq .( [ z20 ] ) that since in this case we can write from ( [ z7 ] ) immediately the stationary equations = n [ ( n+1 ) w_{n+1 } - ( n-1)w_{n } ] + b\;[(n+1 ) w_{n+1 } - n w_{n}],\ ] ] for all . summing up both sides of ( [ z23 ] ) from to , we obtain taking into account the eq .( [ z8 ] ) we should write that = w_{0 } + \sum_{k=1}^{\infty } w_{0k}\ ; e^{-|\nu_{k}|u},\ ] ] and we see that the condition of the stationarity is either , or , i.e. . if , then it follows from ( [ z24 ] ) that for all , and consequently , since if , i.e. if the autocatalytic particles do not decay , then and from ( [ z24 ] ) one obtains that taking into account that we can write for , while for we have for the calculation of factorial moments let us introduce the generating function in order to obtain the equation for the mean value of the number of particles we use the generating function equation ( [ z10 ] ) . introducing the time parameter we have - \beta\;[m_{2}(u)-m_{1}^{2}(u)],\ ] ] which clearly shows that the kinetic law of the mass action is violated , when the variance is not negligible . for the second momentwe get the equation \;m_{2}(u ) - 2\;m_{3}(u),\ ] ] in which appears the third moment .there are several methods to find approximate solution of , some of them were mentioned already in . here, we do not want to discuss the details , instead we are focussing our attention on the properties of _ the expectation value and variance in stationary state_. if the decay constant of particles is zero , then it is easy to show from eq .( [ z28 ] ) that the stationary value of the average number of autocatalytic particles in the system is equal to while _ the second factorial moment _ is consequently, the variance can be written in the form = m_{1}^{(st)}\left(1 - \frac{a}{e^{a } - 1}\right).\ ] ] if , then .the first important conclusion drawn from the stochastic model is that the average number of the autocatalytic particles in stationary state is different from zero , only when the decay rate constant of particles is zero .it means that _ there is no `` bifurcation '' point in the dependence of the average stationary number of particles on the number of particles _ , therefore speculations mentioned earlier about the `` natural selection '' are not supported by the stochastic model .the second conclusion is connected with _ the law of the mass action _ referring to the _ chemical equilibrium_. if , then the reversible reaction has to lead to an equilibrium state , in which the stochastic model results in an entirely different expression , namely the relative dispersion , i.e. 
clearly shows that the fluctuations of the number of autocatalytic particles become poisson - like when the number of substrate particles is increasing .the modification of the stochastic model is very simple .we assume that the probability of the reverse reaction is proportional to the average number of particles .therefore , the probability that a reverse reaction occurs in the time interval is nothing else than provided that the number of particles was exactly at time moment . accepting this assumptionwe can rewrite instead of . ] the equations ( [ z7 ] ) and ( [ z8 ] ) in the form : \ ; n\;\tilde{p}_{n}(t ) + \alpha n_a(n-1)\;\tilde{p}_{n-1}(t ) + \ ] ] (n+1 ) \;\tilde{p}_{n+1}(t ) , \;\;\;\;\;\ ; n = 1 , 2 , \ldots,\ ] ] and respectively .one can immediately see that the generating function satisfies the equation (1 - e^{-y})\right\}\;\frac{\partial \tilde{g}_{exp}(t , y)}{\partial y},\ ] ] with and . in the sequel . introducing the notations and , it is not surprising that the first moment is the solution of the equation ,\ ] ] which is exactly the same as ( [ z1 ] ) .if the initial condition is , then and one can see that if , i.e. if , then one obtains from ( [ d5 ] ) the value is the number of particles that corresponds to _ the bifurcation concentration _ introduced in the rate equation model .however , the modified stochastic model takes into account the randomness of the reactions , and hence gives a possibility for the determination of _ the variance of the number of particles _ versus time .it can be easily shown that the variance of is .\ ] ] in order to prove this relation we need the equation for the second moment , which can be derived from eq .( [ d3 ] ) . introducing the time parameter obtain \;m_{1}(u ) + 2[a - b - m_{1}(u)]\;m_{2}(u),\ ] ] and this can be rewritten in the form it is an elementary task to show that and from this we obtain immediately the variance ( [ d7 ] ) . taking into account the expression ( [ d5 ] ) for the variance of the number of particlescan be written in the following form , & \text{if , } \\\text { } & \text { } \\ \left[b u^{2 } + ( 2b + 1 ) u \right]/(1 + u)^{2 } , & \text{if , } \end{array } \right.\ ] ] where .\ ] ] from ( [ d7 ] ) we can conclude that [ ht ! ] 0.2 cm [ ht ! ] 0.2 cm in order to have some insight into the nature of the time behavior of the variance of the number of autocatalytic particles we have calculated the variance versus time curves for different values of the number of particles and for several decay rate constants .[ fig1 ] shows the time dependence of when at a fixed value of rate constant .we see that the variance versus time curves have a maximum and after that they decrease slowly to zero . 
in contrary to this , when , then the variance of the number of autocatalytic particles increase monotonously to infinity .it can be easily shown that and so , we can use for large the asymptotic formula the time dependence of the variance in the case when is seen in fig .[ fig2 ] for three decay rate constants .the variance converges to the infinity linearly with increasing time parameter , if , and to a constant value , if , so we can say that the fluctuation of the number of particles in the stationary state near the `` bifurcation '' point alters the possible conclusions based only on the average number of autocatalytic particles .we say that a system is in the state , when .obviously , the system is live at time instant , when .let us define the probability of transition by since the process is homogenous in time , and using eq .( [ z7 ] ) we can write that where .\ ] ] we see that the first equation is and the initial condition is given by .the random time due to the transition is _ the lifetime of the system _ which has been at in the state . it is evident that where is the probability that the lifetime is not larger than .the moments of the lifetime are given by the extinction of a system of state occurs when a transition to the state is realized for any .we define the extinction probability by in accordance with ( [ z27 ] ) we find that it is a remarkable result stating that a system being in any of states at a given time instant after elapsing sufficiently long time will be almost surly annihilated , if , and never dies , i.e. the extinction probability is zero , if .it is to mention that the statement ( [ z40 ] ) can be obtained from a nice lemma by karlin which can be formulated in the following way : introducing the notation where and are non - negative real numbers which are not necessarily equal to ( [ z37 ] ) , it can be stated that if while if by using the expressions ( [ z37 ] ) we see immediately that hence we can conclude that and so , applying the lemma we prove the statement ( [ z40 ] ) . to determine the transition probability , i.e. the probability is not an easy problem . instead, we show how to calculate the average lifetime .let us define the parameters and formulate the following statement called karlin s theorem ._ if , then the average lifetime of a system containing autocatalytic particles is given by where is defined by ( [ z41 ] ) , and in contrary , if , then _ the proof of this statement can be found in * appendix b*. now , by using this statement we would like to calculate the average lifetime of a system which is in the state at the moment . introducing the notations and by using the expressions ( [ z37 ] ) we can write where is the kronecker - symbol .define the sum for .if , then one can see immediately that i.e. the formula ( [ z45 ] ) should be used for the calculation of the average lifetime .first , determine .it follows from eq .( [ z45 ] ) that where is the confluent hypergeometric function .the next step is the calculation of the expression which can be rewritten into the form : after some elementary algebra we obtain and finally we have introducing a new index we have which can be transformed into the expression by using the identity takes a new form , namely \;dv.\ ] ] taking into account this formula the expression ( [ z50 ] ) can be rewritten in the form \;dv,\ ] ] which is convenient for numerical calculations . from this equationwe see that and we can prove the limit relation [ ht ! 
] 0.2 cm it is _ necessary and sufficient _ to show that since is a non - negative , monotonously increasing function of we can write immediately that and if , then consequently , we obtain the inequality and this proves the statement ( [ z52 ] ) .we calculated how the mean value depends on the ratio at three different values of the number of particles .the results are shown in fig .we can see that decreases rapidly with increasing , and if the values are smaller than , then we can observe that the larger is the number in the system the longer is the average lifetime .[ ht ! ] 0.2 cm that fig .[ fig5 ] shows is rather surprising .the mean value of the system lifetime does not depend practically on the number of particles to be found in the system at that time moment which the lifetime is counted from .we can state that the average lifetime of systems controlled by reactions ( [ a1 ] ) is already determined by several particles , and even a large increase of the number of particles does not effect significantly on the system lifetime .[ ht ! ] 0.2 cm on this basis imagine an `` organism '' which consists of particles capable of self - reproduction and self - annihilation . assume that the organism becomes dead if it loses the last particle .one might think that the greater is the number of particles in the organism the larger is irs average lifetime .contrary to this conviction , an organism containing , let us say particles hardly lives longer than that which contains only particles . by using the values and we obtain that and .it is rather surprising that the increase is only . in fig .[ fig6 ] we see the dependence of the logarithm of the mean value on the number of particles at three different .one can observe that the increase of results in a rather large lengthening of the average lifetime , i.e. the effect of the substrate particles on the process is much stronger than that of the autocatalytic particles .we assumed the distribution of reacting particles in the system volume to be uniform and introduced the notion of _ the point model of reaction kinetics_. in this model the probability of a reaction between two particles per unit time is evidently proportional to the product of their actual numbers . 
by using this assumptionwe constructed a stochastic model for systems controlled by the reactions provided that the number of particles is kept constant by a suitable reservoir , and the end - product particles do not take part in the reaction .we have shown that the stochastic model results in an equation for the expectation value of autocatalytic particles which differs strongly from the kinetic rate equation .further , we found that if the decay constant of the particles is not zero , then _ the stochastic description _ , in contrary to the rate equation description , _ brings about only one stationary state with probability _ , and it is the zero - state .it has been also proven that the probability of a nonzero stationary state is larger than zero , if and only if the decay rate constant is equal to zero .consequently , the average number of particles in the stationary state is larger than zero , if only .however , one has to underline that this average number is completely different from that which corresponds to the law of the mass action of reversible chemical reactions .we paid a special attention on the random behavior of _ the system lifetime _ , and derived an exact formula for the average lifetime .it has been shown that the mean value of the system lifetime does not depend practically on the number of particles to be found in the system at the time instant which the lifetime is counted from . for example , the lifetime of a system having particles at the beginning is larger only by % than the lifetime of a system containing particles at the time moment .as mentioned already , the eigenvalues of a normal jacobi matrix are real and different .we would like to prove the following theorem : * theorem . * _ the is an eigenvalue of the normal jacobi matrix defined by ( [ z14 ] ) and the other nonzero eigenvalues are all negative . _* proof . *denote by is the set of nonnegative integers . ]the elements of matrix .we see immediately that i.e. the sum of elements of any column of the matrix is equal to zero . strictly speaking, the theorem itself is a straightforward consequence of this property . if an eigenvalue , then and define a nonzero eigenvector let be an arbitrary nonzero vector and form the following expression : which can be rewritten in the form this equality can be valid only if and because is an arbitrary nonzero vector one can chose its components to be equal to unity . then taking into account the property ( [ c1 ] ) one has .e. is indeed an eigenvalue .now , we show if , then . since , and ,if , it follows from ( [ c2 ] ) that as is an arbitrary nonzero vector , let us chose it so that we obtain the inequality which can be valid only if .* theorem . * _ if , then the average lifetime of a system containing autocatalytic particles is given by where is defined by ( [ z41 ] ) , and in contrary , if , then _ * proof . * assume the system to be in the state at a given time instant and suppose that the first reaction after occurs at a random time moment .this reaction can result in a transition to either the state or with probabilities respectively .since is arbitrary , the equation is valid with probability . taking into account that one obtains from ( [ e1 ] ) the recursion where . introducing the difference after simple rearrangements we obtain the solution of whichcan be written in the form : by using the notation and taking into account the identity and the relation , we have if , then i.e. the average lifetime of the system is infinite . 
the proof is simple : it is obvious that , therefore it follows from ( [ e4 ] ) that for any , and if , then must be infinite . since , it is evident that , hence the statement is proven . if , then one can find a finite real number such that , consequently and it follows from this that taking this relation into account we can rewrite eq . ( [ e4 ] ) in the form : where is defined by ( [ z41 ] ) . introducing the notation , we have , the solution of which is nothing other than , and by substituting and we obtain immediately equation ( [ e0 ] ) .
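the recursion obtained by first-step analysis in the proof above can also be evaluated numerically . the sketch below computes the exact mean absorption time of a birth-death chain with an absorbing state at zero by summing the expected downward passage times ; the rate laws are the same illustrative choices as in the simulation sketch earlier , and the chain is truncated at a finite upper state , so the numbers are approximations rather than the values reported in the text .

....
#include <cstdio>
#include <vector>

// exact mean lifetime of a birth-death chain absorbed at n = 0, obtained by
// first-step analysis.  T[i] is the expected time to pass from state i down
// to i-1 and satisfies T[i] = 1/mu[i] + (lambda[i]/mu[i]) * T[i+1]; the mean
// lifetime starting from state n is then tau(n) = T[1] + ... + T[n].
// the rate laws (reproduction a*n, decay b*n, pairwise annihilation
// c*n*(n-1)/2) are illustrative assumptions; states above nMax are neglected.
int main()
{
    const double a = 1.0, b = 0.2, c = 2.0;
    const int nMax = 5000;                       // truncation level (T[nMax+1] ~ 0)

    std::vector<double> T(nMax + 2, 0.0);
    for (int i = nMax; i >= 1; --i)
    {
        double lambda = a * i;
        double mu     = b * i + 0.5 * c * i * (i - 1.0);
        T[i] = 1.0 / mu + (lambda / mu) * T[i + 1];
    }

    double tau = 0.0;
    for (int n = 1; n <= 1000; ++n)
    {
        tau += T[n];                             // tau(n) = sum of downward passage times
        if (n == 2 || n == 10 || n == 100 || n == 1000)
            std::printf("n = %4d  mean lifetime = %g\n", n, tau);
    }
}
....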
we analyzed the stochastic behavior of systems controlled by the autocatalytic reaction , provided that the distribution of reacting particles in the system volume is uniform , i.e. the point model of reaction kinetics introduced in arxiv : cond-mat/0404402 can be applied . assuming the number of substrate particles to be kept constant by a suitable reservoir , we derived the forward kolmogorov equation for the probability of finding autocatalytic particles in the system at a given time moment . we have shown that the stochastic model results in an equation for the expectation value of autocatalytic particles which differs strongly from the kinetic rate equation . it has been found that not only is the law of mass action violated , but the bifurcation point also disappears from the well-known diagram of particle- vs. particle-concentration . therefore , speculations about the role of autocatalytic reactions in processes of `` natural selection '' can hardly be supported .
radiative transfer , the process that describes radiation propagating through and interacting with matter , is a general problem that is encountered in all areas of astronomy ( and far beyond ) . due to its high dimensionality and nonlocal and nonlinear behaviour ,it is generally considered as one of the most challenging problems in numerical astrophysics .recent years have seen an impressive advancement of three - dimensional ( 3d ) radiative transfer studies , thanks to the increase in computational power , the availability of a wealth of new observational constraints and the development of new algorithms . among the different approaches available to solve the radiative transfer problem ,the monte carlo method is generally the most popular one .the first monte carlo radiative transfer codes were developed more than four decades ago , and since then the method has steadily increased its market share in all fields where 3d radiative transfer is important .many powerful monte carlo codes are available in various fields of computational astrophysics , including dust radiative transfer , ly line transfer , ionising radiation transport , neutron transport and neutrino radiation transport . these and many other papers and monographs discuss at length the main algorithms behind monte carlo radiative transfer and the various proposed improvements and acceleration techniques . a very different aspect of monte carlo radiative transfer codes that has received very little attention in the literature , is the setup of a suite of components that describe the distribution of the sources and sinks in the radiative transfer code ( i.e. , the objects that add radiation to or remove radiation from the radiation field ) .this is an aspect outside the real core of monte carlo radiative transfer problem .virtually all of the available codes require the user to hard - code the model makeup for each distinct problem ( although the results of hydrodynamical simulations can generally be processed out of the box , given the appropriate import module ) .we argue , however , that it is very useful for a radiative transfer code to provide a suite of input models , or _geometries _ , as built - in components .such toy models can provide a low - threshold introduction to new users .they also have a crucial role in a more scientific way : toy input models are used to benchmark different codes , to investigate the effects of dust attenuation on the apparent structural parameters of galaxies , or to investigate the physical impact of e.g. clumpiness or spiral arms .finally , they are essential for so - called inverse radiative transfer , i.e. when a parameterised radiative transfer model is fitted to observational data . 
in order to be useful for these goals, the suite of input models should be diverse enough and contain models with different degrees of complexity , ranging from the metaphorical spherical cows to more realistic toy models that consider , for example , spheroidal and triaxial geometries , including bars , spiral arms and clumpy distributions .setting up such a suite is more complex than it might appear at first sight .while each model is in principle completely determined by the 3d density distribution , the setup requires more than just an implementation for this density function .a crucial aspect of monte carlo radiative transfer codes is the emission of a multitude of simulated photon packets from random locations sampled from the source density distribution .so each model in the suite should contain a routine that generates random numbers according to .moreover , this random position generation needs to be very efficient , since the random positions are sampled extremely often . in this aspect ,monte carlo radiative transfer simulations differ from other codes where random positions need to be sampled from arbitrary density distributions , e.g. to generate the initial conditions for n - body or hydrodynamical simulations .the efficient generation of random positions from complex 3d density distribution is not a trivial task .another aspect that needs consideration is the organisation of such a suite of input models .one could provide a parameterised model that can cover every possible option , but this would quickly lead to an explosion of options that is hard to overview and maintain , and would inevitably contain substantial code duplication . in this paper , we describe how these issues are dealt with in the publicly available 3d monte carlo dust radiative transfer code skirt .while we summarise the relevant information for the discussion here , an in - depth description of the architecture and overall design of the skirt code can be found in .it should also be noted that the presented design issues , while discussed in the context of skirt and thus dust radiative transfer , are fully applicable to other monte carlo transport problems .this paper is laid out as follows . in section [ techniques.sec ]we review the general techniques to generate random vectors from multivariate distributions that are available in the specialised numerical analysis literature , but less so in the astrophysics community . in section [ geometrysetup.sec ] we present the general setup of the suite of models for the skirt code . in section [ buildingblocks.sec ]we describe the various building blocks currently present in the skirt code , and in section [ decorators.sec ] we present a number of decorators that can be used to combine and add complexity to simple building blocks .some decorators are analysed in more detail , focusing on the methods used to efficiently generate random positions . 
in section [ discussion.sec ]we discuss the advantages of the decorator - style design of our suite of components , and we critically investigate an alternative option in which random positions are generated using a generic routine rather than customised generators .we show that , while such an approach is simpler to implement and might hence seem an attractive alternative , it can not compete with our approach , in terms of accuracy and efficiency .section [ conclusions.sec ] sums up .the generation of random numbers from univariate distributions is a well - known topic in numerical analysis .a number of standard methods are widely used and clearly described in standard textbooks ( e.g. , * ? ? ?additional techniques that are less known in the ( astro)physics community include the acceptance - complement method and the forsythe - von neumann method .for an extensive overview of random number generation from univariate distribution , see .unfortunately , the generation of random vectors from multivariate distributions is much more complex .the only multivariate distributions from which random vectors can be generated directly are those where the density can be written as a product of independent univariate density distributions . in general , however , more advanced techniques are necessary .the inversion method , also known as inversion sampling , is the most popular method for univariate generation problems .the basis of this method is the following : if is distributed according to a density , then the variable defined as the solution of the equation is distributed according to the density if we now want to generate a number from a given density , we can take a uniform distribution and set where is the cumulative distribution corresponding to the density . the inversion method can hence be used to generate random variates with an arbitrary density , provided that the inverse function to the cumulative distribution is explicitly known .classical examples include the exponential distribution , the cauchy distribution , the rayleigh distribution and the logistic distribution .formula ( [ transfmethod1d ] ) can directly be expanded to multiple dimensions : if is distributed according to the joint probability distribution , then the vector defined as the solution of the vector equation is distributed according to the joint distribution where is the jacobian determinant .probably the most famous application of this formula is the box - muller method to generate random normally distributed deviates .another interesting special application is the case of a linear transformation . in this case, the transformation can be written as a matrix multiplication , and the distribution of is with the absolute value of the determinant of .this kind of transformation is particularly useful for the generation of random vectors with a given dependence structure , as measured by the covariance matrix . 
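as a concrete illustration of the inversion method for univariate distributions , the sketch below generates exponential and rayleigh deviates from uniform ones ; both cumulative distributions are invertible in closed form . the code is a generic illustration and not part of the skirt interface .

....
#include <cmath>
#include <random>

// inversion sampling: if u is uniform on (0,1) and F is the cumulative
// distribution of the target density, then x = F^{-1}(u) follows that density.

// exponential density p(x) = exp(-x/beta)/beta  ->  F^{-1}(u) = -beta ln(1-u)
double randomExponential(double beta, std::mt19937& rng)
{
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    return -beta * std::log(1.0 - uni(rng));
}

// rayleigh density p(x) = (x/s^2) exp(-x^2/2s^2)  ->  F^{-1}(u) = s sqrt(-2 ln(1-u))
double randomRayleigh(double sigma, std::mt19937& rng)
{
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    return sigma * std::sqrt(-2.0 * std::log(1.0 - uni(rng)));
}
....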
in general , however , it is not straightforward to use this identity ( [ transfmethodmd ] ) to construct a method that can be used to generate random positions from an arbitrary multidimensional distribution .apart from the inversion method , the rejection method , also known as the acceptance - rejection method , is the most popular method to sample nonuniform random numbers from univariate distributions .the basic idea behind the method is that , if one wants to sample a random number from a function , one can sample uniformly from the 2d region under the graph .more concretely , assume that , where is another distribution from which random numbers are easily generated , and is the so - called rejection constant ( it obviously satisfies the condition ) .one then generates a uniform deviate and a random number from the reference distribution , and calculates the quantity .this procedure is repeated until , in which case is the desired random number .one of the advantages of the rejection method is that it does not require that the cumulative distribution function be analytically known , let alone be invertible . however , its effectiveness depends on how accurate is approximated from above by .less accurate approximation leads to a greater chance of rejection ; on average iterations of the loop are required before one successful random number is generated . moreover, the reference distribution should be such that random numbers can be easily generated from it and that the computation of is simple . for a range of standard distributions , such as the gamma distribution and the poisson distribution ,efficient reference functions can be constructed .the rejection method is by no means limited to univariate distributions , and can immediately be applied to multivariate generation problems .however , the rejection rate typically increases rapidly when going from one to more dimensions , which decreases the efficiency of the method .moreover , the design of a suitable reference distribution becomes much more complicated .the composition method or probability mixing method is another important technique in both univariate and multivariate random number generation . rather than a method on itself to generate non - uniform random numbers , it is a principle to facilitate and speed up other random number generating methods . the simple idea behind composition is to decompose the distribution as a weighted sum , with all normalised densities , and the weights a probability vector ( i.e. , all and ) . 
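before returning to the composition method in more detail , the following sketch illustrates the rejection method in its simplest univariate form , with a uniform reference density and a constant rejection bound ; the target density is an arbitrary textbook example chosen only for illustration , not a distribution used in skirt .

....
#include <random>

// rejection sampling from the density p(x) = 6 x (1-x) on [0,1], using the
// uniform reference density g(x) = 1 and rejection constant c = 1.5 (the
// maximum of p).  on average c iterations are needed per accepted sample.
double randomBeta22(std::mt19937& rng)
{
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    const double c = 1.5;
    while (true)
    {
        double x = uni(rng);              // candidate from the reference density
        double u = uni(rng);
        if (u * c <= 6.0 * x * (1.0 - x)) // accept with probability p(x)/(c g(x))
            return x;
    }
}
....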
to generate a random from the distribution , we first generate a random integer number from the discrete probability vector , and subsequently generate a random from the density .a prerequisite for this method to work is that , obviously , the decomposition can be done efficiently , and that the complexity of the problem is reduced by the decomposition .finally , a powerful technique that applies only to multivariate random number generation is the so - called conditional distribution method .it is based on the bayesian identity which expresses that a joint distribution of two variables can be calculated as the product of the marginal distribution of the former variable and the conditional distribution of the latter .expanding it to multiple dimensions , the beauty of this technique is that it reduces the multidimensional generation problem to a sequence of independent univariate generation problems .the main drawback is that it can only be used when much detailed information is known about the distribution . in particular , it is by far not always the case that marginal and conditional distributions are easily calculated in closed form .the standard textbook example of this technique is the multivariate cauchy distribution .skirt is a multi - purpose 3d monte carlo dust radiative transfer code that is mainly used to simulate dusty galaxies and active galactic nuclei .the code was designed as a highly modular software , with a particular consideration for the development of a flexible and easy user interface and the use of proven software design principles as described by .the skirt code offers a wealth of configurable features that are ready to use without any programming .in particular , the skirt code is equipped with a suite of input model components , the so - called classes , that can be used to represent distributions of either sources or sinks .essentially , the suite consists of a number of classes that all inherit from an abstract class , for which the c++ class interface looks like .... class geometry { public : geometry ( ) ; virtual ~geometry ( ) ; virtual double density(position x ) const = 0 ; virtual position generateposition ( ) const = 0 ; } .... both the and functions are pure virtual functions , which means that each model in the suite should provide a routine that implements the ( normalised ) density , and a routine that generates random positions according to this density . for example , the interface of a concrete class that contains only a single parameter would look as follows .... class concretegeometry : public geometry { public : concretegeometry(double p ) { _ p = p } double density(position x ) const ; position generateposition ( ) const ; private : double _ p ; } .... 
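to make this interface concrete , the following sketch implements , as free functions with an explicit random engine for brevity , the density and random-position routines that such a concrete spherical subclass would provide , using the plummer model with scale length b as an example . the normalised density and the closed-form inversion of the cumulative mass profile are standard results , but the code itself , including the minimal position type reused in later sketches , is only an illustration and not the actual skirt implementation .

....
#include <cmath>
#include <random>

struct Position { double x, y, z; };

// normalised plummer density with scale length b:
//   rho(r) = 3 / (4 pi b^3) * (1 + r^2/b^2)^(-5/2)
double plummerDensity(const Position& v, double b)
{
    const double pi = 3.14159265358979323846;
    double r2 = v.x*v.x + v.y*v.y + v.z*v.z;
    return 3.0 / (4.0*pi*b*b*b) * std::pow(1.0 + r2/(b*b), -2.5);
}

// random position drawn from the plummer density: the radius follows from
// inverting the cumulative mass profile M(r) = r^3 / (r^2 + b^2)^(3/2),
// and the direction is isotropic (uniform azimuth, uniform cos(theta)).
Position generatePlummerPosition(double b, std::mt19937& rng)
{
    const double pi = 3.14159265358979323846;
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    double u = uni(rng);
    double r = b * std::cbrt(u) / std::sqrt(1.0 - std::pow(u, 2.0/3.0));
    double cost = 2.0*uni(rng) - 1.0;                // uniform cos(theta)
    double sint = std::sqrt(1.0 - cost*cost);
    double phi  = 2.0*pi*uni(rng);                   // uniform azimuth
    return { r*sint*std::cos(phi), r*sint*std::sin(phi), r*cost };
}
....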
for the design of this suite of input models , a naive option would be to provide a set of parameterised models that contain free parameters to control every possible option .a typical standard component of such a suite would be a very generalised plummer model , with free parameters setting the location of the centre , the orientation with respect to the coordinate system , the scales , the flattening describing potential triaxiality , possible degrees of clumpiness , etc .this approach has a number of strong disadvantages .clearly , it would quickly lead to an explosion of different options that is hard to overview .it would inevitably lead to substantial code duplication ( nearly all the code would need to copied if we would consider a hernquist profile instead of a plummer profile ) and code that is virtually impossible to maintain ( the code for all models would need to be updated if a new feature is added or altered ) . also concerning efficiency, such a design would not be optimal .indeed , a random position generating routine for such a generalised plummer model is hard to construct and is certainly much less efficient than the simple routine that is possible for a plain spherical plummer model ( using the inversion method ) . to overcome these problems, we adopted a completely different approach that is much simpler but still provides the flexibility and functionality needed to set up complex models .this is achieved using a combination of simple base models on the one hand , and so - called decorator geometries on the other hand .decorator geometries are not real models on their own , but they apply modifications upon other models in interesting ways , following the _ decorator _ design pattern . in object - oriented programming ,a decorator attaches additional responsibilities to an object dynamically and provides a flexible and powerful alternative to subclassing for extending functionality . in our present context, a decorator geometry is a special kind of geometrical model ( i.e. , it is also a c++ class in the general suite ) that takes one or more other components and adds a layer of complexity upon them .simple examples of decorators that can easily be implemented are the relocation of the centre of a given component , or the rotation of a given model with respect to the coordinate system .more complex decorators deform a spherical model to a triaxial one , or add a spiral perturbation to an axially symmetric model .the advantage of the decorator approach is clear : each decorator needs to be implemented only once , and can then be applied to any possible model . in the following two sections we describe the building blocks in the skirt suite , and a number of decorator geometries that can alter these building blocks to more complex structures .the skirt suite contains a limited number of elementary input models , which can be used either as elementary toy models , or as building blocks for more complex geometries . 
for each of them, the density can be expressed as a simple analytical function , and the generation of random positions reduces to three independent univariate generation problems .apart from these analytical components , skirt also offers the possibility to set up components in which the geometry of sources and/or sinks is defined by means of particles or on a grid .in particular , skirt can import a snapshot from a ( magneto)hydrodynamical simulation .one obvious goal would be to post - process a hydrodynamical simulation in order to calculate the observable multi - wavelength properties of the simulated objects ( e.g. , * ? ? ?* ; * ? ? ?* ) . in this case, it would be sufficient to just read in the geometry of the snapshot as it is , and start the radiative transfer simulation .more generally , however , it would be useful if these numerical components would be at a similar level as the analytical components .this would open up the possibility to decorate them and combine them with other analytical and/or numerical components to more complex models .a first group consists of spherically symmetric models . in spherical symmetry, the generation of a random position from the 3d density simplifies to the generation of a random azimuth from a uniform distribution , a random polar angle from a sine distribution , and a random radius from the univariate density .the skirt suite includes the most popular models used to represent shells , star clusters , early - type galaxies , or galaxy bulges , such as a power - law shell model , the plummer model , the -model , the srsic model , and the einasto model . within this family of models , many other famous profilesare contained , including the hernquist model , the jaffe model and the -model .a second group of elementary input models consists of axisymmetric models in which the density is a separable function .the standard example with a density distribution separable in cylindrical coordinates is the double - exponential disc that is the de facto standard to represent disk galaxies in radiative transfer studies .how random positions can be drawn from this distribution is discussed in appendix a of .an example of an axisymmetric model where the density is separable in spherical coordinates is the torus model that is often used to represent the dusty tori around stars or active galactic nuclei .the first group of numerical components in skirt are defined as a set of smoothed particles .this is mainly useful when we want to use the output of a smoothed particle hydrodynamics simulation . in spite of claims that the technique suffers from fundamental problems , it is still the most popular hydrodynamics technique , especially for cosmological simulations of galaxy formation ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?the output of an sph snapshot consists of a set of `` particles '' ( or rather anchor points in a co - moving grid ) , each characterised by a large suite of physical quantities .as far as skirt is concerned , most of these physical quantities are irrelevant .an component in skirt is essentially defined by a list of smoothed particles and the assumed smoothing kernel , with each smoothed particle characterised by a position , a fractional mass and a smoothing length . 
the total density at an arbitrary position is then given by in practice , the kernels used in sph simulations almost always have a finite support , and in this case only a relatively small number of terms in the sum have a non - zero contribution .the class in skirt employs smoothing kernel implementations optimised for each specific task .the geometry s density at a given location is calculated according to equation ( [ rho - sph ] ) using a finite - support cubic spline kernel , so that the operation can be limited to particles that potentially overlap the location of interest . to facilitate this process, the setup phase of the simulation places a rough grid over the spatial domain and constructs a list of overlapping particles for each grid cell . as described in section 3.3 of ,a further optimisation is provided to calculate the mass within a given box ( a cuboid lined up with the coordinate axes ) , as an alternative to sampling the density in various locations across the volume of the box . in this case, the calculation uses the analytical properties of a scaled and clipped gaussian kernel , designed to approximate the cubic spline kernel , to directly determine the mass in the box .this optimisation accelerates the density calculation for typical cartesian grids by an order of magnitude . on the other hand ,the generation of random positions sampled from the geometry s density distribution is rather straightforward , thanks to the composition method .the first step is the choice of a random smoothed particle , based on a discrete distribution where each particle is weighted by its relative mass contribution .the second step is generating a random position according to the distribution of the chosen particle . the current implementation samples a random number from a gaussian smoothing kernel with infinite support using the inversion method . in the futurewe may provide a suite of smoothing kernels , allowing the user to select the kernel that best fits the sph snapshot being imported .the methods used for calculating the density and generating a random radius would obviously depend on the actual shape of the kernel .in addition to what is described above , skirt has facilities to associate a spectral energy distribution with each sph star particle based on properties such as age and metallicity , or to associate a dust mass with each sph gas particle based on properties such as temperature and metallicity . in the former case, the geometry will weigh the particles by luminosity , for each wavelength separately , and in the latter case , the geometry will weigh the particles by dust mass .a detailed description of these features is beyond the scope of this paper .apart from smoothed particle hydrodynamics , the main other technique that is used to perform hydrodynamics simulations is eulerian mesh - based hydrodynamics .a fundamental ingredient of this technique is the use of grids with adaptive mesh refinement .eulerian amr simulations are used in virtually all fields of astrophysics , and monte carlo codes have been adapted to work directly on these hierarchical grids .the in skirt reads in snapshots defined by density fields discretised on hierarchical cartesian grids and converts them to the format of the other components in the skirt suite . calculating the density at an arbitrary position comes down to identifying the cell that contains this position and returning the density associated with that cell . 
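returning to the smoothed-particle components , the two-step composition procedure described above can be sketched as follows . the gaussian kernel with a dispersion proportional to the smoothing length is an illustrative stand-in for whichever kernel is actually adopted , and the class and member names are not the skirt api .

....
#include <algorithm>
#include <random>
#include <vector>

struct Particle { double x, y, z, mass, h; };   // position, mass, smoothing length

// two-step composition method for a smoothed-particle component:
// (1) pick a particle with probability proportional to its mass,
// (2) draw an offset from the particle's smoothing kernel (here a gaussian
//     with dispersion proportional to h, an illustrative choice).
class SmoothedParticleSampler
{
public:
    explicit SmoothedParticleSampler(const std::vector<Particle>& parts)
        : _parts(parts)
    {
        double sum = 0.0;
        for (const Particle& p : parts) { sum += p.mass; _cumul.push_back(sum); }
        for (double& c : _cumul) c /= sum;      // normalised cumulative weights
    }

    void generatePosition(double& x, double& y, double& z, std::mt19937& rng) const
    {
        std::uniform_real_distribution<double> uni(0.0, 1.0);
        std::normal_distribution<double> gauss(0.0, 1.0);

        // step 1: composition -- binary search in the cumulative mass distribution
        size_t i = std::lower_bound(_cumul.begin(), _cumul.end(), uni(rng)) - _cumul.begin();
        const Particle& p = _parts[i];

        // step 2: gaussian offset with dispersion ~ h (illustrative scaling)
        double s = 0.3 * p.h;
        x = p.x + s * gauss(rng);
        y = p.y + s * gauss(rng);
        z = p.z + s * gauss(rng);
    }

private:
    std::vector<Particle> _parts;
    std::vector<double> _cumul;
};
....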
as hierarchical grids have the structure of a tree , isolating the correct cell is a straightforward and computationally cheap operation . generating random positions from sucha component is similar to the case of the smoothed particles , and is based on the composition method .we first generate a random cell from a discrete distribution where each cell is weighted by its relative mass . secondly, we determine a random position within the chosen cell ; as the cells in a hierarchical cartesian grids are cuboids , this is a trivial task .recently , a new lagrangian technique that solves the hydrodynamics equations on a moving , unstructured voronoi grid is gaining popularity . it is claimed to avoid some of the difficulties of smoothed particle hydrodynamics on the one hand and eulerian grid - based hydrodynamics on the other hand .this technique has been used for many years in the computational fluid dynamics community , and a number of novel codes based on this principle have recently been developed in the astrophysics community . moving mesh hydrodynamicsis mainly applied to simulations of galaxy formation and evolution ( e.g. , * ? ? ?* ; * ? ? ?skirt contains a class that converts a snapshot from a voronoi hydrodynamical simulation ( or any other density field defined on a voronoi grid ) to a regular skirt component . due to the nature of a voronoi grid ,the only necessary input is the list of all the generating sites and the associated densities ; it is hence not necessary to store all the vertices and sides of each of the cells .based on the generating sites , skirt constructs the corresponding unique voronoi grid using the public voro++ library .the density routine essentially works in the same way as for the case of the hierarchical grids : it comes down to identifying the cell that contains the given position and returning the density associated to this cell . in the case of a voronoi grid, however , the cell identification is not as simple as in a hierarchical tree structure of cartesian grid cells . due to the nature of voronoi grids , this is essentially a nearest neighbour search . rather than looping over all possible sites , skirt implements an approach using cuboidal blocks , as explained in detail in .this task could be optimised even further using more advanced techniques based on space partitioning structures such as -d trees or r - trees .also the generation of random positions works essentially in the same way as for hierarchical grids .the first step is identical : we generate a random cell from a discrete distribution where each cell is weighted by its relative mass contribution .the second step , generating a random position from within the chosen cell , is significantly more complex than in the case of a cuboidal cell . to the best of our knowledge ,there are no dedicated techniques to generate a random point from a voronoi cell .there are two possible options .the first option is to partition the cell into a set of tetrahedra , subsequently select a random tetrahedron from a discrete distribution where every tetrahedron is weighted by its relative volume , and finally generate a random position from the selected tetrahedron .specific algorithms are available for both the tetrahedrisation of convex polyhedra and the generation of random positions from a tetrahedron . the second option , which is more simple and which we have adopted in skirt , is to use the rejection technique . 
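the cell-based sampling used for both the hierarchical and the voronoi components can be summarised in a single sketch : a cell is chosen from the discrete mass distribution , and a point is drawn uniformly inside its ( bounding ) box ; for a voronoi cell the containment test described in the following paragraph turns this uniform draw into rejection sampling . the structure and names below are illustrative only and do not reflect the actual skirt classes .

....
#include <algorithm>
#include <functional>
#include <random>
#include <vector>

struct Box { double xmin, ymin, zmin, xmax, ymax, zmax; };

// cell-based sampling sketch for gridded components: first choose a cell with
// probability proportional to its mass (composition method), then draw a point
// uniformly inside it.  for cuboidal amr cells the box *is* the cell; for a
// voronoi cell the box is its bounding box and the 'contains' callback turns
// the uniform draw into rejection sampling.
struct CellSampler
{
    std::vector<Box> boxes;               // per-cell (bounding) boxes
    std::vector<double> cumulMass;        // normalised cumulative cell masses
    std::function<bool(int, double, double, double)> contains;  // empty for cuboids

    void generate(double& x, double& y, double& z, std::mt19937& rng) const
    {
        std::uniform_real_distribution<double> uni(0.0, 1.0);
        int i = int(std::lower_bound(cumulMass.begin(), cumulMass.end(), uni(rng))
                    - cumulMass.begin());
        const Box& b = boxes[i];
        do
        {
            x = b.xmin + uni(rng) * (b.xmax - b.xmin);
            y = b.ymin + uni(rng) * (b.ymax - b.ymin);
            z = b.zmin + uni(rng) * (b.zmax - b.zmin);
        }
        while (contains && !contains(i, x, y, z));   // rejection for voronoi cells
    }
};
....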
as the reference distributionwe use a uniform density in a cuboidal volume , defined as the 3d bounding box of the cell . as voronoi cellsare convex polyhedra , this bounding box is directly obtained when the vortices of the cell are known ( these are calculated using the voronoi grid setup ) .extensive testing has shown that , depending on the distribution of the generating voronoi sites , the average ratio of the volume of the bounding box of a voronoi cell over the actual cell volume is about 3 to 4 .this ratio immediately represents the average rejection rate for the random position generation ..an overview of the geometry decorators currently implemented in skirt , referencing the section in which they are presented .geometry decorators can be applied to basic geometry building blocks , and can be chained and combined to create complex structures . [ cols= " < , < ,< " , ] in this section we describe a number of decorator geometries that can be applied on the building blocks described in the previous section in order to convert them to more complex structures ; see table [ decorators.tab ] for an overview .the implementation of the density of a decorator geometry is usually not a major problem ; the main challenge is to implement the routine that generates random positions from a decorator geometry , so this is what we focus on .the decorator in skirt applies an arbitrary offset to any density distribution .if the original density is , the new density is simply .generating random positions is equally simple : we generate a random from the original density distribution and return .the c++ implementation of the class in skirt looks like : .... class offsetgeometrydecorator : public geometry { public : offsetgeometrydecorator(geometry * g , position a ) { _ g = g ; _ a = a ; } double density(position x ) const { return _ g->density(x - a ) ; } position generateposition ( ) const { return _ g->generateposition()+a ; } private : geometry * _ g ; position _ a ; } .... similarly , the decorator rotates any density distribution .if the rotation is characterised by the orthonormal matrix , the new density is . to generate a random position from this new density ,we generate a random from the original density and rotate it over the inverse rotation matrix , i.e. .a third simple decorator , the , carves out a cavity from another density . in formula form, we have with the fraction of the mass of the original density located in the cavity .this decorator is useful to represent density distributions of dust close to a star or active galactic nucleus , where the dust has been cleared due to sublimation . to generate random positions from this new density distribution, we just generate a random position from the original density distribution and reject is when it is located in the cavity .this is , in fact , an almost trivial application of the rejection technique , where the original density assumes the role of the reference function , and the rejection constant is .the combines two density distributions into a single distribution according to with the original distributions and their respective weights in the composite distribution . generating random positions for this new density distribution is a trivial application of the composition method . 
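a cavity decorator along the lines just described could be sketched as follows , mirroring the structure of the offset decorator shown above . the class name , the use of a callable to describe the cavity region , and the explicit mass-fraction parameter are illustrative choices rather than the actual skirt implementation ; the sketch assumes the geometry and position declarations shown earlier plus the < functional > header , and capitalised type names are used only for readability .

....
class CavityGeometryDecorator : public Geometry
{
public:
    // 'inCavity' decides whether a position lies inside the carved-out region;
    // 'chi' is the mass fraction of the original density inside the cavity,
    // needed to renormalise the remaining density.
    CavityGeometryDecorator(Geometry* g, std::function<bool(Position)> inCavity, double chi)
        : _g(g), _inCavity(inCavity), _chi(chi) {}

    double density(Position x) const
    {
        return _inCavity(x) ? 0.0 : _g->density(x) / (1.0 - _chi);
    }

    Position generatePosition() const
    {
        // rejection: the original density acts as the reference distribution,
        // with rejection constant 1/(1-chi)
        while (true)
        {
            Position x = _g->generatePosition();
            if (!_inCavity(x)) return x;
        }
    }

private:
    Geometry* _g;
    std::function<bool(Position)> _inCavity;
    double _chi;
};
....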
as a first more complex decorator, we consider the case of a triaxial decorator geometry , which converts a spherically symmetric density distribution into one that is stratified on concentric and confocal ellipsoids .more concretely , assume that we have a spherically symmetric density distribution , we then consider its triaxial counterpart such triaxial models are discussed and used extensively to describe the stellar distribution of elliptical galaxies and galactic nuclei and the shape of galactic haloes ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?oblate and prolate spheroidal distributions are just a special case of these triaxial models in which .it is clear that the density ( [ tri : density ] ) can not be written as a product of independent univariate density distributions .we can use the conditional distribution method . in order to develop a general recipe for generating random positions ,we start by rewriting the probability distribution according to ( [ tri : density ] ) in spherical coordinates , according to formula ( [ cdm ] ) , we now rewrite this expression as after some calculation , one finds for the marginal distribution for the simple expression note that this marginal distribution is independent of the specific shape of the density profile and only depends on the flattening parameter ( and for , it reduces to a simple uniform distribution , as expected for a spheroidal distribution ) . generating a random azimuth from this density can be done using the standard inversion method ; the corresponding cumulative distribution is and is readily inverted .once this random has been determined , we can generate a random polar angle from the conditional distribution where we have set .also this distribution is independent of the specific shape of the density profile and only depends on the flattening parameters and ( and the already randomly determined azimuth ) .again , generating a random polar angle from this distribution can be done using the standard inversion method : the corresponding cumulative distribution is and is again readily inverted .finally , we consider the conditional distribution for , if we apply a linear transformation from to the corresponding density for is simply which is nothing but the distribution for a random radius from the original density .the last step in the generation of a random position is to generate a random position from the original spherical density distribution , and transform the radius of this position to a new radius using the linear transformation ( [ mr ] ) , in which we use the previously determined values for and .while this method based on conditional probabilities has a certain beauty , there is , actually , a simpler and more efficient method that is based on the inversion method . indeed , consider the simple transformation or in matrix format according to the law of linear transformations ( [ transfmethodlin ] ) , if is distributed according to the spherical density , will be distributed exactly according to the triaxial density ( [ tri : density ] ) . an easy way to generate random positions from a triaxial counterpart of a spherical symmetric density is thus simply generating a random position from the spherical density distribution , from which the desired position can easily be calculated as . 
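the linear-scaling recipe just derived takes only a few lines in code . the sketch below assumes the geometry and position declarations shown earlier and two flattening parameters p and q ; it is an illustration , not the actual skirt class .

....
// triaxial decorator sketch: if a position is drawn from a spherically
// symmetric geometry with density rho(r), then scaling the coordinates as
// (x, p*y, q*z) yields a position distributed according to the triaxial
// density rho(sqrt(x^2 + y^2/p^2 + z^2/q^2)) / (p*q).  'spherical' stands for
// any spherically symmetric geometry object with the interface shown earlier.
Position generateTriaxialPosition(const Geometry& spherical, double p, double q)
{
    Position v = spherical.generatePosition();
    return { v.x, p * v.y, q * v.z };
}
....

setting p = 1 recovers the spheroidal ( oblate or prolate ) special case mentioned above .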
as a second complex decorator ,we consider a logarithmic spiral arm perturbation that can be applied to any axisymmetric density distribution , [ spiral ] \ ] ] where are the usual cylindrical coordinates , and the function is defined as \label{perturbation}\ ] ] the factor is a normalisation constant that guarantees that the density is normalised , and is given by this general spiral arm perturbation is a generalisation of the models used by and , and allows for an arbitrary number of arms , pitch angle , spiral perturbation weight , and arm - interarm size ratio ( controlled by the parameter ) .for the density reduces to a more simple perturbation with equal arm - interarm size ratio , \right\}\ ] ] for the general 3d density distribution ( [ spiral ] ) , we can again use the conditional distribution method , and write since the spiral perturbation is such that it averages out along every single circle around the -axis , i.e. , we trivially find this is exactly the 2d probability density distribution that corresponds to the density of the axisymmetric density distribution .we hence simply generate a random position from our original density distribution and save the radius and the height . in order to generate a random azimuth , we consider the conditional distribution , for which we find directly this univariate density is independent of the shape of the original axisymmetric density profile ; it only depends on the parameters of the spiral perturbation and randomly determined radius . for the general expression ( [ perturbation ] ) , the standard inversion technique is not easily applicable ; the cumulative distribution corresponding to the density ( [ spiral : pdf ] ) can be calculated analytically , but the resulting expression can not be inverted analytically .one could resort to a numerical inversion , but a better approach is to use the rejection technique .we can use a simple uniform density as the reference distribution , .the rejection constant should ideally be chosen as the smallest value for which for all values of .since the maximum value of the perturbation function is , we find actually , as an alternative to the conditional probability technique , we could also simply have used the rejection technique from the beginning .the 3d density satisfies the condition , so if we simply use the original density as our reference distribution , we have essentially the same algorithm with the same rejection constant . the difference between this 3d rejection technique and the previous version , where we used the rejection technique only for the conditional distribution for the azimuth ,is a matter of efficiency . in the former versiononly one random position needs to be generated from the original density , and it the rejection is only to determine the azimuth . in the latter version ,an entirely new position is generated for every rejection , which is less efficient .as a last example , we consider the class that turns any density distribution into a clumpy analogue , i.e. , a density distribution in which a fraction of the mass is distributed `` smoothly '' as in the original density distribution , and the remaining fraction is locked up into compact clumps . 
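returning to the spiral perturbation , the azimuth-only rejection step described above can be sketched as follows . because the exact perturbation function and its normalisation are model-dependent and not reproduced here , they are represented by an abstract callable and its ( assumed known ) maximum .

....
#include <random>

// spiral-perturbation sketch: a position is drawn from the unperturbed
// axisymmetric geometry, its cylindrical radius R and height z are kept, and a
// new azimuth is drawn by rejection from the conditional density
// p(phi | R, z) proportional to the perturbation factor xi(R, phi, z).
// xi is assumed non-negative and bounded from above by ximax.
template<typename Xi>
double generateSpiralAzimuth(double R, double z, Xi xi, double ximax, std::mt19937& rng)
{
    const double twopi = 6.283185307179586;
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    while (true)
    {
        double phi = twopi * uni(rng);           // uniform reference distribution
        if (uni(rng) * ximax <= xi(R, phi, z))   // accept with probability xi/ximax
            return phi;
    }
}
....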
the density of this distribution can be written as where is the fraction of the mass in clumps , is the number of clumps , and the positions are the central positions of the clumps , determined randomly according to the original density function .the function is the smoothing function that sets the distribution of matter within every single clump , with a characteristic length scale .this can in principle be any spherical 3d distribution ; in practice we use either a uniform density sphere or one of the compact support kernels that are used in smoothed particle hydrodynamics simulations .it is straightforward to generate random positions from this distribution .it is a direct application of the composition method , and the case is very similar to smoothed particle building blocks discussed in section [ sph.sec ] .as indicated in section [ geometrysetup.sec ] , a major advantage of the decorator - style approach is that each decorator needs to be implemented only once , and can then be applied to different models .this avoids the need to implement complex geometries with many different possible features and heavy code duplication . that alone is already an argument strong enough to justify its use .another strong advantage is the flexibility offered by this approach .the decorators discussed in section [ decorators.sec ] transform an input model into a more complex model .such a decorated model is often still relatively simple : it typically adds one layer of complexity onto the model that is being decorated ( think of a rotated exponential disc or a triaxial srsic model ) . if we want to generate toy models that can be used as idealised representations of , for example , a spiral galaxy , various layers of complexity have to be added .the decorator - style approach is ideal for this purpose : decorators can be applied also to models that have already been decorated . in other words , decorators can easily be chained or nested .the left - hand panel of figure [ chaineddecorators.fig ] shows a projected image of a relatively complex toy model , constructed using the suite in skirt .the model consists of a bulge component and a clumpy spiral disc .each of these two components have a very simple spherical -model as their starting point , on which a chain of decorators was applied .the former was first decorated into a triaxial model , the latter was turned into an axisymmetric disc by applying a very strong flattening , then a spiral perturbation was added , and a fraction of its mass was turned into clumps . in decoration chains the successive decorators do not necessarily need to be different .it is also possible to chain the same decorator several times , i.e. to apply them recursively .a powerful example of this nesting is a repeated application of the .the right - hand panel of figure [ chaineddecorators.fig ] shows an application of this principle on a simple triaxial plummer model . 
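a minimal illustration of such chaining , written in the style of the offset decorator shown earlier , could look like the fragment below ; only offsetgeometrydecorator appears literally in the text , and the other class names and parameters are invented stand-ins for components of the suite .

....
// hypothetical usage fragment: every decorator is itself a geometry, so
// decorated models can be decorated again.
Geometry* base   = new PlummerGeometry(1.0);                      // spherical building block
Geometry* disc   = new FlattenGeometryDecorator(base, 0.1);       // flatten it into a disc
Geometry* spiral = new SpiralGeometryDecorator(disc, 2, 0.3);     // add a two-armed perturbation
Geometry* model  = new OffsetGeometryDecorator(spiral, Position{10.0, 0.0, 0.0});
Position p = model->generatePosition();   // the sample honours all chained layers
....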
in this case , we have applied four nested applications of the decorator , in which the number of clumps increases in each level , whereas the smoothing length decreases .this algorithm results in a fractal density distribution that is self - similar over an order of magnitude in scale .the possibility to chain decorators , including a repeated application of the same decorator , facilitates the construction of very complex models out of simple building blocks .analytical components can easily be combined with numerical components based on smoothed particles or grids , and decorators can be applied regardless of the underlying component type .it is , for example , possible to add a smooth triaxial bulge to an irregular disc structure defined using sph particles , or to carve out a cavity from a complex hydrodynamics system and locate a small nuclear structure there to simulate the effects of an agn on the large - scale structure of a galaxy .one could argue that the setup that we have chosen is unnecessarily complex .an alternative approach would be a design where we only provide the density for each component in the suite , and where the generation of random positions is executed by a generic routine rather than a geometry - specific routine for each component .such a suite could still use the decorator design pattern , but would be simpler to implement .the main challenge is the design of a generic routine that can generate random positions corresponding to an arbitrary 3d density function .this is in principle possible : there are a few multivariate black - box random number generation libraries available , including foam and ranlip , based on the so - called composition - rejection method , a combination of the composition and rejection methods . in a first phase , the exploration phase , the domain of a distribution is partitioned into different cells . in the generation phase , the rejection technique is used on each cell , where typically a constant density function is used as the reference distribution .the advantages of this approach are that , in principle , it can be applied for all distributions and in any dimension , and that only the density of each component is required .the main disadvantage is the complexity and the overhead , both in memory and run - time , that are linked to the exploration phase . in order to testwhether or not our design choice of providing a customised random position generator for each concrete class ( and for each decorator , in particular ) is justified , we have set up a comparison between our routines and a generic black - box routine .we have implemented a new decorator class , , that replaces the model - specific random position generator by a generic routine based on the foam library .foam is a self - adapting cellular code that uses iterative binary splitting , using either simplexes or hyper - rectangles , to subdivide the configuration space .we set up a simple suite of test models , for which we generate random positions using both the model - specific generator and the generic foam generator .our suite consists of a sequence of models with an increasing level of complexity .it essentially follows the different steps in building the model shown in the left panel of figure [ chaineddecorators.fig ] .the following models are considered 1 . a spherical model , 2 . model # 1 flattened to an axisymmetric disc , 3. model # 2 with spiral perturbation , 4 . model # 3 with a fraction turned into clumps , 5 .model # 4 with a triaxial bulge added . 
for each model, we generate random positions ( we do not use any photon propagation or detection as would be used in monte carlo radiative transfer simulation in order to isolate the random position generation process ) .the timings were done using a single core and with averages over multiple test runs .one crucial aspect for the foam - based generator is the choice of the foam parameters , in particular of the number of cells in the grid. a larger number of cells implies a higher computational cost of the exploration phase on the one hand , but a better approximation of the density and smaller rejection rates .the ideal choice of this parameter is impossible to determine in a general way .we consider 5 different values for the number of cells in the foam for every model , ranging from 25,000 to 400,000 .the results are illustrated in fig .[ timings.fig ] , where , for each model , we give the total run time and the contributions for the setup of the foam and the actual time spent on generating the random positions . a first major conclusion that can be drawn from these results is the efficiency of the customised random position generation routines . from a simple spherical model , to a complex model composed of two components that both have been decorated multiple times , the computational cost increases only by a factor two . moving from a spherical to a triaxial model does not affect the efficiency of the generation of random positions at all ( not a surprise , as it just implies two simple multiplications , as shown in section [ triaxial.sec ] ) .the biggest jump in timing occurs when spiral structure is added to the model ( i.e. , from model # 2 to # 3 ) , because the random position generation is based on the rejection method , which has more overhead compared to the inversion and composition methods .the addition of clumping ( from model # 3 to # 4 ) hardly implies an increase in computation time .note that the addition of a bulge even decreases the computation time . in model # 5 half of the positionsare generated according to the density of the relatively simple triaxial bulge , whereas in model # 4 they all are generated according to the more complex clumpy spiral structure .a second major conclusion is that the generic foam - based random position generator is significantly less efficient than the customised generator . for the most simple models ( # 1 and # 2 ) ,the difference in speed is a factor 38 , depending on the number of cells used .the increase in total run time for the foam - based models with increasing number of cells is mainly due to the time necessary to set up the foam .the time necessary to generate the random positions also increases , but not as strongly . for the more complex models ,the efficiency of the foam - based generator decreases even more .the biggest jump in timing now occurs when we add clumping to the model , because the calculation of the density for the clumpy models is computationally very demanding ( each density evaluation requires a sum over a large number of clumps ) . for models # 4 and 5 , the foam - based generators are at least an order of magnitude slower than the customised routines .interestingly , the run time of these simulations does not increase with the number of cells . 
for simulations with few cells ,the setup phase is relatively quick , but the generation phase is very inefficient .the reason is that the cells are relatively large with a strong variation in density values , and hence the rejection rates are uncomfortably large .if the number of cells in the grid increases , the setup time naturally increases , but the actual position generation becomes more efficient because the average rejection rates are smaller .moreover , the inefficiency of the foam - based generator is not the only problem , but also accuracy is an issue , in particular for complex models with multiple local maxima and strong gradients , such as models # 4 and # 5 .if the number of cells is limited and the cells hence rather large , both the total mass and the local maximum within a cell are very hard to guess ( they are determined by random sampling the cell ) . as a result ,both the relative weights of the different cells and the rejection constants within the individual cells are poorly determined .this will lead to an artificial smoothing and wrong results .figure [ genericversusfoam.fig ] shows a sequence of images corresponding to model # 4 , created using the customised random position generator ( top left ) and using the foam - based one with different values for the number of cells ( remaining panels ) .the customised generator generates an image that reveals all the details that are expected , including the sharp density gradients and the contrast between arm and interarm regions .the only limitation in the image is the finite resolution due to the pixel size and the unavoidable poisson noise .the foam images on the other hand show a clear signature of degradation that gradually decreases when the number of cells grows .for the grids with 25,000 and 50,000 cells , and even for the one with 100,000 cells , the effects of the grid are clearly visible , and the individual clumps and the spiral structure are insufficiently resolved . for the grid with 400,000 cells , the individual clumps are well resolved , but the sharpness of the spiral arms is still under - resolved , especially in the central regions . since both the accuracy and the efficiency of the generic foam - based random position generator can not compete with the customised versions , we conclude that it is worth investing in the latter , and that our design choice is justified .we have described the design of a suite of components that can be used to model the distribution of sources and sinks in the monte carlo radiative transfer code skirt .our main conclusions are the following : * the availability of a well - designed suite of input models , with enough variety and different degrees of complexity , in a publicly available monte carlo code has a strong added value .such models can serve as toy models to test new physical ingredients , or as parameterised models for inverse radiative transfer fitting .they also provide a low - threshold introduction to new users , since models with differing degrees of complexity can be run without any coding at all . *each model is in principle completely determined by the 3d density distribution . in order to be used in monte carlo radiative transfer , however , each model should contain a routine that generates random numbers according to this .finding the most suitable algorithm to implement this is the most challenging aspect of the design of a suite of input models . 
*the design of the skirt suite is based on a combination of basic building blocks and the extensive use of decorators .the building blocks can either be simple analytical components , or they can be numerical components defined as a set of smoothed particles or on a hierarchical or unstructured voronoi grid .the decorators combine and alter these building blocks to more complex structures . *the different multivariate random number generation techniques that exist in the specialised numerical analysis literature can be used to efficiently implement complex decorators , for example those that add triaxiality , spiral structure or clumpiness to other models .decorators can be chained without problems and essentially without limitation .different layers of complexity can hence be added one by one .the result is that very complex models can easily be constructed out of simple building blocks , without any coding at all .* from the software design point of view , decorators have many advantages , including code transparency , the avoidance of code duplication , and an increase in code maintainability .this is a clear example that adhering to proven software design principles pays off , even for small and mid - sized projects . *finally , we demonstrate that our design using customised random position generators is superior to a simpler suite design based on a generic black - box random position generator . using a suite of test simulations with increasing complexitywe demonstrate that our customised random number generators are more accurate and more efficient .this work fits in the charm framework ( contemporary physical challenges in heliospheric and astrophysical models ) , a phase vii interuniversity attraction pole ( iap ) program organised by belspo , the belgian federal science policy office .the authors thank ilse de looze and sbastien viaene for their careful reading of a draft version of this paper , and all skirt users for their feedback , which has led to many improvements and additions to the code . , o. , moore , b. , stadel , j. , potter , d. , miniati , f. , read , j. , mayer , l. , gawryszczak , a. , kravtsov , a. , nordlund , . , pearce , f. , quilis , v. , rudd , d. , springel , v. , stone , j. , tasker , e. , teyssier , r. , wadsley , j. , walder , r. , 2007 .fundamental differences between sph and grid methods . mon . not .r. astron .380 , 963978 . ,m. , davies , j. i. , dejonghe , h. , sabatini , s. , roberts , s. , evans , r. , linder , s. m. , smith , r. m. , de blok , w. j. g. , 2003 .radiative transfer in disc galaxies - iii .the observed kinematics of dusty disc galaxies . mon . not .r. astron .343 , 10811094 ., m. , fritz , j. , gadotti , d. a. , smith , d. j. b. , dunne , l. , da cunha , e. , amblard , a. , auld , r. , bendo , g. j. , bonfield , d. , burgarella , d. , buttiglione , s. , cava , a. , clements , d. , cooray , a. , dariush , a. , de zotti , g. , dye , s. , eales , s. , frayer , d. , gonzalez - nuevo , j. , herranz , d. , ibar , e. , ivison , r. , lagache , g. , leeuw , l. , lopez - caniego , m. , jarvis , m. , maddox , s. , negrello , m. , michaowski , m. , pascale , e. , pohlen , m. , rigby , e. , rodighiero , g. , samui , s. , serjeant , s. , temi , p. , thompson , m. , van der werf , p. , verma , a. , vlahakis , c. , 2010 .herschel - atlas : the dust energy balance in the edge - on spiral galaxy ugc 4754 .astroph .518 , l39 . ,m. , verstappen , j. , de looze , i. , fritz , j. , saftly , w. , vidal prez , e. , stalevski , m. , valcke , s. 
the monte carlo method is the most popular technique for performing radiative transfer simulations in a general 3d geometry. the algorithms behind, and acceleration techniques for, monte carlo radiative transfer are discussed extensively in the literature, and many different monte carlo codes are publicly available. by contrast, the design of a suite of components that can be used to describe the distribution of sources and sinks in radiative transfer codes has received very little attention. the availability of such models, with different degrees of complexity, has many benefits: for example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. for 3d monte carlo codes, this requires algorithms to efficiently generate random positions from 3d density distributions. we describe the design of a flexible suite of components for the monte carlo radiative transfer code skirt. the design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks into more complex structures. for a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
keywords: radiative transfer ; methods: numerical ; designing software design patterns
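the performance argument above can be illustrated with a small one-dimensional example. the sketch below is not the paper's actual test suite: the double-exponential vertical profile, the flat envelope box and all numbers are assumptions, chosen only to contrast a generic black-box generator (rejection sampling against a flat envelope) with a customised generator (analytic inversion of the cumulative distribution).

```python
# minimal 1d illustration (assumed profile, not the paper's actual test suite):
# sample the vertical coordinate z of a double-exponential disc rho(z) ~ exp(-|z|/h)
import numpy as np

rng = np.random.default_rng(1)
h, zmax, n = 0.2, 5.0, 10_000

# generic black-box approach: rejection sampling against a flat envelope on [-zmax, zmax]
def rejection_sample(n):
    out, tries = [], 0
    while len(out) < n:
        z = rng.uniform(-zmax, zmax)
        tries += 1
        if rng.random() < np.exp(-abs(z) / h):   # accept with probability rho(z)/rho(0)
            out.append(z)
    return np.array(out), n / tries              # samples and acceptance fraction

# customised generator: invert the cumulative distribution analytically
def custom_sample(n):
    u = rng.random(n)
    sign = np.where(rng.random(n) < 0.5, -1.0, 1.0)
    return sign * (-h * np.log(1.0 - u))

z_rej, acceptance = rejection_sample(n)
z_cus = custom_sample(n)
print(f"rejection acceptance fraction: {acceptance:.3f}")   # roughly h/zmax = 0.04 here
print(f"<|z|> rejection: {np.mean(np.abs(z_rej)):.4f}, customised: {np.mean(np.abs(z_cus)):.4f}")
```

for this profile the flat-envelope rejection sampler accepts only a fraction of order h/zmax of its trials, while the customised generator returns one valid position per random number; this is the kind of efficiency gap the test simulations are designed to quantify.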
we consider the ill - posed inverse problems of the form where is a bounded linear operator between two hilbert spaces and whose inner products and the induced norms are denoted as and respectively which should be clear from the context .here the ill - posedness of ( [ 1.1 ] ) refers to the fact that the solution of ( [ 1.1 ] ) does not depend continuously on the data which is a characteristic property of inverse problems . in practical applications ,one never has exact data , instead only noisy data are available due to errors in the measurements . even if the deviation is very small , algorithms developed for well - posed problems may fail , since noise could be amplified by an arbitrarily large factor .therefore , regularization methods should be used in order to obtain a stable numerical solution .one can refer to for many useful regularization methods for solving ( [ 1.1 ] ) ; these methods , however , have the tendency to over - smooth solutions and hence are not quite successful to capture special features . in case a priori information on the feature of the solution of ( [ 1.1 ] ) is available , we may introduce a proper , lower semi - continuous , convex function ] is a proper , lower semi - continuous function that is strongly convex in the sense that there is a constant such that for all and .we then consider the method when the data contains noise and propose a stopping rule to render it into a regularization method .our analysis is based on some important results from convex analysis which will be recalled in the following subsection . given a convex function ] be a a proper , lower semi - continuous function that is strongly convex in the sense of ( [ sc ] ) .if is a decreasing sequence of positive numbers and if is chosen by ( [ 10.13.7 ] ) with and , then for the method ( [ method ] ) there hold the proof is based on the following useful result .[ general ] consider the equation ( [ 1.1 ] ) .let ] be a a proper , lower semi - continuous function that is strongly convex in the sense of ( [ sc ] ) .if is a decreasing sequence of positive numbers and is chosen by ( [ 10.3.6 ] ) with and , then rule [ rule1 ] defines a finite integer . moreover , if , then for the sequences and defined by ( [ method_noise ] ) there holds for any solution of ( [ 1.1 ] ) in .* let . by using the similar argument in the proof of lemma [ lem10.13.1 ]we can obtain in view of and the choice of , it follows that by the definition of and we have therefore , we have with that this shows the monotonicity result ( [ monotone ] ) and for all .we may sum the above inequality over from to for any to get by the choice of it is easy to check that .therefore , in view of ( [ 10.3.5 ] ) , we have for all . since for all , it follows from ( [ 10.6.2 ] ) that .the proof is therefore complete .+ by taking in ( [ 10.6.2 ] ) , the integer defined by rule [ rule1 ] can be estimated by where . in case chosen such that for all for some constant , then it then follows from ( [ 10.6.1 ] ) that which implies that . therefore , with such a chosen , algorithm [ alg1 ] exhibits the fast convergence property . in order to use the results given in lemma [ lem1 ] and theorem [ thm1 ] to prove the convergence of the method ( [ method_noise ] ) , we need the following stability result . [ lem10.9.1 ]let and be defined by ( [ method ] ) with chosen by ( [ 10.13.7 ] ) , and let and be defined by ( [ method_noise ] ) with chosen by ( [ 10.3.6 ] ) .then for each fixed integer there hold * proof . 
*we prove the result by induction on .it is trivial when because and .assume next that the result is true for some .we will show that and as .we consider two cases : _ case 1 : ._ in this case we must have since otherwise therefore , by the induction hypothesis it is straightforward to see that as . according to the definition of and the induction hypothesis , we then obtain .recall that we then obtain by the continuity of ._ case 2 : ._ in this case we have .therefore consequently , by the induction hypothesis , we have by using again the continuity of , we obtain .+ we are now in a position to give the main result concerning the regularization property of the method ( [ method_noise ] ) with noisy data when it is terminated by rule [ rule1 ] .[ thm2 ] let ] which is proper , convex and lower semi - continuous . let be the only available noisy data satisfying with a small known noise level . then an obvious extension of algorithm [ alg1 ] for solving ( [ n1.1 ] ) takes the following form .[ alg2 ] 1 .take , , and a decreasing sequence of positive numbers ; 2 .take and define as an initial guess ; 3 . for each define where 4 .let be the first integer such that and use to approximate the solution of ( [ n1.1 ] ) .we remark that when , algorithm [ alg2 ] reduces to a method which is similar to the regularized levenberg - marquardt method in for which convergence is proved under certain conditions on . for general convex penalty function , however , we do not have convergence theory on algorithm [ alg2 ] yet .nevertheless , we will give numerical simulations to indicate that it indeed performs very well .in this section we will provide various numerical simulations on our method .the choice of the sequence plays a crucial role for the performance : if decays faster , only fewer iterations are required but the reconstruction result is less accurate ; on the other hand , if decays slower , more iterations are required but the reconstruction result is more accurate . in order to solve this dilemma, we choose fast decaying at the beginning , and then choose slow decaying when the method tends to stop .more precisely , we choose according to the following rule .[ alpha ] let and be preassigned numbers . we take some number and for if we set ; otherwise we set .all the computation results in this section are based on chosen by this rule with , and .our tests were done by using matlab r2012a on an lenovo laptop with intel(r ) core(tm ) i5 cpu 2.30 ghz and 6 gb memory .we first consider the integral equation of the form ,\ ] ] where it is easy to see , that is a compact linear operator from ] . our goal is to find the solution of ( [ int ] ) using noisy data satisfying }=\d ] into subintervals of equal length and approximate any integrals by the trapezoidal rule .in figure [ fig1 ] we report the numerical performance of algorithm [ alg1 ] .the sequence is selected by rule [ alpha ] with , , and .the first row gives the reconstruction results using noisy data with various noise levels when the sought solution is sparse ; we use the penalty function given in ( [ eq : l1 ] ) with .the second row reports the reconstruction results for various noise levels when the sought solution is piecewise constant ; we use the penalty function given in ( [ eq : tv ] ) with .when the 1d tv - denoising algorithm ` fista ` in is used to solve the minimization problems associated with this , it is terminated as long as the number of iterations exceeds or the error between two successive iterates is smaller than . 
during these computations ,we use and the parameters , and in algorithm [ alg1 ] .the computational times for the first row are , and seconds respectively , and the computation times for the second row are , and seconds respectively .this shows that algorithm [ alg1 ] indeed is a fast method with the capability of capturing special features of solutions .let \times [ 0,1] ] .the properties of the autoconvolution operator (t ) : = \int_0^t x(t - s ) x(s ) ds ] to ] to reconstruct the solution . in figure [ fig7 ]we report the reconstruction results by algorithm [ alg2 ] using and the given in ( [ eq : tv ] ) with .all integrals involved are approximated by the trapezoidal rule by dividing $ ] into subintervals of equal length . for those parameters involved in the algorithm ,we take , and .we also take the constant function as an initial guess .the sequence is selected by rule [ alpha ] with replaced by in which , , and . when the 1d - denoising algorithm ` fista ` in is used to solve the minimization problems associated with , it is terminated as long as the number of iterations exceeds or the error between two successive iterates is smaller than .we indicate in figure [ fig7 ] the number of iterations and the computational time for various noise levels ; the results show that algorithm [ alg2 ] is indeed a fast method for this problem .we proposed a nonstationary iterated method with convex penalty term for solving inverse problems in hilbert spaces .the main feature of our method is its splitting character , i.e. each iteration consists of two steps : the first step involves only the operator from the underlying problem so that the hilbert space structure can be exploited , while the second step involves merely the penalty term so that only a relatively simple strong convex optimization problem needs to be solved .this feature makes the computation much efficient . when the underlying problem is linear , we proved the convergence of our method in the case of exact data ; in case only noisy data are available , we introduced a stopping rule to terminate the iteration and proved the regularization property of the method .we reported various numerical results which indicate the good performance of our method .q. jin is partially supported by the decra grant de120101707 of australian research council and x. lu is partially supported by national science foundation of china ( no . 11101316 and no .91230108 ) .999 h. gfrerer , _ an a posteriori parameter choice for ordinary and iterated tikhonov regularization of ill - posed problems leading to optimal convergence rates _ , math . comp . , 49(180 ) : 507522 , s5s12 , 1987 .
in this paper we consider the computation of approximate solutions for inverse problems in hilbert spaces. in order to capture special features of the solutions, non-smooth convex functions are introduced as penalty terms. by exploiting the hilbert space structure of the underlying problems, we propose a fast iterative regularization method which reduces to the classical nonstationary iterated tikhonov regularization when the penalty term is chosen to be the square of the norm. each iteration of the method consists of two steps: the first step involves only the operator from the problem, while the second step involves only the penalty term. this splitting character has the advantage of making the computation efficient. in case the data is corrupted by noise, a stopping rule is proposed to terminate the method and the corresponding regularization property is established. finally, we test the performance of the method in various numerical simulations, including image deblurring, the determination of the source term in a poisson equation, and a de-autoconvolution problem.
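the precise update formulas and parameter choices are not spelled out in the extracted text above, so the following is only a generic sketch of the splitting idea just described: an operator-only step in the hilbert space, followed by a penalty-only proximal step (here soft-thresholding for an assumed l1 penalty), terminated by a discrepancy-type stopping rule. the matrix `A`, the sequence `alphas`, the weight `beta` and the constant `tau` are illustrative assumptions, not the paper's actual choices.

```python
# generic sketch of a two-step splitting iteration under stated assumptions
# (not the paper's exact method; all parameters are illustrative).
import numpy as np

def soft_threshold(v, t):
    """proximal map of t * ||.||_1 : the penalty-only second step for an l1 penalty."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def splitting_iteration(A, y_delta, delta, alphas, beta=1.0, tau=1.1):
    """operator-only step in the hilbert space, then a penalty-only proximal step,
    stopped by a discrepancy-type rule."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2           # a usable lipschitz constant for the data-fit gradient
    for n, alpha in enumerate(alphas):
        # step 1: involves only the operator A and its adjoint
        z = x + A.T @ (y_delta - A @ x) / L
        # step 2: involves only the (strongly convex) penalty subproblem, solved in closed form
        x = soft_threshold(z, beta * alpha / L)
        if np.linalg.norm(A @ x - y_delta) <= tau * delta:
            return x, n                     # discrepancy-type stopping index
    return x, len(alphas) - 1

# tiny synthetic example with a sparse ground truth
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[[10, 50, 120]] = [1.0, -2.0, 1.5]
noise = rng.standard_normal(80)
delta = 0.05
y_delta = A @ x_true + delta * noise / np.linalg.norm(noise)
alphas = 2.0 * 0.75 ** np.arange(300)       # a decreasing sequence alpha_n (illustrative)
x_rec, n_stop = splitting_iteration(A, y_delta, delta, alphas)
print(n_stop, np.linalg.norm(x_rec - x_true))
```

the point of the splitting is visible in the two lines of the loop: the first involves only `A` and its adjoint, the second only the penalty, so each subproblem stays simple.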
a major challenge in neuroscience is to understand how the neural activities are propagated through different brain regions , since many cognitive tasks are believed to involve this process ( vogels and abbott , 2005 ) .the feedforward neuronal network is the most used model in investigating this issue , because it is simple enough yet can explain propagation activities observed in experiments .in recent years , two different modes of neural activity propagation have been intensively studied .it has been found that both the synchronous spike packet ( _ synfire _ ) , and the _ firing rate _ , can be transmitted across deeply layered networks ( abeles 1991 ; aertsen et al .1996 ; diesmann et al . 1999 ; diesmann et al . 2001 ; c and fukai 2001 ; gewaltig et al .2001 ; tetzlaff et al .2002 ; tetzlaff et al .2003 ; van rossum et al . 2002; vogels and abbott 2005 ; wang et al . 2006 ; aviel et al . 2003 ; kumar et al . 2008 ; kumar et al . 2010 ; shinozaki et al . 2007 ; shinozaki et al .although these two propagation modes are quite different , the previous results demonstrated that a single network with different system parameters can support stable and robust signal propagation in both of the two modes , for example , they can be bridged by the background noise and synaptic strength ( van rossum et al . 2002 ; masuda and aihara 2002 ; masuda and aihara 2003 ) .neurons and synapses are fundamental components of the brain . by sensing outside signals , neurons continually fire discrete electrical signals known as action potentials or so - called spikes , and then transmit them to postsynaptic neurons through synapses ( dayan and abbott 2001 ) .the spike generating mechanism of cortical neurons is generally highly reliable .however , many studies have shown that the communication between neurons is , by contrast , more or less unreliable ( abeles 1991 ; raastad et al .1992 ; smetters and zador 1996 ) .theoretically , the synaptic unreliability can be explained by the phenomenon of probabilistic transmitter release ( branco and staras 2009 ; katz 1966 ; katz 1969 ; trommershuser et al .1999 ) , i.e. , synapses release neurotransmitter in a stochastic fashion , which has been confirmed by well - designed biological experiments ( allen and stevens 1994 ) . in most cases , the transmission failure rate at a given synapse tends to exceed the fraction of successful transmission ( rosenmund et al .1993 ; stevens and wang 1995 ) . 
in some special cases, the synaptic transmission failure rate can be as high as 0.9 or even higher ( allen and stevens 1994 ) .further computational studies have revealed that the unreliability of synaptic transmission might be a part of information processing of the brain and possibly has functional roles in neural computation .for instance , it has been reported that the unreliable synapses provide a useful mechanism for reliable analog computation in space - rate coding ( maass and natschl 2000 ) ; and it has been found that suitable synaptic successful transmission probability can improve the information transmission efficiency of synapses ( goldman 2004 ) and can filter the redundancy information by removing autocorrelations in spike trains ( goldman et al .furthermore , it has also been demonstrated that unreliable synapses largely influence both the emergence and dynamical behaviors of clusters in an all - to - all pulse - coupled neuronal network , and can make the whole network relax to clusters of identical size ( friedrich and kinzel 2009 ) .although the signal propagation in multilayered feedforward neuronal networks has been extensively studied , to the best of our knowledge the effects of unreliable synapses on the propagation of neural activity have not been widely discussed and the relevant questions still remain unclear ( but see the footnote ) . in this paper , we address these questions and provide insights by computational modeling . for this purpose, we examine both the synfire propagation and firing rate propagation in feedforward neuronal networks .we mainly investigate the signal propagation in feedforward neuronal networks composed of purely excitatory neurons connected with unreliable synapses in an all - to - all coupling fashion ( abbr .ure feedforward neuronal network ) in this work .we also compare our results with the corresponding feedforward neuronal networks ( we will clarify the meaning of `` corresponding '' later ) composed of purely excitatory neurons connected with reliable synapses in a random coupling fashion ( abbr .rre feedforward neuronal network ) .moreover , we study feedforward neuronal networks consisting of both excitatory and inhibitory neurons connected with unreliable synapses in an all - to - all coupling fashion ( abbr .urei feedforward neuronal network ) .the rest of this paper is organized as follows .the network architecture , neuron model , and synapse model used in this paper are described in sec .[ sec:2 ] . besides these, the measures to evaluate the performance of synfire propagation and firing rate propagation , as well as the numerical simulation method are also introduced in this section .the main results of the present work are presented in sec .[ sec:3 ] .finally , a detailed conclusion and discussion of our work are given in sec .[ sec:4 ] .in this subsection , we introduce the network topology used in this paper . herewe only describe how to construct the ure feedforward neuronal network . the methods about how to buildthe corresponding rre feedforward neuronal network and the urei feedforward neuronal network will be briefly given in secs .[ sec:3d ] and [ sec:3e ] , respectively .the architecture of the ure feedforward neuronal network is schematically shown in figure [ fig:1 ] .the network totally contains layers , and each layer is composed of excitatory neurons . 
since neurons in the first layer are responsible for receiving and encoding the external input signal , we therefore call this layer sensory layer and neurons in this layer are called sensory neurons . in contrast , the function of neurons in the other layers is to propagate neural activities .based on this reason , we call these layers transmission layers and the corresponding neurons cortical neurons . because the considered neuronal network is purely feedforward , there is no feedback connection from neurons in downstream layers to neurons in upstream layers , and there is also no connection among neurons within the same layer . for simplicity , we call the -th neuron in the -th layer neuron in the following . network architecture of the ure feedforward neuronal network .the network totally contains 10 layers .the first layer is the sensory layer and the others are the transmission layers .each layer consists of 100 excitatory neurons . for clarity ,only 6 neurons are shown in each layer.,width=566 ] we now introduce the neuron model used in the present work .each cortical neuron is modeled by using the integrate - and - fire ( if ) model ( nordlie et al .2009 ) , which is a minimal spiking neuron model to mimic the action potential firing dynamics of biological neurons .the subthreshold dynamics of a single if neuron obeys the following differential equation : with the total input current here and , represents the membrane potential of neuron , ms is the membrane time constant , mv is the resting membrane potential , m denotes the membrane resistance , and is the total synaptic current .the noise current represents the external or intrinsic fluctuations of the neuron , where is a gaussian white noise with zero mean and unit variance , and is referred to as the noise intensity of the cortical neurons . in this work , a deterministic threshold - reset mechanismis implemented for spike generation .whenever the membrane potential of a neuron reaches a fixed threshold at mv , the neuron fires a spike , and then the membrane potential is reset according to the resting potential , where it remains clamped for a 5-ms refractory period .on the other hand , we use different models to simulate the sensory neurons depending on different tasks . to study the synfire propagation, we assume that each sensory neuron is a simple spike generator , and control their firing behaviors by ourselves .while studying the firing rate propagation , the sensory neuron is modeled by using the if neuron model with the same expression ( see eq .( [ eq:1 ] ) ) and the same parameter settings as those for cortical neurons . for each sensory neuron ,the total input current is given by where index neurons .the noise current has the same form as that for cortical neurons but with the noise intensity . is a time - varying external input current which is injected to all sensory neurons . for each run of the simulation ,the external input current is constructed by the following process .let denote an ornstein - uhlenbeck process , which is described by where is a gaussian white noise with zero mean and unit variance , is a correlation time constant , and is a diffusion coefficient . the external input current defined as parameter can be used to denote the intensity of the external input signal . in this work ,we choose and ms . 
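the construction of the time-varying external input just described can be sketched as follows; the half-wave rectification mentioned in the next paragraph is included. since most numerical values are not given explicitly above, the time step, diffusion coefficient and signal intensity below are illustrative assumptions (only the roughly 80 ms smoothing time is quoted later in the text), and the sde convention written in the comment is one common choice rather than necessarily the paper's.

```python
# sketch of the external input: an ornstein-uhlenbeck process, half-wave rectified
# and scaled; parameter values are illustrative assumptions.
import numpy as np

def external_input_current(T=5000.0, dt=0.05, tau_ou=80.0, D=0.5, I0=1.0, seed=0):
    """ou process integrated with euler-maruyama, then half-wave rectified and scaled by I0."""
    rng = np.random.default_rng(seed)
    nsteps = int(T / dt)
    x = np.zeros(nsteps)
    for k in range(1, nsteps):
        # one common convention:  dx/dt = -x / tau_ou + sqrt(D) * xi(t)
        x[k] = x[k - 1] - dt * x[k - 1] / tau_ou + np.sqrt(D * dt) * rng.standard_normal()
    return I0 * np.maximum(x, 0.0)          # half-wave rectification

I_s = external_input_current()
print(I_s.shape, round(float(I_s.mean()), 3))
```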
by its definition, the external input current corresponds to a gaussian - distributed white noise low - pass filtered at 80 ms and half - wave rectified .it should be noted that this type of external input current is widely used in the literature , in particular in the research papers which study the firing rate propagation ( van rossum et al .2002 ; vogels and abbott 2005 ; wang and zhou 2009 ) . the synaptic interactions between neurons are implemented by using the modified conductance - based model .our modeling methodology is inspired by the phenomenon of probabilistic transmitter release of the real biological synapses .here we only introduce the model of unreliable excitatory synapses , because the propagation of neural activity is mainly examined in ure feedforward neuronal networks in this work .the methods about how to model reliable excitatory synapses and unreliable inhibitory synapses will be briefly introduced in secs .[ sec:3d ] and [ sec:3e ] , respectively .the total synaptic current onto neuron is the linear sum of the currents from all incoming synapses , in this equation , the outer sum runs over all synapses onto this particular neuron , is the conductance from neuron to neuron , and mv is the reversal potential of the excitatory synapse . whenever the neuron emits a spike , an increment is assigned to the corresponding synaptic conductances according to the synaptic reliability parameter , which process is given by where denotes the synaptic reliability parameter of the synapse from neuron to neuron , and stands for the relative peak conductance of this particular excitatory synapse which is used to determine its strength .for simplicity , we assume that , that is , the synaptic strength is identical for all excitatory connections .parameter is defined as the successful transmission probability of spikes .when a presynaptic neuron fires a spike , we let the corresponding synaptic reliability variables with probability and with probability . that is to say , whether the neurotransmitter is successfully released or not is in essence controlled by a bernoulli on - off process in the present work . in other time , the synaptic conductance decays by an exponential law : with a fixed synaptic time constant . in the case of synfire propagation , we choose ms , and in the case of firing rate propagation , we choose ms . we now introduce several useful measures used to quantitatively evaluate the performance of the two different propagation modes : the synfire mode and firing rate mode .the propagation of synfire activity is measured by the survival rate and the standard deviation of the spiking times of the synfire packet ( gewaltig et al .let us first introduce how to calculate the survival rate for the synfire propagation . in our simulations , we find that the synfire propagation can be divided into three types : the failed synfire propagation , the stable synfire propagation , as well as the synfire instability propagation ( for detail , see sec .[ sec:3a ] ) . for neurons in each layer, a threshold method is developed to detect the local highest `` energy '' region . to this end, we use a 5 ms moving time window with 0.1 ms sliding step to count the number of spikes within each window . here a high energy region means that the number of spikes within the window is larger than a threshold . since we use a moving time window with small sliding step, there might be a continuous series of windows contain more than 50 spikes around a group of synchronous spikes . 
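before continuing with the packet-detection procedure in the next paragraph, here is a minimal sketch of the model ingredients defined above: integrate-and-fire membranes, conductance-based excitatory synapses, and the bernoulli release rule for unreliable transmission, shown for a single all-to-all layer-to-layer step. the membrane and synaptic constants are illustrative assumptions wherever the text above does not state them; only the 5 ms refractory period, the 100-neuron layers, the all-to-all coupling and the per-spike bernoulli release with probability p follow directly from the description.

```python
# minimal sketch of one transmission step: 100 presynaptic spike trains drive 100
# integrate-and-fire neurons through all-to-all unreliable excitatory synapses.
# numerical values (threshold, reversal potentials, conductances) are illustrative.
import numpy as np

N, dt, T = 100, 0.02, 100.0                 # neurons per layer, time step (ms), duration (ms)
tau_m, R_m = 20.0, 1.0                      # membrane time constant (ms), resistance (assumed)
V_rest, V_th, t_ref = -70.0, -50.0, 5.0     # resting / threshold potential (mV), refractory (ms)
E_exc, tau_s, g_inc = 0.0, 2.0, 0.5         # synaptic reversal (mV), decay (ms), peak conductance
p_release = 0.5                             # successful transmission probability p

rng = np.random.default_rng(0)
nsteps = int(T / dt)
pre_spikes = np.zeros((nsteps, N), dtype=bool)
pre_spikes[int(20.0 / dt)] = True           # a perfectly synchronous input packet at t = 20 ms

V = np.full(N, V_rest)
g = np.zeros((N, N))                        # g[i, j]: conductance of synapse from pre j to post i
refractory_until = np.zeros(N)
post_spike_times = [[] for _ in range(N)]

for k in range(nsteps):
    t = k * dt
    # bernoulli transmitter release: each synapse of a firing presynaptic neuron
    # transmits with probability p_release, otherwise that spike is lost at that synapse
    fired_pre = np.where(pre_spikes[k])[0]
    if fired_pre.size:
        release = rng.random((N, fired_pre.size)) < p_release
        g[:, fired_pre] += g_inc * release
    g *= np.exp(-dt / tau_s)                # exponential conductance decay
    I_syn = (g * (E_exc - V[:, None])).sum(axis=1)
    active = t >= refractory_until
    V[active] += dt / tau_m * (-(V[active] - V_rest) + R_m * I_syn[active])
    spiking = active & (V >= V_th)
    for i in np.where(spiking)[0]:
        post_spike_times[i].append(t)
    V[spiking] = V_rest                     # reset and clamp during the 5 ms refractory period
    refractory_until[spiking] = t + t_ref

print(sum(len(s) for s in post_spike_times), "postsynaptic spikes")
```

in a full simulation this step would be repeated layer by layer, with each layer's spike times fed forward as the presynaptic input of the next.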
in this work ,we only select the first window which covers the largest number of spikes around a group of synchronous spikes as the local highest energy region .we use the number of local highest energy region to determine which type of synfire propagation occurs .if there is no local highest energy region detected in the final layer of the network , we consider it as the failed synfire propagation .when two or more separated local highest energy regions are detected in one layer , we consider it as the synfire instability propagation .otherwise , it means the occurrence of the stable synfire propagation . for each experimental setting, we carry out the simulation many times .the survival rate of the synfire propagation is defined as the ratio of the number of occurrence of the stable synfire propagation to the total number of simulations . in additional simulations, it turns out that the threshold value can vary in a wide range without altering the results . under certain conditions, noise can help the feedforward neuronal network produce the spontaneous spike packets , which promotes the occurrence of synfire instability propagation and therefore decreases the survival rate . for stable synfire propagation, there exists only one highest energy region for neurons in each layer .spikes within this region are considered as the candidate synfire packet , which might also contain a few spontaneous spikes caused by noise and other factors . in this work ,an adaptive algorithm is introduced to eliminate spontaneous spikes from the candidate synfire packet .suppose now that there is a candidate synfire packet in the -th layer with the number of spikes it contains and the corresponding spiking times .the average spiking time of the candidate synfire packet is therefore given by thus the standard deviation of the spiking times in the -th layer can be calculated as follows : ^ 2}. \end{split } \label{eq:10}\ ] ] we remove the -th spike from the candidate synfire packet if it satisfies : , where is a parameter of our algorithm .we recompute the average spiking time as well as the standard deviation of the spiking times for the new candidate synfire packet , and repeat the above eliminating process , until no spike is removed from the new candidate synfire packet anymore .we define the remaining spikes as the synfire packet , which is characterized by the final values of and .parameter determines the performance of the proposed algorithm .if is too large , the synfire packet will lose several useful spikes at its borders , and if is too small , the synfire packet will contain some noise data . in our simulations, we found that can result in a good compromise between these two extremes .it should be emphasized that our algorithm is inspired by the method given in ( gewaltig et al .next , we introduce how to measure the performance of the firing rate propagation .the performance of firing rate propagation is evaluated by combining it with a population code .specifically , we compute how similar the population firing rates in different layers to the external input current ( van rossum et al .2002 ; vogels and abbott 2005 ) . 
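the rate-propagation measure works as detailed in the next paragraph: the population rates and the smoothed input are estimated with a 5 ms window sliding in 1 ms steps, and the performance is the maximum over lags of their normalised cross-correlation. a sketch of that measure, assuming both signals are already binned on a common 1 ms grid, is given below; the maximum lag searched is an illustrative choice.

```python
# sketch of the rate-propagation measure: the maximum over lags of the pearson
# correlation between the (smoothed) input and a layer's population firing rate.
import numpy as np

def max_cross_correlation(input_signal, rate, max_lag=50):
    """maximum over lags tau of the correlation between input(t + tau) and rate(t)."""
    best = -1.0
    n = min(len(input_signal), len(rate))
    for tau in range(-max_lag, max_lag + 1):
        if tau >= 0:
            s, r = input_signal[tau:n], rate[:n - tau]
        else:
            s, r = input_signal[:n + tau], rate[-tau:n]
        if len(s) > 1 and s.std() > 0 and r.std() > 0:
            best = max(best, np.corrcoef(s, r)[0, 1])
    return best

# toy usage: a rate that tracks the input with a ~10 ms lag and some noise
t = np.arange(0.0, 1000.0, 1.0)                         # 1 ms grid
signal = np.maximum(np.sin(2.0 * np.pi * t / 200.0), 0.0)
rate = np.roll(signal, 10) + 0.05 * np.random.default_rng(0).standard_normal(t.size)
print(round(max_cross_correlation(signal, rate), 3))    # close to 1 at the matching lag
```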
to do this ,a 5 ms moving time window with 1 ms sliding step is also used to estimate the population firing rates for different layers as well as the smooth version of the external input current .the correlation coefficient between the population firing rate of the -th layer and external input current is calculated by \left[r_i(k)-\overline{r}_i\right]\right\rangle_t}{\sqrt{\left\langle \left[i_s(k+\tau)-\overline{i}_s\right]^2\right\rangle_t\left \langle\left[r_i(k)-\overline{r}_i\right]^2\right\rangle_t } } , \end{split } \label{eq:11}\ ] ] where denotes the average over time .here we use the maximum cross - correlation coefficient to quantify the performance of the firing rate propagation in the -th layer .note that is a normalization measure and a larger value corresponds to a better performance . in all numerical simulations, we use the standard euler - maruyama integration scheme to numerically calculate the aforementioned stochastic differential eqs .( [ eq:1])-([eq:8 ] ) ( kloeden et al . 1994 ) .the temporal resolution of integration is fixed at 0.02 ms for calculating the measures of the synfire mode and at 0.05 ms for calculating the measures of the firing rate mode , as the measurement of the synfire needs higher precise .in additional simulations , we have found that further reducing the integration time step does not change our numerical results in a significant way .for the synfire mode , all simulations are executed at least 100 ms to ensure that the synfire packet can be successfully propagated to the final layer of the considered network .while studying the firing rate mode , we perform all simulations up to 5000 ms to collect enough spikes for statistical analysis .it should be noted that , to obtain convincing results , we carry out several times of simulations ( at least 200 times for the synfire mode and 50 times for the firing rate mode ) for each experimental setting to compute the corresponding measures .in this section , we report the main results obtained in the simulation .we first systematically investigate the signal propagation in the ure feedforward neuronal networks .then , we compare these results with those for the corresponding rre feedforward neuronal networks . finally , we further study the signal propagation in the urei feedforward neuronal networks . herewe study the role of unreliable synapses on the propagation of synfire packet in the ure feedforward neuronal networks . in the absence of noise, we artificially let each sensory neuron fire and only fire an action potential at the same time ( and ms ) .without loss of generality , we let all sensory neurons fire spikes at ms .figure [ fig:2 ] shows four typical spike raster diagrams of propagating synfire activity .note that the time scales in figs .[ fig:2a]-[fig:2d ] are different .the ure feedforward neuronal network with both small successful transmission probability and small excitatory synaptic strength badly supports the synfire propagation . in this case ,due to high synaptic unreliability and weak excitatory synaptic interaction between neurons , the propagation of synfire packet can not reach the final layer of the whole network ( see fig . [ fig:2a ] ) .for suitable values of and , we find that the synfire packet can be stably transmitted in the ure feedforward neuronal network .moreover , it is obvious that the width of the synfire packet at any layer for is much narrower than that of the corresponding synfire packet for ( see figs .[ fig:2b ] and [ fig:2c ] ) . 
at the same time , the transmission speed is also enhanced with the increasing of .these results indicate that the neuronal response of the considered network is much more precise and faster for suitable large successful transmission probability .however , our simulation results also reveal that a strong excitatory synaptic strength with large value of might destroy the propagation of synfire activity . as we see from fig .[ fig:2d ] , the initial tight synfire packet splits into several different synfire packets during the transmission process . such phenomenon is called the `` synfire instability '' ( tetzlaff et al . 2002 ; tetzlaff et al .2003 ) , which mainly results from the burst firings of several neurons caused by the strong excitatory synaptic interaction as well as the stochastic fluctuation of the synaptic connections .+ + in fig .[ fig:3a ] , we depict the survival rate of synfire propagation as a function of the successful transmission probability for different values of excitatory synaptic strength , with the noise intensity .we find that each survival rate curve can be at least characterized by one corresponding critical probability . for small ,due to low synaptic reliability , any synfire packet can not reach the final layer of the ure feedforward neuronal network .once the successful transmission probability exceeds the critical probability , the survival rate rapidly transits from 0 to 1 , suggesting that the propagation of synfire activity becomes stable for a suitable high synaptic reliability . on the other hand , besides the critical probability , we find that the survival rate curve should be also characterized by another critical probability if the excitatory synaptic strength is sufficiently strong ( for example , ns in fig .[ fig:3a ] ) . in this case ,when , our simulation results show that the survival rate rapidly decays from 1 to 0 , indicating that the network fails to propagate the stable synfire packet again .however , it should be noted that this does not mean that the synfire packet can not reach the final layer of the network in this situation , but because the excitatory synapses with both high reliability and strong strength lead to the occurrence of the redundant synfire instability in transmission layers . to systematically establish the limits for the appearance of stable synfire propagation as well as to check thatwhether our previous results can be generalized within a certain range of parameters , we further calculate the survival rate of synfire propagation in the panel , which is shown in fig .[ fig:3b ] .as we see , the whole panel can be clearly divided into three regimes .these regimes include the failed synfire propagation regime ( regime i ) , the stable synfire propagation regime ( regime ii ) , and the synfire instability propagation regime ( regime iii ) .our simulation results reveal that transitions between these regimes are normally very fast and therefore can be described as a sharp transition .the data shown in fig . [ fig:3b ] further demonstrate that synfire propagation is determined by the combination of both the successful transmission probability and excitatory synaptic strength . for a lower synaptic reliability , the urefeedforward neuronal network might need a larger to support the stable propagation of synfire packet . in reality ,not only the survival rate of the synfire propagation but also its performance is largely influenced by the successful transmission probability and the strength of the excitatory synapses . 
in figs .[ fig:3c ] and [ fig:3d ] , we present the standard deviation of the spiking times of the output synfire packet for different values of and , respectively . note that here we only consider parameters and within the stable synfire propagation regime .the results illustrated in fig .[ fig:3c ] clearly demonstrate that the propagation of synfire packet shows a better performance for a suitable higher synaptic reliability .for the ideal case , the ure feedforward neuronal network even has the capability to propagate the perfect synfire packet ( and ms ) in the absence of noise .on the other hand , it is also found that for a fixed the performance of synfire propagation becomes better and better as the value of is increased ( see fig .[ fig:3d ] ) .the above results indicate that both high synaptic reliability and strong excitatory synaptic strength are able to help the ure feedforward neuronal network maintain the precision of neuronal response in the stable synfire propagation regime . up to now, we only use the perfect initial spike packet ( and ms ) to evoke the synfire propagation .this is a special case which is simplified for analysis , but it is not necessary to restrict this condition . to understand how a generalized spike packet is propagated through the ure feedforward neuronal network , we randomly choose neurons from the sensory layer , and let each of these neurons fire and only fire a spike at any moment according to a gaussian distribution with the standard deviation . in figs .[ fig:4a ] and [ fig:4b ] , we plot the survival rate of the synfire propagation as a function of and for four different values of successful transmission probability , respectively . when the successful transmission probability is not too large ( for example , , 0.27 , and 0.3 in figs .[ fig:4a ] and [ fig:4b ] ) , the synfire activity is well build up after several initial layers for sufficiently strong initial spike packet ( large and small ) , and then this activity can be successfully transmitted along the entire network with high survival rate . in this case ,too weak initial spike packet ( small and large ) leads to the propagation of the neural activities becoming weaker and weaker with the increasing of layer number .finally , the neural activities are stopped before they reach the final layer of the network . moreover , with the increasing of the successful transmission probability , neurons in the downstream layers will share more common synaptic currents from neurons in the corresponding upstream layers .this means that neurons in the considered network have the tendency to fire more synchronously for suitable larger ( not too large ) . on the other hand , for sufficiently high synaptic reliability ( for instance , in figs .[ fig:4a ] and [ fig:4b ] ) , a large or a suitable large may result in the occurrence of synfire instability , which also reduces the survival rate of the synfire propagation .therefore , for a fixed , the ure feedforward neuronal network with suitable higher synaptic reliability has the ability to build up stable synfire propagation from a slightly weaker initial spike packet ( see figs .[ fig:4a ] and [ fig:4b ] ) . 
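the packet statistics reported in these figures, the spike count and the temporal spread of the output packet, come from the window-detection and pruning procedure described in the measures section above. a compact sketch is given below; the 5 ms window, 0.1 ms sliding step and the 50-spike figure mentioned in the text are used directly, while the pruning multiplier `k` is an assumed value, since the text only states that a suitable value was found empirically.

```python
# sketch of the synfire-packet measures: locate the first highest-energy 5 ms window,
# then iteratively prune spikes that lie too far from the packet mean.
import numpy as np

def highest_energy_window(spike_times, width=5.0, step=0.1, threshold=50):
    """return (t_start, count) of the first window covering the most spikes, or None."""
    spike_times = np.sort(np.asarray(spike_times))
    if spike_times.size == 0:
        return None
    best, t = None, spike_times[0]
    while t <= spike_times[-1]:
        count = np.count_nonzero((spike_times >= t) & (spike_times < t + width))
        if count > threshold and (best is None or count > best[1]):
            best = (t, count)
        t += step
    return best

def prune_packet(packet_times, k=3.0):
    """iteratively drop spikes farther than k standard deviations from the packet mean."""
    packet = np.asarray(packet_times, dtype=float)
    while packet.size > 1:
        mu, sigma = packet.mean(), packet.std()
        keep = np.abs(packet - mu) <= k * sigma
        if keep.all() or sigma == 0.0:
            break
        packet = packet[keep]
    return packet, packet.mean(), packet.std()   # spike count, mean spiking time, spread

# toy usage: a tight packet around 40 ms plus a few spontaneous spikes
rng = np.random.default_rng(3)
spikes = np.concatenate([rng.normal(40.0, 0.3, 90), rng.uniform(0.0, 100.0, 8)])
window = highest_energy_window(spikes)
if window is not None:
    t0, _ = window
    packet, mean_t, sigma_t = prune_packet(spikes[(spikes >= t0) & (spikes < t0 + 5.0)])
    print(len(packet), round(mean_t, 2), round(sigma_t, 3))
```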
figures [ fig:4c ] and [ fig:4d ] illustrate the values of and versus the layer number for different initial spike packets and several different intrinsic system parameters of the network ( the successful transmission probability and excitatory synaptic strength ) .for each case shown in figs .[ fig:4c ] and [ fig:4d ] , once the synfire propagation is successfully established , converges fast to the saturated value 100 and approaches to an asymptotic value .although the initial spike packet indeed determines whether the synfire propagation can be established or not as well as influences the performance of synfire propagation in the first several layers , but it does not determine the value of in deep layers provided that the synfire propagation is successfully evoked .for the same intrinsic system parameters , if we use different initial spike packets to evoke the synfire propagation , the value of in deep layers is almost the same for different initial spike packets ( see fig .[ fig:4d ] ) .the above results indicate that the performance of synfire propagation in deep layers of the ure feedforward neuronal network is quite stubborn , which is mainly determined by the intrinsic parameters of the network but not the parameters of the initial spike packet .in fact , many studies have revealed that the synfire activity is governed by a stable attractor in the space ( diesmann et al .1999 ; diesmann et al . 2001 ; diesmann 2002 ; gewaltig et al .our above finding is a signature that the stable attractor of synfire propagation does also exist for the feedforward neuronal networks with unreliable synapses .next , we study the dependence of synfire propagation on neuronal noise .it is found that both the survival rate of synfire propagation and its performance are largely influenced by the noise intensity .there is no significant qualitative difference between the corresponding survival rate curves in low synaptic reliable regime .however , we find important differences between these curves for small in high synaptic reliable regime , as well as for large in intermediate synaptic reliable regime , that is , during the transition from the successful synfire regime to the synfire instability propagation regime ( see from fig .[ fig:5a ] ) . for each case, it is obvious that the top region of the survival rate becomes smaller with the increasing of noise intensity .this is at least due to the following two reasons : ( i ) noise makes neurons desynchronize , thus leading to a more dispersed synfire packet in each layer . 
for relatively high synaptic reliability , a dispersed synfire packet has the tendency to increase the occurrence rate of the synfire instability .( ii ) noise with large enough intensity results in several spontaneous neural firing activities at random moments , which also promote the occurrence of the synfire instability .figure [ fig:5b ] presents the value of as a function of the noise intensity for different values of successful transmission probability .as we see , the value of becomes larger and larger as the noise intensity is increased from 0 to 0.1 ( weak noise regime ) .this is also due to the fact that the existence of noise makes neurons desynchronize in each layer .however , although noise tends to reduce the synchrony of synfire packet , the variability of in deep layers is quite low ( data not shown ) .the results suggest that , in weak noise regime , the synfire packet can be stably transmitted through the feedforward neuronal network with small fluctuation in deep layers , but displays slightly worse performance compared to the case of .further increase of noise will cause many spontaneous neural firing activities which might significantly deteriorate the performance of synfire propagation .however , it should be emphasized that , although the temporal spread of synfire packet tends to increase as the noise intensity grows , several studies have suggested that under certain conditions the basin of attraction of synfire activity reaches a maximum extent ( diesmann 2002 ; postma et al . 1996 ; boven and aertsen 1990 ) .such positive effect of noise can be compared to a well known phenomenon called aperiodic stochastic resonance ( collins et al .1995b ; collins et al .1996 ; diesmann 2002 ) . in this subsection, we examine the firing rate propagation in ure feedforward neuronal networks . to this end , we assume that all sensory neurons are injected to a same time - varying external current ( see sec . [ sec:2b ] for detail ) .note now that the sensory neurons are modeled by using the integrate - and - fire neuron model in the study of the firing rate propagation .the maximum cross - correlation coefficient between the smooth version of external input current and the population firing rate of sensory neurons for different noise intensities.,width=294 ] before we present the results of the firing rate propagation , let us first investigate how noise influences the encoding capability of sensory neurons by the population firing rate .this is an important preliminary step , because how much input information represented by sensory neurons will directly influence the performance of firing rate propagation .the corresponding results are plotted in figs .[ fig:6 ] and [ fig:7 ] , respectively .when the noise is too weak , the dynamics of sensory neurons is mainly controlled by the same external input current , which causes neurons to fire spikes almost at the same time ( see figs . [ fig:7a ] and [ fig:7b ] ) . in this case , the information of the external input current is poorly encoded by the population firing rate since the synchronous neurons have the tendency to redundantly encode the same aspect of the external input signal .when the noise intensity falls within a special intermediate range ( about 0.5 - 10 ) , neuronal firing is driven by both the external input current and noise . with the help of noise ,the firing rate is able to reflect the temporal structural information ( i.e. 
, temporal waveform ) of the external input current to a certain degree ( see figs .[ fig:7c ] to [ fig:7f ] ) , and therefore has large value in this situation . for too large noise intensity , the external input current is almost drowned in noise , thus resulting that the input information can not be well read from the population firing rate of sensory neurons again . on the other hand, sensory neurons can fire `` false '' spikes provided that they are driven by sufficiently strong noise ( as for example at ms in fig .[ fig:7e ] ) .although the encoding performance of the sensory neurons might be good enough in this case , our numerical simulations reveal that such false spikes will seriously reduce the performance of the firing rate propagation in deep layers , which will be discussed in detail in the later part of this section . by taking these factors into account, we consider the noise intensity of sensory neurons to be within the range of 0.5 to 1 in the present work .an example of the firing rate propagation in the ure feedforward network .here we show the smooth version of the external input current , as and the population firing rates of layers 1 , 2 , 4 , 6 , 8 , and 10 , respectively .system parameters are ns , , and . ]figure [ fig:8 ] shows a typical example of the firing rate propagation . in view of the overall situation, the firing rate can be propagated rapidly and basically linearly in the ure feedforward neuronal network .however , it should be noted that , although the firing rates of neurons from the downstream layers tend to track those from the upstream layers , there are still several differences between the firing rates for neurons in two adjacent layers .for example , it is obvious that some low firing rates may disappear or be slightly amplified in the first several layers , as well as some high firing rates are weakened to a certain degree during the whole transmission process .therefore , as the neural activities are propagated across the network , the firing rate has the tendency to lose a part of local detailed neural information but can maintain a certain amount of global neural information . as a result , the maximum cross - correlation coefficient between and basically drops with the increasing of the layer number .let us now assess the impacts of the unreliable synapses on the performance of firing rate propagation in the ure feedforward neuronal network .figure [ fig:9a ] presents the value of versus the success transmission probability for various excitatory synaptic strengths . for a fixed value of , a bell - shaped curve is clearly seen by changing the value of successful transmission probability , indicating that the firing rate propagation shows the best performance at an optimal synaptic reliability level .this is because , for each value of , a very small will result in the insufficient firing rate propagation due to low synaptic reliability , whereas a sufficiently large can lead to the excessive propagation of firing rate caused by burst firings .based on above reasons , the firing rate can be well transmitted to the final layer of the ure feedforward neuronal network only for suitable intermediate successful transmission probabilities .moreover , with the increasing of , the considered network needs a relatively small to support the optimal firing rate propagation . in fig .[ fig:9b ] , we plot the value of as a function of the excitatory synaptic strength for different values of . 
here the similar results as those shown in fig .[ fig:9a ] can be observed .this is due to the fact that increasing and fixing the value of is equivalent to increasing and fixing the value of to a certain degree . according to the aforementioned results , we conclude that both the successful transmission probability and excitatory synaptic strength are critical for firing rate propagation in ure feedforward networks , and better choosing of these two unreliable synaptic parameters can help the cortical neurons encode neural information more accurately .next , we examine the dependence of the firing rate propagation on neuronal noise .the corresponding results are plotted in figs .[ fig:10 ] and [ fig:11 ] , respectively .figure [ fig:10a ] demonstrates that the noise of cortical neurons plays an important role in firing rate propagation .noise of cortical neurons with appropriate intensity is able to enhance their encoding accuracy .it is because appropriate intermediate noise , on the one hand , prohibits synchronous firings of cortical neurons in deep layers , and on the other hand , ensures that the useful neural information does not drown in noise .however , the level of enhancement is largely influenced by the noise intensity of sensory neurons .as we see , for a large value of , such enhancement is weakened to a great extent .this is because slightly strong noise intensity of sensory neurons will cause these neurons to fire several false spikes and a part of these spikes can be propagated to the transmission layers .if enough false spikes appear around the weak components of the external input current , these spikes will help the network abnormally amplify these weak components during the whole transmission process .the aforementioned process can be seen clearly from an example shown in fig .[ fig:11 ] . as a result, the performance of the firing rate propagation might be seriously deteriorated in deep layers .however , it should be noted that this kind of influence typically needs the accumulation of several layers .our simulation results show that the performance of firing rate propagation can be well maintained or even becomes slightly better ( depending on the noise intensity of sensory neurons , see fig .[ fig:6 ] ) in the first several layers for large ( see fig . 
[ fig:10b ] ) .in fact , the above results are based on the assumption that each cortical neuron is driven by independent noise current with the same intensity .our results can be generalized from the sensory layer to the transmission layers if we suppose that noise intensities for neurons in different transmission layers are different .all these results imply that better tuning of the noise intensities of both the sensory and cortical neurons can enhance the performance of firing rate propagation in the ure feedforward neuronal network .( color online ) an example of weak external input signal amplification .system parameters are the successful transmission probability , excitatory synaptic strength ns , and noise intensities and , respectively.,width=453 ] from the numerical results depicted in secs .[ sec:3a ] and [ sec:3b ] , we find that increasing with fixed has similar effects as increasing while keeping fixed for both the synfire mode and firing rate mode .some persons might therefore postulate that the signal propagation dynamics in feedforward neuronal networks with unreliable synapses can be simply determined by the average amount of received neurotransmitter for each neuron in a time instant , which can be reflected by the product of . to check whether this is true ,we calculate the measures of these two signal propagation modes as a function of for different successful transmission probabilities .if this postulate is true , the ure feedforward neuronal network will show the same propagation performance for different values of at a fixed .our results shown in figs . [ fig:12a]-[fig:12c ] clearly demonstrate that the signal propagation dynamics in the considered network can not be simply determined by the product or , equivalently , by the average amount of received neurotransmitter for each neuron in a time instant .for both the synfire propagation and firing rate propagation , although the propagation performance exhibits the similar trend with the increasing of , the corresponding measure curves do not superpose in most parameter region for each case , and in some parameter region the differences are somewhat significant ( see figs .[ fig:12b ] and [ fig:12c ] ) .this is because of the stochastic effect of neurotransmitter release , that is , the unreliability of neurotransmitter release will add randomness to the system .different successful transmission probabilities may introduce different levels of randomness , which will further affect the nonlinear spiking dynamics of neurons .therefore , the ure feedforward neuronal network might display different propagation performance for different values of even at a fixed .if we set the value of constant , a low synaptic reliability will introduce large fluctuations in the synaptic inputs . for small , according to the above reason , some neurons will fire spikes more than once in the large regime .this mechanism increases the occurrence rate of the synfire instability .thus , the ure feedforward neuronal network has the tendency to stop the stable synfire propagation for a small synaptic transmission probability ( see fig . [ fig:12a ] ) . on the other hand , a high synaptic reliability will introduce small fluctuations in the synaptic inputs for a fixed .this makes neurons in the considered network fire spikes almost synchronously for a large , thus resulting the worse performance for the firing rate propagation in large regime ( see fig . 
[ fig:12c ] ) .our above results suggest that the performance of the signal propagation in feedforward neuronal networks with unreliable synapses is not only purely determined by the change of synaptic parameters , but also largely influenced by the stochastic effect of neurotransmitter release . in this subsection, we make comparisons on the propagation dynamics between the ure and the rre feedforward networks .we first introduce how to generate a corresponding rre feedforward neuronal network for a given ure feedforward neuronal network .suppose now that there is a ure feedforward neuronal network with successful transmission probability .a corresponding rre feedforward neuronal network is constructed by using the connection density ( on the whole ) , that is , a synapse from one neuron in the upstream layer to one neuron in the corresponding downstream layer exists with probability . as in the ure feedforward neuronalnetwork given in sec .[ sec:2a ] , there is no feedback connection from downstream neurons to upstream neurons and also no connection among neurons within the same layer in the rre feedforward neuronal network .it is obvious that parameter has different meanings in these two different feedforward neuronal network models .the synaptic interactions between neurons in the rre feedforward neuronal network are also implemented by using the conductance - based model ( see eqs .( [ eq:6 ] ) and ( [ eq:7 ] ) for detail ) .however , here we remove the constraint of the synaptic reliability parameter for the rre feedforward neuronal network , e.g. , in all cases .a naturally arising question is what are the differences , if have , between the synfire propagation and firing propagation in ure feedforward neuronal networks and those in rre feedforward neuronal networks , although the numbers of active synaptic connections that taking part in transmitting spikes in a time instant are the same from the viewpoint of mathematical expectation .( color online ) the difference between the synfire propagation in the ure feedforward neuronal network and the rre feedforward neuronal network . herewe show the value of survival rate as a function of for different network models . in all cases , and .other system parameters are ns and ( dot : `` '' , and circle : `` '' ) , and ns and ( square : `` '' , and asterisk : `` '' ) .each data point is calculated based on 500 different independent simulations with different random seeds.,width=294 ] for the synfire propagation , our simulation results indicate that , compared to the rre feedforward neuronal network , the ure feedforward neuronal network is able to suppress the occurrence of synfire instability to a certain degree , which can be seen clearly in fig .[ fig:13 ] .typically , this phenomenon can be observed in strong excitatory synaptic strength regime . due to the heterogeneity of connectivity, some neurons in the rre feedforward neuronal network will have more input synaptic connections than the other neurons in the same network . for large value of , these neurons tend to fire spikes very rapidly after they received synaptic currents . if the width of the initial spike packet is large enough , these neurons might fire spikes again after their refractory periods , which are induced by a few spikes from the posterior part of the dispersed initial spike packet .these spikes may increase the occurrence rate of the synfire instability . 
while in the case of ure feedforward neuronal network, the averaging effect of unreliable synapses tends to prohibit neurons fire spikes too quickly .therefore , under the equivalent parameter conditions , less neurons can fire two or more spikes in the ure feedforeard neuronal network . as a result ,the survival rate of the synfire propagation for the ure feedforeard neuronal network is larger than that for the rre feedforward neuronal network ( see fig .[ fig:13 ] ) , though not so significant . in further simulations ,we find interesting results in small regime for the firing rate propagation . compared to the case of the ure feedforward neuronal network, the rre feedforward neuronal network can better support the firing rate propagation in this small regime for strong excitatory synaptic strength ( see fig .[ fig:14a ] ) .it is because the long - time averaging effect of unreliable synapses at small tends to make neurons fire more synchronous spikes in the ure feedforward neuronal network through the homogenization process of synaptic currents .however , with the increasing of , neurons in the downstream layers have the tendency to share more common synaptic currents from neurons in the corresponding upstream layers for both types of feedforward neuronal networks .the aforementioned factor makes the difference of the performance of firing rate propagation between these two types of feedforward neuronal networks become small so that the curves almost coincide with each other for the case of ( see fig .[ fig:14b ] ) .although from the above results we can not conclude that unreliable synapses have advantages and play specific functional roles in signal propagation , not like those results shown in the previous studies ( goldman et al .2002 ; goldman 2004 ) , at least it is shown that the signal propagation activities are different in ure and rre to certain degrees .we should be cautioned when using random connections to replace unreliable synapses in modelling research .however , it should be noted that the rre feedforward neuronal network considered here is just one type of diluted feedforward neuronal networks .there exists several other possibilities to construct the corresponding diluted feedforward neuronal networks ( hehl et al . 2001 ) .the similar treatments for these types of diluted feedforward neuronal networks require further investigation . in this subsection , we further study the signal propagation in the feedforward neuronal networks composed of both excitatory and inhibitory neurons connected in an all - to - all coupling fashion ( i.e. , the urei feedforward neuronal networks ) .this study is necessary because real biological neuronal networks , especially mammalian neocortex , consist not only of excitatory neurons but also of inhibitory neurons .the urei feedforward neuronal network studied in this subsection has the same topology as that shown in fig .[ fig:1 ] . in simulations, we randomly choose 80 neurons in each layer as excitatory and the rest of them as inhibitory , as the ratio of excitatory to inhibitory neurons is about in mammalian neocortex .the dynamics of the unreliable inhibitory synapse is also modeled by using eqs .( [ eq:6 ] ) and ( [ eq:7 ] ) .the reversal potential of the inhibitory synapse is fixed at -75 mv , and its strength is set as , where is a scale factor used to control the relative strength of inhibitory and excitatory synapses . 
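A compact sketch of how an unreliable conductance-based synapse of the kind described by eqs. ([eq:6]) and ([eq:7]) can be implemented for the UREI setting just introduced. Only the Bernoulli release rule, the -75 mV inhibitory reversal potential, and the scale factor relating inhibitory to excitatory strength come from the text; the decay time constant, the excitatory reversal potential, and every numerical value below are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

TAU_SYN = 5.0     # ms, synaptic decay time constant (assumed)
E_EXC = 0.0       # mV, excitatory reversal potential (assumed)
E_INH = -75.0     # mV, inhibitory reversal potential (as stated in the text)

def step_synapses(g_exc, g_inh, pre_spikes, is_exc, p, w_exc, k, dt):
    """Advance the total synaptic conductances of one downstream neuron by dt.
    Every presynaptic spike releases transmitter with probability p (the
    unreliable-synapse rule); inhibitory strength is k * w_exc."""
    g_exc *= np.exp(-dt / TAU_SYN)
    g_inh *= np.exp(-dt / TAU_SYN)
    released = pre_spikes & (rng.random(pre_spikes.size) < p)
    g_exc += w_exc * np.count_nonzero(released & is_exc)
    g_inh += k * w_exc * np.count_nonzero(released & ~is_exc)
    return g_exc, g_inh

def synaptic_current(g_exc, g_inh, v):
    """Conductance-based current entering the membrane equation."""
    return g_exc * (E_EXC - v) + g_inh * (E_INH - v)

# One layer of 100 upstream neurons, 80 excitatory and 20 inhibitory,
# all firing in the same time bin.
is_exc = np.array([True] * 80 + [False] * 20)
pre_spikes = np.ones(100, dtype=bool)
g_e, g_i = step_synapses(0.0, 0.0, pre_spikes, is_exc,
                         p=0.5, w_exc=0.02, k=1.0, dt=0.1)
print(synaptic_current(g_e, g_i, v=-65.0))
```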
since the results of the signal propagation in urei feedforward neuronal networks are quite similar to those in ure feedforward neuronal networks , we omit most of them and only discuss the effects of inhibition in detail . figure [ fig:15 ] shows the survival rate of synfire propagation in the panel for three different excitatory synaptic strengths . depending on whether the synfire packet can be successfully and stably transmitted to the final layer of the urei feedforward neuronal network ,the whole panel can also be divided into three regimes . for each considered case, the network with both small successful transmission probability and strong relative strength of inhibitory and excitatory synapses ( failed synfire regime ) prohibits the stable propagation of the synfire activity . while in the case of high synaptic reliability and small ( synfire instability propagation regime ), the synfire packet also can not be stably transmitted across the whole network due to the occurrence of synfire instability .therefore , the urei feedforward neuronal network is able to propagate the synfire activity successfully in a stable way only for suitable combination of parameters and .moreover , due to the competition between excitation and inhibition , the transitions between these different regimes can not be described as a sharp transition anymore , in particular , for large scale factor .our results suggest that such non - sharp character is strengthen with the increasing of .on the other hand , the partition of these different propagation regimes depends not only on parameters and but also on the excitatory synaptic strength . as the value of decreased , both the synfire instability propagation regime and stable synfire propagation regime are shifted to the upper left of the panel at first , and then disappear one by one ( data not shown ) .in contrast , a strong excitatory synaptic strength has the tendency to extend the areas of the synfire instability propagation regime , and meanwhile makes the stable synfire propagation regime move to the lower right of the panel .( color online ) effect of inhibition on firing rate propagation .here we show the value of as a function of scale factor for different excitatory synaptic strengths .system parameters are , and in all cases . each data point is calculated based on 50 different independent simulations with different random seeds.,width=294 ] for the case of firing rate propagation , we plot the value of versus the scale factor for different excitatory synaptic strengths in fig .[ fig:16 ] , with a fixed successful transmission probability .when the excitatory synaptic strength is small ( for instance ns ) , due to weak excitatory synaptic interaction between neurons the urei feedforward neuronal network can not transmit the firing rate sufficiently even for . 
in this case ,less and less neural information can be propagated to the final layer of the considered network with the increasing of .therefore , monotonically decreases with the scale factor at first and finally approaches to a low steady state value .note that here the low steady state value is purely induced by the spontaneous neural firing activities , which are caused by the additive gaussian white noise .as the excitatory synaptic strength grows , more neural information can be successfully transmitted for small value of .when is increased to a rather large value , such as ns , the coupling is so strong that too small scale factor will lead to the excessive propagation of firing rate .however , in this case , the propagation of firing rate can still be suppressed provided that the relative strength of inhibitory and excitatory synapses is strong enough . as a result , there always exists an optimal scale factor to best support the firing rate propagation for each large excitatory synaptic strength ( see fig .[ fig:16 ] ) .if we fix the value of ( not too small ) , then the similar results can also be observed by changing the scale factor for a large successful transmission probability ( data not shown ) .once again , this is due to the fact that increasing and fixing is equivalent to increasing and fixing to a certain degree .the feedforward neuronal network provides us an effective way to examine the neural activity propagation through multiple brain regions .although biological experiments suggested that the communication between neurons is more or less unreliable ( abeles 1991 ; raastad et al .1992 ; smetters and zador 1996 ) , so far most relevant computational studies only considered that neurons transmit spikes based on reliable synaptic models .in contrast to these previous work , we took a different approach in this work .here we first built a 10-layer feedforward neuronal network by using purely excitatory neurons , which are connected with unreliable synapses in an all - to - all coupling fashion , that is , the so - called ure feedforward neuronal network in this paper .the goal of this work was to explore the dependence of both the synfire propagation and firing rate propagation on unreliable synapses in the ure neuronal network , but was not limited this type of feedforward neuronal network .our modelling methodology was motivated by experimental results showing the probabilistic transmitter release of biological synapses ( branco and staras 2009 ; katz 1966 ; katz 1969 ; trommershuser et al .1999 ) . in the study of synfire mode, it was observed that the synfire propagation can be divided into three types ( i.e. 
, the failed synfire propagation , the stable synfire propagation , and the synfire instability propagation ) depending on whether the synfire packet can be successfully and stably transmitted to the final layer of the considered network .we found that the stable synfire propagation only occurs in the suitable region of system parameters ( such as the successful transmission probability and excitatory synaptic strength ) .for system parameters within the stable synfire propagation regime , it was found that both high synaptic reliability and strong excitatory synaptic strength are able to support the synfire propagation in feedforward neuronal networks with better performance and faster transmission speed .further simulation results indicated that the performance of synfire packet in deep layers is mainly influenced by the intrinsic parameters of the considered network but not the parameters of the initial spike packet , although the initial spike packet determines whether the synfire propagation can be evoked to a great degree .in fact , this is a signature that the synfire activity is governed by a stable attractor , which is in agreement with the results given in ( diesmann et al .1999 ; diesmann et al .2001 ; diesmann 2002 ; gewaltig et al . 2001 ) . in the study of firing rate propagation, our simulation results demonstrated that both the successful transmission probability and the excitatory synaptic strength are critical for firing rate propagation . too small successful transmission probability or too weak excitatory synaptic strength results in the insufficient firing rate propagation , whereas too large successful transmission probability or too strong excitatory synaptic strength has the tendency to lead to the excessive propagation of firing rate . theoretically speaking , better tuning of these two synaptic parameterscan help neurons encode the neural information more accurately .on the other hand , neuronal noise is ubiquitous in the brain .there are many examples confirmed that noise is able to induce many counterintuitive phenomena , such as stochastic resonance ( collins et al .1995a ; collins et al .1995b ; collins et al .1996 ; chialvo et al . 
1997 ; guo and li 2009 ) and coherence resonance ( pikovsky and kurths 1996 ; lindner and schimansky - geier 2002 ; guo and li 2009 ) .here we also investigated how the noise influences the performance of signal propagation in ure feedforward neuronal networks .the numerical simulations revealed that noise tends to reduce the performance of synfire propagation because it makes neurons desynchronized and causes some spontaneous neural firing activities .further studies demonstrated that the survival rate of synfire propagation is also largely influenced by the noise .in contrast to the synfire propagation , our simulation results showed that noise with appropriate intensity is able to enhance the performance of firing rate propagation in ure feedforward neuronal networks .in essence , it is because suitable noise can help neurons in each layer maintain more accurate temporal structural information of the the external input signal .note that the relevant mechanisms about noise have also been discussed in several previous work ( van rossum et al .2002 ; masuda and aihara 2002 ; masuda and aihara 2003 ) , and our results are consistent with the findings given in these work .furthermore , we have also investigated the stochastic effect of neurotransmitter release on the performance of signal propagation in the ure feedforward neuronal networks .for both the synfire propagation and firing rate propagation , we found that the ure feedforward neuronal networks might display different propagation performance , even when their average amount of received neurotransmitter for each neuron in a time instant remains unchanged .this is because the unreliability of neurotransmitter release will add randomness to the system .different synaptic transmission probabilities will introduce different levels of stochastic effect , and thus might lead to different spiking dynamics and propagation performance .these findings revealed that the signal propagation dynamics in feedforward neuronal networks with unreliable synapses is also largely influenced by the stochastic effect of neurotransmitter release .finally , two supplemental work has been also performed in this paper . in the first work , we compared both the synfire propagation and firing rate propagation in ure feedforward neuronal networks with the results in corresponding feedforward neuronal networks composed of purely excitatory neurons but connected with reliable synapses in an random coupling fashion ( rre feedforward neuronal network ) .our simulations showed that several different results exist for both the synfire propagation and firing rate propagation in these two different feedforward neuronal network models .these results tell us that we should be cautioned when using random connections to replace unreliable synapses in modelling research . 
in the second work, we extended our results to more generalized feedforward neuronal networks , which consist not only of the excitatory neurons but also of inhibitory neurons ( urei feedforward neuronal network ) .the simulation results demonstrated that inhibition also plays an important role in both types of neural activity propagation , and better choosing of the relative strength of inhibitory and excitatory synapses can enhance the transmission capability of the considered network .the results presented in this work might be more realistic than those obtained based on reliable synaptic models .this is because the communication between biological neurons indeed displays the unreliable properties .in real neural systems , neurons may make full use of the characteristics of unreliable synapses to transmit neural information in an adaptive way , that is , switching between different signal propagation modes freely as required .further work on this topic includes at least the following two aspects : ( i ) since all our results are derived from numerical simulations , an analytic description of the synfire propagation and firing rate propagation in our considered feedforward neuronal networks requires investigation .( ii ) intensive studies on signal propagation in the feedforward neuronal network with other types of connectivity , such as the mexican - hat - type connectivity ( hamaguchi et al . 2004 ; hamaguchi and aihara 2005 ) and the gaussian - type connectivity ( van rossum et al . 2002 ) , as well as in the feedforward neuronal network imbedded into a recurrent network ( aviel et al . 2003 ; vogels and abbott 2005 ; kumar et al . 2008 ) , from the unreliable synapses point of view are needed as well .we thank feng chen , yuke li , qiuyuan miao , xin wei and qunxian zheng for valuable discussions on an early version of this manuscript .we gratefully acknowledge the anonymous reviewers for providing useful comments and suggestion , which greatly improved our paper .we also sincerely thank one reviewer for reminding us of a critical citation ( trommershuser and diesmann 2001 ) .this work is supposed by the national natural science foundation of china ( grant no .60871094 ) , the foundation for the author of national excellent doctoral dissertation of pr china , and the fundamental research funds for the central universities ( grant no . 1a5000 - 172210126 ) .daqing guo would also like to thank the award of the ongoing best phd thesis support from the university of electronic science and technology of china .boven , k. , h. , aertsen , a. , m. , h. , j. ( 1990 ) .dynamics of activity in neuronal networks give rise to fast modulations of functional connectivity . in _parallel processing and neural systems and computers _( pp . 53 - 56 ) .amsterdam : elsevier .raastad , m. , storm , j. f. , andersen p. ( 1992 ) .putative single quantum and single fibre excitatory postsynaptic currents show similar amplitude range and variability in rat hippocampal slices .j. neurosci ._ , 4 , 113 - 117 .shinozaki , t. , okada , m. , reyes , a. d. , c , h. ( 2010 ) .flexible traffic control of the synfire - mode transmission by inhibitory modulation : nonlinear noise reduction .e _ , 81 , 011913 .trommershuser j. , marienhagen j. , zippelius a. ( 1999 ) , stochastic model of central synapses : slow diffusion of transmitter interacting with spatially distributed receptors and transporters ._ j theor ._ , 198 , 101 - 120 .
in this paper, we systematically investigate both synfire propagation and firing rate propagation in feedforward neuronal networks coupled in an all-to-all fashion. in contrast to most earlier work, where only reliable synaptic connections are considered, we mainly examine the effects of unreliable synapses on both types of neural activity propagation. we first study networks composed of purely excitatory neurons. our results show that both the successful transmission probability and the excitatory synaptic strength strongly influence the propagation of these two types of neural activity, and that appropriate tuning of these synaptic parameters allows the considered network to support stable signal propagation. it is also found that noise has significant but different impacts on the two types of propagation: additive gaussian white noise tends to reduce the precision of the synfire activity, whereas noise of appropriate intensity can enhance the performance of firing rate propagation. further simulations indicate that the propagation dynamics of the considered neuronal network is not determined simply by the average amount of neurotransmitter received by each neuron in a time instant, but is also strongly influenced by the stochastic effect of neurotransmitter release. second, we compare our results with those obtained in corresponding feedforward neuronal networks connected with reliable synapses but in a random coupling fashion, and find that the two network models show clear differences. finally, we study signal propagation in feedforward neuronal networks consisting of both excitatory and inhibitory neurons, and demonstrate that inhibition also plays an important role in signal propagation in the considered networks. * keywords : * feedforward neuronal network , unreliable synapse , signal propagation , synfire chain , firing rate
in this paper , we consider the following equilibrium problem ( ) in the sense of blum and otteli , which consists in finding a point such that where is a nonempty , closed and convex subset of a real banach space and is an equilibrium bifunction , i.e. , for all . the solution set of ( ) is denoted by .the equilibrium problem which also known under the name of ky fan inequality covers , as special cases , many well - known problems , such as the optimization problem , the variational inequality problem and nonlinear complementarity problem , the saddle point problem , the generalized nash equilibrium problem in game theory , the fixed point problem and others ; ( see ) .also numerous problems in physic and economic reduce to find a solution of an equilibrium problem .many methods have been proposed to solve the equilibrium problems see for instance . in 1980,cohen introduced a useful tool for solving optimization problem which is known as auxiliary problem principle and extended it to variational inequality .in auxiliary problem principle a sequence is generated as follows : is a unique solution of the following strongly convex problem where .recently , mastroeni extended the auxiliary problem principle to equilibrium problems under the assumptions that the equilibrium function is strongly monotone on and that satisfies the following lipschitz- type condition : for all where . to avoid the monotonicity of , motivated by antipin , tran et al . have used an extrapolation step in each iteration after solving ( [ equ.1 ] ) and suppose that is pseudomonotone on which is weaker than monotonicity assumption .they assumed was the unique solution of ( [ equ.1 ] ) and the unique solution of the following strongly convex problem is denoted by .in special case , when the problem is a variational inequality problem , this method reduces to the classical extragradient method which has been introduced by korpelevich .the extragradient method is well known because of its efficiency in numerical tests . in the recent years, many authors obtained extragradient algorithms for solving ( ) in hilbert spaces where convergence of the proposed algorithms was required to satisfy a certain lipschitz - type condition .lipschitz - type condition depends on two positive parameters and which in some cases , they are unknown or difficult to approximate . in other to avoid this requirement , authors used the linesearch technique in a hilbert space to obtain convergent algorithms for solving equilibrium problem . in this paper , we consider the following auxiliary equilibrium problem ( ) for finding such that for all , where is a regularization parameter and be a nonnegative differentiable convex bifunction on with respect to the second argument , for each fixed , such that 1 . for all , 2 . for all . where denotes the gradient of the function at . in the recent years , many authors studied the problem of finding a common element of the set of fixed points of a nonlinear mapping and the set of solutions of an equilibrium problem in the framework of hilbert spaces and banach spaces , see for instance . in all of these methods ,authors have used metric projection in hilbert spaces and generalized metric projection in banach spaces . in this paper , motivated d. q. tran et al . and p. t. vuong et al . 
, we introduce new extragradient and linesearch algorithms for finding a common element of the set of solutions of an equilibrium problem and the set of fixed points of a relatively nonexpansive mapping in banach spaces , by using sunny generalized nonexpansive retraction . using this method, we prove strong convergence theorems under suitable conditions .we denote by the normalized duality mapping from to defined by also , denote the strong convergence and the weak convergence of a sequence to in by and , respectively .+ let be the unite sphere centered at the origin of .a banach space is strictly convex if , whenever and .modulus of convexity of is defined by for all ] .the banach space is called smooth if the limit exists for all .it is also said to be uniformly smooth if the limit is attained uniformly for all .every uniformly smooth banach space is smooth . if a banach space uniformly convex , then is reflexive and strictly convex .many properties of the normalized duality mapping have been given in .+ we give some of those in the following : 1 . for every , is nonempty closed convex and bounded subset of .2 . if is smooth or is strictly convex , then is single - valued .if is strictly convex , then is one - one .if is reflexive , then is onto .if is strictly convex , then is strictly monotone , that is , for all such that . 6 .if is smooth , strictly convex and reflexive and is the normalized duality mapping on , then , and , where and are the identity mapping on and , respectively .if is uniformly convex and uniformly smooth , then is uniformly norm - to - norm continuous on bounded sets of and is also uniformly norm - to - norm continuous on bounded sets of , i.e. , for and , there is a such that let be a smooth banach space , we define the function by for all . observe that , in a hilbert space , for all + it is clear from definition of that for all , if additionally assumed to be strictly convex , then also , we define the function by , + for all and . that is , for all and . + it is well known that , if is a reflexive strictly convex and smooth banach space with as its dual , then for all and all .let be a smooth banach space and be a nonempty subset of .a mapping is called generalized nonexpansive if and for all and all + let be a closed convex subset of and be a mapping .a point in is said to be an asymptotic fixed point of if contains a sequence which converges weakly to such that .the set of asymptotic fixed points of will be denoted by .a mapping is called relatively nonexpansive if and for all and .the asymptotic behavior of relatively nonexpansive mappings was studied in . is said to be relatively quasi - nonexpansive if and for all and all . the class of relatively quasi - nonexpansive mapping is broader than the class of relatively nonexpansive mappings which requires+ it is well known that , if is a strictly convex and smooth banach space , is a nonempty closed convex subset of and is a relatively quasi - nonexpansive mapping , then is a closed convex subset of .let be a nonempty subset of a banach space .a mapping is said to be sunny if for all and all .a mapping is said to be a retraction if for all . 
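As an aside before the retraction-based machinery is developed further, the two-step extragradient idea recalled above (solve one strongly convex auxiliary subproblem, then an extrapolation subproblem) can be sketched numerically in the Hilbert-space special case, where, as noted later in the text, the scheme reduces to the classical extragradient algorithm. The feasible set, the bifunction f(x, y) = <Ax + q, y - x>, and the step size below are illustrative choices, not the example used in the paper; with this affine choice of f(x, .) each subproblem has a closed-form solution given by a projection.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n (the feasible set C)."""
    return np.clip(x, lo, hi)

def extragradient_ep(f_grad2, x0, lam, lo, hi, n_iter=200):
    """Two-step extragradient scheme for an equilibrium problem in R^n.
    f_grad2(x, y) is the gradient of f(x, .) with respect to the second
    argument evaluated at y; because f(x, .) is taken affine in y here,
    argmin_y { lam*f(x, y) + 0.5*||y - x||^2 } is a single projection."""
    x = x0.copy()
    for _ in range(n_iter):
        y = project_box(x - lam * f_grad2(x, x), lo, hi)   # auxiliary subproblem
        x = project_box(x - lam * f_grad2(y, y), lo, hi)   # extrapolation step
    return x

# Illustrative bifunction f(x, y) = <A x + q, y - x>, i.e. a variational
# inequality with a positive semidefinite (hence monotone) operator.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
f_grad2 = lambda x, y: A @ x + q          # grad of f(x, .) at any y
sol = extragradient_ep(f_grad2, x0=np.array([2.0, 2.0]), lam=0.2, lo=0.0, hi=5.0)
print(sol)                                 # approaches (1/3, 1/3) for this choice
```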
is a sunny nonexpansive retraction from onto if is a retraction which is also sunny and nonexpansive .a nonempty subset of a smooth banach space is said to be a generalized nonexpansive retract ( resp .sunny generalized nonexpansive retract ) of if there exists a generalized nonexpansive retraction ( resp .sunny generalized nonexpansive retraction ) from onto .+ if is a smooth , strictly convex and reflexive banach space , be a nonempty closed convex subset of and be the generalized metric projection of onto . then the is a sunny generalized nonexpansive retraction of onto . if is a hilbert space. then .we need the following lemmas for the proof of our main results .+ if is a convex subset of banach space , then we denote by the normal cone for at a point , that is suppose that is a banach space and let ] , such that for all .[lem15 ] let be a uniformly convex banach space .then there exists a continuous strictly increasing convex function \rightarrow [ 0 , \infty) ] with .[lem5 ] let be a uniformly convex and smooth banach space and let and be two sequences of . if and either or is bounded , then [ lem10] and be two positive and bounded sequences in , then this section , we present an algorithm for finding a solution of the ( ) which is also the common element of the set of fixed points of a relatively nonexpansive mapping .+ here , we assume that bifunction satisfies in following conditions which is nonempty , convex and closed subset of uniformly convex and uniformly smooth banach space , 1 . for all 2 . is pseudomonotone on , i.e. , for all 3 . is jointly weakly continuous on , i.e. , if and and are two sequences in converging weakly to and , respectively , then 4 . is convex , lower semicontinuous and subdifferentiable on for every 5 . satisfies -lipschitz - type condition : such that for every it is easy to see that if satisfies the properties , then the set of solutions of an equilibrium problem is closed and convex .indeed , when is a hilbert space , -lipschitz - type condition reduces to lipschitz - type condition ( [ equ.2 ] ) . + throughout the paper is a relatively nonexpansive self - mapping of .* algorithm * step 0 .: : suppose that ] for some and ] , where step 1 .: : let .: : compute and , such that step 3 .: : compute . + if and , then and go to step 4 .: : compute , where is the sunny generalized nonexpansive retraction from onto and step 5 .: : set and go to step 2 . before proving the strong convergence of the iterates generated by algorithm 1, we prove following lemmas .[ lem1 ] for every and , we obtain 1 . 2 . . by the condition ( ) for and from lemmas [ lem7 ] and [ lem8 ], we obtain if and only if this implies that and exist such that so , from definition of , we obtain for all .set , we have so , by definition of the and equality ( [ equa18 ] ) , we get for all . put in inequality [ eq40 ] , we have since and is pseudomonotone on . replacing and by , and in inequality ( [ equa17 ] ) , respectively ,we get in a similar way , since , we have for all , hence ( ) is proved .let in above inequality , we obtain combining inequalities ( [ equa1 ] ) , ( [ equa2 ] ) and ( [ equa3 ] ) , we get from inequality ( [ equa19 ] ) and ( [ eq36 ] ) , we have hence , ( ) is proved . in a real hilbert space , lemma [ lem1 ]is reduced to lemma in . in algorithm , we obtain the unique optimal solutions and .let , then using lemma [ lem1]( ) , we have set in inequality ( [ equa22 ] ) and in inequality ( [ equa23 ] ) . 
hence , we get since is monotone and one - one , we obtain in a similar way , also is unique . if is a real hilbert space , then algorithm is the same extragradient algorithm in provided that the sequence satisfies the conditions of step of algorithm .[ lem3 ] for every and , we get from lemma [ lem1 ] , by the convexity of and by the definition of functions and , we have we examine the stopping condition in the following lemma .[ lem6 ] let , then . if and , then .suppose , then by the definition of , condition ( ) , property of ( [ equa21 ] ) and since , we have for all .set , hence lemma [ lem2 ] implies that .+ let and , we have that and since is one - one , we get since and , it follows that and since is one - one , we get .so in a real hilbert space , lemma [ lem6 ] is the same proposition in with different proof , provided that the sequence satisfies the conditions of step of algorithm .[ eq25 ] let be a nonempty closed convex subset of a uniformly convex , uniformly smooth banach space .assume that is a bifunction which satisfies conditions and is a relatively nonexpansive mapping such that then sequences , , and generated by algorithm converge strongly to the some solution , where , and is sunny generalized nonexpansive retraction from onto .at first , using induction we show that for all .let , from lemma [ lem3 ] , we get for all . now , we show that for all .it is clear that .suppose that , i.e for all .since , using lemma [ lem4 ] , we get for all .this implies that .therefore .let .since , using successively equality ( [ eq36 ] ) , it is easy to see that the is increasing and bounded from above by , so exists .this yields that is bounded . from inequality ( [ eq14 ] ) , we know that is bounded .it is clear that , so lemma [ lem5 ] implies that and therefore converges strongly to since is uniformly norm - to - norm continuous on bounded sets , from equality ( [ equa6 ] ) , we obtain that , and exists for all . since , we have and from lemma [ lem5 ] , we deduce that , thus which implies that converges strongly to . using norm - to - norm continuity of on bounded sets ,we conclude that and therefore using lemma [ lem1 ] , we obtain . from inequality ( [ eq14 ] ) and the definition of , we derive that and are bounded .let and .so , by lemma [ lem15 ] , there exists a continuous , strictly increasing and convex function \rightarrow\mathbb{r} ] with such that for all , we have which imply by letting in inequalities ( [ equa8 ] ) and ( [ equa11 ] ) , using lemma [ lem10 ] and equality ( [ equa9 ] ) , we obtain from the properties of and , we have so from ( [ eq18 ] ) , we obtain since is uniformly norm - to - norm continuous on bounded sets . by the same reason as in the proof of ( [ equa9 ] ) , we can conclude from ( [ equa33 ] ) and ( [ equa12 ] ) that for all . using lemma [ lem1 ] , we have for all . taking the limits as in inequality ( [ equa35 ] ) , using equality ( [ equa34 ] ) , we get since and are bounded , it follows from lemma [ lem5 ] that which imply and converges strongly to now , we prove that .it follows from the definition of that for all . by letting in inequality ( [ equa36 ] ), it follows from equality ( [ equa10 ] ) , conditions and and uniformly norm - to - norm continuity of on bounded sets that because of . 
hence , letting , lemma [ lem2 ] implies that .now , since , from ( [ equa12 ] ) , we get .so , using the definition of , we have setting in lemma [ lem4 ] , since and is continuous respect to the first argument , we obtain also , using lemma [ eq38 ] , we have for all because of . therefore and consequently the sequences , , and converge strongly to .if is a real hilbert space , then theorem [ eq25 ] is the same theorem in for a nonexpansive mapping with different proof , provided that the sequence satisfies the conditions of step of algorithm .as we see in the previous section , -lipschitz - type condition depends on two positive parameters and . in some cases, these parameters are unknown or difficult to approximate . to avoid this difficulty , using linesearch method, we modify extragradient algorithm .we prove strong convergence of this new algorithm without assuming the -lipschitz - type condition .linesearch method has a good efficiency in numerical tests . here, we assume that bifunction satisfies in conditions , and and also in following condition which is nonempty , convex and closed of -uniformly convex , uniformly smooth banach space and is an open convex set containing , + ( ) is jointly weakly continuous on , i.e. , if and and are two sequences in converging weakly to and , respectively , then + * algorithm * step 0 .: : let , and suppose that ] for some , ] with such that for , we get and in a similar way , there exists a continuous , strictly increasing and convex function \rightarrow\mathbb{r}$ ] with such that for , we obtain it follows from inequalities ( [ eq.n1 ] ) and ( [ eq.n2 ] ) that taking the limits as in inequalities ( [ equa30 ] ) and ( [ equa31 ] ) , using lemma [ lem10 ] and equality ( [ equa43 ] ) , we obtain from the properties of and , we have since is uniformly norm - to - norm continuous on bounded sets , we obtain so , we get , because of , therefore using the definition of , we have that setting in lemma [ lem4 ] , since and is continuous respect to the first argument , we obtain also , since , it follows from lemma [ eq38 ] that for all . therefore and consequently sequences , , , and converge strongly to .now , we demonstrate theorems [ eq25 ] and [ thm1 ] with an example . also , we compare the behavior of the sequence generated by algorithms and . 1. for all , 2 . if , then for all , i.e. , is pseudomonotone , while is not monotone .3 . if and , then i.e. , is jointly weakly continuous on .4 . let .since so is convex , also , , hence is lower semicontinuous .since , thus is subdifferentiable on for each 5 . since , we get i.e. , satisfies the -lipschitz - type condition with thus , , i.e. , is relatively nonexpansive mapping . on the other hand , if for each , , then , i.e. , and consequently also , assume that , for all and . in linesearch algorithm , is the same in extragradient algorithm .assume that , , , and .so , i.e. , and is the smallest nonnegative integer such that where also , and . since , then and .\ ] ] furthermore , numerical results for algorithm show that the sequences , , , and converge strongly to . by comparing figure1 and figure2 ,we see that the speed of convergence of the sequence generated by linesearch algorithm is equal to extragradient algorithm . the computations associated with examplewere performed using matlab ( step: ) software .j. p. aubin , _ optima and equilibria _ , springer , new york , ( 1998 ) .e. blum and w. 
oettli , _ from optimization and variational inequalities to equilibrium problems _ , the mathematics student , * 63 * ( 1994 ) , 127 - 149 . y. c. liou , _ shrinking projection method of proximal - type for a generalized equilibrium problem , a maximal monotone operator and a pair of relatively nonexpansive mappings _ , taiwanese j. math . , * 14 * ( 2010 ) , 517 - 540 .
in this paper, using the sunny generalized nonexpansive retraction, we propose new extragradient and linesearch algorithms for finding a common element of the set of solutions of an equilibrium problem and the set of fixed points of a relatively nonexpansive mapping in banach spaces. to prove strong convergence of the iterates in the extragradient method, we introduce a lipschitz-type condition and assume that the equilibrium bifunction satisfies it. this condition is unnecessary when the linesearch method is used instead of the extragradient method. a numerical example is given to illustrate the applicability of our results. our results generalize, extend and enrich some existing results in the literature.
in the mathematical investigations of ecological systems , conservative dynamics are often considered non - robust , thus unrealistic as a faithful description of reality . through recent studies of stochastic , nonlinear population dynamics , however, a new perspective has emerged : the stationary behavior of a complex stochastic population dynamics almost always exhibits a divergence - free cyclic motion in its phase space , even when the corresponding system of differential equations has only stable , node - type fixed points . in particular, it has been shown that an underlying volume preserving conservative dynamics is one of the essential keys to understand the long - time complexity of such stochastic systems .the aim of the present work , following a proposal in , is to carry out a comprehesive stochastic dynamic and thermodynamic analysis of an ecological system with sustained oscillations . in the classical studies on statistical mechanics , developed by helmholtz , boltzmann , gibbs , and others ,the dynamical foundation is a hamiltonian system .the theory in generalized such an approach that requires no _ a priori _ identification of variables as position and momentum ; it also suggested a possible _ thermodynamic structure _ which is purely mathematical in nature , independent of newtonian particles . in the context of population dynamics , we shall show that the mathematical analysis yields a _ conservative ecology_. among ecological models , the lotka - volterra ( lv ) equation for predator - prey system has played an important pedagogical role , even though it is certainly not a realistic model for any engineering applications .we choose this population system in the present work because its mathematics tractability , and its stochastic counterpart in terms of a birth and death process .it can be rigorously shown that a smooth solusion to lv differential equation is the law of large numbers for the stochastic process . in biochemistry ,the birth - death process for discrete , stochastic reactions corresponding to the mass - action kinetics has been called a delbrck - gillespie process .the lv equation in non - dimensionalized form is : in which and represent the populations of a prey and its predator , each normalized with respect to its time - averaged mean populations . the term in stands for the rate of consumption of the prey by the predator , and the term in stands for the rate of prey - dependent predator growth . and , and .right panel : with , and , from outside inward , all with .we see that the larger the , the smaller the temporal variations in the prey population , relative to that of predator ., title="fig:",width=268 ] and , and .right panel : with , and , from outside inward , all with .we see that the larger the , the smaller the temporal variations in the prey population , relative to that of predator . 
, title="fig:",width=268 ] it is easy to check that the solutions to ( [ theeqn ] ) in phase space are level curves of a scalar function we shall use to denote the solution curve , and to denote the domain encircled by the .[ fig_1 ] shows the contours of with and with different s .let be the period of the cyclic dynamics .then it is easy to show that furthermore ( see appendix [ app - a ] ) , in which is the area of , encircled by , using lebesgue measure in the -plane .the appropriate measure for computing the area will be further discussed in sec .the parameter represents the relative temporal variations , or dynamic ranges , in the two populations : the larger the , the greater the temporal variations and range in the predator population , and the smaller in the prey population .the paper is organized as follows . in sec .[ sec2 ] , an extended conservation law is recognized for the lotka - volterra system . then the relationship among three quantities : the energy " function , the area encircled by the level set , and the parameter is developed . according to the helmholtz theorem , the conjugate variables of and are found as time averages of certain functions of population and .analysis on those novel state variables " demonstrates the tendency of change in mean ecological quantities like population range or ecological activeness when the parameter or energy varies . in sec .[ sec3 ] , we show that the area encircled by is related to the concept of entropy . in sec .[ sec4 ] , the conservative dynamics is shown to be an integral part of the stochastic population dynamics , which necessarily has the same invariant density as the deterministic conservative dynamics . in the large population limit, a separation of time scale between the fast cyclic motion on and the slow stochastic crossing of is observed in the stochastic dynamical system .the paper concludes with a discussion in sec .( [ theeqn ] ) is not a hamiltonian system , nor is it divergence - free it can be expressed , however , as with a scalar factor .one can in fact understand this scalar factor as a `` local change of measure '' , or time : for where satisfies the corresponding hamiltonian system . in sec .[ sec3 ] and [ sec4 ] below , we shall show that is an invariant density of the liouville equation for the deterministic dynamics ( [ theeqn ] ) , and more importantly the invariant density of the fokker - planck equation for the corresponding stochastic dynamics . as will be demonstrated in sec .[ sec2_2 ] and [ sec2_3 ] , statistical average of quantities according to the invariant measure can be calculated through time average of those quantities along the system s instantaneous dynamics .knowledge about the system s long term distribution is not needed during the calculation .these facts make the the natural measure for computing area .any function of , is conserved under the dynamics , as is guaranteed by the orthogonality between the vector field of ( [ theeqn ] ) and gradient : this is analogous to the conservation law " observed in hamiltonian systems .the nonlinear dynamics in ( [ theeqn ] ) , therefore , introduces a `` conservative relation '' between the populations of predator and prey according to ( [ hxy ] ) .if we call the value an `` energy '' , then the phase portrait in the left panel of fig .[ fig_1 ] suggests that the entire phase space of the dynamical system is organized according to the value of . 
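A short check, stated as an assumption rather than a transcription of eq. ([theeqn]): taking the non-dimensional form dx/dt = x(1 - y), dy/dt = alpha*y*(x - 1), the quantity H(x, y) = alpha*(x - ln x) + (y - ln y) is constant along orbits and the fixed point sits at (1, 1). The sketch below simply integrates one orbit and verifies that H stays constant to within integration error; both the equations and the form of H are illustrative assumptions consistent with the description above.

```python
import numpy as np
from scipy.integrate import solve_ivp

ALPHA = 1.0

def lv(t, z, alpha=ALPHA):
    # assumed non-dimensional Lotka-Volterra form with fixed point (1, 1)
    x, y = z
    return [x * (1.0 - y), alpha * y * (x - 1.0)]

def H(x, y, alpha=ALPHA):
    # candidate conserved quantity: dH/dt = 0 along solutions of lv()
    return alpha * (x - np.log(x)) + (y - np.log(y))

sol = solve_ivp(lv, (0.0, 50.0), [0.5, 0.5], max_step=0.01, rtol=1e-9, atol=1e-12)
h = H(sol.y[0], sol.y[1])
print("drift of H over the run:", h.max() - h.min())   # of the order of the integration error
```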
the deep insight contained in the work of helmholtz and boltzmann is that such an energy - based organization can be further extended for different values of : therefore , the energy - based organization is no longer limited to a _orbit , nor a _ single _ dynamical system ; but rather for the entire class of parametric dynamical systems . in the classical physics of newtonian mechanical energy conservation , this yields the mechanical basis of the fundamental thermodynamic relation as a form of the first law , which extends the notion of energy conservation far beyond mechanical systems .more specifically , we see that the area in fig .[ fig_1 ] , or in fact any geometric quantification of a closed orbit , is completely determined by the parameter and initial energy value .therefore , there must exist a bivariate function , assuming the implicit function theorem applies , then one has note that in terms of the eq .( [ hfunction ] ) , a `` state '' of the ecological system is not a single point which is continuously varying with time ; rather it reflects the geometry of an entire orbit. then eq .( [ hfunction ] ) implies that any such ecological state has an `` h - energy '' , _ if _ one recognizes a geometric , state variable .( [ hfunction ] ) can be written in a differential form in which one first introduces the h - energy for an ecological system with fixed via the factor .then , holding constant , one introduces an `` -force '' corresponding to the parameter . in classical thermodynamics ,the latter is known as an `` adiabatic '' process .the helmholtz theorem expresses the two partial derivatives in ( [ dh ] ) in terms of the dynamics in eq .( [ theeqn ] ) . for canonical hamiltonian systems ,lebesgue measure is an invariant measure in the whole phase space . on the level set , the projection of the lebesgue measure , called the liouville measure , also defines an invariant measure on the sub - manifold .if the dynamics on the invariant sub - manifold is ergodic , the average with respect to the liouville measure is equal to the time average along the trajectory starting from any initial condition satisfying . as we shall show below , the invariant measure for the lv system ( [ theeqn ] ) in the whole phase space is .projection of this invariant measure onto the level set can be found by changing to intrinsic coordinates : where is the unit normal vector of the the level set ; and . noting that : we have that is : where is the projected invariant measure of the lotka - volterra system on the level set .it is worth noting that on the level set .since dynamics on is ergodic , the average of any function under the projected invariant measure on is equal to its time average over a period : under the invariant measure , the area encircled by the level curve is : using green s theorem the area can be simplified as where is the time period for the cyclic motion . 
furthermore , that is in which is the time average , or phase space average according to the invariant measure .we can also find the derivative of the area encircled by the level curve with respect to the parameter of the system as : in this setting , the helmholtz theorem reads in which the factor here is the mean , or , precisely like the mean kinetic energy as the notion of temperature in classical physics , and the virial theorem .the -force is then defined as it is important to note that the definition of given in the right - hand - side of ( [ f - alpha ] ) is completely independent of the notion of , even though the relation ( [ dheq28 ] ) explicitly involves the latter . is a function of both and , however .therefore , the value of -work depends on how is constrained : there are iso- processes , iso- processes , etc . the notion of an equation of state first appeared in classic thermodynamics . from a modern dynamical systems standpoint ,a fixed point as a function usually is continuously dependent upon the parameters in a mathematical model , except at bifurcation points .let be a globally asymptotically attractive fixed point , and be a parameter , then the function constitutes an _ equation of state _ for the long - time `` equilibrium '' behavior of the dynamical system .if a system has a globally asymptotically attractive limit set that is not a simple fixed point , then every geometric characteristic of the invariant manifold , say , will be a function of .in this case , could well be considered as an equation of state .an `` equilibrium state '' in this case is the entire invariant manifold .the situation for a conservative dynamical system with center manifolds is quite different . in this case , the long - time behavior of the dynamical system , the foremost , is dependent upon its initial data .an equation of state therefore is a functional relation among ( i ) geometric characteristics of a center manifold , ( ii ) parameter , and ( iii ) a new quantity , or quantities , that identifies a specific center manifold , .this is the fundamental insight of the helmholtz theorem . in ecological terms , area under the invariant measure : , gives a sense of total variation in both the predator s and the prey s populations .therefore , measures population range of both populations as a whole .the parameter , on the other hand , is the proportion of predators over preys population ranges of time variations : the new quantity can be viewed as a measure of the mean ecological `` activeness '' : it is the mean of distance " from the prey s and predator s populations and , to the fixed populations in equilibrium . 
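A numerical sketch of the quantities entering the Helmholtz-type relation just described, under the same assumed non-dimensional model as above (dx/dt = x(1 - y), dy/dt = alpha*y*(x - 1), H = alpha*(x - ln x) + (y - ln y)). The area enclosed by a level set H = E, measured with the invariant density 1/(xy), is evaluated as a line integral over one period via Green's theorem in the logarithmic variables, and one can then check that dA/dE numerically matches the orbital period, which is the relation used in the text to express the conjugate quantities as time averages. All function names and numerical values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def lv(t, z, alpha):
    x, y = z
    return [x * (1.0 - y), alpha * y * (x - 1.0)]

def H(x, y, alpha):
    return alpha * (x - np.log(x)) + (y - np.log(y))

def area_and_period(E, alpha):
    """Period of the orbit on the level set H = E and the area it encloses
    under the invariant density 1/(x*y).  In the variables (ln x, ln y) that
    area is the ordinary Lebesgue area, and Green's theorem turns it into the
    time integral of ln(x) * alpha * (x - 1) over one period."""
    x0 = brentq(lambda x: H(x, 1.0, alpha) - E, 1.0 + 1e-9, 1e3)  # point on y = 1
    section = lambda t, z, a: z[1] - 1.0      # Poincare section y = 1
    section.direction = 1.0                   # count upward crossings only
    sol = solve_ivp(lv, (0.0, 200.0), [x0, 1.0], args=(alpha,), events=section,
                    dense_output=True, max_step=0.02, rtol=1e-10, atol=1e-12)
    t0, t1 = sol.t_events[0][-2:]             # two successive crossings, one period
    n = 20000
    ts = np.linspace(t0, t1, n, endpoint=False)
    x, _ = sol.sol(ts)
    A = np.sum(np.log(x) * alpha * (x - 1.0)) * (t1 - t0) / n
    return A, t1 - t0

E, alpha, dE = 3.0, 1.0, 1e-4                 # E must exceed H(1, 1) = alpha + 1
A_plus, _ = area_and_period(E + dE, alpha)
A_minus, _ = area_and_period(E - dE, alpha)
_, T = area_and_period(E, alpha)
print("period T :", T)
print("dA/dE    :", (A_plus - A_minus) / (2 * dE))   # matches T numerically
```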
for population dynamic variable , eq .[ eq2700 ] suggests a norm .then , ; and an averaged norm of per capita growth rates in the two species : and finally , is the ecological force " one needs to counteract in order to change .in other words , when is greater , more -energy change is needed to vary .it is also worth noting that is positively related to the prey s average population range .in fact we can define another distance " of the prey s population to as : , then .note that for very small : , , and in the top row , different views among in the second row , and among in the third row.,title="fig:",width=182 ] , , and in the top row , different views among in the second row , and among in the third row.,title="fig:",width=182 ] , , and in the top row , different views among in the second row , and among in the third row.,title="fig:",width=182 ] , , and in the top row , different views among in the second row , and among in the third row.,title="fig:",width=182 ] , , and in the top row , different views among in the second row , and among in the third row.,title="fig:",width=182 ] , , and in the top row , different views among in the second row , and among in the third row.,title="fig:",width=182 ] , , and in the top row , different views among in the second row , and among in the third row.,title="fig:",width=182 ] , , and in the top row , different views among in the second row , and among in the third row.,title="fig:",width=182 ] , , and in the top row , different views among in the second row , and among in the third row.,title="fig:",width=182 ] fig .[ fig_2 ] shows various forms of `` the equation of state '' , e.g. , relationships among the triplets , , and in the first row ; among the triplet in the second row ; and among the triplet in the third row .the second row shows that the relation among is just like that among in ideal gas model : mean ecological activeness increases nearly linearly with the ecological force for constant , and with the proportion of the predator s population range over the prey s , for constant ; ecological force and the proportion of population ranges are inversely related under constant mean activeness . and when , .other features can be observed by looking into the details of each column . the first column of fig .[ fig_2 ] demonstrates that as the proportion of population ranges increases , the ecological force is alleviated ( for given -energy or ecological activeness ) .this is due to the positive relationship between the ecological force and the prey s population range ( as shown in eq .[ f_alpha ] ) . 
since is the proportion of the predator s population range over the prey s , and be inversely related when any resource , -energy , or activity , , remains constant .this fact means that on an iso- or iso- curve , when the proportion is large , relatively less -energy change is needed to reduce it .the first column also demonstrates an inverse relationship between and the total population range for any given , which reflects the fact that as the proportion of the predator s population range over the prey s increase , the total population range of the two species would actually decrease .the second column of fig .[ fig_2 ] demonstrates that : the ecological force and the total population range increases with the mean activeness ( with given -energy or ) .this observation means that it would also take more -energy to change the proportion of the predator s population range over the prey s , if mean ecological activeness rises , and that more population range would be explored with more ecological activeness .the third column about the relation between and is interesting : under constant -energy , as the proportion of population ranges increases , the ecological activeness decreases , in accordance with the drop in the total population range as shown in fig [ fig_1 ] .but when the total population range or the ecological force is to remain constant , ecological activeness actually increases with .this means that under constant resource ( -energy ) , the proportion of the predator s population range over the prey s restricts mean ecological activeness .but if we fix the ecological force or total population range ( supplying more -energy ) , an increase in predator s population range over prey s can increase ecological activeness .nonlinear dynamics described by eq .( [ theeqn ] ) has a linear , first - order partial differential equation ( pde ) representation a solution to ( [ thepde ] ) can be obtained via the method of characteristics , exactly via ( [ theeqn ] ) .( [ thepde ] ) sometime is called the liouville equation for the ordinary differential equations ( [ theeqn ] ). it also has an adjoint : note that while the orthogonality in eq .( [ invmeasure ] ) indicates that is a stationary solution to eq .( [ thepde2 ] ) , it is not a stationary invariant density to ( [ thepde ] ) .this is due to the fact that vector field is not divergence free , but rather as in ( [ non - df ] ) the scalar factor .then it is easy to verify that is a stationary solution to ( [ thepde ] ) : it is widely known that a volume - preserving , divergence - free conservative dynamics has a conserved entropy = -\int_{\mathbb{r } } u(x , t)\ln u(x , t)\rd x ] even for interpretations other then it s . with the defined in ( [ thefpe ] ) and ( [ sde ] ) ,let us now consider the stochastic functional therefore , for very large populations , i.e. 
, small , this suggests a separation of time scales between the cyclic motion on and slow , stochastic level crossing .the method of averaging is applicable here : with where denotes the average of on the level set .then , using the it integral , the distribution of follows a fokker - planck equation : and the stationary solution for eq .( [ varyingh ] ) is : the steady state distributions of under different s are shown in fig .[ fig_3 ] .the steady state distribution does not depend on the volume size ., the steady state distribution with respect to in logarithmic scale .the slowly fluctuating energy " ranges from to infinity .its steady state distribution eventually increases without bound as increases.,width=336 ] when is big enough , increases with without bound , since is a positive increasing function .hence , is not normalizable on the entire , reflecting the unstable nature of the system .the fluctuation approaches zero when approaches .consequently , the absorbing effect at makes another possible local maximum .it is usually an obligatory step in understanding an ode to analyze the dependence of a steay state as an implicit function of the parameters .one of the important phenomena in this regard is the thom - zeeman catastrophe . from this broad perspective, the analysis developed by helmholtz and boltzmann in 1884 is an analysis of the geometry of a `` non - constant but steay solution '' , as a function of its parameter(s ) and initial conditions . in the context of lv equation ( [ theeqn ] ) ,the geometry is characterized by the area encircled by a periodic solution , , where is specified by the the initial data : .the celebrated helmholtz theorem then becomes our eq .( [ dheq28 ] ) since eq .( [ theeqn ] ) has a conserved quantity , eq . ( [ dheq ] ) can , and should be , interpreted as an extended conservation law , beyond the dynamics along a single trajectory , that includes both variations in and in . the partial derivatives in ( [ dheq ] )can be shown as time averages of ecological activeness or , and variation in the prey s population .those conjugate variables , along with parameter , conserved quantity , and encompassed area constitutes a set of state variables " describing the state of an ecological system in its stationary , cyclic state .this is one of the essences of boltzmann s statistical mechanics . for the monocyclic lotka - volterra system ,the dynamics are relatively simple .hence , the state variables have monotonic relationships , the same as that observed in ideal gas models .when the system s dynamics become more complex ( e.g. have more than one attractor , hopf bifurcation ) , relations among the state variables will reflect that complexity ( e.g. develop a cusp , exhibiting a phase transition in accordance ) . when the populations of predator and prey are finite , the stochastic predator - prey dynamics is unstable . this factis reflected in the non - normalizable steady state distribution on , and the destabilizing effect of the gradient dynamics in the potential - current decomposition .this is particular to the lv model we use ; it is not a problem for the general theory if we study a more realistic model as in . 
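A sketch of the kind of birth-death (Delbrück-Gillespie-type) realisation lying behind a diffusion model of the type in eq. ([sde]) helps visualise the separation of time scales invoked above: the populations run through many fast cycles while the value of H drifts slowly. The reaction scheme below (prey birth, predation, predation-coupled predator birth, predator death) is one standard choice whose large-volume limit is the assumed non-dimensional Lotka-Volterra equation used earlier; it is not necessarily the exact scheme of the paper, and V plays the role of the system size.

```python
import numpy as np

rng = np.random.default_rng(2)

def H(x, y, alpha):
    return alpha * (x - np.log(x)) + (y - np.log(y))

def gillespie_lv(x0, y0, alpha, V, t_max):
    """Birth-death realisation whose large-V limit is the assumed
    non-dimensional Lotka-Volterra equation.  Reactions: prey birth,
    predation (prey death), predation-coupled predator birth, predator death."""
    X, Y, t = int(x0 * V), int(y0 * V), 0.0
    ts, xs, ys = [0.0], [X / V], [Y / V]
    while t < t_max:
        rates = np.array([X,                   # X -> X + 1   prey birth
                          X * Y / V,           # X -> X - 1   predation
                          alpha * X * Y / V,   # Y -> Y + 1   predator birth
                          alpha * Y])          # Y -> Y - 1   predator death
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        r = rng.choice(4, p=rates / total)
        X += (1, -1, 0, 0)[r]
        Y += (0, 0, 1, -1)[r]
        if X == 0 or Y == 0:
            break                              # extinction: the model is ultimately unstable
        ts.append(t); xs.append(X / V); ys.append(Y / V)
    return np.array(ts), np.array(xs), np.array(ys)

ts, xs, ys = gillespie_lv(1.2, 0.8, alpha=1.0, V=500, t_max=50.0)
h = H(xs, ys, 1.0)
# Many fast population cycles, slow wandering of H: the separation of time
# scales that justifies the averaging argument for the dynamics of H.
print("final time:", ts[-1], " H range:", h.min(), "to", h.max())
```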
despite the unstable dynamics ,the stochastic model system is structurally stable : its dynamics persists under sufficiently small perturbations .this implies conservative dynamical systems like ( [ theeqn ] ) are meaningful mathematical models , when interpreted correctly , for ecological realities .indeed , all ecological population dynamics can be represented by birth - death stochastic processes . except for systems with detailed balance , which rarely holds true , almost all such dynamics have underlying cyclic , stationary conservative dynamics .the present work shows that a hidden conservative ecological dynamics can be revealed through mathematical analyses .to recognize such a conservative ecology , however , several novel quantities need to be defined , developed , and becoming a part of ecological vocabulary .this is the intellectual legacy of helmholtz s and boltzmann s mechanical theory of heat . 0.3 cm * authors contributions . *m. and h. q. contributed equally to this work .* competing interests . *we declare there are no competing interests. * acknowledgements . *we thank r.e .omalley , jr . and l.f .thompson for carefully reading the manuscripts and many useful comments .99 murray , j. d. ( 2002 ) _ mathematical biology i : an introduction _ , 3rd ed . , springer , new york .kot , m. ( 2001 ) _ elements of mathematical ecology _ , cambridge univ .press , uk .qian , h. ( 2011 ) nonlinear stochastic dynamics of mesoscopic homogeneous biochemical reaction systems - an analytical theory ( invited article ) ._ nonlinearity _ , * 24 * , r19r49 .( doi:10.1088/0951 - 7715/24/6/r01 ) zhang , x .-j . , qian , h. and qian , m. ( 2013 ) stochastic theory of nonequilibrium steady states and its applications .part i. _ phys .rep . _ * 510 * , 185 .( doi:10.1016/j.physrep.2011.09.002 ) ao , p. ( 2005 ) laws in darwinian evolutionary theory ( review ) . _ phys ._ * 2 * , 117156 .( doi:10.1016/j.plrev.2005.03.002 ) qian , h. , qian , m. and wang , j .- z .( 2012 ) circular stochastic fluctuations in sis epidemics with heterogeneous contacts among sub - populations ._ theoret .biol . _ * 81 * , 223231 .( doi:10.1016/j.tpb.2012.01.002 ) wang , j. , xu , l. and wang , e. ( 2008 ) potential landscape and flux framework of nonequilibrium networks : robustness , dissipation , and coherence of biochemical oscillations .usa _ * 105 * , 1227112276 .( doi:10.1073/pnas.0800579105 ) qian , h. ( 2013 ) a decomposition of irreversible diffusion processes without detailed balance . _ j. math_ * 54 * , 053302 .( doi:10.1063/1.4803847 ) qian , h. ( 2014 ) the zeroth law of thermodynamics and volume - preserving conservative system in equilibrium with stochastic damping .a _ * 378 * , 609616 . ( doi:10.1016/j.physleta.2013.12.028 ) gallavotti , g. ( 1999 ) _ statistical mechanics : a short treatise _ , springer , berlin .khinchin , a. i. ( 1949 ) _ mathematical foundations of statistical mechanics _ , dover , new york .campisi , m. ( 2005 ) on the mechanical foundations of thermodynamics : the generalized helmholtz theorem . _ studies in history and philosophy of modern physics _ , * 36 * , 275290 .( doi:10.1016/j.shpsb.2005.01.001 ) ma , y .- a . and qian , h. ( 2015 ) universal ideal behavior and macroscopic work relation of linear irreversible stochastic thermodynamics ._ new j. phys . _ * 17 * , 065013 .( doi:10.1088/1367 - 2630/17/6/065013 ) lotka , a. j. ( 1925 ) _ elements of physical biology _ , williams & wilkins , baltimore , md .planck , m. 
( 1969 ) _ treatise on thermodynamics _, dover , new york .pauli , w. ( 1973 ) _ pauli lectures on physics : thermodynamics and the kinetic theory of gas _ , vol .3 , the mit press , cambridge , ma . allen , l. j. s. ( 2010 ) _ an introduction to stochastic processes with applications to biology _, chapman & hall / crc , new york .grasman , j. and van herwaarden , o. a. ( 1999 ) _ asymptotic methods for the fokker - planck equation and the exit problem in applications _ , springer , berlin .kurtz , t. g. ( 1981 ) _ approximation of population processes ._ siam pub , philadelphia .qian , h. ( 2015 ) thermodynamics of the general diffusion process : equilibrium supercurrent and nonequilibrium driven circulation with dissipation .j. special topics _ , * 224 * , 781799 .( doi:10.1140/epjst / e2015 - 02427 - 6 ) andrey , l. ( 1985 ) the rate of entropy change in non - hamiltonian systems . _lett . a _ * 111 * , 4546 .( doi:10.1016/0375 - 9601(85)90799 - 6 ) zhang , f. , xu , l. , zhang , k. , wang , e. , and wang , j. ( 2012 ) the potential and flux landscape theory of evolution ._ j chem phys . _ * 137 * , 065102 .( doi:10.1063/1.4734305 ) gardiner , c. w. ( 2004 ) _ handbook of stochastic methods : for physics , chemistry and the natural sciences _ , 3rd ed . , springer - verlag , berlin .freidlin , m. i. and wentzell , a. d. ( 2012 ) _ random perturbations of dynamical systems _, springer - verlag , new york .zhu , w.q .( 2006 ) nonlinear stochastic dynamics and control in hamiltonian formulation . __ , * 59 * , 230248 .ao , p. , qian , h. , tu , y. , and wang , j. ( 2013 ) a theory of mesoscopic phenomena : time scales , emergent unpredictability , symmetry breaking and dynamics across different levels , arxiv:1310.5585 .xu , l. , zhang , f. , zhang , k. , wang , e. , and wang , j. ( 2014 ) the potential and flux landscape theory of ecology ._ plos one _ * 9 * , e86746 .( doi:10.1371/journal.pone.0086746 ) gallavotti , g. , reiter , w. l. and yngvason , j. ( 2007 ) _boltzmann s legacy _ ,esi lectures in mathematics and physics , eur .soc . 
pub .[ sec : aa ] similarly [ sec6.2 ] using the divergence theorem and noting that , we obtain for the time evolution of the relative entropy : & = & \int_{\mathbb{r}^2}\frac{\partial u(x , y , t)}{\partial t } \left[\ln\left ( \frac{u(x , y , t)}{g^{-1}(x , y)}\right ) + 1 \right ] \rd x\rd y \nonumber\\[6pt ] & = & -\int_{\mathbb{r}^2 } \nabla\cdot\big ( ( f , g)u(x , y , t)\big ) \ln\left ( \frac{u(x , y , t)}{g^{-1}(x , y)}\right ) \rd x\rd y + \frac{\partial}{\partial t}\int_{\mathbb{r}^2 } u(x , y , t ) \rd x\rd y \nonumber\\[6pt ] & = & -\int_{\mathbb{r}^2 } \nabla\cdot\big ( ( f , g)u(x , y , t)\big ) \ln\left ( \frac{u(x , y , t)}{g^{-1}(x , y)}\right ) \rd x\rd y\ \nonumber\\[6pt ] & = & -\int_{\mathbb{r}^2 } \nabla\cdot\big(g^{-1}f , g^{-1}g\big ) \times \left(\frac{u}{g^{-1}}\right ) \ln\left(\frac{u}{g^{-1}}\right ) \rd x\rd y \nonumber\\[6pt ] & & -\int_{\mathbb{r}^2 } \big(g^{-1}f , g^{-1}g\big ) \cdot \nabla\left[\left(\frac{u}{g^{-1}}\right ) \ln\left(\frac{u}{g^{-1}}\right ) - \left(\frac{u}{g^{-1}}\right)\right ] \rd x\rd y \nonumber\\[6pt ] & = & \int_{\mathbb{r}^2 } \left [ \left(\frac{u}{g^{-1}}\right ) \ln\left(\frac{u}{g^{-1}}\right)-\left(\frac{u}{g^{-1}}\right ) \right ] \nabla\cdot \big(g^{-1}f , g^{-1}g\big ) \\rd x\rd y \= \ 0.\end{aligned}\ ] ] a more general result can be obtained in parallel for arbitrary differentiable and : & = & \int_{\mathfrak{d } } \frac{\partial u(x , y , t)}{\partial t } \left [ \psi\left ( \frac{u}{g^{-1}\rho(h)}\right ) + \frac{u}{g^{-1}\rho(h ) } \psi'\left ( \frac{u}{g^{-1}\rho(h)}\right ) \right ] \rd x\rd y \nonumber\\[6pt ] & = & -\int_{\mathfrak{d } } \nabla\cdot\big ( ( f , g ) u\big ) \left [ \psi\left ( \frac{u}{g^{-1}\rho(h)}\right ) + \frac{u}{g^{-1}\rho(h ) } \psi'\left ( \frac{u}{g^{-1}\rho(h)}\right ) \right ] \rd x\rd y \nonumber\\[6pt ] & = & -\int_{\mathfrak{d } } \big(g^{-1}\rho(h)f , g^{-1}\rho(h)g\big ) \cdot \nabla\left(\frac{u}{g^{-1}\rho(h ) } \right ) \nonumber\\ & & \times \left [ \psi\left ( \frac{u}{g^{-1}\rho(h)}\right ) + \frac{u}{g^{-1}\rho(h ) } \psi'\left ( \frac{u}{g^{-1}}\rho(h)\right ) \right ] \rd x\rd y \nonumber\\[6pt ] & = & -\int_{\mathfrak{d } } \big(g^{-1}\rho(h)f , g^{-1}\rho(h)g\big ) \cdot \nabla \left [ \frac{u}{g^{-1}\rho(h ) } \psi\left ( \frac{u}{g^{-1}\rho(h)}\right ) \right ] \rd x\rd y \nonumber\\[6pt ] & = & -\int_{\mathfrak{d } } \nabla\cdot\left\ { \big ( g^{-1}\rho(h)f , g^{-1}\rho(h)g\big ) \left [ \frac{u}{g^{-1}\rho(h ) } \psi\left ( \frac{u}{g^{-1}\rho(h)}\right ) \right ] \right\ } \rd x\rd y \nonumber\\[6pt ] & = & \int_{\partial\mathfrak{d } } \left\ { u(x , y , t ) \psi\left ( \frac{u(x , y , t)}{g^{-1}(x , y)\rho(h)}\right ) \big(f , g\big ) \right\ } \times ( \rd x,\rd y).\end{aligned}\ ] ]
we carry out mathematical analyses , _ à la _ helmholtz s and boltzmann s 1884 studies of monocyclic newtonian dynamics , for the lotka - volterra ( lv ) equation exhibiting predator - prey oscillations . in doing so , a novel `` thermodynamic theory '' of ecology is introduced . an important feature of ecological systems , absent in classical mechanics , is a natural stochastic population dynamic formulation of which the deterministic equation ( e.g. , the lv equation studied here ) is the infinite - population limit . the invariant density of the stochastic dynamics plays a central role in the deterministic lv dynamics . we show how the conservation law along a single trajectory extends to incorporate both variations in a model parameter and in initial conditions : helmholtz s theorem establishes a broadly valid conservation law in a class of ecological dynamics . we analyze the relationships among the mean ecological activeness , the quantities characterizing the dynamic ranges of the populations , and the ecological force . the analyses identify an entire orbit as a stationary ecology , and establish the notion of an `` equation of ecological states '' . studies of the stochastic dynamics with finite populations show that the lv equation captures the robust , fast cyclic underlying behavior . the mathematical narrative provides a novel way of capturing long - term dynamical behaviors with an emergent _ conservative ecology _ . * key words : * conservation law , ecology , equation of states , invariant density , stochastic thermodynamics
the process of growing films by the deposition of atoms on a substrate is of considerable experimental and theoretical interest .while there has been a lot of research on the process of kinetic roughening leading to a self - affine interface profile , there has been much recent experimental and theoretical interest in a different mode of surface growth involving the formation of `` mounds '' which are pyramid - like or `` wedding - cake - like '' structures .the precise experimental conditions that determine whether the growth morphology would be kinetically rough or dominated by mounds are presently unclear .however , many experiments show the formation of mounds that _ coarsen _ ( the typical lateral size of the mounds increases ) with time . during this process , the typical slope of the sides of the pyramid - like mounds may or may not remain constant . if the slope remains constant in time , the system is said to exhibit _slope selection_. as the mounds coarsen , the surface roughness characterized by the root - mean - square width of the interface increases .eventually , at very long times , the system is expected to evolve to a single - mound structure in which the mound size is equal to the system size .there are obvious differences between the structures of kinetically rough and mounded interfaces . in the first case ,the interface is rough in a self - affine way at length scales shorter than a characteristic length that initially increases with time and eventually saturates at a value comparable to the sample size . in the second case ,the characteristic length is the typical mound size whose time - dependence is qualitatively similar to that of .the interface in this case looks well - ordered at length scales shorter than . nevertheless , there are certain similarities between the gross features of these two kinds of surface growth .first consider the simpler situation in which the slope of the sides of the mounds remains constant in time .simple geometry tells us that if the system evolves to a single - mound structure at long times , then the `` roughness exponent '' must be equal to unity .also , the height - difference correlation function is expected to be proportional to for .this is consistent with .if the mound size increases with time as a power law , , during coarsening , then the interface width , which is essentially the height of a typical mound , should also increase with time as a power law with the same exponent .thus , dynamic scaling with `` growth exponent '' equal to , and `` dynamical exponent '' equal is recovered .if the mound slope increases with time as a power law , ( this is known in the literature as _ steepening _ ) , then one obtains behavior similar to anomalous dynamical scaling with , .these similarities between the gross scaling properties of kinetic roughening with a large value of and mound formation with power - law coarsening make it difficult to experimentally distinguish between these two modes of surface growth .this poses a problem in the interpretation of experimental results .existing experiments on mound formation show a wide variety of behavior . 
without going into the details of individual experiments, we note that some experiments show mound coarsening with a time - independent `` magic '' slope , whereas other experiments do not show any slope selection .the detailed morphology of the mounds varies substantially from one experiment to another .the reported values of the coarsening exponent show a large variation in the range 0.15 - 0.4 .traditionally , the formation of mounds has been attributed to the presence of the so - called ehrlich - schwoebel ( es ) step - edge barrier that hinders the downward motion of atoms across the edge of a step .this step - edge diffusion bias makes it more likely for an atom diffusing on a terrace to attach to an ascending step than to a descending one .this leads to an effective `` uphill '' surface current that has a destabilizing effect , leading to the formation of mounded structures as the atoms on upper terraces are prevented by the es barrier from coming down .this destabilizing effect is usually represented in continuum growth equations by a _linear instability_. such growth equations usually have a `` conserved '' form in which the time - derivative of the height is assumed to be equal to the negative of the divergence of a nonequilibrium surface current .the effects of an es barrier are modeled in these equations by a term in that is proportional to the gradient of the height ( for small values of the gradient ) with a _ positive _ proportionality constant .such a term is manifestly unstable , leading to unlimited exponential growth of the fourier components of the height .this instability has to be controlled by other nonlinear terms in the growth equation in order to obtain a proper description of the long - time behavior .a number of different choices for the nonlinear terms have been reported in the literature .if the `` es part '' of has one or more stable zeros as a function of the slope , then the slope of the mounds that form as a result of the es instability is expected to stabilize at the corresponding value(s ) of at long times .the system would then exhibit slope selection .if , on the other hand , this part of does not have a stable zero , then the mounds are expected to continue to steepen with time .analytic and numerical studies of such continuum growth equations have produced a wide variety of results , such as power - law coarsening and slope selection with or in two dimensions , power - law coarsening accompanied by a steepening of the mounds , and a complex coarsening process in which the growth of the mound size becomes extremely slow after a characteristic size is reached .there are several atomistic , cellular - automaton - type models that incorporate the effects of an es diffusion barrier .formation and coarsening of mounds in the presence of an es barrier have also been studied in a 1d model with both discrete and continuum features .we also note that a new mechanism for mounding instability has been discovered recently .this instability , generated by fast diffusion along the edges of monatomic steps , leads to the formation of quasi - regular shaped mounds in two or higher dimensions .the effects of this instability have been studied in simulations .the wide variety of results obtained from simulations of different models , combined with similar variations in the experimental results , have made it very difficult to identify the microscopic mechanism of mound formation in surface growth . 
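Before turning to the models studied in this paper, a minimal illustration of the linear ES-type instability described above may help. In the linearized conserved equation with a destabilizing uphill-current term and the usual stabilizing surface-diffusion term, long-wavelength Fourier modes grow exponentially and the fastest-growing wavelength sets the initial mound spacing. The specific equation written in the comments, the coefficients nu and kappa, and all values below are illustrative assumptions, not taken from any particular model in the source.

```python
import numpy as np

# Illustrative linearized equation dh/dt = -nu*h_xx - kappa*h_xxxx:
# nu > 0 mimics the destabilizing "uphill" ES current, kappa > 0 is the
# usual stabilizing surface-diffusion term.  All values are assumptions.
L, N = 256.0, 512
nu, kappa, t = 1.0, 1.0, 20.0

q = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
omega = nu * q**2 - kappa * q**4          # linear growth rate of Fourier mode q

rng = np.random.default_rng(0)
h0 = 1e-3 * rng.standard_normal(N)        # nearly flat initial interface
h_t = np.fft.ifft(np.fft.fft(h0) * np.exp(omega * t)).real

q_star = np.sqrt(nu / (2.0 * kappa))      # fastest-growing wavenumber
print("fastest-growing wavelength:", 2.0 * np.pi / q_star)
print("width grew from", h0.std(), "to", h_t.std())
```

The exponential growth of the small-q modes, and the emergence of the characteristic wavelength 2*pi*sqrt(2*kappa/nu), is what the nonlinear terms of a full growth equation must eventually tame.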
in this paper , we show that mound formation , slope selection and power - law coarsening in a class of one - dimensional ( 1d ) continuum growth equations and discrete atomistic models can occur from a mechanism that is radically different from the ones mentioned above .our study is based on the conserved nonlinear growth equation proposed by villain and by lai and das sarma , and an atomistic version of this equation .we have studied the behavior of the continuum equation by numerical integration , and the behavior of the atomistic model by stochastic simulation .previous work on these systems showed that they exhibit a _ nonlinear _ instability , in which pillars ( grooves ) with height ( depth ) greater than a critical value continue to grow rapidly .this instability can be controlled by the introduction of an infinite number of higher - order gradient nonlinearities .when the parameter that describes the effectiveness of control is sufficiently large , the controlled models exhibit kinetic roughening , characterized by usual dynamical scaling with exponent values close to those expected from dynamical renormalization group calculations . as the value of the control parameteris decreased , these models exhibit transient multiscaling of height fluctuations . foryet smaller values of the control parameter , the rapid growth of pillars or grooves causes a breakdown of dynamical scaling , with the width versus time plot showing a sharp upward deviation from the power - law behavior found at short times ( before the onset of the nonlinear instability ) .we report here the results of a detailed study of the behavior of these models in the regime of small values of the control parameter where conventional kinetic roughening is not observed .we find that in this regime , the interface self - organizes into a sawtooth - like structure with a series of triangular , pyramid - like mounds .these mounds coarsen in time , with larger mounds growing at the expense of smaller ones . in this coarsening regime ,a power - law dependence of the interface width on time is recovered .the slope of the mounds remains constant during the coarsening process . in section [ models ] , the growth equation and the atomistic model studied in this work are defined and the numerical methods we have used to analyze their behavior are described .the basic phenomenology of mound formation and slope selection in these systems is described in detail in section [ mound ] .specifically , we show that the nonlinear mechanism of mound formation in these systems is `` generic '' in the sense that the qualitative behavior does not depend on the specific form of the function used for controlling the instability .in particular , we find very similar behavior for two different forms of the control function : one used in earlier studies of these systems , and the other one proposed by politi and villain from physical considerations . 
since the linear instability used conventionally to model the es mechanism is explicitly absent in our models , our work shows that the presence of step - edge barriers is not essential for mound formation .the slope selection found in our models is a true example of nonlinear pattern formation : since the nonequilibrium surface current in our models vanishes for all values of constant slope , the selected value of the slope can not be predicted in any simple way .this is in contrast to the behavior of es - type models where slope selection occurs only if the surface current vanishes at a specific value of the slope .next , in section [ pt ] , we show that the change in the dynamical behavior of the system ( from kinetic roughening to mound formation and coarsening ) may be described as a first - order nonequilibrium phase transition .since the instability in our models is a nonlinear one , the flat interface is locally stable in the absence of noise for all values of the model parameters ( the strength of the nonlinearity and the value of the control parameter ) .the mounded phase corresponds to a different stationary solution of the dynamical equations in the absence of noise .we use a linear stability analysis to find the `` spinodal boundary '' in the two - dimensional parameter space across which the mounded stationary solution becomes locally unstable .we show that the results of this numerical stability analysis can also be obtained from simple analytic arguments . to obtain the phase boundary in the presence of noise ,we first define an _ order parameter _ that is zero in the kinetically rough phase and nonzero in the mounded phase .we combine the numerically obtained results for this order parameter for different sample sizes with finite - size scaling to confirm that this order parameter exhibits the expected behavior in the two phases .the phase boundary that separates the mounded phase from the kinetically rough one is obtained numerically .the phase boundaries for the continuum model with two different forms of the control function and the atomistic model are found to be qualitatively similar .the results of a detailed study of the process of coarsening of the mounds are reported in section [ coars ] .surprisingly , we find that the coarsening exponents of the continuum equation and its atomistic version are different .we propose a possible explanation of this result on the basis of an analysis of the coarsening process in which the problem is mapped to that of of a brownian walker in an attractive force field . in this mapping ,the brownian walk is supposed to describe the noise - induced random motion of the peak of a mound , and the attractive `` force '' represents the interaction between neighboring mounds that leads to coarsening .we show that the numerical results obtained for the dynamics of mounds in the atomistic model are consistent with this explanation . in section [ conserv ], we consider the behavior of the continuum growth equation for `` conserved '' noise statistics .the nonlinear instability found in the nonconserved case is expected to be present in the conserved case also .however , there is an important difference between the two cases .the nonconserved model exhibits anomalous dynamical scaling , so that the typical nearest - neighbor height difference continues to increase with time , and the instability is always reached at sufficiently long times , even if the starting configuration is perfectly flat . 
since the continuum model with conserved noise statistics exhibits usual dynamic scaling with , the nearest - neighbor height difference is expected to saturate at long times if the initial state of the system is flat . under these circumstances ,the occurrence of the nonlinear instability in runs started from flat states would depend on the values of the parameters .specifically , the instability may not occur at all if the value of the nonlinear coefficient in the growth equation is sufficiently small .at the same time , the instability can be initiated by choosing an initial state with sufficiently high ( deep ) pillars ( grooves ) .since mound formation in these models is crucially dependent on the occurrence of the instability , the arguments above suggest that the nature of the long - time steady state reached in the conserved model may depend on the choice of the initial state .indeed , we find from simulations that in a region of parameter space , the mounded and kinetically rough phases are both locally stable and the steady state configuration is determined by the choice of the initial configuration of the interface .these results imply the surprising conclusion that the long - time , steady - state morphology of a growing interface , as well as the dynamics of the process by which the steady state is reached may be `` history dependent '' in the sense that the behavior would depend crucially on the choice of the initial state .a summary of our findings and a discussion of the implications of our results are provided in sec.[summ ] .a summary of the basic results of our study was reported in a recent letter .conserved growth equations ( deterministic part of the dynamics having zero time derivative for the fourier mode of the height variable ) with nonconserved noise are generally used to model nonequilibrium surface growth in molecular beam epitaxy ( mbe ) .the conservation is a consequence of absence of bulk vacancies , overhangs and desorption ( evaporation of atoms from the substrate ) under optimum mbe growth conditions .thus , integrating over the whole sample area gives the number of particles deposited .this conservation is not strictly valid because of `` shot noise '' fluctuations in the beam .the shot noise is modeled by an additive noise term in the equation of motion of the interface .the noise is generally assumed to be delta - correlated in both space and time : where is a point on a -dimensional substrate .thus , a conserved growth equation may be written in a form where is the height at point at time , and is the surface current density . the surface current models the deterministic dynamics at the growth front .as mentioned in section [ intro ] , the presence of an es step - edge barrier is modeled in continuum equations of the form of eq.([mbe ] ) by a term in that is proportional to the slope , with a positive constant of proportionality .this makes the flat surface ( constant for all ) linearly unstable .this instability is controlled by the introduction of terms involving higher powers of the local slope and higher - order spatial derivatives of .we consider the conserved growth equation proposed by villain and lai and das sarma for describing mbe - type surface growth _ in the absence of es step - edge barriers_. this equation is of the form where represents the height variable at the point at time .this equation is believed to provide a correct description of the kinetic roughening behavior observed in mbe - type experiments . 
in our study, we numerically integrate the 1d version of eq.([lds1 ] ) using a simple euler scheme . upon choosing appropriate units of length and time and discretizing in space and time , eq.([lds1 ] ) is written as \nonumber \\ & + & \sqrt{\delta t}\ , \eta_i(t ) , \label{lds2}\end{aligned}\ ] ] where represents the dimensionless height variable at the lattice point at dimensionless time , and are lattice versions of the derivative and laplacian operators , and is a random variable with zero average and variance equal to unity .these equations , with an appropriate choice of , are used to numerically follow the time evolution of the interface . in most of our studies, we have used the following definitions for the lattice derivatives : we have checked that the use of more accurate , left - right symmetric definitions of the lattice derivatives , involving more neighbors to the left and to the right , leads to results that are very similar to those obtained from calculations in which these simple definitions are used .we have also checked that the results obtained in the deterministic limit ( ) by using a more sophisticated integration routine closely match those obtained from the euler method with sufficiently small values of the integration time step .we have also studied an atomistic version of eq.([lds1 ] ) in which the height variables are integers .this model is defined by the following deposition rule .first , a site ( say ) is chosen at random .then the quantity is calculated for the site and all its nearest neighbors .then , a particle is added to the site that has the smallest value of among the site and its nearest neighbors . in the case of a tie for the smallest value, the site is chosen if it is involved in the tie ; otherwise , one of the sites involved in the tie is chosen randomly .the number of deposited layers provides a measure of time in this model .it was found in earlier studies that both these models exhibit a _ nonlinear instability _ in which isolated structures ( pillars for , grooves for ) grow rapidly if their height ( depth ) exceeds a critical value .this instability can be controlled by replacing in eqns.([lds2 ] ) and ( [ kds1 ] ) by where the nonlinear function is defined as being a control parameter .we call the resulting models `` model i '' and `` model ii '' , respectively .this replacement , amounts to the introduction of an infinite series of higher - order nonlinear terms .the time evolution of the height variables in model i is , thus , given by + \sqrt{\delta t}\ , \eta_i(t ) .\label{cld}\end{aligned}\ ] ] in model ii , the quantity is defined as while the function was introduced in the earlier work purely for the purpose of controlling the nonlinear instability , it turns out that the introduction of this function in the growth equation is physically meaningful .politi and villain have shown that the nonequilibrium surface current that leads to the term in eq.([lds1 ] ) should be proportional to when is small , and should go to zero when is large .the introduction of the `` control function '' satisfies this physical requirement .we have also carried out studies of a slightly different model ( which we call `` model ia '' ) in which the function is assumed to have a form suggested by politi and villain : where is , as before , a positive control parameter .this function has the same asymptotic behavior as that of the function defined in eq.([fofx ] ) .as we shall show later , the results obtained from calculations in which these two different 
forms of are used are qualitatively very similar .in fact , we expect that the qualitative behavior of these models would be the same for any monotonic function that satisfies the following requirements : ( i ) must be proportional to in the small- limit ; and ( ii ) it must saturate to a constant value as . we have carried out extensive simulations of both these models for different system sizes .the results reported here have been obtained for systems of sizes .there is no significant dependence of the results on .the time step used in most of our studies of models i and ia is .we have checked that very similar results are obtained for smaller values of .we used both unbounded ( gaussian ) and bounded distributions for the random variables in our simulations of models i and ia , with no significant difference in the results . for computational convenience , a bounded distribution ( uniform between and )was used in most of our calculations .unless otherwise stated , the results described in the following sections were obtained using periodic boundary conditions .the effects of using other boundary conditions will be discussed in the next section .it has been demonstrated earlier that if the control parameter is sufficiently large , then the nonlinear instability is completely suppressed and the models exhibit the usual dynamical scaling behavior with the expected exponent values , , , and .this behavior for model i is illustrated by the solid line in fig.[bigfig1 ] , which shows a plot of the width as a function of time for parameter values = and . as the value of decreased with held constant , the instability makes its appearance : the height of an isolated pillar ( for ) increases in time if . the value of is nearly independent of , while increases as is decreased . if is sufficiently large , is small and the instability does not affect the scaling behavior of global quantities such as , although transient multiscaling at length scales shorter than the correlation length may be found if is not very large . as is decreased further , becomes large , and when isolated pillars with are created at an initially flat interface through fluctuations , the rapid growth of such pillars to height leads to a sharp upward departure from the power - law scaling of with time .the time at which this departure occurs varies from run to run .this behavior for model i with and is shown by the dash - dotted line in fig.[bigfig1 ] .this instability leads to the formation of a large number of randomly distributed pillars of height close to .as the system evolves in time , the interface self - organizes to form triangular mounds of a fixed slope near these pillars .these mounds then coarsen in time , with large mounds growing larger at the expense of small ones . in this coarsening regime , a power - law growth of in time is recovered .the slope of the sides of the triangular mounds remains constant during this process . 
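A minimal sketch of the Euler update for model I as described above: the squared lattice slope is passed through the control function f(x) = (1 - exp(-c x))/c and the interface is evolved with periodic boundary conditions. The centred lattice derivatives, the bounds of +/- sqrt(3) for the uniform noise (chosen here to give unit variance), and the parameter values are assumptions filling in symbols lost from the source; they are not the paper's actual numbers.

```python
import numpy as np

def f_control(x, c):
    # Control function of model I: f(x) = (1 - exp(-c*x)) / c, applied to
    # x = (grad h)^2; f(x) ~ x for small x and saturates at 1/c for large x.
    return (1.0 - np.exp(-c * x)) / c

def grad(h):
    return 0.5 * (np.roll(h, -1) - np.roll(h, 1))     # centred lattice derivative

def lap(h):
    return np.roll(h, -1) + np.roll(h, 1) - 2.0 * h   # lattice Laplacian

def euler_step(h, lam, c, dt, noise_amp, rng):
    # One Euler step of the controlled conserved growth equation (model I),
    # with periodic boundary conditions:
    #   h <- h + dt * [ -lap(lap h) + lam * lap( f((grad h)^2) ) ] + sqrt(dt)*eta
    det = -lap(lap(h)) + lam * lap(f_control(grad(h) ** 2, c))
    eta = noise_amp * rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), h.size)  # unit variance
    return h + dt * det + np.sqrt(dt) * eta

# Illustrative parameters only; lam, c, dt are not the paper's actual values.
L, lam, c, dt = 1000, 4.0, 0.02, 0.01
rng = np.random.default_rng(1)
h = np.zeros(L)
for step in range(100000):
    h = euler_step(h, lam, c, dt, noise_amp=1.0, rng=rng)
print("width:", h.std(), "max |slope|:", np.abs(grad(h)).max())
```

Whether a run of this kind ends up in the kinetically rough or the mounded regime depends on the chosen lam and c, in the manner described in the text.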
finally , the system reaches a steady state with one peak and one trough ( if periodic boundary conditions are used ) and remains in this state for longer times .the interface profiles in the kinetically rough phase ( obtained for relatively large values of ) and the mounded phase ( obtained for small ) are qualitatively different .this difference is illustrated in fig.[bigfig2 ] that shows typical interface profiles in the two different phases .this figure also shows a typical interface profile for model ia in the mounded regime , illustrating the fact that the precise choice of the control function is not crucial for the formation of mounds .the evolution of the interface structure in the mounded regime of model i is illustrated in fig.[bigfig3 ] which shows the interface profiles obtained in a typical run starting from a flat initial state at three different times : ( before the onset of the instability ) , ( after the onset of the instability , in the coarsening regime ) , and ( in the final steady state ) .this figure also shows the steady - state interface profile of a sample with the same parameters , to illustrate that the results do not depend on the sample size .very similar behavior is found for model ii . since the heights in this atomistic model can increase only by discrete amounts in each unit of discrete time ,the increase of at the onset of the instability is less rapid here than in the continuum models i and ia .nevertheless , the occurrence of the instability for small values of shows up as a sharp upward deviation of the versus plot from the initial power - law behavior with .this is illustrated by the dash - dotted line in fig.[bigfig4 ] , obtained from simulations of model ii with , .this behavior is to be contrasted with that for , , shown by the full line in fig.[bigfig4 ] , where the nonlinear instability is absent .the difference between the surface morphologies in the two regimes of model ii is illustrated in fig.[bigfig5 ] . the kinetically rough , self - affine morphology obtained for is clearly different from the mounded profile found for .the time evolution of the interface in the mounded regime of this model is illustrated in fig.[bigfig6 ] .the general behavior is clearly similar to that found for models i and ia .this figure also shows a properly scaled plot of the interface profile of a sample with the same parameters at a time in the coarsening regime .it is clear from this plot that the nature of the interface and the value of the selected slope do not depend on the sample size .the occurrence of a peak and a symmetrically placed trough in the steady - state profiles shown in figs [ bigfig3 ] and [ bigfig6 ] is a consequence of using periodic boundary conditions .the deterministic part of the growth equation of eq.([cld ] ) strictly conserves the average height if periodic boundary conditions are used .so , the average height remains very close to zero if the initial state is flat , as in most of our simulations .the steady - state profile must have at least one peak and one trough in order to satisfy this requirement . also , it is easy to show that if the slopes of the `` uphill '' and `` downhill '' parts of the steady - state profile are the same in magnitude ( this is true for our models ) , then the two extrema must be separated by .therefore , it is clear that the steady state obtained in simulations with periodic boundary conditions must have a peak and a symmetrically placed trough separated by distance . 
to check whether the basic phenomenology described above depends on the choice of the boundary condition ,we have carried out test simulations using two other boundary conditions : `` fixed '' boundary condition , in which the height variable to the left of , and to the right of are pinned to zero at all times ; and `` zero flux '' boundary condition with vanishing first and third derivatives of the height at the two ends of the sample .for these boundary conditions , the deterministic part of the growth equation does not strictly conserve the average height . as a result , the symmetry between the mound and the trough , found in the long - time steady state obtained for periodic boundary condition , is not present if one of the other boundary conditions is used . in particular , it is possible to stabilize a single mound or a single trough in the steady state for the other boundary conditions . since the heights at the two ends must be the same for fixed boundary condition , the two extrema in a configuration with one mound and one trough must be separated by , as shown in fig.[bigfig7 ] .the two extrema would not be separated by for the zero - flux boundary condition .these effects of boundary conditions are illustrated in fig.[bigfig7 ] which shows profiles in the mounded regime obtained for the three different boundary conditions mentioned above .it is clear from the results shown in this figure that the basic phenomenology , i.e. the formation and coarsening of mounds and slope selection , is not affected by the choice of boundary conditions . in particular, the values of the selected slope and the heights of the pillars at the top of a mound and the bottom of a trough remain unchanged when boundary conditions other than periodic are used .the selection of a `` magic slope '' during the coarsening process is clearly seen in the plots of fig.[bigfig3 ] and fig.[bigfig6 ] .more quantitatively , the probability distribution of the magnitude of the nearest - neighbor height differences is found to exhibit a pronounced peak at the selected value of the slope , and the position of this peak does not change during the coarsening process .fig.[bigfig8 ] shows a comparison of the distribution of the magnitude of the nearest - neighbor height difference for model i in the mounded and kinetically rough phases .a bimodal distribution is seen for the mounded phase , the two peaks corresponding to the values of the selected slope and the height of the pillars at the top and bottom of the pyramids .the kinetically rough phase , on the other hand , exhibits a distribution peaked at zero .fig.[bigfig9 ] shows the values of the selected slope at different times in the coarsening regime of model i. the constancy of the slope is clearly seen in this plot .all these features remain true for the discrete model .plots of the distribution at two different times in the coarsening regime of model ii are shown in fig.[bigfig10 ] .the peak position shows a small shift in the positive direction as is increased , but this shift is small compared to the width of the distributions , indicating near constancy of the selected slope .the value of the selected slope depends on the parameters and .this is discussed in the next section .the instability that leads to mound formation in our models is a nonlinear one , so that the perfectly flat state of the interface is a locally stable steady - state solution of the zero - noise growth equation for all parameter values .when the instability is absent ( e.g. 
for large values of the control parameter ) , this `` fixed - point '' solution of the noise - free equation is transformed to the kinetically rough steady state in the presence of noise .the mounded steady state obtained for small values of must correspond to a different fixed point of the zero - noise growth equation .such nontrivial fixed - point solutions may be obtained from the following simple calculation .the profile near the top ( ) of a triangular mound may be approximated as , where is the height of the pillar at the top of the mound and is the selected slope .this profile would not change under the dynamics of eq.([cld ] ) with no noise if the following conditions are satisfied : these conditions lead to the following pair of non - linear equations for the variables and used to parametrize the profile near the top of a mound : /c & = & 0 , \nonumber \\ 3x_1 - x_2 - \lambda [ 1-e^{-c(x_1+x_2)^2/4}]/c & = & 0 . \label{stability}\end{aligned}\ ] ] these equations admit a non - trivial solution for sufficiently small , and the resulting values of and are found to be quite close to the results obtained from numerical integration . a similar analysis for the profile near the bottom of a trough ( this amounts to replacing by in eq.([stability ] ) ) yields slightly different values for and . the full stable profile ( a fixed point of the dynamics without noise ) with one peak and one troughmay be obtained numerically by calculating the values of for which , the term multiplying in the right - hand side of eq.([cld ] ) , is zero for all .the fixed - point values of satisfy the following equations : = 0\,\ , \hbox{for\,\,all}\,\,i.\label{fp}\ ] ] a numerical solution of these coupled nonlinear equations shows that the small mismatch between the values of near the top and the bottom is accommodated by creating a few ripples near the top .the numerically obtained fixed - point profile for a system with , is shown in fig.[bigfig11 ] , along with a typical steady - state profile for the same system .the two profiles are found to be nearly identical , indicating that the mounded steady state in the presence of noise corresponds to this fixed - point solution of the noiseless discretized growth equation .fixed - point solutions of the continuum equation , eq.([lds1 ] ) , with and replaced by where has the form shown in eq.([pvform ] ) may also be obtained by a semi - analytical approach following racz _et al . _we consider stationary solutions of the continuum equation that satisfy the following first - order differential equation with appropriate boundary conditions : where is the local slope of the interface and is a constant that must be positive in order to obtain a solution that resembles a triangular mound . at large distances from the peak of the mound ,the slope would be constant , so that would vanish , whereas the second term would give a positive contribution if is positive . at the peak of the profile ,the second term would be zero because is zero , but would be negative , making the left - hand side of eq.([cont ] ) positive .while a closed - form solution of this differential equation can not be obtained , the value of at any point may be calculated with any desired degree of accuracy by numerically solving a simple algebraic equation .the height profile is then obtained by integrating with appropriate boundary conditions . in our calculation, we used the procedure of racz _ et . to take into account periodic boundary conditions . 
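The fixed-point construction just described, which requires the deterministic term of the growth equation to vanish at every site, can be sketched numerically as follows: start from a triangular mound/trough guess, remove the uniform-shift zero mode, and polish with a root finder; the stability matrix of partial derivatives of F used in the analysis below can then be built by finite differences. The function names, the gauge-fixing trick, and the parameter values are mine, not the paper's, and convergence to the mounded branch depends on the initial slope guess.

```python
import numpy as np
from scipy.optimize import fsolve

def F(h, lam, c):
    # Deterministic part of the model I update; fixed points satisfy
    # F_i(h) = 0 at every site (periodic boundary conditions).
    grad = 0.5 * (np.roll(h, -1) - np.roll(h, 1))
    lap = lambda a: np.roll(a, -1) + np.roll(a, 1) - 2.0 * a
    return -lap(lap(h)) + lam * lap((1.0 - np.exp(-c * grad**2)) / c)

def F_pinned(h, lam, c):
    # F sums to zero on a periodic lattice, so one residual can be traded
    # for the mean-height constraint; this removes the uniform-shift zero mode.
    r = F(h, lam, c)
    r[0] = h.mean()
    return r

def mounded_fixed_point(L, lam, c, slope_guess=1.0):
    i = np.arange(L)
    guess = slope_guess * (L / 4.0 - np.abs(i - L / 2.0))   # one trough, one peak
    guess -= guess.mean()
    return fsolve(lambda v: F_pinned(v, lam, c), guess, xtol=1e-12)

def stability_matrix(h, lam, c, eps=1e-6):
    # J_ij = dF_i/dh_j by forward differences; the mounded profile is locally
    # stable while the largest nontrivial eigenvalue of J is negative.
    J = np.empty((h.size, h.size))
    F0 = F(h, lam, c)
    for j in range(h.size):
        hp = h.copy()
        hp[j] += eps
        J[:, j] = (F(hp, lam, c) - F0) / eps
    return J

lam, c, L = 4.0, 0.02, 64                 # illustrative values only
h_fp = mounded_fixed_point(L, lam, c)
ev = np.sort(np.linalg.eigvals(stability_matrix(h_fp, lam, c)).real)
print("residual:", np.abs(F(h_fp, lam, c)).max(), "top eigenvalues:", ev[-2:])
```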
in fig .[ bigfig11 ] , we have shown a typical steady - state profile of a sample of model ia with and , and a fixed - point solution of the corresponding continuum equation . the value of the constant in eq.([cont ] )was chosen to yield the same slope as that of the steady - state profile of the discrete model .these results show that the steady - state properties for the two forms of are very similar , and the continuum equation admits stationary solutions that are very similar to those of the discretized models .the local stability of the mounded fixed point may be determined from a calculation of the eigenvalues of the stability matrix , , evaluated at the fixed point .we find that the largest eigenvalue of this matrix ( disregarding the trivial zero eigenvalue associated with an uniform displacement of the interface , for all ) crosses zero at ( see fig.([bigfig12 ] ) ) , signaling an instability of the mounded profile .the structure of eq.([cld ] ) implies that .thus , for , the dynamics of eq.([cld ] ) without noise admits two locally stable invariant profiles : a trivial , flat profile with the same for all , and a non - trivial one with one mound and one trough .depending on the initial state , the noiseless dynamics takes the system to one of these two fixed points .for example , an initial state with one pillar on a flat background is driven by the noiseless dynamics to the flat fixed point if the height of the pillar is smaller than a critical value , and to the mounded one otherwise .the `` relevant '' perturbation that makes the mounded fixed point unstable at is a uniform vertical relative displacement of the segment of the interface between the peak and the trough of the fixed - point profile .this can be seen by numerically evaluating the right eigenvector corresponding to the eigenvalue of the stability matrix that crosses zero at .this is demonstrated in the inset of fig.[bigfig12 ] . also ,examination of the time evolution of the mounded structure for values of slightly higher than shows that the instability of the structure first appears at the bottom of the trough . taking cue from these observations, the value can be obtained from a simple calculation .we consider the profile near the bottom of a trough at .as discussed above , the profile near may be parametrized as , , and the values of and may be obtained by solving a pair of nonlinear equations , eq.([stability ] ) with replaced by .we now consider a perturbation of this profile , in which the heights on one side of are all increased by a small amount ( i.e. , with ) , and use eq.([cld ] ) to calculate how changes with time , assuming its value to be small .the requirement that must decrease with time for the fixed - point structure to be locally stable leads to the following equation for the value of at which the structure becomes unstable : by substituting the numerically obtained values of and in this equation , the critical value , , of the parameter is obtained as a function of .the values obtained this way are in good agreement with those obtained from our full numerical calculation of the eigenvalues of the stability matrix .the `` spinodal '' lines ( i.e. 
the lines in the plane beyond which the mounded fixed point is unstable ) for models i and ia are shown in fig.[bigfig13 ] .both these lines have the expected form , .it would be interesting to carry out a similar stability analysis for the mounded stationary profile ( see fig.[bigfig11 ] ) of the continuum equation corresponding to model ia .such a calculation would have to be performed _ without discretizing space _ if we want to address the question of whether the behavior of the truly continuum equation is similar to that of the discretized versions considered here .we have not succeeded in carrying out such a calculation : since the mounded stationary profiles for the continuum equation are obtained from a numerical calculation , it would be extremely difficult , if not impossible , to carry out a linear stability analysis for such stationary solutions without discretizing space . in the presence of the noise, the perfectly flat fixed point transforms to the kinetically rough steady state , and the non - trivial fixed point evolves to the mounded steady state shown in fig.[bigfig11 ] .a dynamical phase transition at separates these two kinds of steady states . to calculate , we start a system at the mounded fixed point and follow its evolution according to eq.([cld ] ) for a long time ( typically ) to check whether it reaches a kinetically rough steady state . by repeating this procedure many times , the probability , , of a transition to a kinetically rough stateis obtained . for fixed , increases rapidly from 0 to 1 as is increased above a critical value .typical results for as a function of for model i with are shown in the inset of fig.[bigfig13 ] . the value of at which provides an estimate of .another estimate is obtained from a similar calculation of , the probability that a flat initial state evolves to a mounded steady state . as expected, increases sharply from 0 to 1 as is decreased ( see inset of fig.[bigfig13 ] ) , and the value of at which this probability is 0.5 is slightly lower than the value at which .this difference reflects finite - time hysteresis effects .the value of is taken to be the average of these two estimates , and the difference between the two estimates provides a measure of the uncertainty in the determination of .the phase boundary obtained this way is shown in fig.[bigfig13 ] , along with the results for obtained for the discrete model ii from a similar analysis .the general behavior found for all the models as the parameters and are varied is qualitatively very similar to that in equilibrium first order phase transitions of two- and three - dimensional systems as the temperature and other parameters , such as the magnetic field in spin systems , are varied . 
to take a standard example of an equilibrium first order transition, we consider a system with a scalar order - parameter field , described by a ginzburg - landau free energy functional that has a cubic term : = \int d{\bf r } \left[\frac{1}{2 } a \psi^2({\bf r } ) - \frac{1}{3 } b \psi^3({\bf r } ) + \frac{1}{4 } u \psi^4({\bf r})\right ] , \label{gl}\ ] ] where and are positive constants , with , and is the temperature .considering uniform states , , the free energy per unit volume may be written as it is easy to show that for , the function has two local minima , one at , and the other at a positive value of .these two minima represent the two phases of the system .this system exhibits a first order equilibrium phase transition from the disordered phase ( ) to an ordered phase with positive as the temperature is decreased .the transition temperature lies between and .the temperature at which the minimum corresponding to the ordered phase disappears is called the `` spinodal '' temperature for the ordered phase .the spinodal temperature for the disordered phase is .now consider the dynamics of this system according to the following time - dependent ginzburg - landau equation : } { \delta \psi({\bf r},t ) } + \eta({\bf r},t ) , \label{tdgl}\ ] ] where is a kinetic coefficient and represents gaussian delta - correlated noise whose variance is related to and the temperature via the fluctuation - dissipation theorem . in the absence of noise, this equation converges to local minima of the functional .so , the noiseless dynamics exhibits two locally stable fixed points for , corresponding to the two minima of that represent the disordered and uniformly ordered states .this is analogous to the two locally stable fixed points of our nonequilibrium dynamical systems for .if we identify the flat and mounded fixed points as the `` disordered '' and `` ordered '' states , respectively , and the control parameter to play the role of the temperature , then the noiseless dynamics of our models would look similar to that of eq.([tdgl ] ) for , with playing the role of the spinodal temperature of the equilibrium problem . in the presence of noise , the system described by eq.([tdgl ] )exhibits a first - order phase transition at that lies between and : the system selects one of the phases corresponding to the two fixed points of the noiseless dynamics , except at where both phase coexist .the local stability of the mean - field ordered and disordered states in a small temperature - interval around is manifested in the dynamics as finite - time hysteresis effects .the behavior we find for out nonequilibrium dynamical models is qualitatively similar : the system selects the steady state corresponding to the mounded ( `` ordered '' ) fixed point of the noiseless dynamics as the control parameter ( analogous to the temperature of the equilibrium system ) is decreased below which is smaller than the `` spinodal '' value .the growth models do not exhibit a `` spinodal '' point for the kinetically rough ( `` disordered '' ) phase : the flat fixed point of the noiseless dynamics is locally stable for all positive values of the control parameter .if this analogy with equilibrium first order transition is correct , then our models should show hysteresis and coexistence of kinetically rough and mounded morphologies for values of near . 
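For reference, the standard results for the cubic Ginzburg-Landau free energy invoked above are collected below, with the coefficients written explicitly because the corresponding symbols were lost in extraction; this is a hedged reconstruction assuming a = a0 (T - T*) with b, u > 0.

```latex
% f(\psi) = \tfrac{1}{2}a\psi^{2} - \tfrac{1}{3}b\psi^{3} + \tfrac{1}{4}u\psi^{4},
% with a = a_0 (T - T^{*}) and b, u > 0 (reconstructed notation).
\begin{align*}
  f'(\psi) = \psi\left(a - b\psi + u\psi^{2}\right) = 0
    \;&\Rightarrow\;
    \psi = 0 \quad\text{or}\quad \psi_{\pm} = \frac{b \pm \sqrt{b^{2}-4au}}{2u},\\
  a \le \frac{b^{2}}{4u}
    \;&\Leftrightarrow\; \text{the ordered minimum exists (spinodal of the ordered phase)},\\
  f(\psi_{+}) = 0,\; f'(\psi_{+}) = 0
    \;&\Rightarrow\; a_{c} = \frac{2b^{2}}{9u},\quad \psi_{+} = \frac{2b}{3u}
    \quad\text{(first-order transition point)}.
\end{align*}
```

Since 0 < 2b^2/(9u) < b^2/(4u), the first-order transition indeed falls between the spinodal of the disordered phase (where a vanishes) and that of the ordered phase, consistent with the statement above.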
as mentioned above, we do find hysteresis ( see inset of fig.[bigfig13 ] ) in finite - time simulations with values of near .evidence for two - phase coexistence is presented in fig.[bigfig14 ] , where a snapshot of the interface profile for a sample of model i with , is shown .this value of is very close to the critical value for ( see inset of fig.[bigfig13 ] ) .this plot clearly illustrates the simultaneous presence of mounded and rough morphologies in the interface profile .the results described above suggest that our growth models exhibit a _ first - order dynamical phase transition _ at . to make this conclusion more concrete , we need to define an _ order parameter _ , analogous to the quantity in the equilibrium problem discussed above , that is zero in the kinetically rough phase , and jumps to a non - zero value as the system undergoes a transition to the mounded phase at .the identification of such an order parameter would also be useful for distinguishing between these two different kinds of growth in experiments as mentioned in the introduction , it is difficult to experimentally differentiate between kinetic roughening and mound formation with coarsening from measurements of the usual bulk properties of the interface . a clear distinction between the two morphologies may be obtained from measurements of the average number of extrema of the height profile .the steady - state profile in the mound - formation regime exhibits two extrema for _ all _ values of the system size .in contrast , the number of extrema in the steady state in the kinetic roughening regime increases with as a power law we find that for values of for which the system is kinetically rough , e.g. for , for model i , the average number of extrema in the steady state is proportional to with .this observation allows us to define an `` order parameter '' that is zero in the large- , kinetic roughening regime and finite in the small- , mound - formation regime .let be an ising - like variable , equal to the sign of the slope of the interface at site .an extremum in the height profile then corresponds to a `` domain wall '' in the configuration of the variables .since there are two domain walls separated by in the steady state in the mound - formation regime , the quantity where represents a time - average in the steady state , would be finite in the limit .on the other hand , would go to zero for large in the kinetically rough regime because the number of domains in the steady - state profile would increase with .we find numerically that in the kinetically rough phase , with .the finite - size scaling data for the order parameter for models i and ii for both faceted and kinetically rough phases is shown in fig.[bigfig15 ] .it is seen that varies linearly with the system size in the mounded phase , whereas with for model i and for model ii in the the kinetically rough phase .so , in the limit , the order parameter would jump from zero to a value close to unity as is decreased below .this is exactly the behavior expected at a first - order phase transition .the occurrence of a first - order phase transition in our 1d models with short - range interactions may appear surprising it is well - known that 1d systems with short - range interactions do not exhibit any equilibrium thermodynamic transition at a non - zero temperature .the situation is , however , different for nonequilibrium phase transitions : in contrast to equilibrium systems , a first - order phase transition may occur in one - dimensional nonequilibrium 
systems with short - range interactions .several such transitions have been well documented in the literature .so , there is no reason to _ a priori _ rule out the occurrence of a true first - order transition in our 1d nonequilibrium systems . as discussed above , our numerical results strongly suggest the existence of a true phase transition .however , since all our results are based on finite - time simulations of finite - size systems , we can not claim to have established rigorously the occurrence of a true phase transition in our models .the crucial question in this context is whether the order parameter would be nonzero in the mounded phase in the limit if the time - average in eq.([op ] ) is performed over arbitrarily long times .since the steady - state profile in this phase has a single mound and a single trough ( this is clear from our simulations ) , the only way in which can go to zero is through strong `` phase fluctuations '' corresponding to lateral shifts of the positions of the peak and the trough .we do not find any evidence for such strong phase fluctuations .we have calculated the time autocorrelation function of the phase of the order parameter for small samples over times of the order of and found that it remains nearly constant at a value close to unity over the entire range of time .so , if such phase fluctuation eventually make the order parameter zero for all values of , then this must happen over astronomically long times .our finite - time simulations can not , of course , rule out this possibility .during the late - stage evolution of the interface , the mounds coarsen with time , increasing the typical size of the triangular pyramidal structures .the process of coarsening occurs by larger mounds growing larger at the expense of the smaller ones while always retaining their `` magic '' slope .snapshots of the system in the coarsening regime are shown in figs [ bigfig16 ] and [ bigfig17 ] for model i and model ii , respectively .the constancy of the slope during the coarsening process is clearly seen in these figures . as discussed in the introduction, the constancy of the slope implies that if the typical lateral size of a mound increases in time as a power law with exponent ( ) , then the width of the interface would also increase in time as a power law with the same exponent ( with ) .therefore , the value of the coarsening exponent may be obtained by measuring the width as a function of time in the coarsening regime . in fig.[bigfig18 ] , we show a plot of the width as a function of time for model i with , .it is clear from the plot that the time - dependence of the width is well - described by a power law with .a similar plot for the discrete model ii with , , shown in fig.[bigfig19 ] , also shows a power - law growth of the width in the long - time regime , but the value of the coarsening exponent obtained from a power - law fit to the data is , which is clearly different for the value obtained for model i. this is a surprising result : model ii was originally defined with the specific purpose of obtaining an atomistic realization of the continuum growth equation of eq.([lds1 ] ) , and earlier studies have shown that the dynamical scaling behavior of this model in the kinetic roughening regime is the same as that of model i. also , we have found in the present study that the dynamical phase transition in this model has the same character as that in model i. so , the difference in the values of the coarsening exponents for these two models is unexpected . 
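A sketch of the diagnostics used in this and the preceding sections: the interface width versus time (whose log-log slope gives the coarsening exponent), the number of extrema (domain walls of the sign-of-slope variables), and the selected slope read off the distribution of nearest-neighbour height differences. The function names, the histogram binning, and the fitting window are illustrative choices, not the paper's code.

```python
import numpy as np

def width(h):
    # RMS interface width; with slope selection it grows as t**n during
    # coarsening, so n can be read off a log-log fit of width vs time.
    return h.std()

def num_extrema(h):
    # Domain walls of the Ising-like variables s_i = sign of the local slope;
    # each extremum of the profile is one wall (two walls <=> a single mound).
    s = np.sign(np.roll(h, -1) - h)
    s = s[s != 0]
    return int(np.sum(s != np.roll(s, -1)))

def selected_slope(h, bins=200):
    # Location of the peak of the distribution of |nearest-neighbour height
    # differences|, i.e. the "magic" slope of the mound sides.
    s = np.abs(np.roll(h, -1) - h)
    counts, edges = np.histogram(s, bins=bins)
    k = counts.argmax()
    return 0.5 * (edges[k] + edges[k + 1])

def coarsening_exponent(times, widths, t_min):
    # Least-squares slope of log W vs log t, restricted to t >= t_min
    # (i.e. to the post-instability coarsening regime).
    t, w = np.asarray(times, float), np.asarray(widths, float)
    m = t >= t_min
    return np.polyfit(np.log(t[m]), np.log(w[m]), 1)[0]

# Usage inside the growth loop of the earlier Euler sketch (illustrative):
#   if step % 1000 == 0:
#       times.append(step * dt); widths.append(width(h))
#       walls.append(num_extrema(h)); slopes.append(selected_slope(h))
#   n_est = coarsening_exponent(times, widths, t_min=...)
```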
as noted earlier , there is some evidence suggesting that the typical slope of the mounds in model ii increases very slowly with time ( see fig.[bigfig10 ] ) .however , this `` steepening '' , if it actually occurs , is too slow to account for the large difference between the values of the coarsening exponents for models i and ii . in order to understand these numerical results , we first address the question of why the mounds coarsen with time .this problem has certain similarities with domain growth in spin systems . using the ising variables defined in the preceding section, each height profile can be mapped to a configuration of ising spins .the coarsening of mounds then corresponds to a growth of the typical size of domains of these ising spins .there is , however , an important difference between the coarsening of mounds in our models and the usual domain growth problem for ising spins .domain growth in spin systems is the process through which the system approaches equilibrium from an out - of - equilibrium initial state .the dynamics of this process may be understood in terms of arguments based on considerations of the free energy ( at finite temperatures ) or energy ( at zero temperature ) .such arguments do not apply to our nonequilibrium growth models .the reason for the coarsening of mounds in our models must be sought in the relative stability of different structures under the assumed dynamics and the effects of noise . as discussed in the preceding section , the fixed point of eq.([cld ] ) with one mound and one troughis locally stable for .since structures with several mounds and troughs approach this steady - state structure through the coarsening process , it is reasonable to expect that fixed points of the noiseless equation with more than one mounds and troughs would be unstable .we have numerically obtained fixed points of eq.([cld ] ) with two mounds and troughs for different values of the sample size and the separation between the peaks of the two mounds .the slope of the mounds at these fixed points is found to be the same as that in the fixed point with one mound and one trough .we find that the stability matrix for such fixed points always has a real , positive eigenvalue , indicating that the structure is unstable and would evolve to the stable configuration with one mound and one trough .the magnitude of the positive eigenvalue of the stability matrix for two - mounded fixed points depends on the sample size , the separation between the peaks of the mounds and the relative heights of the mounds in a complicated way .we have not been able to extract any systematic quantitative information from these dependences .we find a qualitative trend indicating that the magnitude of the positive eigenvalue decreases as the separation between the peaks of the two mounds is increased .since the time scale of the development of the instability of two - mounded structures is given by the inverse of the positive eigenvalue , this result is consistent with the expectation that the time required for two mounds to coalesce should increase with the separation between the mounds .these results suggest that the coarsening of the mounds in model i reflects the instability of structures with multiple mounds and troughs .if this is true , then coarsening of mounds should be observed in this model even when the noise term in eq.([cld ] ) is absent . to check this ,we have carried out numerical studies of coarsening in the noiseless version of eq .( [ cld ] ) . 
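before turning to these noiseless coarsening runs , the linear stability analysis used above for the multi - mounded fixed points can be illustrated schematically . in the sketch below , the function rhs is a placeholder for the discretized , noiseless equation of motion ( the deterministic part of eq.([cld]) ) , which is not reproduced here ; the stability matrix is built by finite differences , and an instability is signalled by an eigenvalue with positive real part .

    import numpy as np

    def stability_matrix(rhs, h_star, eps=1.0e-6):
        """finite-difference jacobian M[i, j] = d rhs_i / d h_j evaluated at a fixed point h_star."""
        f0 = rhs(h_star)
        m = np.empty((h_star.size, h_star.size))
        for j in range(h_star.size):
            h = h_star.copy()
            h[j] += eps
            m[:, j] = (rhs(h) - f0) / eps
        return m

    def is_unstable(rhs, h_star, tol=1.0e-8):
        return np.max(np.linalg.eigvals(stability_matrix(rhs, h_star)).real) > tol

    # toy demonstration with a stable relaxational dynamics (discrete diffusion);
    # for the growth models, rhs would implement the noiseless eq.([cld])
    diffusion = lambda h: np.roll(h, 1) - 2.0 * h + np.roll(h, -1)
    print(is_unstable(diffusion, np.zeros(64)))   # False

for a two - mounded fixed point of the growth equation , the same construction yields the real , positive eigenvalue discussed above , whose inverse gives the time scale for the development of the instability of the two - mounded structure .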
in these studies ,the time evolution of an initial configuration with a pillar of height at the central site of an otherwise flat interface is followed numerically in the presence of noise until the instability caused by the presence of the pillar is well developed .the profiles obtained for different realizations of the noise used in the initial time evolution are then used as initial configurations for coarsening runs without noise . the dotted line in fig.[bigfig18 ]shows the width versus time data obtained from this calculation .the coarsening exponent in the absence of noise is found to be the same ( ) as that of the noisy system , indicating that the coarsening in this model is driven by processes associated with the deterministic part of the growth equation .we have examined the details of the process by which two mounds coalesce to form a single one .the different steps in this process are illustrated in the snapshots of interface profiles shown in fig.[bigfig17 ] where one can see how the two mounds near the center come together to form a single one as time progresses .first , the separation between the peaks of the mounds decreases with time . when this distance becomes sufficiently small , the `` v''-shaped segment that separates the peaks of the mounds `` melts '' to form a rough region with many spikes .this region of the interface then self - organizes to become the top part of a mound .although the data shown in this figure were obtained for model ii , it also represents quite closely the process of coalescence of mounds in models i and ia .an estimate of the time scale associated with the second part of this process , during which the `` melted '' region of the interface transforms into the top part of a mound , may be obtained in the following way .the absolute value of the closest - to - zero eigenvalue of the stability matrix for the single - mounded fixed point provides an estimate of the time scale over which configurations close to the fixed point evolve to the fixed point itself . in the inset of fig.[bigfig18 ] , we have shown the dependence of the magnitude of the closest - to - zero eigenvalue for , on the system size .the eigenvalue scales with the system size as , indicating that the time scale for the decay of fluctuations with length scale is proportional to .this is consistent with the observed value of the coarsening exponent , , which indicates that the time scale for the coalescence of mounds separated by distance is proportional to .we have found very similar behavior for the closest - to - zero eigenvalue of the stability matrix for the single - mounded fixed point of model ia in which a different form of the control function is used .coarsening data for this model are shown in fig.[bigfig18 ] . in this model , there is a long time interval between the onset of the instability and the beginning of power - law coarsening . 
during this time interval ,the interface segments near the tall pillars formed at the instability organize themselves into triangular mounds .this process produces a plateau in the width versus time plot .eventually , however , power - law coarsening with is recovered , as shown by the dashed line in fig.[bigfig18 ] .since the onset of power - law coarsening in this model occurs at very late times , we could not get coarsening data for this model over a very wide time interval .consequently , the calculated value of the coarsening exponent for this model is less accurate .however , our results show quite clearly that the `` universal '' features of the coarsening dynamics of models i and ia are the same . going back to the discrete model ii, we first examined its coarsening dynamics in the absence of noise .the noiseless limit of this model is not well - defined in the sense that there is no explicit noise term that can be turned off .the stochasticity in this model arises from two sources : first , the randomness associated with the selection of the deposition site ( the quantity defined in eq.([model2 ] ) is calculated at this site and at its nearest - neighbor sites ) ; and second , the randomness in the selection of one of the two neighbors of site in case of a tie in their values of . in order to make the dynamics deterministic, we employ a parallel update scheme in which all the lattice sites , , are updated simultaneously instead of sequentially .this eliminates the stochasticity arising from the choice of the sequence in which the sites are updated .the randomness associated with the selection of a neighbor in case of a tie is eliminated by choosing the right neighbor if the serial number of the occurrence of a tie , measured from the beginning of the simulation , is even , and the left neighbor if the serial number is odd . with these modifications of the update rules, the system evolves in a perfectly deterministic way . to study coarsening in this deterministic version of model ii , we prepare an initial structure with two identical mounds separated by distance . the slope of these mounds is chosen to be equal to the `` selected '' value found in simulations of the original model .we then study the time evolution of this structure according to the parallel dynamics defined above , monitoring how the distance between the peaks of the mounds changes with time .we find that the the value of increases initially , in order to accommodate a slightly higher value of the selected slope for the parallel dynamics . after reaching a maximum value that is higher than , the distance decreases with time , indicating that the noiseless dynamics leads to coarsening . eventually , the two mounds coalesce into a single one and the system remains in the state with one mound and one trough at later times .the slope of the mounds remains constant during the coarsening process .assuming power - law coarsening with exponent , the separation at time is expected to have the form where is a constant .this form implies that the time required for the coalescence of two mounds separated by distance is proportional to . in fig.[bigfig20 ] , we have shown the time dependence of for three different initial separations , and 100 , and fits of the data to the form of eq.([xoft ] ) , yielding the result .only the data for at times larger than the time at which it returns to after the initial increase are shown in the figure . 
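the fit to the form of eq.([xoft]) can be performed as in the following python sketch , in which the separation is assumed to obey x(t)^{1/n} = x_0^{1/n} - c t , the form implied by power - law coarsening with exponent n ; the data generated below are placeholders for the measured peak separations .

    import numpy as np
    from scipy.optimize import curve_fit

    def separation(t, x0, c, n):
        """peak separation x(t) = (x0**(1/n) - c*t)**n, valid while the bracket stays positive."""
        return np.clip(x0 ** (1.0 / n) - c * t, 0.0, None) ** n

    # placeholder data standing in for the separations measured in the parallel-update runs
    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 900.0, 60)
    x = separation(t, x0=100.0, c=1000.0, n=1.0 / 3.0) + 0.2 * rng.standard_normal(t.size)
    popt, _ = curve_fit(separation, t, x, p0=(90.0, 800.0, 0.3))
    print("fitted exponent n =", round(popt[2], 3))

fits of this kind to the data for the different initial separations shown in fig.[bigfig20] are what yield the quoted value of the exponent .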
from these observations , we conclude that the coarsening exponent in the zero - noise limit of model ii is the same ( ) as that found for the two versions of the continuum model .the observation that the coarsening exponent for the noisy version of model ii is different from 1/3 then indicates that the effect of noise in the discrete model is _ qualitatively different _ from that in the continuum models .we discuss below a possible explanation of this behavior .the fact that noiseless versions of all three models exhibit the same value of the coarsening exponent ( ) suggests that the coarsening is driven by an effective attractive interaction between the peaks of neighboring mounds .the observed value of suggests that the this attractive interaction is proportional to where is the separation between the mound tips .this interaction would lead to the observed result , , in the noiseless limit if the rate of change of with is assumed to be proportional to the attractive force ( `` overdamped limit '' ) .the presence of noise in the original growth model leads to a noise term in the equation of motion of the variable , but the nature of this noise term is not clear . since the observed coarsening dynamics in the noisy model ii ( ) suggests a similarity with random walks , we propose that the effective dynamics of the variable is governed by the kinetic equation where is a gaussian , delta - correlated noise with zero mean and variance equal to . in this phenomenological description ,the coarsening of a two - mounded structure in model ii is mapped to a brownian walk of a particle in an attractive potential field with an absorbing wall at the origin , such that the particle can not escape once it arrives at the origin .the absorption of a particle at the origin corresponds to the coalescence of two mounds in the original height picture .thus , in this reduced model , the quantity of interest is the typical first passage time ( i.e. the time taken by a particle to reach the origin ) as a function of , the initial distance of the particle from the origin . in the noiseless limit ( ), is equal to , and in the purely brownian walk limit ( ) , the typical value of should be of the order of .therefore , for sufficiently large values of , random - walk behavior characterized by is expected .however , for relatively small values of , the behavior should be dominated by the attractive interaction .therefore , we expect that the dynamics described by eq.([bw ] ) with nonzero and would exhibit a crossover from a noise dominated regime to an interaction dominated regime as the value of is decreased . this crossover is expected to occur near , for which the values of obtained from the two individual terms in eq.([bw ] ) become comparable . we , therefore , propose a scaling form for the dependence of on : where the scaling function has the following asymptotic dependence on its argument : for and for .our numerical study of the reduced model of eq.([bw ] ) confirms the validity of this scaling ansatz .note that for any nonzero value of , would be proportional to , implying , for sufficiently large values of . to test the validity of this reduced description of the coarsening dynamics of the original model ii with stochastic sequential updates, we have simulated the evolution of a two - mounded structure in this model .the two - mounded structure used in these simulations is identical to that used in the study of coarsening in the model with parallel updates . 
in these simulationsalso , the average value of exhibits a small initial increase followed by a steady decrease .the initial growth of with time is found to be linear , indicating the presence of a random additive noise in the effective equation of motion of .fig.[bigfig21 ] shows the time dependence of obtained from simulations of samples of model ii with , , and . in this plot ,the origin of time has been shifted to the point where returns to the initial value after the small initial increase , and units of simulation `` time '' ( number of deposited layers ) is taken to be the unit of .the number of independent runs used in the calculation of averages is 800 .the observed dependence of on can be described reasonably well by the reduced equation of eq.([bw ] ) for appropriate choice of the values of the parameters and . as shown in fig.[bigfig21 ] ,the calculated numerically from eq.([bw ] ) with and provides a good fit to the data obtained from simulations of the growth model . due to the limited range of the simulation data, the values of and can not be determined very accurately from such fits : values of in the range and values of in the range ( larger values of have to be combined with smaller values of ) provide reasonable descriptions of the simulation data . for such values of and , and where is the sample size used in the calculation of the coarsening exponent , is of order unity , indicating that the effects of the noise term in eq.([bw ] ) should be observed in the simulation data .we , therefore , conclude that the presence of an additive random noise term in the effective equation of motion for the separation between the peaks of neighboring mounds in model ii is a plausible explanation for the observed value of the coarsening exponent , . in view of this conclusion , it is interesting to enquire why the coarsening exponent for models i and ia has the value characteristic of dynamics governed by the deterministic interaction between mound tips .we can not provide a conclusive answer to this question .one possibility is that the additive random noise in the original growth equations for these models does not translate into a similar noise in the effective equation of motion for the separation of mound tips .a second possibility is that the equation of motion for the separation for these models also has the form of eq.([bw ] ) , but the relative strength of the noise is much smaller , so that the crossover value is much larger than the typical sample sizes used in our simulations . under these circumstances , the dynamics of would be governed by the interaction and would be proportional to , giving the value for the coarsening exponent .if the second explanation is correct , then one should observe a crossover from to in models i and ia as the sample size is increased .we do not find any evidence for such a crossover in our simulations .in passing , we note that the dynamics of the slope variables is strictly conserved in the models studied here if periodic or fixed boundary conditions are used .also , the deterministic part of the growth equations conserves the integrated height .the `` magnetization '' of the ising variables representing the signs of is not strictly conserved : it is conserved only in a statistical sense . however , unlike other well - known examples of systems with conserved dynamics , we obtain in model ii a coarsening exponent that is different from the expected lifshitz - slyozov value , . 
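to make the reduced description of eq.([bw]) concrete , the sketch below integrates an overdamped langevin equation for the separation variable with an absorbing wall at the origin and measures the mean first - passage time as a function of the initial separation . the 1/x^2 form of the attractive term is an assumption , chosen so that the noiseless limit reproduces a coalescence time growing as the cube of the separation , and the parameter values are illustrative rather than the fitted ones quoted above .

    import numpy as np

    def first_passage_time(x0, c=1.0, d=0.5, dt=1.0e-2, t_max=1.0e5, rng=None):
        """euler-maruyama integration of dx/dt = -c/x**2 + eta(t), with an absorbing wall at x = 0."""
        rng = rng or np.random.default_rng()
        x, t = x0, 0.0
        kick = np.sqrt(2.0 * d * dt)
        while x > 0.0 and t < t_max:
            x += -c / x ** 2 * dt + kick * rng.standard_normal()
            t += dt
        return t

    # interaction-dominated behaviour (T ~ x0**3) is expected for small x0 and
    # noise-dominated, random-walk behaviour (T ~ x0**2) for large x0
    rng = np.random.default_rng(2)
    for x0 in (2.0, 4.0, 8.0):
        mean_t = np.mean([first_passage_time(x0, rng=rng) for _ in range(10)])
        print(x0, round(mean_t, 1))

averaging over many more realizations and a wider range of initial separations gives the first - passage data against which the scaling form proposed above can be tested .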
since the height variables in model ii are integers , there are some sites for which .the assignment of the value of the ising variable at such sites is clearly ambiguous. it may be more appropriate to use a three - state variable , taking the values and , to describe the coarsening behavior of this model .this problem does not arise in models i and ia for which the height variables are real numbers because the likelihood of two neighboring height variables to be exactly the same is vanishingly small .as discussed in section [ pt ] , the properties of the mounded phase of model i are determined to a large extent by the mounded fixed point of the deterministic part of the equations of motion of the height variables .the presence of noise changes the critical value of the control parameter from to , but does not affect strongly the properties of the mounded steady state of the system ( see , for example , fig.[bigfig11 ] ) .therefore , we expect that the properties of the mounded phase would not change drastically if the statistics of the noise is altered . on the other hand , it is well - known that the exponents that describe the scaling behavior in the kinetically rough phase depend strongly on the nature of the noise .in particular , the exponents for conserved noise are expected to be quite different from those describing the behavior for nonconserved noise .the occurrence of the nonlinear instability that leads to the mounded phase in our models is contingent upon the spontaneous formation of pillars of height , if the initial state of the system is completely flat .the probability of formation of such pillars depends crucially on the values of the roughening exponents which , in turn , are strongly dependent on the nature of the noise .therefore , we expect that the nature of the noise may be very important in determining whether the instability leading to mound formation actually occurs in samples with flat initial states .we have investigated this issue in detail by carrying out simulations of a version of model i in which the noise is conserved .the equations of motion for the height variables in this model have the form of eq.([cld ] ) , with the noise terms having the properties where if and zero otherwise .this model is expected to exhibit kinetic roughening with exponents , , and in one dimension .since the value of for this model is less than unity , it exhibits conventional dynamical scaling with the typical value of the nearest - neighbor height difference saturating at a constant value at long times .the value of this constant is expected to increase as the strength of the nonlinearity is increased . as discussed in detail in ref ., the nonlinear instability that leads to mound formation is expected to occur in the time evolution of such models from a perfectly flat initial state only if the value of is sufficiently large to allow the spontaneous , noise - induced formation of pillars of height greater than .so , if the value of is sufficiently small , then the model with conserved noise , evolving from a flat initial state , would not exhibit the mounding transition . 
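as an aside , a standard way to generate conserved noise of the kind used in this version of the model is to take the discrete gradient of an uncorrelated gaussian field , which makes the noise sum to zero identically at every time step ; the construction below is shown only as an illustration and is not necessarily the exact correlator used in our simulations .

    import numpy as np

    def conserved_noise(n_sites, strength=1.0, rng=None):
        """spatially conserved noise eta_i = xi_{i+1} - xi_i with xi uncorrelated gaussian;
        with periodic boundaries the total sum of eta vanishes, so no net material is added."""
        rng = rng or np.random.default_rng()
        xi = rng.normal(scale=strength, size=n_sites)
        return np.roll(xi, -1) - xi

    eta = conserved_noise(256, rng=np.random.default_rng(3))
    print(abs(eta.sum()) < 1.0e-10)   # True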
on the other hand , if the instability is induced in the model by starting the time evolution from a state in which there is at least one pillar with height greater than , then it is expected to evolve to the mounded state if the value of is sufficiently small to make the mounded state stable .so , the long - time steady state of the conserved model is expected to exhibit an interesting dependence on the initial state : if is sufficiently small ( so that pillars with height greater than are not spontaneously generated in the time evolution of the interface from a flat initial state ) , and the value of sufficiently small ( so that the mounded state is stable in the presence of noise ) , then the steady state would be kinetically rough if the initial state is sufficiently smooth , and mounded if the initial state contains pillars of height greater than .this `` bistability '' does not occur for the nonconserved model i because the nearest - neighbor height difference in this model continues to increase with time , so that the instability always occurs at sufficiently long times .our simulations of the model with conserved noise show the bistable behavior discussed above in a large region of the plane .we find that in this model , the height of a pillar on an otherwise flat interface increases in time if its initial value is larger than ( the dependence of on is weak ) .this dependence of on is very similar to that found for model i with nonconserved noise .we also find that the typical values of the nearest - neighbor height difference do not continue to increase with time in this model .consequently , if is sufficiently small , pillars with height greater than are not generated , and the system exhibits conventional kinetic roughening with exponent values close to the expected ones . on the other hand ,if the time evolution of the same system is started from a state with a pillar of height greater than , then it evolves to a mounded state very similar to the one found in the model with nonconserved noise if the value of is sufficiently small .the two steady states obtained for the conserved model with , are shown in fig.[bigfig22 ] .the long - time state obtained in a run starting from a flat configuration is kinetically rough , whereas the state obtained in a run in which the nonlinear instability is initially seeded in the form of a single pillar of height at the central site is mounded with a well - defined slope , as in model i with nonconserved noise .the difference between the two profiles , obtained for the same parameter values for two different initial states , is quite striking .since the steady state in the conserved model depends on the initial condition , it is not possible to draw a conventional phase diagram for this model in the plane : the transition lines are different for different initial conditions . in fig.[bigfig23 ] , we have shown three transition lines for this model in the plane .the line drawn through the circles ( line 1 ) is obtained from simulations in which the system is started from a flat initial state . if is small , then the steady state reached in such runs is kinetically rough for all values of . as is increased above a `` critical value '' , pillars with height greater than are spontaneously generated during the time evolution of the system and it exhibits a transition to the mounded state if the value of is not very large .the circles represent the values of for which 50% of the runs show transitions to the mounded state . 
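the 50% criterion used to locate these transition lines can be implemented as in the sketch below , where classify_run is a placeholder for a full simulation of the model from the chosen initial state that returns True if the final configuration is mounded ( for instance on the basis of the order parameter ) ; the bisection assumes that the mounded fraction varies monotonically with the parameter being scanned .

    import numpy as np

    def mounded_fraction(classify_run, param, n_runs=100, seed=0):
        """fraction of independent runs that end in the mounded steady state at this parameter value."""
        rng = np.random.default_rng(seed)
        return sum(classify_run(param, rng) for _ in range(n_runs)) / n_runs

    def half_probability_point(classify_run, lo, hi, tol=1.0e-3):
        """bisection for the parameter value at which 50% of the runs become mounded."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if mounded_fraction(classify_run, mid) >= 0.5:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # toy stand-in whose mounded probability rises smoothly through 0.5 at param = 1.0
    toy = lambda param, rng: rng.random() < 1.0 / (1.0 + np.exp(-20.0 * (param - 1.0)))
    print(round(half_probability_point(toy, 0.0, 2.0), 2))   # close to 1.0

each symbol on the lines of fig.[bigfig23] corresponds to such a 50% crossing .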
the line through the diamonds ( line 2 ) corresponds to 50% probability of transition to the mounded state from an initial state with a pillar of height 1000 on an otherwise flat interface. the probability of reaching a mounded steady state in such runs decreases from unity as the value of is increased , and falls below 50% as line 2 is crossed from below . for large ,lines 1 and 2 merge together .this is expected : the probability of occurrence of a mounded steady state should not depend on how the pillars that initiate the nonlinear instability are generated .the third line ( the one passing through the squares ) represents 50% probability of transition to the kinetically rough state from a mounded initial state ( the fixed point of the noiseless equations of motion with one mound and one trough ) .this line reflects the noise - induced instability of the mounded steady state for relatively large values of .the differences between lines 2 and 3 are due to finite - time hysteresis effects similar to those discussed in section [ pt ] in the context of determining the critical value of the control parameter for model i with nonconserved noise .the interesting region in the `` phase diagram '' of fig.[bigfig23 ] is the area enclosed by lines 1 and 2 and the line . for parameter values in this region , the system exhibits bistable behavior , as discussed above .this bistability is unexpected in the sense that in most studies of nonequilibrium surface growth , it is implicitly assumed that the long - time steady state of the system does not depend on the choice of the initial state .so , it is important to examine whether the dependence of the steady state on the initial condition in the conserved model reflects a very long ( but finite ) transition time from one of the two apparent steady states to the other one .we have addressed this question by carrying out long ( of the order of ) simulations of small samples with parameter values in the middle of the `` bistable region '' for flat and mounded initial states .we did not find any evidence for transitions from one steady state to the other one in such simulations .of course , we can not rule out the possibility that such transitions would occur over much longer time scales .to summarize , we have shown from numerical simulations that a class of 1d surface growth models exhibits mound formation and power - law coarsening of mounds with slope selection as a result of a nonlinear instability that is controlled by the introduction of an infinite series of terms with higher - order gradient nonlinearities .the models considered here are discretized versions of a well - known continuum growth equation and an atomistic model originally formulated for providing a discrete realization of the growth equation .we have shown that these models exhibit a dynamical phase transition between a kinetically rough phase and a mounded phase as a parameter that measures the effectiveness of controlling the instability is varied .we have defined an order parameter for this first - order transition and used finite - size scaling to demonstrate how the sample - size dependence of this order parameter provides a clear distinction between the rough and mounded phases .we have also mapped out the phase boundary that separates the two phases in a two - dimensional parameter space .we would like to emphasize that the es mechanism , commonly believed to be responsible for mound formation in surface growth , is not present in our models .our models exhibit a nonlinear 
instability , instead of the linear instability used conventionally to represent the effects of es barriers .the mechanism of mound formation in our models is also different from a recently discovered one involving fast edge diffusion , which occurs in two or higher dimensions .the slope selection found in our models is a rare example of pattern formation from a nonlinear instability .this is clearly different from slope selection in es - type models in which the mounds maintain a constant slope during coarsening only if the nonequilibrium surface current vanishes at a particular value of the slope .the selected slope in such models is simply the slope at which the current is zero .the behavior of our models is more complex : in these models , the surface current is zero for all values of constant slope , and the selected value of the slope is obtained from a nonlinear mechanism of pattern selection .our studies bring out two other unexpected results .we find that the coarsening behavior of an atomistic model ( model ii ) specifically designed to provide a discrete realization of the growth equation that leads to model i is different from that of model i : the exponents that describe the power - law coarsening are different in the two models .we show that this difference may arise from a difference in the nature of the effective noise that enters the equation of motion for the separation between neighboring mounds in the two cases .the second surprising result is that the numerically obtained long - time behavior of model i with conserved noise in a region of parameter space depends crucially on the initial conditions : the system reaches a mounded or kinetically rough steady state depending on whether or not the initial state is sufficiently rough . to our knowledge , this is the first example of `` nonergodic '' behavior in nonequilibrium surface growth .the behavior found in our 1d models may be relevant to experimental studies of the roughening of steps on a vicinal surface .as noted earlier , the form of the control function used in model ia is physically reasonable .however , since very little is known about the values of the model parameters appropriate for experimentally studied systems , we are unable to determine whether the mechanism of mound formation found in our study would be operative under experimentally realizable conditions .the nonlinear instability found in our 1d models is also present in the experimentally interesting two - dimensional version of the growth equation of eq.([lds1 ] ) .however , it is not clear whether this instability , when controlled in a manner similar to that in our 1d models , would lead to the formation of mounds in two dimensions .this question is currently under investigation .since the growth equation of eq.([lds1 ] ) exhibits conventional dynamic scaling in the kinetic roughening regime in two dimensions , the nonlinear instability would not occur in runs with flat initial states if the value of is small .therefore , the behavior in two dimensions is expected to be similar to that of our 1d model i with conserved noise : the nature of the long - time steady state may depend crucially on initial conditions in a region of parameter space . 
such nonergodic behavior , if found in two dimensions , would have interesting implications for the growth of films on patterned substrates .all the results described in this paper have been obtained from numerical studies of models that are discrete in both space and time .it is interesting to enquire whether the truly continuum growth equation of eq.([lds1 ] ) exhibits similar behavior .this question acquires special significance in view of studies that have shown that discretization may drastically change the behavior of nonlinear growth equations similar to eq.([lds1 ] ) . since the interesting behavior found in our discretized models arises from the nonlinear instability found earlier ,the question that we have to address is whether a similar instability is present in the truly continuum growth equation .this question was addressed in some detail in ref. where it was shown that the nonlinear instability is not an artifact of discretization of time or the use of the simple euler scheme for integrating the the growth equation . in the present study , we have found additional evidence ( see section [ mound ] ) that supports this claim . we should also point out that the atomistic model ii , for which the question of inaccuracies arising from time integration does not arise , exhibits very similar behavior , suggesting that the behavior found in models i and ia is not an artifact of the time discretization used in the numerical integration . the occurrence of the nonlinear instability does depend on the way space is discretized ( i.e. how the lattice derivatives are defined ) . in earlier work as well as in the present study ,the lattice derivatives are defined in a left - right symmetric way .we have found that the instability actually becomes stronger if the number of neighbors used in the definition of the lattice derivatives is increased .this result suggests that the instability is also present in the continuum equation .it has been found by putkaradze _et al . _ that the instability does not occur if the lattice derivatives are defined in a different way in which either left- or right - discretization of the nonlinear term is used , depending on the sign of the local slope of the interface .there is no reason to believe that this definition is `` better '' or more physical than the symmetric definitions used in our work .the only rigorous result we are aware of for the behavior of eq.([lds1 ] ) is derived in ref. where it is shown that the solutions of the noiseless equation are bounded for sufficiently smooth initial conditions .this result , however , does not answer the question of whether the instability occurs in the continuum equation . as discussed in ref . , the nonlinear instability of eq.([lds2 ] ) , signalled by a rapid initial growth of the height ( depth ) of isolated pillars ( grooves ) , may not lead to a true divergence of the height variables .the results reported in the present work would remain valid as long as high pillars or deep grooves are formed by the instability the occurrence of a true divergence is not necessary . 
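the two discretization schemes discussed above can be made explicit as follows . the sketch contrasts the left - right symmetric lattice derivative used in our work with a one - sided definition in which the side is chosen according to the sign of the local slope , similar in spirit to the scheme of putkaradze et al . ; the profile used here is only a placeholder .

    import numpy as np

    def symmetric_gradient(h):
        """left-right symmetric lattice derivative, (h[i+1] - h[i-1]) / 2, periodic boundaries."""
        return 0.5 * (np.roll(h, -1) - np.roll(h, 1))

    def slope_dependent_gradient(h):
        """one-sided lattice derivative whose side is selected by the sign of the local slope."""
        forward = np.roll(h, -1) - h
        backward = h - np.roll(h, 1)
        return np.where(symmetric_gradient(h) >= 0.0, backward, forward)

    h = np.sin(2.0 * np.pi * np.arange(64) / 64.0)   # placeholder interface profile
    print(np.max(np.abs(symmetric_gradient(h) - slope_dependent_gradient(h))))

increasing the number of neighbours entering the symmetric definition , as mentioned above , amounts to replacing the two - point stencil by a higher - order one .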
in the present work , we have shown ( see section [ pt ] ) that the continuum equation with defined in eq.([pvform ] ) does admit stationary solutions that exhibit all the relevant features of stationary solutions of the discretized equation .this result provides additional support to the contention that the behaviors of the continuum and discretized systems are qualitatively similar .we should , however , mention that these stationary solutions of the continuum equation do not pick out a selected slope of the interface : profiles similar to those shown in fig.[bigfig11 ] may be obtained for different values of the parameter in eq.([cont ] ) that determines the slope of the triangular mound .slope selection in the continuum equation may occur as a consequence of the requirement of local stability of such stationary solutions .as mentioned in section [ pt ] , we have not attempted a linear stability analysis of such numerically obtained stationary solutions of the continuum equation because doing such a calculation without discretizing space would be extremely difficult .further investigation of this question would be useful .finally , we would like to emphasize that the discrete models studied here would continue to be valid models for describing nonequilibrium surface growth even if the behavior of the truly continuum growth equation of eq.([lds1 ] ) turns out to be different from that found here .these models may be looked upon as ones in which continuous ( in models i and ia ) or discrete ( in model ii ) height variables defined on a discrete lattice evolve in continuous or discrete time .these models have all the correct symmetries and conservation laws of the physical problem , and they exhibit , for different values of the control parameter , both the phenomena of kinetic roughening and mound formation found in experiments .there is no compelling reason to consider the continuum equation to be more `` correct '' or `` physical '' than these models .epitaxial growth is intrinsically a discrete process at the molecular level and a continuum description is an approximation that may not be valid in some situations . from a different perspective, the nonequilibrium first - order phase transition found in our models is interesting , especially because it occurs in 1d systems with short range interactions .such phase transitions have been found earlier in several 1d `` particle hopping '' models .it would be interesting to explore possible connections between such models and the 1d growth models studied here .we thank serc , iisc for computational facilities , and s. das sarma and s. s. ghosh for useful discussions .currently at the department of physics , condensed matter theory center , university of maryland , college park , md 20742 .barabasi and h. e. stanley , _ fractal concepts in surface growth _( cambridge university press , cambridge , 1995 ) .j. krug , adv .phys . * 46 * , 139 ( 1997 ) .j. krim and g. palasantzas , int .j. mod . phys . * b9 * , 599 ( 1997 ) .m. d. johnson , c. orme , a. w. hunt , d. graff , j. sudijono , l. m. sander and b. g. orr , phys .lett * 72 * , 116 ( 1994 ) .k. thurmer , r. koch , m. weber and k. h. rieder , phys .lett . * 75 * , 1767 ( 1995 ) .j. a. stroscio , d. t. pierce , m. stiles , a. zangwill and l. m. sander , phys . rev .lett * 75 * , 4246 ( 1995 ) .f. tsui , j. wellman , c. uher and r. clarke , phys .lett . * 76 * , 3164 ( 1996 ) .k . zuo and j. f. wendelken , phys .78 * , 2791 ( 1997 ) .m. f. gyure , j. k. zinck , c. ratsh and d. d. 
vvendsky , phys .lett . * 81 * , 4931 ( 1998 ) .g. apostolopoulos , j. herfort , l. daweritz and k. h. ploog , phys .. lett . * 84 * , 3358 ( 2000 ) . y .-zhao , h .-yang , g .- c .wang and t .- m .lu , phys .b * 57 * , 1922 ( 1998 ) .g. lengel , r. j. phaneuf , e. d. williams , s. das sarma , w. beard and f. g. johnson , phys .b * 60 * , r8469 ( 1999 ) .m. siegert and m. plischke , phys .lett . * 73 * , 1517 ( 1994 ) .m. rost and j. krug , phys .e * 55 * , 3952 ( 1997 ) .m. siegert , phys .lett . * 81 * , 5481 ( 1998 ) .l. golubovic , phys .lett . * 78 * , 90 ( 1997 ) . o. pierre - louis , m. r. dorsogna and t. l .einstein , phys .lett . * 82 * , 3661 ( 1999 ) .m. v. ramana murty and b. h. cooper , phys .lett . * 83 * , 352 ( 1999 ) .s. das sarma and p. punyindu , surf .lett . * 424 * , l339 ( 1998 ) .m. biehl , w. kinzel , and s. schinzer , europhys .* 41 * , 443 ( 1998 ) .j. krug , j. stat .phys . * 87 * , 505 ( 1997 ) .i. elkinani and j. villain , j. phys .i * 4 * , 949 ( 1994 ) .p. politi and j. villain , phys .b * 54 * , 5114 ( 1996 ) .s. das sarma , p. punyindu and z. toroczkai , surf .lett . * 457 * , l309 ( 2000 ) .p. punyindu - chatraphorn , z. toroczkai and s. das sarma , phys .b * 64 * , 205407 ( 2001 ) .s. das sarma , s. v. ghaisas , and j. m. kim , phys .e * 49 * , 122 ( 1994 ) .g. ehrlich and f. g. hudda , j. chem . phys . *44 * , 1039 ( 1996 ) ; s. c. wang and g. ehrlich , phys .lett . * 70 * , 41 ( 1993 ) .r. l. schwoebel and e. j. shipsey , j. appl . phys . * 37 * , 3682 ( 1966 ) ; r. l. schwoebel , j. appl . phys . * 40 * , 614 ( 1969 ) . j. villain , j. phys .i ( france ) * 1 * , 19 ( 1991 ) .z. w. lai and s. das sarma , phys .lett . * 66 * , 2348 ( 1991 ) . j. m. kim and s. das sarma , phys .lett . * 72 * , 2903 ( 1994 ) . c. dasgupta , s. das sarma , and j. m. kim , phys .e * 54 * , r4552 ( 1996 ) . c. dasgupta , j. m. kim , m. dutta , and s. das sarma , phys .e * 55 * , 2235 ( 1997 ) .a. kundagrami and c. dasgupta , physica a * 270 * , 135 ( 1999 ) . t. sun , h. guo , and m. grant , phys . rev .a * 40 * , 6763 ( 1989 ) . b. chakrabarti and c. dasgupta , europhys .( in press ). l. f. shampine , _ numerical solution of ordinary differential equations _( chapman & hall , new york , 1994 ) .z. racz , m. siegert , d. liu and m. plischke , phys .a * 43 * , 5275 ( 1991 ) .ma , _ modern theory of critical phenomena _( benjamin , reading , 1976 ) .z. toroczkai , g. corniss , s. das sarma and r. k. p. zia , phys .e. * 62 * , 276 ( 2000 ). m. r. evans , brazilian j. phys . *30 * , 42 ( 2000 ) [ cond - mat/0007293 ] .a. j. bray , adv .phys * 43 * , 357 ( 1994 ) .a. d. rutenberg and a. j. bray , phys .e * 50 * , 1900 ( 1994 ) .t. newmann , and a. j. bray , j. phys .a * 29 * , 7917 ( 1996 ) .v. putkaradze , t. bohr and j. krug , nonlinearity * 10 * , 823 ( 1997 ) .
we study a class of one - dimensional , nonequilibrium , conserved growth equations for both nonconserved and conserved noise statistics using numerical integration . an atomistic version of these growth equations is also studied using stochastic simulation . the models with nonconserved noise statistics are found to exhibit mound formation and power - law coarsening with slope selection for a range of values of the model parameters . unlike previously proposed models of mound formation , the ehrlich - schwoebel step - edge barrier , usually modeled as a linear instability in growth equations , is absent in our models . mound formation in our models occurs due to a nonlinear instability in which the height ( depth ) of spontaneously generated pillars ( grooves ) increases rapidly if the initial height ( depth ) is sufficiently large . when this instability is controlled by the introduction of an infinite number of higher - order gradient nonlinearities , the system exhibits a first - order dynamical phase transition from a rough self - affine phase to a mounded one as the value of the parameter that measures the effectiveness of control is decreased . we define a new `` order parameter '' that may be used to distinguish between these two phases . in the mounded phase , the system exhibits power - law coarsening of the mounds in which a selected slope is retained at all times . the coarsening exponents for the continuum equation and the discrete model are found to be different . an explanation of this difference is proposed and verified by simulations . in the growth equation with conserved noise , we find the curious result that the kinetically rough and mounded phases are both locally stable in a region of parameter space . in this region , the initial configuration of the system determines its steady - state behavior .
the financial crisis has boosted the development of several network - based methodologies to monitor systemic risk in the financial system .a traditional approach towards the quantification of systemic risk is to measure the effects of a shock on the external assets of each institution and then to aggregate the losses .however , the crisis has highlighted that stress - testing should also incorporate so called `` second - round '' effect , which might arise via interbank exposures , either as losses on the asset side or liquidity shortages ( see e.g. and references therein ) .for instance , the recent ecb comprehensive assessment carried out in 2014 goes into this direction by taking into account counterparty credit risk while the basel iii framework gives attention to interconnectedness as a key source of systemic risk .some network - based methods focus on the events of a bank s default ( i.e. its equity going to zero ) as the only relevant trigger for the contagion to be passed on to the counterparties . in other words ,an institution that has faced some shocks will not affect its counterparties in any way as long as it is left with some positive equity .this is a useful simplification which has allowed for a number of mathematical developments . because regulators recommend banks to keep their largest single exposure well below their level of equity , most stress test conducted in this way yield essentially to the result that a single initial bank default never triggers any other default .systemic risk emerges only if , at the same time , one assumes a scenario of weak balance sheets or a scenario of fire sales .in contrast , both the intuition and the classic merton approach , suggest that the loss of equity of an institution , even with no default , will imply a decrease in the market value of its obligations to other institutions . in turn , this means a loss of equity for those institutions , as long as they revalue their equity as the difference between assets and liabilities .therefore , financial distress , meant as loss of equity , can spread from a bank to another although no default occurs in between .the total loss of equity in the system can be substantial even if no bank ever defaults in the process .indeed , in the 2007/2008 crisis , losses due to the mark - to - market re - evaluation of counterparty risk were much higher than losses due from direct defaults . ]the so - called debtrank methodology has been developed with the very idea to capture such a distress propagation .the impact of a shock , as measured by debtrank , is fully comparable to the traditional default - only propagation mechanisms in the sense that the latter is a lower bound for the former . in other words , debtrank measures at least the impact that one would have with the defaults - only , but it is typically larger and this allows to assign a level of systemic importance in most situations in which the traditional method would be unable to do so because the impact would be zero for all banks .debtrank has been applied to several empirical contexts but it was not so far been embedded into a stress - test framework . 
in this paper , building on the method introduced in , we develop a stress - test _ framework _ aimed at providing central bankers and practitioners with a monitoring tool of the network effects .the main contributions of our works are as follows .first , the framework delivers not only an estimation of _ first - round _ ( shock on external assets ) , and _ second - round _ ( distress induced in the interbank network ) effects , but also a _ third - round _ effect consisting in possible further losses induced by fire sales . to this endwe incorporate a simple mechanism by which banks determine the necessary sales of the asset that was shocked in order to recover their previous leverage level and assuming a linear market impact of the sale on the price of the asset .the three effects are disentangled and can be tracked separately to assess their relative magnitude according to a variety of scenarios on the initial shock on external assets and on liquidity of the asset market .second , the framework allows to monitor at the same time the _ impact _ and the _ vulnerability _ of financial institutions . in other words , institutionswhose default would cause a large loss to the system become problematic only if they are exposed to large losses when their counterparties or their assets get shocked .these quantities are computed through two _ networks of leverage _ that are the main linkage between the notion of capital requirements and the notion of interconnectedness .third , the framework allows to estimate loss distributions both at the individual bank level and at the global level , allowing for the computation of individual and global var and cvar ( table [ tab : variables ] ) .fourth , since data on bilateral exposures are seldom available , the framework includes a module to estimate the interbank network of bilateral exposures given the information on the total lending and borrowing of each bank . here, we use a combination of fitness model , for the network structure and an iterative fitting method to estimate the lending volumes , but alternative methods could be used or added as benchmark comparison ( e.g. the maximum entropy method , or the minimum density method .finally , the framework has been developed in matlab and is available upon request to the authors .as an illustration , we carry out a stress - test exercise on a dataset of 183 european banks over the years , starting from the estimation of their interbank exposures .this paper is organized as follows . in section [ sec : related work ] we review similar or related work ; in section [ sec : stress - test ] we describe the main aspects of the framework , providing an outline of the distress process , a discussion of the main variables , and the framework s building blocks ; in section [ sec : exercise ] , we show how the framework can be applied to a dataset and we discuss the main results of this exercise ; in section [ sec : discussion ] , we review the main contributions and introduce elements for future research . 
in appendix[ sec : methods ] , we provide the technical details of the distress propagation process , including how the key measures are computed ; in the appendix [ sec : data_collection ] , we described the data we used for the exercise in section [ sec : exercise ] and , last , in appendix [ sec : network_reconstruction ] , we outline the network reconstruction methods when only the total interbank lending / borrowing for each bank is known .the recent and still ongoing economic and financial crisis has made clear the importance of methods of early detection of systemic risk in the financial system . in particular , researchers , regulators and policy - makers have recognized the importance of adopting a macroprudential approach to understand and mitigate financial stability .notwithstanding the many efforts , regulators still lack an adequate framework to measure and address systemic risk .the traditional micro - prudential approach consists in trying and ensuring the stability of the banks , one by one , with the assumption that as long as each unit is safe the system is safe .this approach has demonstrated to be a dangerous over - simplification of the situation .indeed , we have learned that it is precisely the interdependence among institutions , both in terms of liabilities or complex financial instruments and in terms of common exposure to asset classes what leads to the emergence of systemic risk and makes the prediction of the behaviour of financial systems so difficult .while risk diversification at a single institution can indeed lower its individual risk , if all institutions behave in a similar way , herding behaviour can instead amplify the risk . clearly , if all banks take similar positions , the failure of one bank can cause a global distress , because of the increased sensitivity to price changes . to add more complexity , the causes of market movements are still under debate , suggesting that exogenous instabilities add up to endogenous ones . the tension between individual regulation and global regulation poses a series of challenging questions to researchers , practitioners and regulators , well before the recent crisis , it was argued that systemic risk is real when contagion phenomena across countries take place . in this spirit , a series of studies dealt with the description of systemic risk in the financial system from the perspective of the contagion channels across balance - sheet of several institutions . in particular ,some focus was drawn upon the topology of connections ( or the network ) between institutions . in this way , the problem of analysing systemic risk splits in two distinct problems .first , the problem of understanding the role of an opaque ( if not unknown ) structure of financial contracts and , second , the problem of providing a measure for the assessment of the impact of a given shock . 
as for the first problem , the obvious starting point is to consider the structure of the interbank network , with the aim of possibly extracting some early warning signals .while many argued that the network structure can be intrinsically a source of instability , it turns out instead that no specific topology can be considered as systematically safer than the others .indeed , only the interplay between market liquidity , capital requirements and network structure can help in the understanding of the systemic risk .for the second problem , researchers have tried to describe the dynamics of propagation of defaults with various methods , including by means of agent - based models or by modelling the evolution of financial distress across balance - sheets conditional upon shocks in one or more institutions . from the perspective of financial regulations ,capital requirements represent the cornerstone of prudential regulations .institutions are required to hold capital as a buffer to shocks of any nature . the most used risk measures ( such as value at risk and expected shortfall )are indeed related to the quantity of cash each individual bank needs to set aside in order to cover the _ direct _ exposures to different types of risk . in such manner ,the _ indirect _ exposures arising from the interconnected nature of the financial system are not considered .interconnectedness , though , is now entering the debate on regulation : for example , the definition of `` global systemically important banks '' ( g - sibs , * ? ? ?* ) does include the concept of interconnectedness , thereby measured as the aggregate value of assets and liabilities each bank has with respect to other banking institutions .although this represents a fundamental step towards the inclusion of interconnectedness in assessing systemic risk , a further level of disaggregation would be needed .in fact , institutions that are similar in terms of their aggregated exposures ( including those _ vis - - vis _ other financial institutions ) , might have completely different sets of counterparties , therefore implying different levels of systemic impact and/or vulnerability to shocks .another important point is that the potential negative effects arising from interconnectedness ought to be included into the definition of capital requirements .in this section , we introduce and describe the debtrank stress - test framework .one of the main characteristics of the framework lies in its _ flexibility _ along the following four main dimensions .* shock type*. the framework can implement different shock types and scenarios ( on external assets ) .* network estimation*. when detailed bilateral interbank exposures are not available , the framework provides a module to estimate the interbank network from the total interbank assets and liabilities of each bank , 3 .* contagion dynamics*. the framework can implement two different contagion dynamics , distress contagion and default contagion .* systemic risk indicators*. the framework returns as output a series of systemic risk indicators , both at the individual and a the global level .the user can aptly combine this information to extract the information needed .several graphical outputs are also available and represent a key feature of the framework : graphics are specifically designed to capture relevant information at a glance . 
given the flexibility of the framework and the number of outputs produced , in the remainder of the section , we focus on : 1 .describing the main features of the debtrank _ distress process _ as the key foundation of the framework ; 2 . providing a qualitative description of the main variables of interests ; 3 . providing a technical summary of the _ building blocks _ of the framework , which include the _ inputs _ required , the _ outputs _ that can be obtained and the different _ modules _ constituting the framework. the reader can find detailed information about the process and the main variables of interest in the methodological appendix [ sec : methods ] .one of the key concerns in the measurement of systemic risk is to quantify _ losses _ at the individual and global level . in particular , debtrank focuses on the depletion of equity when banks experience losses in external or interbank assets .we envision a system of banks ( indexed by ) and external assets ( indexed by ) .the framework features a dynamic distress model , with : lll time & round & effects on balance sheets + & baseline & initial allocation + + & first round effects & .the distress dynamics.[tab : dynamics ] [ cols= " < " , ] +the fundamental input data are represented by banks balance sheets . in particular, the framework takes the equity , the total asset value and the total interbank lending and borrowing of each bank as minimal inputs .more granular data on the structure of external assets are indeed possible ( e.g. in case one wants to simulate a shock on a specific asset class ) .the flexibility of the modeling framework allows for a number of shock scenarios , including : 1 . a fixed shock ( e.g. ) on the value of all external assets ; 2 . a shock on the value of all external assets drawn from a specific probability distribution ( e.g. a beta distribution , which we use in the exercise in section [ sec : exercise ] . ) ;when more detailed information on the holdings in external assets for banks is available , the shock ( either fixed or drawn from a probability distribution ) on specific asset classes .as outlined above , the framework allows to compute the main systemic risk variable for two main types of contagion dynamics : 1 . the _ default cascade _ dynamics : banks impact other banks only in case of their default ( see , for the technical details , the discussion related to equation [ eq : main_debt_rank ] in the methodological appendix [ sec : methods ] . ) 2 . the _ debtrank _ dynamics : banks impact other banks regardless of whether the event of default occurred .the rationale behind this type of dynamics is that , as banks reduce their equity levels to face losses , they decrease their distance to default and therefore are less likely to repay their obligations . in this case , the market value of their obligations is reduced and is hence reflected on the asset side of their counterparties in the interbank market .as detailed data on banks bilateral exposures are often not publicly available , estimations need to be performed in order to run the framework .even though such estimations constitute a key input of the stress test framework in case the exposures are not known , they constitute an output on their own , because they can be then analyzed with the typical tools of network analysis .also , the estimations can serve for two other purposes : i ) as a benchmark for comparison with the observed data , _ la _ , or ii ) for the estimation of missing data . 
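a schematic version of such an estimation step , anticipating the fitness - model approach described below and detailed in appendix [ sec : network_reconstruction ] , is the following : link probabilities are assigned from the banks total interbank assets and liabilities , a network is sampled , and the link weights are then rescaled by an iterative proportional fitting step so that row and column sums approximately match the reported totals . the balance - sheet figures and the parameter z below are illustrative placeholders , and the sketch is a simplified version of the procedure actually used .

    import numpy as np

    def sample_network(a, l, z=0.2, n_fit=500, rng=None):
        """one reconstructed exposure matrix consistent with total interbank assets a and liabilities l."""
        rng = rng or np.random.default_rng()
        score = z * np.outer(a, l)
        p = score / (1.0 + score)                        # fitness-model link probabilities
        np.fill_diagonal(p, 0.0)
        w = (rng.random(p.shape) < p) * np.outer(a, l)   # sampled links with provisional weights
        for _ in range(n_fit):                           # iterative proportional fitting (ras-type)
            w *= (a / np.maximum(w.sum(axis=1), 1.0e-12))[:, None]
            w *= (l / np.maximum(w.sum(axis=0), 1.0e-12))[None, :]
        return w

    a = np.array([50.0, 30.0, 10.0, 5.0, 5.0])    # illustrative total interbank lending
    l = np.array([40.0, 25.0, 20.0, 10.0, 5.0])   # illustrative total interbank borrowing
    w = sample_network(a, l, rng=np.random.default_rng(4))
    print(np.allclose(w.sum(axis=1), a, rtol=0.05), np.allclose(w.sum(axis=0), l, rtol=0.05))

repeating the sampling many times produces the ensemble of networks over which medians and loss distributions are then computed .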
from a technical viewpoint , the methodology we use to estimate the interbank network is based on the so - called `` fitness model '' . the technical details are reported in appendix [ sec : network_reconstruction ] . in order to show how the framework works and what type of outputs are available , in this section we apply the framework to a specific dataset of eu banks for the years 2008 - 2013 . more details on the dataset are available in appendix [ sec : data_collection ] . in brief : 1 . we collected yearly data on equity , external assets , interbank assets and liabilities for the set of banks under scrutiny ; 2 . we estimated the exposures by combining the fitness model and an iterative fitting procedure ( appendix [ sec : network_reconstruction ] ) , generating ( for each year ) networks compatible with the total interbank borrowing and lending of each bank at end - year ; 3 . we then ran the stress test in order to obtain the main systemic risk variables for all years . when not explicitly specified , the statistics reported in this section are computed by taking the median value over the networks . in the remainder of this section , we describe the main results , including some key charts and figures , in order to show part of the graphical output of the framework . figure [ fig : vulner_impact ] provides an overview of the response of the reconstructed financial networks and their individual elements to the simulated distress scenarios . the chart on the left shows the dynamics of global equity losses ; the values reported are the median across the networks in the monte carlo sample and are computed for a common shock on the external assets . the chart also offers a decomposition of the losses according to whether they are caused by the first ( external asset shocks ) , second ( reverberation on the interbank lending network ) , or third ( fire sales ) round of distress propagation . the relative losses in equity due to the second and third rounds are substantial , implying that an assessment of systemic risk solely based on first - round effects is bound to underestimate potential losses . the chart on the right shows the evolution of the impact for each of the banks in the sample throughout the years . each line is the median of the impact calculated over the networks in the ensemble . the plot clearly shows a general decrease in the systemic impact of the individual institutions over time . in order to visually capture the persistence over time of banks with higher or lower impact , the colours reflect the level of the average impact computed over the years . in particular , red lines are associated with banks that consistently show a high impact . conversely , blue lines are associated with banks that have a consistently low impact . we observe a certain level of stability of the relative levels : banks which show a higher systemic impact tend to do so throughout the years . from a systemic risk perspective , it is of particular interest to compare the two main systemic risk quantities associated with each individual bank : the vulnerability to external shocks and the impact of a bank onto the system in case of its default .
by _jointly _ analyzing these two quantities , we divide institutions into four main categories : i ) high vulnerability / high impact , ii ) high vulnerability / low impact , iii ) low vulnerability / low impact , iv ) low vulnerability / high impact .results for this exercise are reported in figure [ fig : vuln_vs_imp ] .the graphs report a plot of the vulnerability at the second round versus the impact for each year in the sample .the \times [ 0 , 1] ] . a bank defaults ( i.e. the bank reaches the maximum distress possible ) if . when the bank is undistressed .all values of between and imply that the bank is under distress .similarly , we can compute the global cumulative relative equity loss at each time as the weighted average of each individual level of distress : where the weights are given by , i.e. the fraction of equity of each bank at the baseline level ( ) . notice that is a pure number and so is .the monetary value ( e.g. in euros or dollars ) of the loss can be obtained by ( individual loss ) and ( global loss ) . using the terminology introduced in the main text , equations [ eq : individual_equity_loss ] and [ eq : global_equity_loss ]allow to measure the _ individual _ and _ global _ vulnerability respectively .the entire distress process featured in the framework can be outlined in the following steps .let be the value of one unit of the external asset . at time , a ( negative ) shock on the value of asset reduces the value of the investment in external assets of bank by the amount : .banks record a loss on their asset side that , provided the hypothesis that assets are mark - to - market and liabilities are at face value , the loss needs to be compensated by a corresponding reduction in equity : the individual and global relative equity loss at time can be obtained as follows : ] ] which shows how the initial shock on each asset is _ multiplicatively _ amplified by the external leverage on that specific asset .this leads to a straightforward interpretation of the leverage ratio .indeed it is immediate to prove that the reciprocal of the leverage ratio corresponds to the minimum shock that leads bank to default ( this applies to all summands e in equation [ eq : summands ] ) .since the single largest exposure is typically smaller than the equity , it is likely that defaults and large losses originate by different combinations of shocks affecting the different external assets . in the absence of detailed data on the exposure to different classes of external assets , we assume a _ common _ negative shock on the value of all external assets. this assumption can be interpreted in two alternative ways .first , we can envision a common small shock to all asset classes , as in times of general market distress .the second way is that of a large shock to specific asset classes held by all banks ( e.g. sovereign on a class of countries , housing shocks , etc . ) . we can therefore drop the index in the summation and write : . 
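to make the first - round mechanics just described concrete , the following minimal sketch ( python ) computes individual and global relative equity losses for a common shock on external assets . the balance - sheet figures and the shock size are invented for illustration and are not taken from the exercise in the paper .

```python
import numpy as np

# illustrative balance sheets for three banks (all figures are made up):
A_ext = np.array([100.0, 80.0, 120.0])   # external assets
equity = np.array([8.0, 5.0, 10.0])      # equity

# common relative shock r on the value of all external assets
r = 0.01

# first-round loss: the mark-to-market loss on external assets is absorbed by
# equity, so the relative equity loss is the shock times the external leverage
# A_ext / equity, capped at 1 (a bank with h = 1 is in default)
h_first = np.minimum(1.0, r * A_ext / equity)

# global loss: weighted average of individual losses, with equity weights
H_first = np.sum(equity * h_first) / np.sum(equity)

print(h_first, H_first)
```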
at this point , the initial loss reverberates throughout the interbank network . the debtrank algorithm extends the dynamics of default contagion into a more general distress propagation not necessarily entailing a default event . in other words , shocks on the asset side of the balance sheet of a bank transmit along the network even when such shocks are not large enough to trigger its default . this is motivated by the fact that , as a bank 's equity decreases , so does its _ distance to default _ and , consistently with this approach , the bank will be less likely to repay its obligations in case of further distress , therefore implying that the market value of its obligations will decrease as well . consequently , the distress propagates onto its counterparties along the network . if we denote the market value of the obligation accordingly , the face value established at the initial time stands on the liability side of the borrower , whereas the mark - to - market value at each later time is written on the asset side of the lender . the above argument then implies that the distress that propagates onto the lender can be expressed , in general terms , as the relative loss with respect to the original face value . by summing over all obligors , the relative equity loss of each bank at each time is described by equation [ eq : main_debt_rank ] , where the sum runs over the set of _ active _ nodes , i.e. nodes that transmit distress at that time . the choice of the set of active nodes at each time is a peculiarity of debtrank . in fact , equation [ eq : main_debt_rank ] is of a recursive nature and therefore needs to be computed at each time by considering the nodes that were in distress at the previous time . since the leverage network can present cycles , the distress may propagate via a particular link more than once . although this fact does not represent a problem in mathematical terms , its economic interpretation is indeed more problematic . in order to overcome this problem , debtrank excludes more than one reverberation . from a network perspective , by choosing the set of active nodes in this way we exclude walks that count a specific link more than once . the process ends at a certain time , when nodes are no longer active . the functional form of f(·) .
the choice of the function f deserves further discussion . in fact , a correct estimation of its form would require an empirical framework which should take into account the probability of default of the borrower and the recovery rate of the assets it holds . however , the minimum requirement that f needs to satisfy is that of being a non - decreasing relation between the borrower 's equity loss and the losses in the value of its obligations . more specifically , we can hypothesize that small equity losses may have little to no effect on the market value of the borrower 's obligations , whereas extremely large losses would settle their value close to zero : the relationship is therefore necessarily non - linear and is likely to be a sigmoid - type function . in view of this , although further work will deal with the analysis of more refined functional forms , we hereby present two main forms , referring to the following two specific dynamics of distress : default contagion : in this case , in line with a specific stream of literature , only the event of default triggers a contagion . the function is therefore chosen as the indicator function over the case of default . debtrank ( distress contagion ) : the characteristics of f imply the existence of an intermediate regime where f can be approximated by a linear function . by choosing the identity function , we obtain the original debtrank formulation . this functional form will be the one we use the most in the framework and the exercise . for the sake of clarity , in the remainder of this section , we consider only the latter functional form . however , in the framework , stress tests can be easily carried out for both cases . vulnerability . we are now ready to compute the vulnerability ( both individual and global ) and the _ impact _ ( at the individual level ) . the individual vulnerability can be easily computed by setting in equation [ eq : main_debt_rank ] . the global vulnerability is then given by .
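the recursion sketched below ( python ) is one possible reading of the distress dynamics of equation [ eq : main_debt_rank ] with the two functional forms discussed above : the indicator function for default contagion and the identity for debtrank . the interface , the stopping rule and the toy data are our own simplifications , not the authors ' code .

```python
import numpy as np

def distress_propagation(Lambda, h0, mode="debtrank", max_rounds=100):
    """schematic distress propagation on an interbank leverage matrix.

    Lambda[i, j] : interbank leverage of lender i towards borrower j
                   (exposure of i to j divided by the equity of i)
    h0           : relative equity losses right after the initial shock
    mode         : 'debtrank' -> linear f (distress spreads before default),
                   'default'  -> f is the indicator of the default event
    """
    h_old = np.zeros_like(h0)
    h = np.clip(h0, 0.0, 1.0)
    for _ in range(max_rounds):
        f_new = (h >= 1.0).astype(float) if mode == "default" else h
        f_old = (h_old >= 1.0).astype(float) if mode == "default" else h_old
        # only the incremental distress of each counterparty is transmitted,
        # so no link contributes more than once
        increment = Lambda @ np.clip(f_new - f_old, 0.0, 1.0)
        h_old = h
        h = np.clip(h + increment, 0.0, 1.0)
        if np.allclose(h, h_old):   # no active nodes left
            break
    return h

# toy example: three banks, small initial shock on banks 0 and 2
Lambda = np.array([[0.00, 0.20, 0.10],
                   [0.05, 0.00, 0.30],
                   [0.10, 0.10, 0.00]])
print(distress_propagation(Lambda, np.array([0.10, 0.0, 0.05])))
```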
even though the framework can take as input _ any _ type of shocks ,we focus briefly on the case in which the external assets of all banks are shocked : in this case all banks transmit distress at time and , given the choice of the set , the process indeed ends at time .we can hence derive a closed - form solution for the individual vulnerability after the second round : which elucidates the _ compounding _ effect of external and interbank leverage .if the shock is small enough not to induce any default , then [ eq : first_order_approx ] can be rewritten as : [ [ impact.-1 ] ] impact .+ + + + + + + debtrank , in its original formulation , entails a stress test by assuming the default of each bank individually and computing the global relative equity loss _ induced _ by such default .this is indeed what we define as the _ impact _ of an institution onto the system as a whole .formally , this can be written as : [ [ network - effects - a - first - order - approximation - of - vulnerability ] ] network effects : a first order approximation of vulnerability + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + equation [ eq : main_debt_rank ] clearly shows the main feature of the distress dynamics captured by debtrank : the interplay between the network of leverage and the distress imported from neighbors in this network .further , equation [ eq : first_order_approx ] clarifies the multiplicative role of leverage in determining the distress at the end of the second round .we now develop a first - order approximation of equation [ eq : first_order_approx ] , which will serve the purpose of further clarifying the _ compounding _ effects of external and interbank leverage in determining distress .for the sake of simplicity , we assume no default , which allows us to remove the `` '' operator .this is a reasonable assumption in case of a relatively small shock on external assets .we approximate the external leverage of the obligors of bank by taking the weighted average ( with weights ) of their external leverages , which we denote by . as , we write . by denoting with the weighted average of , we can approximate the global equity loss at the end of the second round as : which allows to see how the second - round effects alone can be obtained as the product of the weighted average interbank leverage and weighted average external leverage .typically , stress tests emphasize the effects of the first - round : as we observe , this may potentially bring to a severe underestimation of systemic risk .after the second round , banks have experienced a certain level of equity loss that has completely reshaped the initial configuration of the balance sheets at time .banks are now attempting to restore , at least partially , this initial configuration .in particular , we assume that each bank will try to move to the original leverage level .this implies that banks will try to sell external assets in order to obtain enough cash to repay their obligations and therefore reduce the size of their balance sheet .because of the vast quantity of external assets sold by the banking system in aggregate , the impact on the prices of external assets is also relevant , which will reduce accordingly .banks therefore will experience further losses due to fire sales and we label such losses as _ third round _ effects . 
here , we provide a minimal model for the scenario described above . consider the leverage dynamics at the end of the second round . we assume that , at the initial time , each bank held a given quantity of external assets and , without loss of generality , that the initial price of the asset is unitary , so that the initial asset values coincide with the held quantities ; the asset price after the first round then follows from the shock . recalling that the first round affects only the external assets and that the second round affects only interbank assets , the leverage of each bank immediately after the second round can be written accordingly . first , we need to prove that the new leverage levels are higher with respect to the initial conditions . it is easy to prove that this holds as long as the bank has not defaulted ; the resulting inequality leads to a condition which is always verified in our setting . at this point , banks attempt to restore the target leverage by selling a fraction of their external assets . finally , by computing the additional loss given by the decline in price following equation [ eq : pricedecline ] , we obtain the final individual relative equity loss at the third round and the corresponding global equity loss ( assuming no defaults ) . the distress process allows one to capture , at each time , the relative equity loss for both the individual institution and the system as a whole . this makes it possible to compute , at each time , a ( continuous ) _ relative equity loss distribution _ conditional on a certain shock . the equity loss distribution can be characterized , for example , by two typical risk measures : value at risk ( var ) and conditional value at risk ( cvar ) , also known as expected shortfall ( es ) . since the individual loss h_i(t) and the global loss h(t) are nonnegative variables taking values in [0,1] , the individual value at risk for bank i at time t at level \alpha is defined as the quantile var_i(t;\alpha) = \inf\{ x \in [0,1] : p( h_i(t) \leq x ) \geq 1 - \alpha \} , and the individual conditional value at risk is defined as the expected value of the losses exceeding the var , cvar_i(t;\alpha) = e[ h_i(t) \mid h_i(t) \geq var_i(t;\alpha) ] . considering the system as a whole , we can likewise analyze the global relative equity losses at each time , therefore obtaining a _ global var _ , var(t;\alpha) = \inf\{ x \in [0,1] : p( h(t) \leq x ) \geq 1 - \alpha \} , and a _ global cvar _ , cvar(t;\alpha) = e[ h(t) \mid h(t) \geq var(t;\alpha) ] . detailed public data on banks ' balance sheets are unavailable , therefore we resorted to a dataset that provides a reasonable level of breakdown , the bureau van dijk bankscope database ( url : ` bankscope.bvdinfo.com ` ) . we focus on a subset of banks headquartered in the european union that are also quoted on a stock market for the years from 2008 to 2013 . the main criterion for the selection was that of having detailed coverage ( on a yearly basis ) for total assets , equity , interbank lending or borrowing . future work will deal with data at higher frequency ( quarterly , monthly , ... ) . our interbank asset and liability data include amounts due under repurchase agreements ( which are economically analogous to a secured loan ) , thereby prompting large contagion effects . we performed a series of consistency checks . in the case of missing interbank lending data for a bank for less than three years , we proceed with an estimation via linear interpolation of the data available for the other years ( a comparison with the available data gives errors lower than 20% ) . since , in general , the correlation between interbank lending and borrowing for all banks and years is about 70% ( with some significant differences ) , this implies the presence of net lenders and net borrowers .
in view of this , when data on either interbank lending or borrowing are not available for more than three years , we simply set them equal .data on total interbank lending and borrowing are often publicly available , while the detailed bilateral exposures are typically confidential . however , in this section , we outline the estimation procedure adopted in the framework . at each point in time , we create a sample of networks via the `` fitness model '' , which is a technique that has recently been used to reconstruct financial networks starting from aggregate exposures . the procedure can be outlined as follows : * 1 .total exposure re - balancing .* since we are considering a subset of the entire interbank market , we observe an inconsistency : the total interbank assets are systematically smaller than the total interbank liabilities for each year ( eu banks are net borrowers from the rest of the world ) . to adopt a conservative scenario , we assume that the total lending volume in the network is the minimum between the two ( in the exercise ) .let and be respectively the lending and borrowing propensity of .* 2 . exposure link assignment .* the fitness model , when applied to interbank networks attributes to each bank a so - called fitness level ( typically a proxy of its size in the interbank network ) .we can estimate the probability that an exposure between and exists via the following formula , ( is a free parameter ) .notice that .consistently with a recent stream of literature , for each bank we take as fitness the average between its total lending and borrowing propensity , implying that , the greater this value , the higher will be the number of counterparties ( the _ degree _ of a node ) . considering empirical evidence on the density of different interbank networks , we assume on average a density of ( i.e. about over the possible links ) . does not influence the overall results of the exercise .for example , values for the global vulnerability at the second round differs only at the third decimal digit . ]since it can be proved that the total number of links is equal to the expected value of , we can determine the parameter and compute the matrix of link probabilities .we now generate network realizations . for each of these realizations ,we assign a link to the pair of banks with probability .the link direction ( which determines whether or is the lender or the borrower ) is chosen at random with probability . *exposure volume allocation * last , we need to assign weights to the edges ( the volumes of each exposure ) .we impose the fundamental constraint that the sum of the exposures of each bank ( out - strength ) equals its total interbank asset . to achieve this, we implement an iterative proportional fitting algorithm on the interbank exposure matrix .we wish to estimate the matrix , which is the relative value of each exposure with respect to the total interbank volume .we begin the estimation of , at each iteration : ( 1 ) , i.e. is divided by its relative lending propensity and multiplied by the total relative assets of , ; ( 2 ) .we repeated the two steps until and are below .last , the exposure network can be estimated by .
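a condensed sketch of the reconstruction pipeline just described — fitness - based link probabilities , calibration of the free parameter through the expected number of links , random link directions , and iterative proportional fitting of the exposure weights — is given below in python . the target density , the tolerance and the bisection bracket are placeholders , not the values used in the paper .

```python
import numpy as np
from scipy.optimize import brentq

def reconstruct_network(assets, liabilities, density=0.05, n_iter=1000, tol=1e-8, seed=0):
    """sketch of the fitness-model plus iterative-fitting reconstruction."""
    rng = np.random.default_rng(seed)
    a = np.asarray(assets, float)          # total interbank lending of each bank
    b = np.asarray(liabilities, float)     # total interbank borrowing of each bank
    n = a.size
    x = 0.5 * (a / a.sum() + b / b.sum())  # fitness: mean of lending/borrowing propensity

    # calibrate the free parameter z so that the expected number of links
    # matches the target density
    def excess_links(z):
        p = z * np.outer(x, x) / (1.0 + z * np.outer(x, x))
        np.fill_diagonal(p, 0.0)
        return p.sum() - density * n * (n - 1)
    z = brentq(excess_links, 1e-9, 1e12)
    p = z * np.outer(x, x) / (1.0 + z * np.outer(x, x))

    # sample a link for each unordered pair and pick a direction at random
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p[i, j]:
                if rng.random() < 0.5:
                    adj[i, j] = True
                else:
                    adj[j, i] = True

    # iterative proportional fitting of the weights to the lending/borrowing margins
    w = adj * np.outer(a, b)
    for _ in range(n_iter):
        w = w * (a / np.maximum(w.sum(axis=1), 1e-300))[:, None]   # rows: lending
        w = w * (b / np.maximum(w.sum(axis=0), 1e-300))[None, :]   # columns: borrowing
        if (np.abs(w.sum(axis=1) - a).max() < tol
                and np.abs(w.sum(axis=0) - b).max() < tol):
            break
    return w
```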
we develop a novel stress - test framework to monitor systemic risk in financial systems . the modular structure of the framework accommodates a variety of shock scenarios , methods to estimate interbank exposures and mechanisms of distress propagation . the main features are as follows . first , the framework allows one to estimate and disentangle not only _ first - round effects _ ( i.e. shocks on external assets ) and _ second - round effects _ ( i.e. distress induced in the interbank network ) , but also _ third - round effects _ induced by possible fire sales . second , it allows one to monitor at the same time the _ impact _ of shocks on individual or groups of financial institutions as well as their _ vulnerability _ to shocks on counterparties or certain asset classes . third , it includes estimates for loss distributions , thus combining network effects with familiar risk measures such as var and cvar . fourth , in order to perform robustness analyses and cope with incomplete data , the framework features a module for the generation of sets of networks of interbank exposures that are coherent with the total lending and borrowing of each bank . as an illustration , we carry out a stress - test exercise on a dataset of listed european banks over the years 2008 - 2013 . we find that second - round and third - round effects dominate first - round effects , therefore suggesting that most current stress - test frameworks might lead to a severe underestimation of systemic risk . stefano battiston,^a^ guido caldarelli,^b^ marco d'errico,^a^ stefano gurciullo^c^ + ^a^ department of banking and finance , university of zurich + ^b^ imt alti studi lucca , isc - cnr , rome , lims london + ^c^ school of public policy , university college london
information and technology make large data sets widely available for scientific discovery .much statistical analysis of such high - dimensional data involves the estimation of a covariance matrix or its inverse ( the precision matrix ) .examples include portfolio management and risk assessment ( fan , fan and lv , 2008 ) , high - dimensional classification such as fisher discriminant ( hastie , tibshirani and friedman , 2009 ) , graphic models ( meinshausen and bhlmann , 2006 ) , statistical inference such as controlling false discoveries in multiple testing ( leek and storey , 2008 ; efron , 2010 ) , finding quantitative trait loci based on longitudinal data ( yap , fan , and wu , 2009 ; xiong et al . 2011 ) , and testing the capital asset pricing model ( sentana , 2009 ) , among others .see section 5 for some of those applications .yet , the dimensionality is often either comparable to the sample size or even larger . in such cases ,the sample covariance is known to have poor performance ( johnstone , 2001 ) , and some regularization is needed .realizing the importance of estimating large covariance matrices and the challenges brought by the high dimensionality , in recent years researchers have proposed various regularization techniques to consistently estimate .one of the key assumptions is that the covariance matrix is sparse , namely , many entries are zero or nearly so ( bickel and levina , 2008 , rothman et al , 2009 , lam and fan 2009 , cai and zhou , 2010 , cai and liu , 2011 ) . in many applications, however , the sparsity assumption directly on is not appropriate .for example , financial returns depend on the equity market risks , housing prices depend on the economic health , gene expressions can be stimulated by cytokines , among others . due to the presence of common factors , it is unrealistic to assume that many outcomes are uncorrelated .an alternative method is to assume a factor model structure , as in fan , fan and lv ( 2008 ) .however , they restrict themselves to the strict factor models with known factors .a natural extension is the conditional sparsity .given the common factors , the outcomes are weakly correlated .in order to do so , we consider an approximate factor model , which has been frequently used in economic and financial studies ( chamberlain and rothschild , 1983 ; fama and french 1993 ; bai and ng , 2002 , etc ) : here is the observed response for the ( ) individual at time ; is a vector of factor loadings ; is a vector of common factors , and is the error term , usually called _idiosyncratic component _ , uncorrelated with . both and to infinity , while is assumed fixed throughout the paper , and is possibly much larger than .we emphasize that in model ( [ eq1.1 ] ) , only is observable .it is intuitively clear that the unknown common factors can only be inferred reliably when there are sufficiently many cases , that is , . in a data - rich environment, can diverge at a rate faster than .the factor model ( [ eq1.1 ] ) can be put in a matrix form as where , and .we are interested in , the covariance matrix of , and its inverse , which are assumed to be time - invariant . 
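as a toy illustration of the data - generating process in ( [ eq1.1 ] ) - ( [ eq1.2 ] ) , the short python snippet below simulates a panel from a factor model with a small number of factors and forms the sample covariance matrix ; the dimensions and noise level are arbitrary choices , not those of the paper .

```python
import numpy as np

rng = np.random.default_rng(1)
p, T, K = 200, 100, 3                     # dimension, sample size, number of factors

B = rng.normal(size=(p, K))               # factor loadings
F = rng.normal(size=(T, K))               # common factors
U = rng.normal(scale=0.5, size=(T, p))    # idiosyncratic errors (here independent)

Y = F @ B.T + U                           # observed T x p panel
S = np.cov(Y, rowvar=False)               # p x p sample covariance matrix
```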
under model ( [ eq1.1 ] ) , is given by where is the covariance matrix of .the literature on approximate factor models typically assumes that the first eigenvalues of diverge at rate , whereas all the eigenvalues of are bounded as .this assumption holds easily when the factors are pervasive in the sense that a non - negligible fraction of factor loadings should be non - vanishing .the decomposition ( [ eq1.3 ] ) is then asymptotically identified as .in addition to it , in this paper we assume that is _ approximately sparse _ as in bickel and levina ( 2008 ) and rothman et al .( 2009 ) : for some , does not grow too fast as in particular , this includes the exact sparsity assumption ( ) under which , the maximum number of nonzero elements in each row . the conditional sparsity structure of ( [ eq1.2 ] )was explored by fan , liao and mincheva ( 2011 ) in estimating the covariance matrix , when the factors are observable .this allows them to use regression analysis to estimate .this paper deals with the situation in which the factors are unobservable and have to be inferred .our approach is simple , optimization - free and it uses the data only through the sample covariance matrix .run the singular value decomposition on the sample covariance matrix of , keep the covariance matrix formed by the first principal components , and apply the thresholding procedure to the remaining covariance matrix .this results in a principal orthogonal complement thresholding ( poet ) estimator .when the number of common factors is unknown , it can be estimated from the data .see section 2 for additional details .we will investigate various properties of poet under the assumption that the data are serially dependent , which includes independent observations as a specific example .the rate of convergence under various norms for both estimated and and their precision ( inverse ) matrices will be derived .we show that the effect of estimating the unknown factors on the rate of convergence vanishes when , and in particular , the rate of convergence for achieves the optimal rate in cai and zhou ( 2012 ) .this paper focuses on the high - dimensional _ static factor model _ ( [ eq1.2 ] ) , which is innately related to the principal component analysis ( pca ) , as clarified in section 2 .this feature makes it different from the classical factor model with fixed dimensionality ( e.g. , lawley and maxwell 1971 ) . 
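the procedure described above ( principal components on the sample covariance matrix , then thresholding of the orthogonal complement ) can be sketched in a few lines of python ; here a constant soft threshold stands in for the entry - adaptive threshold defined later in the paper , so this is only an illustration of the structure of poet , not a faithful implementation .

```python
import numpy as np

def poet_sketch(S, K, tau):
    """simplified POET: low-rank part from the top-K principal components of S,
    plus a soft-thresholded principal orthogonal complement."""
    vals, vecs = np.linalg.eigh(S)
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]

    low_rank = (vecs[:, :K] * vals[:K]) @ vecs[:, :K].T   # sum of lambda_k v_k v_k'
    R = S - low_rank                                      # principal orthogonal complement

    R_t = np.sign(R) * np.maximum(np.abs(R) - tau, 0.0)   # soft thresholding
    np.fill_diagonal(R_t, np.diag(R))                     # diagonal is not thresholded
    return low_rank + R_t
```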
in the last ten years, much theory on the estimation and inference of the static factor model has been developed , for example , stock and watson ( 1998 , 2002 ) , bai and ng ( 2002 ) , bai ( 2003 ) , doz , giannone and reichlin ( 2011 ) , among others .our contribution is on the estimation of covariance matrices and their inverse in large factor models .the _ static _ model considered in this paper is to be distinguished from the _ dynamic factor model _ as in forni , hallin , lippi and reichlin ( 2000 ) ; the latter allows to also depend on with lags in time .their approach is based on the eigenvalues and principal components of spectral density matrices , and on the frequency domain analysis .moreover , as shown in forni and lippi ( 2001 ) , the dynamic factor model does not really impose a restriction on the data generating process , and the assumption of idiosyncrasy ( in their terminology , a -dimensional process is idiosyncratic if all the eigenvalues of its spectral density matrix remain bounded as ) asymptotically identifies the decomposition of into the common component and idiosyncratic error .the literature includes , for example , forni et al .( 2000 , 2004 ) , forni and lippi ( 2001 ) , hallin and lika ( 2007 , 2011 ) , and many other references therein .above all , both the static and dynamic factor models are receiving increasing attention in applications of many fields where information usually is scattered through a ( very ) large number of interrelated time series .there has been extensive literature in recent years that deals with sparse principal components , which has been widely used to enhance the convergence of the principal components in high - dimensional space .daspremont , bach and el ghaoui ( 2008 ) , shen and huang ( 2008 ) , witten , tibshirani , and hastie ( 2009 ) and ma ( 2011 ) proposed and studied various algorithms for computations . more literature on sparse pca is found in johnstone and lu ( 2009 ) , amini and wainwright ( 2009 ) , zhang and el ghaoui ( 2011 ) , birnbaum et al .( 2012 ) , among others .in addition , there has also been a growing literature that theoretically studies the recovery from a low - rank plus sparse matrix estimation problem , see for example , wright et al .( 2009 ) , lin et al .( 2009 ) , cands et al .( 2011 ) , luo ( 2011 ) , agarwal , nagahban , wainwright ( 2012 ) , pati et al .it corresponds to the identifiability issue of our problem .there is a big difference between our model and those considered in the aforementioned literature . in the current paper ,the first eigenvalues of are spiked and grow at a rate , whereas the eigenvalues of the matrices studied in the existing literature on covariance estimation are usually assumed to be either bounded or slowly growing . 
due to this distinctive feature , the common components and the idiosyncratic components can be identified , and in addition , pca on the sample covariance matrix can consistently estimate the space spanned by the eigenvectors of .the existing methods of either thresholding directly or solving a constrained optimization method can fail in the presence of very spiked principal eigenvalues .however , there is a price to pay here : as the first eigenvalues are too spiked " , one can hardly obtain a satisfactory rate of convergence for estimating in absolute term , but it can be estimated accurately in relative term ( see section 3.3 for details ) .in addition , can be estimated accurately .we would like to further note that the low - rank plus sparse representation of our model is on the population covariance matrix , whereas cands et al .( 2011 ) , wright et al .( 2009 ) , lin et al .( 2009 ) considered such a representation on the data matrix .as there is no to estimate , their goal is limited to producing a low - rank plus sparse matrix decomposition of the data matrix , which corresponds to the identifiability issue of our study , and does not involve estimation and inference .in contrast , our ultimate goal is to estimate the population covariance matrices as well as the precision matrices . for this purpose, we require the idiosyncratic components and common factors to be uncorrelated and the data generating process to be strictly stationary .the covariances considered in this paper are constant over time , though slow - time - varying covariance matrices are applicable through localization in time ( time - domain smoothing ) .our consistency result on demonstrates that the decomposition ( [ eq1.3 ] ) is identifiable , and hence our results also shed the light of the surprising phenomenon " of cands et al .( 2011 ) that one can separate fully a sparse matrix from a low - rank matrix when only the sum of these two components is available .the rest of the paper is organized as follows .section 2 gives our estimation procedures and builds the relationship between the principal components analysis and the factor analysis in high - dimensional space .section 3 provides the asymptotic theory for various estimated quantities .section 4 illustrates how to choose the thresholds using cross - validation and guarantees the positive definiteness in any finite sample .specific applications of regularized covariance matrices are given in section 5 .numerical results are reported in section 6 .finally , section 7 presents a real data application on portfolio allocation .all proofs are given in the appendix . throughout the paper ,we use and to denote the minimum and maximum eigenvalues of a matrix .we also denote by , , and the frobenius norm , spectral norm ( also called operator norm ) , -norm , and elementwise norm of a matrix , defined respectively by , , and .note that when is a vector , both and are equal to the euclidean norm .finally , for two sequences , we write if and if and are three main objectives of this paper : ( i ) understand the relationship between principal component analysis ( pca ) and the high - dimensional factor analysis ; ( ii ) estimate both covariance matrices and the idiosyncratic and their precision matrices in the presence of common factors , and ( iii ) investigate the impact of estimating the unknown factors on the covariance estimation . 
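the matrix norms just introduced can be spelled out with the following one - liners ( python / numpy ) ; the exact convention intended for the 1 - norm is not visible in this excerpt , so treat the column - sum choice as an assumption .

```python
import numpy as np

A = np.array([[3.0, -1.0],
              [2.0,  4.0]])

fro  = np.linalg.norm(A, 'fro')      # Frobenius norm: sqrt(trace(A'A))
spec = np.linalg.norm(A, 2)          # spectral / operator norm: largest singular value
one  = np.abs(A).sum(axis=0).max()   # matrix 1-norm (maximum absolute column sum)
emax = np.abs(A).max()               # elementwise max norm
```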
the propositions in section [ s2.1 ] below show that the space spanned by the principal components in the population level is close to the space spanned by the columns of the factor loading matrix .consider a factor model where the number of common factors , , is small compared to and , and thus is assumed to be fixed throughout the paper . in the model , the only observable variable is the data .one of the distinguished features of the factor model is that the principal eigenvalues of are no longer bounded , but growing fast with the dimensionality .we illustrate this in the following example .consider a single - factor model where suppose that the factor is pervasive in the sense that it has non - negligible impact on a non - vanishing proportion of outcomes .it is then reasonable to assume for some .therefore , assuming that , an application of ( [ eq1.3 ] ) yields , for all large , assuming .we now elucidate why pca can be used for the factor analysis in the presence of spiked eigenvalues .write as the loading matrix .note that the linear space spanned by the first principal components of is the same as that spanned by the columns of when is non - degenerate .thus , we can assume without loss of generality that the columns of are orthogonal and , the identity matrix .this canonical form corresponds to the identifiability condition in decomposition ( [ eq1.3 ] ) .let be the columns of , ordered such that is in a non - increasing order .then , are eigenvectors of the matrix with eigenvalues and the rest zero .we will impose the pervasiveness assumption that all eigenvalues of the matrix are bounded away from zero , which holds if the factor loadings are independent realizations from a non - degenerate population .since the non - vanishing eigenvalues of the matrix are the same as those of , from the pervasiveness assumption it follows that are all growing at rate .let be the eigenvalues of in a descending order and be their corresponding eigenvectors .then , an application of weyl s eigenvalue theorem ( see the appendix ) yields that [ prop21 ] assume that the eigenvalues of are bounded away from zero for all large . for the factor model ( [ eq1.3 ] ) with the canonical condition we have in addition , for , . using proposition [ prop21 ] and the theorem of davis and kahn ( 1970 , see the appendix ), we have the following : [ prop22 ] under the assumptions of proposition [ prop21 ] , if are distinct , then propositions [ prop21 ] and [ prop22 ] state that pca and factor analysis are approximately the same if .this is assured through a sparsity condition on , which is frequently measured through ] the minimizer is now clear : the columns of are the eigenvectors corresponding to the largest eigenvalues of the matrix and ( see e.g. , stock and watson ( 2002 ) ). we will show that under some mild regularity conditions , as and , consistently estimates the true uniformly over and . 
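the least - squares / pca estimation of the factors and loadings referred to above admits a very compact implementation . the snippet below ( python ) uses the usual identification in which the estimated factors are orthonormal after scaling by the sample size ; this normalization is an assumption here , since the constraint is not fully visible in the excerpt .

```python
import numpy as np

def estimate_factors(Y, K):
    """PCA / least-squares estimation of factors and loadings (a sketch).

    Y is the T x p data matrix; the columns of F_hat are sqrt(T) times the
    eigenvectors of Y Y' attached to its K largest eigenvalues.
    """
    T = Y.shape[0]
    vals, vecs = np.linalg.eigh(Y @ Y.T)            # T x T eigenproblem
    top = np.argsort(vals)[::-1][:K]
    F_hat = np.sqrt(T) * vecs[:, top]               # estimated factors, T x K
    B_hat = Y.T @ F_hat / T                         # estimated loadings,  p x K
    U_hat = Y - F_hat @ B_hat.T                     # estimated idiosyncratic part
    return F_hat, B_hat, U_hat
```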
since is assumed to be sparse , we can construct an estimator of using the adaptive thresholding method by cai and liu ( 2011 ) as follows .let and for some pre - determined decreasing sequence , and large enough , define the adaptive threshold parameter as the estimated idiosyncratic covariance estimator is then given by where for all ( see antoniadis and fan , 2001 ) , it is easy to verify that includes many interesting thresholding functions such as the hard thresholding ( ) , soft thresholding ( ) , scad , and adaptive lasso ( see rothman et al .( 2009 ) ) .analogous to the decomposition ( [ eq1.3 ] ) , we obtain the following substitution estimators and by the sherman - morrison - woodbury formula , noting that ^{-1}{\widehat { \mbox{\boldmath }}}_k'({\widehat{\mathbf{\sigma}}}_{u , k}^{\mathcal{t}})^{-1},\ ] ] in practice , the true number of factors might be unknown to us .however , for any determined , we can always construct either as in ( [ eq2.4 ] ) or as in ( [ eq2.14 ] ) to estimate .the following theorem shows that for each given , the two estimators based on either regularized pca or least squares substitution are equivalent .similar results were obtained by bai ( 2003 ) when and no thresholding was imposed .[ thm2.1 ] suppose that the entry - dependent threshold in ( [ eq2.2 ] ) is the same as the thresholding parameter used in ( [ eq2.13 ] ) .then for any , the estimator ( [ eq2.4 ] ) is equivalent to the substitution estimator ( [ eq2.14 ] ) , that is , in this paper , we will use a data - driven to construct the poet ( see section 2.4 below ) , which has two equivalent representations according to theorem [ thm2.1 ] . determining the number of factors in a data - driven way has been an important research topic in the econometric literature .bai and ng ( 2002 ) proposed a consistent estimator as both and diverge .other recent criteria are proposed by kapetanios ( 2010 ) , onatski ( 2010 ) , alessi et al .( 2010 ) , etc .our method also allows a data - driven to estimate the covariance matrices . in principle , any procedure that gives a consistent estimate of can be adopted . in this paperwe apply the well - known method in bai and ng ( 2002 ) .it estimates by where is a prescribed upper bound , is a matrix whose columns are times the eigenvectors corresponding to the largest eigenvalues of the matrix ; is a penalty function of such that and two examples suggested by bai and ng ( 2002 ) are throughout the paper , we let be the solution to ( [ eq2.16add ] ) using either ic1 or ic2 .the asymptotic results are not affected regardless of the specific choice of .we define the poet estimator with unknown as the procedure is as stated in section [ s2.2 ] except that is now data - driven .this section presents the assumptions on the model ( [ eq1.2 ] ) , in which only are observable .recall the identifiability condition ( [ eq2.7 ] ) .the first assumption has been one of the most essential ones in the literature of approximate factor models . under this assumption and other regularity conditions ,the number of factors , loadings and common factors can be consistently estimated ( e.g. , stock and watson ( 1998 , 2002 ) , bai and ng ( 2002 ) , bai ( 2003 ) , etc . ) .[ a35 ] all the eigenvalues of the matrix are bounded away from both zero and infinity as . 1 .it implies from proposition 2.1 in section 2 that the first eigenvalues of grow at rate .this unique feature distinguishes our work from most of other low - rank plus sparse covariances considered in the literature , e.g. 
, luo ( 2011 ) , pati et al .( 2012 ) , agarwal et al .( 2012 ) , birnbaum et al .( 2012 ) . )are fan et al .( 2008 , 2011 ) and bai and shi ( 2011 ) . while fan et al .( 2008 , 2011 ) assumed the factors are observable , bai and shi ( 2011 ) considered the strict factor model in which is diagonal . ]assumption 3.1 requires the factors to be pervasive , that is , to impact a non - vanishing proportion of individual time series .see example 2.1 for its meaning .is sparse the intuition of a sparse loading matrix is that each factor is related to only a relatively small number of stocks , assets , genes , etc . with being sparse, all the eigenvalues of and hence those of are bounded . ]3 . as to be illustrated in section 3.3 below , due to the fast diverging eigenvalues, one can hardly achieve a good rate of convergence for estimating under either the spectral norm or frobenius norm when .this phenomenon arises naturally from the characteristics of the high - dimensional factor model , which is another distinguished feature compared to those convergence results in the existing literature .[ a21 ] ( i ) is strictly stationary .in addition , for all and + ( ii ) there exist constants such that , and + ( iii ) there exist and , such that for any , and , condition ( i ) requires strict stationarity as well as the non - correlation between and .these conditions are slightly stronger than those in the literature , e.g. , bai ( 2003 ) , but are still standard and simplify our technicalities .condition ( ii ) requires that be well - conditioned .the condition instead of a weaker condition is imposed here in order to consistently estimate .but it is still standard in the approximate factor model literature as in bai and ng ( 2002 ) , bai ( 2003 ) , etc .when is known , such a condition can be removed .our working paper shows that the results continue to hold for a growing ( known ) under the weaker condition . condition ( iii ) requires exponential - type tails , which allows us to apply the large deviation theory to and .we impose the strong mixing condition .let and denote the -algebras generated by and respectively .in addition , define the mixing coefficient [ a32 ] strong mixing : there exists such that , and satisfying : for all , in addition , we impose the following regularity conditions .[ a33 ] there exists such that for all , and , + ( i ) , + ( ii ) ^ 4<m \lambda \lambda \lambda \lambda \lambda \lambda \lambda \lambda \lambda \lambda \lambda \lambda \lambda \lambda ] , then note that theorem [ thm31 ] implies .lemma [ lc.14 ] then implies this shows that .similarly .in addition , since , similarly . finally , let ^{-1}.$ ] by lemma [ lc.14 ] , .then by lemma [ lc.13 ] , consequently , adding up - gives one the other hand , using sherman - morrison - woodbury formula again implies ^{-1}-[{\mathrm{\bf i}}_k+{{\mathrm{\bf b}}}'{\mathbf{\sigma}}_u^{-1}{{\mathrm{\bf b}}}]^{-1}){{\mathrm{\bf b}}}'{\mathbf{\sigma}}_u^{-1}\|\cr & \leq & o(p)\|[({\mathrm{\bf h}}'{\mathrm{\bf h}})^{-1}+{{\mathrm{\bf b}}}'{\mathbf{\sigma}}_u^{-1}{{\mathrm{\bf b}}}]^{-1}-[{\mathrm{\bf i}}_k+{{\mathrm{\bf b}}}'{\mathbf{\sigma}}_u^{-1}{{\mathrm{\bf b}}}]^{-1}\|\cr & = & o_p(p^{-1})\|({\mathrm{\bf h}}'{\mathrm{\bf h}})^{-1}-{\mathrm{\bf i}}_k\|=o_p(\omega_t^{1-q}m_p).\end{aligned}\ ] ] we first bound . 
repeatedly using the triangle inequality yields a bound of the form \( (\max_i \|\widehat{\mathbf b}_i - \mathbf H \mathbf b_i\|)^2 + 2 \max_{ij} \|\widehat{\mathbf b}_i - \mathbf H \mathbf b_i\| \, \|\mathbf H \mathbf b_j\| + \max_i \|\mathbf b_i\|^2 \, \|\mathbf H'\mathbf H - \mathbf I_K\| = O_P(\omega_T) \) . on the other hand , considering the corresponding entries , the result then follows immediately .
carvalho , c. , chang , j. , lucas , j. , nevins , j. , wang , q. and west , m. ( 2008 ) . high - dimensional sparse factor modeling : applications in gene expression genomics . _ j. amer . statist . assoc . _ * 103 * , 1438 - 1456 .
johnstone , i. m. ( 2001 ) . on the distribution of the largest eigenvalue in principal components analysis . _ ann . statist . _ * 29 * , 295 - 327 .
johnstone , i. m. and lu , a. y. ( 2009 ) . on consistency and sparsity for principal components analysis in high dimensions . _ j. amer . statist . assoc . _ * 104 * , 682 - 693 .
witten , d. m. , tibshirani , r. and hastie , t. ( 2009 ) . a penalized matrix decomposition , with applications to sparse principal components and canonical correlation analysis . _ biostatistics _ * 10 * , 515 - 534 .
wright , j. , peng , y. , ma , y. , ganesh , a. and rao , s. ( 2009 ) . robust principal component analysis : exact recovery of corrupted low - rank matrices by convex optimization . _ manuscript _ , microsoft research asia .
this paper deals with the estimation of a high - dimensional covariance with a conditional sparsity structure and fast - diverging eigenvalues . by assuming sparse error covariance matrix in an approximate factor model , we allow for the presence of some cross - sectional correlation even after taking out common but unobservable factors . we introduce the principal orthogonal complement thresholding ( poet ) method to explore such an approximate factor structure with sparsity . the poet estimator includes the sample covariance matrix , the factor - based covariance matrix ( fan , fan , and lv , 2008 ) , the thresholding estimator ( bickel and levina , 2008 ) and the adaptive thresholding estimator ( cai and liu , 2011 ) as specific examples . we provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high - dimensional data . the rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms . it is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases . the uniform rates of convergence for the unobserved factors and their factor loadings are derived . the asymptotic results are also verified by extensive simulation studies . finally , a real data application on portfolio allocation is presented . * keywords : * high - dimensionality , approximate factor model , unknown factors , principal components , sparse matrix , low - rank matrix , thresholding , cross - sectional correlation , diverging eigenvalues .
dynamic positron emission tomography ( pet ) can be used to measure tracer kinetics in vivo , from which physiological parameters , such as tissue perfusion , ligand receptor binding potential , and metabolic rate can be determined using compartmental modelling techniques .estimation of the kinetic parametric images can be extremely challenging , since the data are often very noisy .most conventional methods either define a region of interest ( roi ) and estimate parameters based on the averages , or in a voxelwise fashion . the former requires the identification of roi which itself is difficult .the latter fails to utilize information from nearby voxels , resulting in more noisy estimates .estimations are typically carried out using a minimum least - squares approach or a basis function approach . given low signal - to - noise ratio ( snr ) , particularly in voxelwise estimations ,some external constraints are often necessary to stabilize parameter estimation .smoothness regularization can be used , constraining the parameters from nearby spatial locations to be more similar .similarly , tikhonov regularization can be used to directly constrain parameter values to be within a certain range , so that estimates obtained are less sensitive to noise . to account for irregularities in the noise distribution, mixture models can be fitted to each voxel . in this case , it is necessary to restrict the total number of mixture components to be small and employ regularization to constrain parameter estimates . assuming a gaussian error distribution , bayesian methods provide an alternative way of obtaining uncertainty estimates for the kinetic parameters , as well as model choice for the competing compartmental models . however , these methods yield higher voxel to voxel variability because each voxel was processed independently , and the assumption of gaussian distribution can also be inappropriate , leading to biased parameter estimates . 
this has led to the development of several approaches in which a clustering method is performed first to group the pet images into several homogeneous regions , and kinetic parameter estimation is then performed based on the averaged values of each cluster . see , for example , hierarchical clustering of the time activity curves ( tacs ) using a weighted dissimilarity measure , and a comparison of a number of different hierarchical clustering algorithms in this context . recently , simultaneous clustering and parameter estimation methods have been proposed using a spatially regularized k - means algorithm . the algorithm iteratively estimates the kinetic parameters in a least - squares sense between cluster updates . it was demonstrated that incorporating the physiological model in the clustering procedure performed better than its counterparts in terms of clustering . however , the method offers no guidance on the choice of the number of clusters or on how to select the spatial regularization parameter , both of which can have great influence on the results . a similar algorithm was proposed where the clustering and parameter estimation were performed simultaneously , although spatial correlation was ignored ; improvements to parameter estimation in myocardial perfusion pet imaging were demonstrated . we develop a fully bayesian approach based on defining a finite mixture of multivariate gaussian distributions to model each voxelwise tac . we consider that there are a number of distinct homogeneous groups of voxel kinetics which tend to be spatially contiguous . the optimal number of mixture components ( or groups ) is estimated via information theoretic criteria . this provides a flexible specification of the error distribution for the tac . additionally , we model the spatial dependence between the tacs via a markov random field ( mrf ) , which allows us to borrow information across nearby voxels . our model simultaneously handles both spatial and temporal information , making full use of the data available , and this is done together with the estimation of the kinetic parameters in a single step . we apply our approach to simulated one - compartment pet perfusion data and compare its performance with both the standard voxelwise curve - fitting approach and the spatial temporal approach , using the true kinetic parameters as the gold standard . we also apply our method to data from an in vivo pig study . all the simulation studies were performed using an ncat torso phantom which consists of heart , lungs , liver , and soft - tissue compartments . the left ventricle ( lv ) myocardium was segmented into 17 standard segments . the simulation was based on flurpiridaz , which is a new myocardial perfusion tracer that exhibits rapid uptake and longer washout in cardiomyocytes . based on the one - tissue compartmental model , the tac of the tissue concentration was simulated as the convolution of the blood input function with an exponential kernel governed by the kinetic rate constants of each segment . the input function used in the simulation was based on a previously published flurpiridaz study . during that study , the lv input function was extracted with generalized factor analysis of dynamic series . this lv input function was treated as the plasma input function . the kinetic parameters assigned to the 17 segments were based on realistic values obtained from pet perfusion studies on normal patients .
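a small simulation in the spirit of the description above can be written as follows ( python ) . the analytic one - tissue form , the input function and the rate constants below are our own illustrative choices ; the defect curve simply lowers the two rate constants to mimic a perfusion defect .

```python
import numpy as np

def simulate_tac(t, cp, K1, k2):
    """one-tissue-compartment TAC: K1 * (exp(-k2 t) convolved with the input cp)."""
    dt = t[1] - t[0]
    kernel = np.exp(-k2 * t)
    return K1 * np.convolve(cp, kernel)[: len(t)] * dt

t = np.arange(0.0, 13.0, 1.0 / 60.0)               # 13 min sampled every second
cp = 10.0 * t * np.exp(-2.0 * t)                   # made-up gamma-like blood input function

tac_normal = simulate_tac(t, cp, K1=0.8, k2=0.20)  # illustrative normal segment
tac_defect = simulate_tac(t, cp, K1=0.4, k2=0.16)  # rate constants lowered to mimic a defect
```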
in order to mimic a myocardial defect , the segment located in the anterior wall was assigned with values by lowering and by 50% and 20% , respectively , of their original values .we added the segment to include other voxels not part of the left ventricle myocardium .table [ segmentname ] shows the kinetic parameters assigned to all the 18 segments in the myocardium .the blood input function and tacs for one normal ( basal inferoseptal ) and one defect ( apex ) segments are shown in figure [ inputsample ] .+ [ hbt ] .segment names and their assigned values in ml / min / cc , values in 1/min ( i.e.,the ground truth ) . [ cols="<,^,^,<,^,^",options="header " , ] a system matrix corresponding to philips gemini pet - ct camera , which includes position dependent point spread function modelling , a forward - projection operator implemented using siddon s method , line of response ( lor ) normalization factors , and attenuation correction factors , was used to create noise - free sinograms from tacs . the simulated sinogram data is equivalent to a 13-min dynamic pet scan with the framing scheme of 6 5s , 3 30s , 5 60s , and 3 120s frames .twenty five dynamic pet noise realizations were generated .both random and scatter events were not included in this study .the total number of events simulated in all the time frames is 50 m. the decay of the tracer was not simulated .poisson noise was then added to each pixel in the sinogram based on the mean counts for the pixel . for each noise realization, the image reconstruction at each time frame was performed using standard ordered subset expectation maximization (osem ) with 16 subsets and 8 iterations .no postreconstruction smoothing was applied .the physical dimension in the image reconstruction was 57.6 cm 57.6 cm 16.2 cm , matrix dimension was 128 128 36 , where the voxel size was 0.45 cm 0.45 cm 0.45 cm .a pig with a body weight of 40 kg was scanned on a siemens biograph truepoint pet / ct with the radiotracer - flurpiridaz .first , a planar x - ray topogram was performed to allow delineation of the field of view ( fov ) and centering on the heart following ct and pet acquisitions .the cardiac ct was used for structure localization and later for attenuation correction during reconstruction of pet images .emission pet data were acquired in 3d list mode and started concomitantly to the injection of - flurpiridaz , the injected activity was 11 mci at the time of injection .list mode data were framed into dynamic series of 12 x 5 , 8 x 15 , 4 x 30 , 5 x 60s .pet images were reconstructed using filtered back projection with minimal filtering ( voxel size : 2.14x2.14x3 mm3 , 55 slices ) .attenuation correction was obtained from the ct images .decay correction was applied and the first 10 min of the data are used for kinetic analysis .the input functions for the left and right ventricle were obtained by averaging the tacs from a manually defined region . 
a one - compartment model with spill - over correction was used . the described experiment was performed under a protocol approved by the institutional animal care and use committee at the massachusetts general hospital . kinetic analysis is performed by curve - fitting the tac in each voxel using nonlinear least - squares fitting , where the fitted quantity is the reconstructed activity concentration for each voxel at each time frame divided by the frame duration .
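the standard voxelwise fit referred to above can be sketched with scipy ; spill - over terms and weighting are omitted here , and the initial guesses and bounds are placeholders .

```python
import numpy as np
from scipy.optimize import curve_fit

def one_tissue(t, K1, k2, cp):
    # one-tissue model value on a uniform time grid
    dt = t[1] - t[0]
    return K1 * np.convolve(cp, np.exp(-k2 * t))[: len(t)] * dt

def fit_voxel(t, cp, measured):
    """nonlinear least-squares fit of (K1, k2) for a single voxel TAC."""
    model = lambda tt, K1, k2: one_tissue(tt, K1, k2, cp)
    (K1_hat, k2_hat), _ = curve_fit(model, t, measured, p0=[0.5, 0.1],
                                    bounds=([0.0, 0.0], [5.0, 5.0]))
    return K1_hat, k2_hat
```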
* purpose : * estimation of parametric maps is challenging for kinetic models in dynamic positron emission tomography . since voxel kinetics tend to be spatially contiguous , the authors consider groups of homogeneous voxels together . the authors propose a novel algorithm to identify the groups and estimate kinetic parameters simultaneously . uncertainty estimates for kinetic parameters are also obtained . + * methods : * mixture models were used to fit the time activity curves . in order to borrow information from spatially nearby voxels , the potts model was adopted . a spatial temporal model was built incorporating both spatial and temporal information in the data . markov chain monte carlo was used to carry out parameter estimation . evaluation and comparisons with existing methods were carried out on cardiac studies using both simulated data sets and a pig study data . one - compartment kinetic modelling was used , in which is the parameter of interest , providing a measure of local perfusion . + * results : * based on simulation experiments , the median standard deviation across all image voxels , of estimates were 0 , 0.13 , and 0.16 for the proposed spatial mixture models ( smms ) , standard curve fitting and spatial -means methods respectively . the corresponding median mean squared biases for were 0.04 , 0.06 and 0.06 for abnormal region of interest(roi ) ; 0.03 , 0.03 and 0.04 for normal roi ; and 0.007 , 0.02 and 0.05 for the noise region . + * conclusions : * smm is a fully bayesian algorithm which determines the optimal number of homogeneous voxel groups , voxel group membership , parameter estimation and parameter uncertainty estimation simultaneously . the voxel membership can also be used for classification purposes . by borrowing information from spatially nearby voxels , smm substantially reduces the variability of parameter estimates . in some rois , smm also reduces mean squared bias . * keywords * pet image , kinetic model , myocardium , spatial mixture model , mcmc .
in this section we introduce the mathematical model describing incompressible isothermal flow in a porous medium without reaction . the considered equations for the velocity and pressure fields are for flows in fluid saturated porous media . most research results for flows in porous media are based on the darcy equation , which is considered to be a suitable model at a small range of reynolds numbers . however , there are restrictions of the darcy equation for modeling some porous medium flows , e.g. in a closely packed medium , saturated fluid flows at slow velocity but with relatively large reynolds numbers . the flows in such a closely packed medium behave nonlinearly and can not be modelled accurately by the darcy equation , which is linear . the deficiency can be circumvented with the brinkman - forchheimer - extended darcy law for flows in closely packed media , which leads to the following model : let , , represent the reactor channel . we denote its boundary by . the conservation of volume - averaged values of momentum and mass in the packed reactor reads as follows where , denote the unknown velocity and pressure , respectively . the positive quantity stands for the porosity , which describes the proportion of the non - solid volume to the total volume of material and varies spatially in general . the expression represents the friction forces caused by the packing and will be specified later on . the right - hand side represents an outer force ( e.g. gravitation ) , the constant fluid density and the constant kinematic viscosity of the fluid , respectively . the expression symbolizes the dyadic product of with itself . the formula given by ergun will be used to model the influence of the packing on the flow inertia effects , whereby stands for the diameter of the pellets and denotes the euclidean vector norm . the linear term in ( [ s2eq2 ] ) accounts for the head loss according to darcy and the quadratic term according to the forchheimer law , respectively . for the derivation of the equations , modelling and homogenization questions in porous media we refer to e.g. . to close the system ( [ s2eq1 ] ) we prescribe a dirichlet boundary condition , whereby has to be fulfilled on each connected component of the boundary . we remark that in the case of a polygonally bounded domain the outer normal vector has jumps and thus the above integral should be replaced by a sum of integrals over each side of . the distribution of porosity is assumed to satisfy the following bounds with some constants . a comprehensive account of fluid flows through porous media beyond the valid regimes of the darcy law , classified by the reynolds number , can be found in , e.g. , . also , see for simulating pumped water levels in abstraction boreholes using such a nonlinear darcy - forchheimer law , and , , and for recent references on this model .
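since the ergun formula itself is not reproduced in the extracted text , the following sketch evaluates a darcy - forchheimer friction term of ergun type . the constants 150 and 1.75 and the porosity scalings are the classical ergun values and should be read as assumptions rather than the exact expression used here .

```python
import numpy as np

def ergun_friction(u, eps, d_p, rho=1000.0, nu=1.0e-6):
    """Darcy-Forchheimer friction force per unit volume, Ergun-type correlation.

    The constants 150 and 1.75 and the porosity scalings are the classical
    Ergun values; they are assumptions, since the exact formula used in the
    paper cannot be recovered from the extracted text.
    """
    mu = rho * nu                                                    # dynamic viscosity
    alpha = 150.0 * mu * (1.0 - eps) ** 2 / (eps ** 3 * d_p ** 2)    # Darcy (linear) part
    beta = 1.75 * rho * (1.0 - eps) / (eps ** 3 * d_p)               # Forchheimer (quadratic) part
    return -(alpha * u + beta * np.linalg.norm(u) * u)

# example: friction force on a slow flow through a bed of 1 mm pellets, porosity 0.4
print(ergun_friction(np.array([0.01, 0.0]), eps=0.4, d_p=1.0e-3))
```

the linear ( darcy ) part dominates at small velocities , while the quadratic ( forchheimer ) part takes over at larger reynolds numbers , which is precisely the regime the model above is designed for .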
in the next section we use the porosity distribution which is estimated for packed beds consisting of spherical particles andtakes the near wall channelling effect into account .this kind of porosity distribution obeys assumption .let us introduce dimensionless quantities whereby denotes the magnitude of some reference velocity .for simplicity of notation we omit the asterisks .then , the reactor flow problem reads in dimensionless form as follows where with and the reynolds number is defined by the existence and uniqueness of solution of the nonlinear model ( [ s2eq4 ] ) with constant porosity and without the convective term has been established in .we will extend this result to the case when the porosity depends on the location and with the convective term in this work .[ remarknse_1 ] becomes a navier - stokes problem if .* notation * throughout the work we use the following notations for function spaces . for , and bounded subdomain let be the usual sobolev space equipped with norm .if , we denote the sobolev space by and use the standard abbreviations and for the norm and seminorm , respectively .we denote by the space of functions with compact support contained in .furthermore , stands for the closure of with respect to the norm .the counterparts spaces consisting of vector valued functions will be denoted by bold faced symbols like ^n ] .the inner product over and will be denoted by and , respectively . in the case the domain index will be omitted . in the followingwe denote by the generic constant which is usually independent of the model parameters , otherwise dependences will be indicated .in the following the porosity is assumed to belong to .we start with the weak formulation of problem and look for its solution in suitable sobolev spaces .let be the space consisting of functions with zero mean value .we define the spaces and let us introduce the following bilinear forms & b:\,\boldsymbol{x}\times q&\to{{\mathbb r}}\,,&&\qquad b(\boldsymbol u , q)&=\bigl(\textrm{div}(\varepsilon\boldsymbol{u}),q\bigr)\,,\\[1.5ex ] & c:\,\boldsymbol{x}\times \boldsymbol{x}&\to{{\mathbb r}}\,,&&\qquad c(\boldsymbol u,\boldsymbol v)&=\frac{1}{re}\bigl(\alpha\boldsymbol{u},\boldsymbol{v}\bigr)\,.\\ \end{alignedat}\ ] ] furthermore , we define the semilinear form and trilinear form we set multiplying momentum and mass balances in ( [ s2eq4 ] ) by test functions and , respectively , and integrating by parts implies the weak formulation : + find with that first , we recall the following result from : [ s3thm1 ] the mapping is an isomorphism from onto itself and from onto itself .it holds for all in the following the closed subspace of defined by will be employed .next , we establish and prove some properties of trilinear form and nonlinear form .[ s3lem2 ] let and with and .then we have furthermore , the trilinear form and the nonlinear form are continuous , i.e. and for a sequence with , we have also we follow the proof of ( * ? ? ?* lemma 2.1 , 2 , chapter iv ) and adapt it to the trilinear form which has the weighting factor . hereby , symbols with subscripts denote components of bold faced vectors , e.g. .let , and . integrating by parts and employing density argument , we obtain immediately from sobolev embedding ( see ) and hlder inequality follows and consequently the proof of is completed . 
since and , the continuity estimate implies the continuity of follows from hlder inequality and sobolev embedding ( see ) in the next stage we consider the difficulties caused by prescribing the inhomogeneous dirichlet boundary condition .analogous difficulties are already encountered in the analysis of navier stokes problem .we will carry out the study of three dimensional case .the extension in two dimensions can be constructed analogously . since , we can extend inside of in the form of with some .the operator is defined then as we note that in the two dimensional case the vector potential can be replaced by a scalar function and the operator is then redefined as . our aim is to adapt the extension of hopf ( see ) to our model .we recall that for any parameter there exists a scalar function such that & \hspace{-0.5cm}\bullet\quad\varphi_\mu(\boldsymbol{x})=0~\text{if \ , , where }\\[-0.5ex ] & \hspace{0.5cm}\text{denotes the distance of to }\,,\\[2ex ] & \hspace{-0.5cm}\bullet\quad \text{if~~\ , , } \end{split}\;\right\}\tag{\text{ex}}\ ] ] for the construction of see also ( * ? ? ?* lemma 2.4 , 2 , chapter iv ) .+ let us define in the following lemma we establish bounds which are crucial for proving existence of velocity .[ s3lem3 ] the function satisfies the following conditions and for any there exists sufficiently small such that the relations in are obvious .we follow in order to show .since sobolev s embedding theorem implies , so we get according to the properties of in the following bound defining we obtain from cauchy - schwarz and triangle inequalities and consequently applying hardy inequality ( see ) and using sobolev embedding , estimate ( [ s3eq13 ] ) becomes where from , , poincar inequality and from the fact that we conclude that for any we can choose sufficiently small such that holds .therefore the proof of estimate is completed .now , we take a look at the trilinear convective term the first term of above difference becomes small due to ( * ? ? ?* lemma 2.3 , 2 , chapter iv ) , and it satisfies as long as is chosen sufficiently small . using hlder inequality , sobolev embedding yields which together with implies for sufficiently small the bound from and follows the desired estimate . while the general framework for linear and non - symmetric saddle point problems can be found in ,our problem requires more attention due to its nonlinear character . setting ,the weak formulation is equivalent to the following problem + find such that let us define the nonlinear mapping with :=&a(\boldsymbol w+\boldsymbol g_\mu,\boldsymbol v)+c(\boldsymbol w+\boldsymbol g_\mu,\boldsymbol v)-(\boldsymbol f,\boldsymbol v)\\ & \;+n(\boldsymbol w+\boldsymbol g_\mu,\boldsymbol w+\boldsymbol g_\mu,\boldsymbol v)+d(\boldsymbol w+\boldsymbol g_\mu;\boldsymbol w+\boldsymbol g_\mu,\boldsymbol v)\ , , \end{split}\ ] ] whereby ] .then , the variational problem ( [ s3eq17 ] ) reads in the space of -weighted divergence free functions as follows + find such that =0\quad\forall\;\boldsymbol v\in\boldsymbol{w}.\ ] ] we start our study of the nonlinear operator problem with the following lemma . 
[ s3lem4 ] the mapping defined in ( [ s3eq18 ] ) is continuous and there exists such that > 0\quad\forall\;\boldsymbol u\in\boldsymbol{w}\quad\textrm{with}\quad |\boldsymbol u|_1=r.\ ] ] let be a sequence in with .then , applying cauchy schwarz inequality and , we obtain for any \right|\le \frac{1}{re}\left|\bigl(\varepsilon\nabla(\boldsymbol u^k-\boldsymbol u),\nabla\boldsymbol v\bigr)\right| + \frac{1}{re}\left|\bigl(\alpha(\boldsymbol u^k-\boldsymbol u),\boldsymbol v\bigr)\right|\\ & \quad + \left|\bigl(\beta|\boldsymbol u^k+\boldsymbol g_\mu|(\boldsymbol u^k-\boldsymbol u),\boldsymbol v\bigr)\right|+\left|\bigl(\beta ( |\boldsymbol u^k+\boldsymbol g_\mu|-|\boldsymbol u+\boldsymbol g_\mu|)(\boldsymbol u+\boldsymbol g_\mu),\boldsymbol v\bigr)\right|\\ & \quad+\left|n(\boldsymbol u^k,\boldsymbol u^k , \boldsymbol v)-n(\boldsymbol u,\boldsymbol u,\boldsymbol v)\right| + \left|n(\boldsymbol u^k-\boldsymbol u,\boldsymbol g_\mu,\boldsymbol v)\right|+\left|n(\boldsymbol g_\mu , \boldsymbol u^k-\boldsymbol u,,\boldsymbol v)\right|\\ & \le\frac{\varepsilon_1}{re}|\boldsymbol u^k-\boldsymbol u|_1|\boldsymbol v|_1 + \frac{1}{re}\|\alpha\|_{0,\infty}\|\boldsymbol u^k-\boldsymbol u\|_0\|\boldsymbol v\|_0\\ & \quad+\|\beta\|_{0,\infty}\|\boldsymbol u^k+\boldsymbol g_\mu\|_{0,4}\|\boldsymbol u^k-\boldsymbol u\|_0\|\boldsymbol v\|_{0,4 } + \|\beta\|_{0,\infty}\|\boldsymbol u+\boldsymbol g_\mu\|_{0,4}\|\boldsymbol u^k-\boldsymbol u\|_0\|\boldsymbol v\|_{0,4}\\ & \quad+\left|n(\boldsymbol u^k,\boldsymbol u^k , \boldsymbol v)-n(\boldsymbol u,\boldsymbol u,\boldsymbol v)\right| + c\|\boldsymbol u^k-\boldsymbol u\|_1\|\boldsymbol g_\mu\|_1\|\boldsymbol v\|_1\ , .\end{split}\ ] ] the boundedness of in , , the poincar inequality , and the above inequality imply that \right|\to 0\quad\text{as}\quad k\to\infty\qquad\forall\ , \boldsymbol{v}\in\boldsymbol{w}\,.\ ] ] thus , employing }{|\boldsymbol v|_1}\,,\ ] ] we state that is continuous .now , we note that for any we have = \frac{1}{re}\bigl(\varepsilon\nabla(\boldsymbol u+\boldsymbol g_\mu),\nabla\boldsymbolu\bigr)+\frac{1}{re}\bigl(\alpha(\boldsymbol u+\boldsymbol g_\mu),\boldsymbol u\bigr)\\ & \quad + \bigl(\beta|\boldsymbol u+\boldsymbol g_\mu|(\boldsymbol u+\boldsymbol g_\mu),\boldsymbol u\bigr ) + n(\boldsymbol u+\boldsymbol g_\mu,\boldsymbol u+\boldsymbol g_\mu,\boldsymbol u)-(\boldsymbol f,\boldsymbol u)\\ & \ge\frac{\varepsilon_0}{re}|\boldsymbol u|_1 ^ 2-\frac{\varepsilon_1}{re}|(\nabla\boldsymbol g_\mu,\nabla\boldsymbol u)| + \frac{1}{re}(\alpha\boldsymbol u,\boldsymbol u)-\frac{1}{re}|(\alpha\boldsymbol g_\mu,\boldsymbol u)|\\ & \quad + ( \beta|\boldsymbol u+\boldsymbol g_\mu|,|\boldsymbol u|^2)-\left|(\beta |\boldsymbol u+\boldsymbol g_\mu|\boldsymbol g_\mu,\boldsymbol u)\right|\\ & \quad + n(\boldsymbol u,\boldsymbol g_\mu,\boldsymbol u)+n(\boldsymbol g_\mu,\boldsymbol g_\mu,\boldsymbol u)-\|\boldsymbol f\|_0\|\boldsymbol u\|_0\\ & \ge\frac{\varepsilon_0}{re}|\boldsymbol u|_1 ^ 2-\frac{\varepsilon_1}{re}|\boldsymbol g_\mu|_1|\boldsymbol u|_1\\ & \quad -\frac{1}{re}\|\alpha\|_{0,\infty}\|\boldsymbol g_\mu\|_0\|\boldsymbol u\|_0 -\left|(\beta |\boldsymbol u+\boldsymbol g_\mu|\boldsymbol g_\mu,\boldsymbol u)\right|\\ & \quad -\left|n(\boldsymbol u,\boldsymbol g_\mu,\boldsymbol u)\right| -c\|\boldsymbol g_\mu\|_1 ^ 2\|\boldsymbol u\|_1 -\|\boldsymbol f\|_0\|\boldsymbol u\|_0\ , .\end{split}\ ] ] from the poincar inequality , we infer the estimate which together with , and results in 
\ge\left\{\frac{\varepsilon_0}{re}-\delta(1+\|\beta\|_{0,\infty})\right\}|\boldsymbol u|_1 ^ 2\\ & \quad -\bigl\{\frac{\varepsilon_1}{re}|\boldsymbol g_\mu|_1+c_1\frac{1}{re}\|\alpha\|_{0,\infty}\|\boldsymbol g_\mu\|_0+ \delta \|\beta\|_{0,\infty}\|\boldsymbol g_\mu\|_0 + c_2\|\boldsymbol g_\mu\|_1 ^ 2+c_3\|\boldsymbol f\|_0\bigr\}|\boldsymbol u|_1 . \end{split}\ ] ] choosing such that and with leads to the desired assertion ( [ s3eq20 ] ) .the following lemma plays a key role in the existence proof .[ s3lem5 ] let be finite - dimensional hilbert space with inner product $ ] inducing a norm , and be a continuous mapping such that >0\quad\textrm{for}\quad\|x\|=r_0>0.\ ] ] then there exists , with , such that see .now we are able to prove the main result concerning existence of velocity .[ s3thm6 ] the problem ( [ s3eq19 ] ) has at least one solution .we construct the approximate sequence of galerkin solutions .since the space is separable , there exists a sequence of linearly independent elements .let be the finite dimensional subspace of with and endowed with the scalar product of .let , be a galerkin solution of ( [ s3eq19 ] ) defined by =0,\quad\forall\ ; j=1,\ldots , m\,.\end{aligned}\ ] ] from lemma [ s3lem4 ] and lemma [ s3lem5 ] we conclude that =0\quad\forall\;\boldsymbol w\in\boldsymbol x_m\ ] ] has a solution .the unknown coefficients can be obtained from the algebraic system ( [ s3eq23 ] ) . on the other hand , multiplying ( [ s3eq23 ] ) by , and adding the equations for we have \\ & \ge \left\{\frac{1}{re}-\delta(1+\|\beta\|_{0,\infty})\right\}|\boldsymbol u^m|_1 ^ 2\\ & \quad -\bigl\{\frac{1}{re}|\boldsymbol g_\mu|_1+c_1\frac{1}{re}\|\alpha\|_{0,\infty}\|\boldsymbol g_\mu\|_0+ \delta \|\beta\|_{0,\infty}\|\boldsymbol g_\mu\|_0 + c_2\|\boldsymbol g_\mu\|_1 ^ 2+c_3\|\boldsymbol f\|_0\bigr\}|\boldsymbol u^m|_1 . \end{split}\ ] ] this gives together with ( [ s3eq22 ] ) the uniform boundedness in therefore there exists and a subsequence ( we write for the convenience instead of ) such that furthermore , the compactness of embedding implies taking the limit in ( [ s3eq24 ] ) with we get =0\quad\forall\;\boldsymbol w\in\boldsymbol x_m.\ ] ] finally , we apply the continuity argument and state that ( [ s3eq25 ] ) is preserved for any , therefore is the solution of ( [ s3eq19 ] ) .for the reconstruction of the pressure we need inf - sup - theorem [ s3thm7 ] assume that the bilinear form satisfies the inf - sup condition then , for each solution of the nonlinear problem ( [ s3eq19 ] ) there exists a unique pressure such that the pair is a solution of the homogeneous problem ( [ s3eq17 ] ) .see ( * ? ? ?* theorem 1.4 , 1 , chapter iv ) .we end up this subsection by proving the existence of the pressure .[ s3thm8 ] let be solution of problem .then , there exists unique pressure .we verify the inf - sup condition ( [ s3eq26 ] ) of theorem [ s3thm7 ] by employing the isomorphism of theorem [ s3thm1 ] . from (* corollary 2.4 , 2 , chapter i ) follows that for any in there exists in such that with a positive constant . setting and applying the isomorphism in theorem [ s3thm1 ], we obtain the estimate where . from the above estimate we conclude the inf - sup condition .we exploit a priori estimates in order to prove uniqueness of weak velocity and pressure .[ s3thm9 ] if , are sufficiently small , then the solution of ( [ s3eq19 ] ) is unique .assume that and are two different solutions of ( [ s3eq17 ] ) . 
from in lemma [ s3lem2 ]we obtain .then , we obtain \\ & = a(\boldsymbol u_1-\boldsymbol u_2,\boldsymbol u_1-\boldsymbol u_2)+c(\boldsymbol u_1-\boldsymbol u_2,\boldsymbol u_1-\boldsymbol u_2)-(\boldsymbol f,\boldsymbol u_1-\boldsymbol u_2)\\ & \quad + n(\boldsymbol u_1+\boldsymbol g_\mu,\boldsymbol u_1+\boldsymbol g_\mu,\boldsymbol u_1-\boldsymbol u_2)-n(\boldsymbol u_2+\boldsymbol g_\mu,\boldsymbol u_2+\boldsymbol g_\mu,\boldsymbol u_1-\boldsymbol u_2)\\ & \quad + ( \beta |\boldsymbol u_1+\boldsymbol g_\mu|(\boldsymbol u_1+\boldsymbol g_\mu),\boldsymbol u_1-\boldsymbol u_2)\\ & \quad-\bigl(\beta |\boldsymbol u_2+\boldsymbol g_\mu|(\boldsymbol u_2+\boldsymbol g_\mu),\boldsymbol u_1-\boldsymbol u_2)\\ & \ge \frac{\varepsilon_0}{re}|\boldsymbol u_1-\boldsymbol u_2|_1 ^ 2-\|\boldsymbol f\|_{-1}\|\boldsymbol u_1-\boldsymbol u_2\|_1\\ & \quad + n(\boldsymbol u_1-\boldsymbol u_2,\boldsymbol u_2+\boldsymbol g_\mu,\boldsymbol u_1-\boldsymbol u_2)\\ & \quad + \bigl(\beta|\boldsymbol u_1+\boldsymbol g_\mu|(\boldsymbol u_1-\boldsymbol u_2),\boldsymbol u_1-\boldsymbol u_2\bigr)\\ & \quad + \bigl(\beta(|\boldsymbol u_1+\boldsymbol g_\mu|-|\boldsymbol u_2+\boldsymbol g_\mu|)(\boldsymbol{u}_2+\boldsymbol{g}_\mu),\boldsymbol u_1-\boldsymbol u_2\bigr)\\ & \ge \frac{\varepsilon_0}{re}|\boldsymbol u_1-\boldsymbol u_2|_1 ^2-\|\boldsymbol f\|_{-1}\|\boldsymbol u_1-\boldsymbol u_2\|_1\\ & \quad -\left|n(\boldsymbol u_1-\boldsymbol u_2,\boldsymbol u_2,\boldsymbol u_1-\boldsymbol u_2)\right| -\left|n(\boldsymbol u_1-\boldsymbol u_2,\boldsymbol g_\mu,\boldsymbol u_1-\boldsymbol u_2)\right|\\ & \quad -\|\beta\|_{0,\infty}\left|\bigl(|\boldsymbol u_1+\boldsymbol g_\mu|\cdot |\boldsymbol u_1-\boldsymbol u_2|,|\boldsymbol u_1-\boldsymbol u_2|\bigr)\right|\\ & \quad -\|\beta\|_{0,\infty}\left|\bigl(\bigl||\boldsymbol u_1+\boldsymbol g_\mu|-|\boldsymbol u_2+\boldsymbol g_\mu|\bigr|\cdot |\boldsymbol{u}_2+\boldsymbol{g}_\mu|,|\boldsymbol u_1-\boldsymbol u_2|\bigr)\right|\ , .\end{split}\ ] ] from cauchy - schwarz inequality and sobolev embedding we deduce and according to ( [ s3eq7 ] ) we have ) we can find such that and .testing the equation ( [ s3eq17 ] ) with results in from sobolev embedding we deduce for sufficiently small putting ( [ s3eq28])-([s3eq32 ] ) into ( [ s3eq27 ] ) and using the inequality we obtain for sufficiently small , the constant in gets small and consequently the right hand side of ( [ s3eq33 ] ) is nonnegative .this implies and according to theorem [ s3thm8 ] is .in this section , we provide an example of the flow problem in packed bed reactors with numerical solutions at small and relatively large reynolds numbers to show the nonlinear behavior of the velocity solutions .let the reactor channel be represented by where and . ] in all computations we use the porosity distribution which is determined experimentally and takes into account the effect of wall channelling in packed bed reactors where .the distribution of the porosity is presented in figure [ fig_poro ] .we distinguish between the inlet , outlet and membrane parts of domain boundary , and denote them by , and , respectively .let at the inlet and at the membrane wall we prescribe dirichlet boundary conditions , namely the plug flow conditions and whereby , . at the outlet we set the following outflow boundary condition where denotes the outer normal . 
in order to avoid a discontinuity between the inflow and wall conditions we replace the constant profile by a trapezoidal one with zero value at the corners . our computations are carried out on a cartesian mesh using biquadratic conforming and discontinuous piecewise linear finite elements for the approximation of the velocity and pressure , respectively . the finite element analysis of the brinkman - forchheimer - extended darcy equation will be conducted in forthcoming work . the plots of the velocity magnitude in the fixed bed reactor ( ) are presented along the vertical axis . in the investigated reactor the inlet velocity is assumed to be normalized ( ) . due to the variation of porosity we might expect higher velocity at the reactor walls . this channelling effect can be well observed in figure [ fig4 ] , which shows the velocity profiles for different reynolds numbers . we remark that the maximum of the velocity magnitude decreases with increasing reynolds numbers . in this work , we have extended the existence and uniqueness result available in the literature for the porous medium flow problem based on the nonlinear brinkman - forchheimer - extended darcy law . the existing result is valid only for constant porosity and without the considered convection effects , whereas our result holds for variable porosity and includes convective effects . we also provided a numerical solution to demonstrate the nonlinear velocity behaviour at moderately large reynolds numbers , for which the brinkman - forchheimer - extended darcy law applies .
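to give a feeling for the near - wall channelling effect discussed above , the following sketch solves a strongly simplified one - dimensional analogue of the brinkman - forchheimer - extended darcy model across the channel , with a porosity profile that increases towards the walls and a picard ( fixed - point ) iteration for the nonlinear forchheimer term . the porosity profile , the friction coefficients and all constants are illustrative assumptions and do not reproduce the two - dimensional finite element computation of the paper .

```python
import numpy as np

# 1D cross-channel model: -(1/Re) d/dr(eps du/dr) + (alpha/Re) u + beta*|u|*u = G,
# with u(0) = u(R) = 0 and a porosity that rises towards the walls (channelling).
Re, R, n, G = 100.0, 1.0, 201, 1.0
r = np.linspace(0.0, R, n)
h = r[1] - r[0]
eps = 0.4 + 0.5 * (np.exp(-20.0 * r) + np.exp(-20.0 * (R - r)))   # assumed porosity profile
alpha = 150.0 * (1.0 - eps) ** 2 / eps ** 2                        # assumed Darcy coefficient
beta = 1.75 * (1.0 - eps) / eps                                    # assumed Forchheimer coefficient

u = np.zeros(n)
for _ in range(100):                                # Picard iteration on |u|
    main = np.zeros(n)
    lower = np.zeros(n - 1)
    upper = np.zeros(n - 1)
    for i in range(1, n - 1):
        ep = 0.5 * (eps[i] + eps[i + 1])            # face-averaged porosities
        em = 0.5 * (eps[i] + eps[i - 1])
        main[i] = (ep + em) / (Re * h ** 2) + alpha[i] / Re + beta[i] * abs(u[i])
        lower[i - 1] = -em / (Re * h ** 2)
        upper[i] = -ep / (Re * h ** 2)
    main[0] = main[-1] = 1.0                        # homogeneous Dirichlet rows
    A = np.diag(main) + np.diag(lower, -1) + np.diag(upper, 1)
    b = np.full(n, G)
    b[0] = b[-1] = 0.0
    u_new = np.linalg.solve(A, b)
    if np.max(np.abs(u_new - u)) < 1e-10:
        u = u_new
        break
    u = u_new

print("max velocity:", u.max(), "  velocity at first interior node:", u[1])
```

the computed profile exhibits higher velocities near the walls than in the core of the bed , in qualitative agreement with the channelling effect seen in figure [ fig4 ] .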
the nonlinear brinkman - forchheimer - extended darcy equation is used to model porous medium flows in chemical reactors of packed bed type . results concerning the existence and uniqueness of a weak solution are presented for nonlinear convective flows in a medium with nonconstant porosity and for small data . furthermore , finite element approximations to the flow profiles in the fixed bed reactor are presented for several reynolds numbers in the non - darcy range . * 2010 mathematics subject classification ( msc ) : * 76d03 , 35q35 * keywords : * brinkman - forchheimer equation , packed bed reactors , existence and uniqueness of solution
let consider the generic model of inverse problems where and refer to the unknown and the data in a banach space and , and is a nonlinear operator .the model can represent a variety of inverse problems arising in industrial applications including computerized tomography , inverse scattering and image processing . due to the ill - poisedness of the problem ,a regularization method must be applied in order to retrieve from the noisy data and there are numerous works devoted for regularization methods .one of the most appealing regularization techniques is tikhonov regularization , which has been studied from both theoretical and computational aspects by many authors .the tikhonov regularization takes the minimizer of to the functional in an admissible set : where here is the regularization parameter compromising the data fitting and fidelity term and a priori information encoded in the restoration energy functional of .commonly used data fidelity functionals include , and and regularization functionals include with a bounded linear self - adjoint nonnegative operator , , etc .the set which describes _ a priori _ information for the solution is usually set to be weakly closed convex set in . for instance , where is the another banach space whose topology is stronger than that of and is a given constant . for the detail of tikhonov regularization, we refer to baumeister , engl , hanke and neubauer , groetsch , hofmann and references therein .the selection of a parameter is crucial for the stable solution and there is a significant amount of works for the development of methods for choosing a suitable parameter . among others ,we refer to .the minimum value function ( see definition [ def_value ] ) appearing in tikhonov regularization technique is very useful in determining the regularization parameter . in on several principles such as the generalized principle of discrepancy , the generalized principle of quasisolutions , the generalized principle of smoothing functional , and the principles are investigated by the calculus of the minimum value function .the value function calculus also gives an insight to well known conventional principles such as morozov principle .for example , ito and kunisch proposed in study the morozov principle in terms of the minimum value function for nonlinear inverse problems . on the basis of the value function calculus, they propose a modified morozov principle . as we see in section 4 , other conventional principlescan also be formulated in terms of the value function . in all the principleswe have mentioned , each of the regularization parameters is determined with an equation including the value function and its higher derivatives .it can be computationally expensive to solve the equation numerically . in order to reduce the computational effort , an efficient method and an algorithm should be developed .a model function approach is proposed in , in which an approximation ( a model function ) to the minimum value function is constructed and use the model function for the value function in their principle .other principles and numerical algorithms can be found in kunisch and zou , xie and zou . 
in this paper ,new results on the properties of the minimum value function are derived in a general set - up for problem ( [ tikhonovfunctional ] ) , and a new principle for a choice of a regularization parameter is proposed using the minimum value function , which strongly relates to the principle of reginska also known as the minimum product criterion .we also propose a model function for the value function and employ the model function approach in several conventional principles to numerically compute the regularization parameters accurately and efficiently .the paper is organized as follows . in section 2, we give properties of the minimum value function .a model function is proposed in section 3 and the efficiency of the model function is verified in section 4 . in section 5 , a new principle for the regularization parameter is given .in this section , we investigate the properties of the minimum value function in a general framework : consider the generic model of inverse problems where is a nonlinear operator from a banach space and another banach space .we retrieve from the noisy data using tikhonov regularization technique , i.e. , is approximately obtained with the minimizer [ def_value ] the minimal value function of is the function of the parameter defined as [ lem : f ] the value minimum function is ( i ) monotonically increasing and ( ii ) concave .( i ) : let be given .for any , taking the infimum with respect to yields .+ ( ii ) : let and be given .set for , then hence is concave .since is concave , it is continuous .[ cor : dfae ] is continuous everywhere .lemma [ lem : f ] does not require the existence of that achieves the infimum of .we examine the minimum value function more closely .let and are one - sided derivatives of the value function , i.e., note that both limits exist for all and that .indeed , for given , let .select as , then . by the convexity of , then therefore , the limit exists . herewe list two the basic properties of . 1 .* monotonicity * : , for all , 2 . * left and right continuity * : , and for all .the differentiability of at guarantees the continuity of at this point .indeed , the monotonicity of and the left continuity of yield the inequalities .now suppose is differentiable at , i.e , .then from the inequalities it follows that , which shows the continuity of at .similarly it follows that is continuous at .the fact is used in section [ sec : df ] . for the analysis of the minimum value function ,we introduce the following function of : note that once a result on is proved then the same result is true for as well . for example, we can show that is concave in exactly the same way as lemma [ lem : f ] .every results on can be written in terms of by using the relation : which can be verified without difficulty .we study the asymptotic behavior of .[ prop : asymp ] let .for any there exists such that .then by passing to the limit and taking into account that is arbitrary , we obtain . on the other hand ,as we see above we have for all , here is arbitrary fixed . since , we obtain which means . therefore . from this inequalitywe get .let be a sequence that converges to 0 as .we pick such that to get the sequence such that . by monotonicity of have passing to the limit yields . from this propositionit follows that is right continuous at . 
applying proposition [ prop : asymp ] to yields or equivalently , by using the relation ( [ equ : dg ] ) , we arrive at the desired results .we investigate the relation between the derivative and the two terms , the fidelity functional and the regularization functional .we also show a sufficient condition for the existence of in terms of and .we denote the set of solutions of the minimization problem by . for simplicity of our argumentwe always assume the existence of the minimizer .it may happen that there exist two minimizers satisfying and that is , for a fixed , the value and may vary depending on the choice of the minimizer .thus , it is possible that the maps and will be multi - value functions . in what follows, we study the basic properties of those functions .we begin with the following inequality .[ lem : df ] the following inequalities hold for all : for arbitrary such that , thus , we obtain by passing to the limit , it follows that .similarly , we obtain and thus the inequality ( [ ineq : dfandpsi ] ) is proven .we also obtain the second inequality ( [ ineq : dfandphi ] ) from ( [ ineq : dfandpsi ] ) and the definition of .[ cor : df = psi ] if exists at , then and are single valued at and it holds that and for all . note a monotone increasing ( decreasing ) function is differentiable except on a possibly countable set .[ cor : df = psi ] there exists a possibly countable set such that , * is differentiable and the multi - value functions and have single value on .* , for all if .corollary [ cor : df = psi ] guarantees the differentiability of except on a possibly countable set .next we show the conditions for the differentiability of at all .firstly , we define the boundness to state the assumptions .a sequence is -bounded if there exists constant such that for all .[ ass:2 ] * let be a -bounded sequence .there exists a subsequence which weakly converges to an element in the topology of .* and are lower semi - continuous with respect to weakly convergence sequencers , i.e , if a subsequence which weakly converges to an element , then henceforth hereafter we assume the assumption holds . then is guaranteed the existence of the solutions in such that . there exist and in such that and for all .let fixed arbitrary and let be a parameter such that and .let be a minimizing sequence .then from the monotonicity of , thus and it follows that the sequence is bounded . by assumption [ ass:2 ], there exists subsequence of , which we denote it by , that converges weakly to an element .then by the continuity of and the lower semi - continuity of and , it follows that and thus we have and . in what follows ,we show that . by lemma [ lem : df ] , the last equality follows from the left continuity of . since , we obtain and therefore .similarly we can show the existence of the minimizer that satisfies .we complete the proof .there exist elements and such that and .[ cor:22 ] if for all for all , then exists and it is continuous for all . [ cor:2 ] assume that the solution of minimization problem ( [ tikhonovfunctional ] ) is unique for all , then exists and it is continuous for all .the other properties for such as second differentiability is studied in . 
as shown in corollaries [ cor:22 ] and [ cor:2 ] ,both of the value and is obtained with the knowledge of , and the computation of is not required .moreover , we obtain and from for provided that is linear from hilbert space to another hilbert space , and with a symmetric operator such that ( [ tikhonovfunctional ] ) has a unique solution , and , i.e , no constraint is imposed .[ thm : f2k ] the function is infinitely differentiable at every . the derivatives and for each are give with the -th derivative as where the constants are recursively defined as with . the proof is based on the following lemma in .[ lem : zou ] the function is infinitely differentiable at every and its derivative , for each , is the unique solution to the following equation : from lemma [ lem : zou ] with , one obtains .suppose with a constant , then lemma [ lem : zou ] yields then we have where is defined as .by induction the assertion is valid .in this section , we propose a model function for the value function for linear inverse problems .firstly , we give a motivation for using the model function approach .a principle for determining a regularization parameter often requires solving an equation , for example , morozov discrepancy principle takes the parameter that satisfies the equation with noisy data of noise level .one can apply a newton type iteration to solve the equation , however , the iteration could be numerically expensive .one strategy to reduce the computational effort is that : first , represent the equation in terms of the value function as .then , construct a model function to and find the parameter that satisfies the equation .if the model function approximates to the value function , the parameter thus determined will give a close approximation to .we assume that is a linear operator form hilbert space and a hilbert space , ( i.e. , no constraint imposed ) , and with a symmetric operator such that ( [ tikhonovfunctional ] ) has a unique solution .the solution is written as as we see in theorem [ thm : f2k ] , the minimum value function is infinitely differentiable and thus it is reasonable to consider a rational function as a model function , which is briefly mentioned in .we propose our model function to of the particular form the derivation of our proposed model function to bases on the following discussion : just for simplicity we assume that and , although our discussion is valid for the infinite dimension framework . the singular value decomposition of yields where ( are the singular values and ] are the orthogonal matrices , respectively . then is represented as since we assume that is highly ill - conditioned , the singular value decreases rapidly as increases . as a result only the first few -terms satisfying in ( [ svdf ] )will contribute to the sum .thus we drop off the remaining terms and obtain the approximation the pad approximation to is constructed through the use of several minimizing elements for different values of regularization parameter . for a given interval ,let are distinct parameters , which we call reference points in the following .we compute the function values and its derivatives at the reference points to determine unknowns in by the linear system the more accurate model function will be obtained by imposing further conditions on the higher derivatives to , i.e. , we determine unknowns by the system for the solvability of the systems ( [ sym : f ] ) and ( [ sym : f_more ] ) one can refer to . 
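to make the value function calculus above concrete , the following sketch computes the value function together with its fidelity and regularization parts for a small synthetic linear problem , and checks numerically that the derivative of the value function equals the regularization term evaluated at the minimizer . the matrix , the noise level and the choice of the identity as penalty operator are illustrative assumptions , not taken from the paper .

```python
import numpy as np

rng = np.random.default_rng(1)

# small synthetic, severely ill-conditioned linear problem (illustrative only)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** np.linspace(0, -8, n)                  # rapidly decaying singular values
A = U @ np.diag(s) @ V.T
x_true = V @ (1.0 / (1.0 + np.arange(n)))
y = A @ x_true + 1e-4 * rng.standard_normal(n)

L = np.eye(n)                                      # penalty operator, Psi(x) = <Lx, x>

def value_function(alpha):
    """F(alpha) together with its fidelity part phi and regularization part psi."""
    x = np.linalg.solve(A.T @ A + alpha * L, A.T @ y)   # Tikhonov minimizer x_alpha
    phi = np.sum((A @ x - y) ** 2)
    psi = x @ (L @ x)
    return phi + alpha * psi, phi, psi

# numerical check of F'(alpha) = psi(alpha)
alpha = 1e-3
h = 1e-6 * alpha
F0, phi0, psi0 = value_function(alpha)
dF = (value_function(alpha + h)[0] - value_function(alpha - h)[0]) / (2.0 * h)
print("psi(alpha) =", psi0, "  central-difference F'(alpha) =", dF)
```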
in the next section ,we demonstrate that the model function approximates in the interval ] and find apprioximate optimals and that are very close , i.e , and that satisfy .then a much smaller interval including these parameters is choosen to compute an accurate .the other parameters in different principles are computed in a similar manner . in each noise level , an approximation to the optimal parameter in morozov principle is computed by employing model functions as follows : firstly , we construct the model function of the form .the eight unknowns , in this model function are determined by solving the linear system ( [ sym : f ] ) where the value and are obtained at four points . then we solve the equation , which is the model function version of .we denote the solution by .we also construct another model function whose sixteen unknowns , are determined by solving the linear system ( [ sym : f ] ) where up to third derivatives , , and at the same points are used .we denote the solution of the equation by .the approximated parameters in the other principles are computed in similar manner using the model function and and we denote them by , ( damped morozov for ) , , ( l - curve ) and , ( minimum product for ) , respectively .all the computed parameters , , etc in each noise level are reported in from table [ table1 ] to table [ table4 ] .cc . , and .d - morozov . [ cols="^,^,^,^ " , ] the authors observed that the first derivative did not approximate to . on the other hand observed to give a very good approximation to .the equation ( [ dmorozov ] ) is written in terms of as .this means that the parameter is determined using only .thus it is enough to give a good approximation to to compute an approximation to . as we expect , the parameters and in damped morozov principle are very good approximations to .the parameters , and for all nose level are not so accurate .this is because the equations ( [ morozov ] ) , ( [ l ] ) , ( [ product ] ) contain the first derivative of and does not approximate to . to give a better approximation to , and ,the model function must approximate to in high accuracy and our model function will be the candidate .table [ table1 ] and [ table4 ] show that the parameters determined using are very accurate .on the other hand , are not so accurate , although they are acceptably close to .an accurate second derivative of is also required for determining the parameter .figure [ fig : l1_4points.eps ] and [ fig : l3_4points.eps ] show the curvature of the l - curve with its numerical approximations obtained by using and respectively when the error .the four reference points used to construct and are depicted by bullets ( ) on the curve in each figure .( solid line ) and ( dashed line ) obtained by using with five reference points . ] ( solid line ) and ( dashed line ) obtained by using with five reference points . ] ( solid line ) and ( dashed line ) obtained by using with five reference points . ]the approximation ( dashed line ) in figure [ fig : l1_4points.eps ] completely fails to approximate to .we used and for the construction of the model function , and thus the second derivative can not approximate to which is contained in . on the other hand ,our gives better approximation to as shown in figure [ fig : l3_4points.eps ] , although does not much perfectly with . 
to give more accurate , we construct another model function of the form using , , and at five reference points .figure [ fig : l3_5points.eps ] depicts the curvature of the l - curve with its numerical approximations obtained by using .the five reference points are also shown in the figure .we observe that the model function yields the sufficiently good approximation that almost perfectly matches with the exact curvature .this observation suggests that we should use more reference points to construct model function when we employ a principle that contains second derivatives of the value function . in a practical situation ,an interval where a regularization parameter to be found is often much smaller than the interval used for our numerical test .if it is the case it is enough to construct a model function in a smaller interval .the number of reference points to be used for the construction of a model function can be reduced to two or three .we propose a new criterion for the regularization parameter .let us introduce a function defined as where is a positive constant .our new criterion takes the parameter as a local minimum of .since and for all , solves the equation .the criterion is similar to the minimum product criterion by regiska .first , we note that the energy function in ( [ product ] ) is written in terms of suppose that exists and .( a sufficient condition for the existence of the second derivative and the negativity can be found in . ) since , the regularization parameter determined by the criterion solves the equation .the relationship between ( [ newf ] ) and ( [ minimumf ] ) follows from the next proposition .let be a positive number . the equality holds if and only if solves the equation .consider the inequality with and .substituting with and with , it follows that and the inequality holds if and only if , namely , multiplying and taking power yields the desired inequality .figure [ fig : gamma.eps ] shows and in the interval for certain linear inverse problem .( solid line ) and ( dashed line ),title="fig : " ] + there exists a local minimum point around where and take the same value . the advantages of our criterion are ( i ) the shape of is sharper than and thus it is easier to detect the minimum point .( ii ) contains only , does not require the function which can be discontinuous due to the nonuniqueness of an inverse problem .the effect of the parameter in to the quality of the solution should be studied .we investigate both the applicability of the criterion to nonlinear problems and the effect of the parameter in our future works .we investigate the minimum value function for the tikhonov regularization .we propose the model function for the minimum value function for linear inverse problems and verify its efficiency in the determination of the regularization parameter .we also propose a new criterion for the choice of the regularization parameter .our criterion strongly relates to the minimum product criterion and is applicable to nonlinear inverse problems . c. r. vogel , non - convergence of the -curve regularization parameter selection method , , 12 , 535547 , 1996 . j. xie and j. zou , an improved model function method for choosing regularization parameters in linear inverse problems , , 18 , 631643 , 2002
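the new criterion above selects a local minimum of a product - type functional built from the value function and is closely related to the minimum product criterion of reginska . since the exact form of the functional and its constant are not recoverable from the extracted text , the sketch below illustrates the related minimum product criterion by a simple grid search on a synthetic linear problem ; the matrix , the data and the grid are assumptions made only for illustration .

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic ill-posed problem (same construction as in the previous sketch)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(10.0 ** np.linspace(0, -8, n)) @ V.T
y = A @ (V @ (1.0 / (1.0 + np.arange(n)))) + 1e-4 * rng.standard_normal(n)

# minimum product criterion: minimize phi(alpha)*psi(alpha) over a log grid,
# where phi is the squared residual and psi the squared norm of x_alpha
alphas = np.logspace(-12, 2, 300)
products = []
for a in alphas:
    x = np.linalg.solve(A.T @ A + a * np.eye(n), A.T @ y)
    phi = np.sum((A @ x - y) ** 2)
    psi = np.sum(x ** 2)
    products.append(phi * psi)

alpha_star = alphas[int(np.argmin(products))]
print("parameter selected by the minimum product criterion:", alpha_star)
```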
the minimum value function appearing in tikhonov regularization technique is very useful in determining the regularization parameter , both theoretically and numerically . in this paper , we discuss the properties of the minimum value function . we also propose an efficient method to determine the regularization parameter . a new criterion for the determination of the regularization parameter is also discussed .
a key prediction of a number of simple single - field slow - roll inflationary models is that they can not generate detectable non - gaussianity of the cosmic microwave background ( cmb ) temperature fluctuations within the level of accuracy of the wilkinson microwave anisotropy probe ( wmap ) .there are , however , several inflationary models that can generate non - gaussianity at a level detectable by the wmap .these non - gaussian scenarios comprise models based upon a wide range of mechanisms , including special features of the inflation potential and violation of one of the following four conditions : single field , slow roll , canonical kinetic energy , and initial bunch - davies vacuum state .thus , although convincing detection of a fairly large primordial non - gaussianity in the cmb data would not rule out all inflationary models , it would exclude the entire class of stationary models that satisfy _ simultaneously _ these four conditions ( see , e.g. , refs . ) . moreover, a null detection of deviation from gaussianity would rule out alternative models of the early universe ( see , for example , refs .thus , a detection or nondetection of primordial non - gaussianity in the cmb data is crucial not only to discriminate ( or even exclude classes of ) inflationary models but also to test alternative scenarios , offering therefore a window into the physics of the primordial universe .however , there are various non - primordial effects that can also produce non - gaussianity such as , e.g. , unsubtracted foreground contamination , unconsidered point sources emission and systematic errors .thus , the extraction of a possible primordial non - gaussianity is not a simple endeavor . in view of this , a great deal of effort has recently gone into verifying the existence of non - gaussianity by employing several statistical estimators ( for related articles see , e.g. , refs .different indicators can in principle provide information about multiple forms of non - gaussianity that may be present in wmap data .it is therefore important to test cmb data for deviations from gaussianity by using a range of different statistical tools to quantify or constrain the amount of any non - gaussian signals in the data , and extract information on their possible origins .a number of recent analyses of cmb data performed with different statistical tools have provided indications of either consistency or deviation from gaussianity in the cmb temperature fluctuations ( see , e.g. , ref . ) . in a recent paper we proposed two new large - angle non - gaussianity indicators , based on skewness and kurtosis of large - angle patches of cmb maps , which provide measures of the departure from gaussianity on large angular scales .we used these indicators to search for the large - angle deviation from gaussianity in the three and five - year single frequency maps with a _kq75 _ mask , and found that while the deviation for the q , v , and w masked maps are within the expected values of monte - carlo ( mc ) statistically gaussian cmb maps , there is a strong indication of deviation from gaussianity ( off the mc ) in the k and ka masked maps .most of the gaussianity analyses with wmap data have been carried out by using cmb temperature fluctuation maps ( raw and clean ) in the frequency bands q , v and w or some combination of these maps . 
in these analyses , in order to deal with the diffuse galactic foreground emission , masks such as , for example , _kq75 _ and _ kp0 _ have been used .however , sky cuts themselves can potentially induce bias in gaussianity analyses , and on the other hand full - sky maps seem more appropriate to test for gaussianity in the cmb data .thus , a pertinent question that arises is how the analysis of gaussianity made in ref . is modified if whole - sky foreground - reduced cmb maps are used .our primary objective in this paper is to address this question by extending the analysis of ref . in three different ways .first , we use the same statistical indicators to carry out a new analysis of gaussianity of the available _ full - sky foreground - reduced _ five - year and seven - year cmb maps .second , since in these maps the foreground is reduced through different procedures each of the resulting maps should be tested for gaussianity .thus , we make a quantitative analysis of the effects of distinct cleaning processes in the deviation from gaussianity , quantifying the level of non - gaussianity for each foreground reduction method .third , we study quantitatively the consequences for the gaussianity analysis of masking the foreground - reduced maps with the _ kq75 _ mask .an interesting outcome is that this mask lowers significantly the level of deviation from gaussianity even in the foreground - reduced maps , rendering therefore information about the suitability of the foreground - reduced maps as gaussian reconstructions of the full - sky cmb .the chief idea behind our construction of the non - gaussianity indicators is that a simple way of accessing the deviation from gaussianity distribution of the cmb temperature fluctuations is by calculating the skewness , and the kurtosis from the fluctuations data , where and are the third and fourth central moments of the distribution , and is its variance . clearly calculating and from the whole sky temperature fluctuations datawould simply yield two dimensionless numbers , which are rough measures of deviation from gaussianity of the temperature fluctuation distribution .however , one can go further and obtain a great number of values associated to directional information of deviation from gaussianity if instead one takes a discrete set of points homogeneously distributed on the celestial sphere as the center of spherical caps of a given aperture and calculate and from the cmb temperature fluctuations of each spherical cap .the values and can then be taken as measures of the non - gaussianity in the direction of the center of the spherical cap .such calculations for the individual caps thus provide quantitative information ( values ) about possible violation of gaussianity in the cmb data .this procedure is a constructive way of defining two discrete functions and ( defined on from the temperature fluctuations data , and can be formalized through the following steps ( for more details , see ref . ) : 1 . take a discrete set of points homogeneously distributed on the cmb celestial sphere as the centers of spherical caps of a given aperture ; 2 .calculate for each spherical cap the skewness ( ) and kurtosis ( ) given , respectively , by where is the number of pixels in the cap , is the temperature at the pixel , is the cmb mean temperature in the cap , and is the standard deviation . 
clearly , the values and obtained in this way for each cap can be viewed as a measure of non - gaussianity in the direction of the center of the cap ; 3 .patching together the and values for each spherical cap , one obtains our indicators , i.e. , discrete functions and defined over the celestial sphere , which can be used to measure the deviation from gaussianity as a function of the angular coordinates .the mollweid projection of skewness and kurtosis functions and are nothing but skewness and kurtosis maps , hereafter we shall refer to them as and , respectively .now , since and are functions defined on they can be expanded into their spherical harmonics in order to have their power spectra and .thus , for example , for the skewness indicator one has and can calculate the corresponding angular power spectrum which can be used to quantify the angular scale of the deviation from gaussianity , and also to calculate the statistical significance of such deviation .obviously , similar expressions hold for the kurtosis . in the next sectionwe shall use the statistical indicators and to test for gaussianity the available foreground - reduced maps obtained from the five - year wmap data .the wmap team has released high angular resolution five - year maps of the cmb temperature fluctuations in the five frequency bands k ( ghz ) , ka ( ghz ) , q ( ghz ) , v ( ghz ) , and w ( ghz ) .they have also produced a full - sky foreground - reduced internal linear combination ( ilc ) map which is formed from a weighted linear combination of these five frequency band maps in which the weights are chosen in order to minimize the galactic foreground contribution .it is well known that the _ first - year _ ilc map is inappropriate for cmb scientific studies .however , in the _ five - year _ ( also in the three - year and seven - year ) version of this map a bias correction has been implemented as part of the foreground cleaning process , and the wmap team suggested that this map is suitable for use in large angular scales ( low ) analyses although they admittedly have not performed non - gaussian tests on this version of the ilc map . notwithstandingthe many merits of the five - year ilc procedure , some cleaning features of this ilc approach have been considered , and two variants have been proposed recently . in the first approachthe frequency dependent weights were determined in harmonic space , while in the second the foreground is reduced by using needlets as the basis of the cleaning process .thus , two new full - sky foreground - cleaned maps have been produced with the wmap five - year data , namely the harmonic ilc ( hilc ) and the needlet ilc ( nilc ) ( for more details see refs . ) . in the next section ,we use the full - sky foreground - reduced ilc , hilc and nilc maps with the same smoothed resolution ( which is the resolution of the ilc map ) as the input maps from which we calculate the and maps , and then we compute the associated power spectra in order to carry out a statistical analysis to quantify the levels of deviation from gaussianity.jkim/hilc/ and http://www.apc.univ-paris7.fr/apc_cs/recherche/adamis/cmb_wmap-en.php . ] in order to minimize the statistical noise , in the calculations of skewness and kurtosis maps ( and ) from the foreground - reduced maps , we have scanned the celestial sphere with spherical caps of aperture , centered at points homogeneously generated on the two - sphere by using the healpix code . 
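the cap - by - cap construction of the skewness and kurtosis maps and of their low - l power spectra can be sketched with healpy as follows . the input map ( a gaussian simulation standing in for the foreground - reduced maps ) , the cap aperture , the resolution of the grid of cap centers and the use of excess kurtosis are assumptions made for the example , since the corresponding values are not recoverable from the extracted text .

```python
import numpy as np
import healpy as hp
from scipy.stats import skew, kurtosis

# input temperature map: a Gaussian simulation at reduced resolution stands in
# for the foreground-reduced ILC/HILC/NILC maps (toy spectrum, illustrative only)
nside_map = 64
cl = np.concatenate(([0.0], 1.0 / (np.arange(1, 3 * nside_map) + 10.0) ** 2))
cmb = hp.synfast(cl, nside_map)

nside_caps = 16                      # grid of cap centers (assumed resolution)
aperture = np.radians(90.0)          # assumed cap aperture
npix_caps = hp.nside2npix(nside_caps)

S = np.zeros(npix_caps)
K = np.zeros(npix_caps)
for j in range(npix_caps):
    vec = hp.pix2vec(nside_caps, j)                 # direction of the cap center
    cap = hp.query_disc(nside_map, vec, aperture)   # map pixels inside the cap
    S[j] = skew(cmb[cap])
    K[j] = kurtosis(cmb[cap])                       # excess kurtosis

# low-l angular power spectra of the skewness and kurtosis maps
Sl = hp.anafast(S, lmax=10)
Kl = hp.anafast(K, lmax=10)
print("skewness-map spectrum S_l, l = 0..10:", Sl)
print("kurtosis-map spectrum K_l, l = 0..10:", Kl)
```

in the analysis described in the text the same construction is applied to the ilc , hilc and nilc input maps , and the resulting spectra are then compared with the mean spectra obtained from monte - carlo gaussian realizations .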
in other words ,the point - centers of the spherical caps are the center of the pixels of a homogeneous pixelization of the generated by healpix with .we emphasize , however , that this pixelization is only a practical way of choosing the centers of the caps homogeneously distributed on .it is not related to the pixelization of the above - mentioned ilc , hilc and nilc input maps that we have utilized to calculate both the and maps from which we compute the associated power spectra .figures [ fig1 ] and [ fig2 ] show examples of and maps obtained from the foreground - reduced nilc full - sky and _ kq75 _ maps .the panels of these figures clearly show regions with higher and lower values ( hot and cold spots ) of and , which suggest _ large - angle _ multipole components of non - gaussianity .we have also calculated similar maps ( with and without the _ kq75 _ mask ) from the ilc and hilc maps . however , since these maps provide only _ qualitative _ information , to avoid repetition we only depict the maps of figs . [ fig1 ] and [ fig2 ] merely for illustrative purpose . in order to obtain _ quantitative _ information about the large angular scale ( low ) distributions for the non - gaussianity and maps obtained from the available full - sky foreground - reduced five - year maps , we have calculated the ( low ) power spectra and for these maps .the statistical significance of these power spectra is estimated by comparing with the corresponding multipole values of the averaged power spectra and calculated from maps obtained by averaging over monte - carlo - generated statistically gaussian cmb maps.cdm model , obtained by randomizing the temperature components within the cosmic variance limits . ] throughout the paper the mean quantities are denoted by overline . before proceeding to a statistical analysis ,let us describe with some detail our calculations .for the sake of brevity , we focus on the skewness indicator , but a completely similar procedure was used for the kurtosis indicator .we generated mc gaussian ( _ scrambled _ ) cmb maps , which are then used to generate skewness , from which we calculate power spectra : ( is an enumeration index , and ) . in this way , for each fixed multipole component we have multipole values from which we calculate the mean value . from this mc processwe have at the end ten mean multipole values , each of which are then used for a comparison with the corresponding multipole values ( obtained from the input map ) in order to evaluate the statistical significance of the multipole components . to make this comparison easier , instead of using the angular power spectra and themselves , we employed the _ differential _ power spectra and , which measure the deviation of the skewness and kurtosis multipole values ( calculated from the foreground - reduced maps ) from the mean multipoles and ( calculated from the gaussian maps ) .thus , for example , to study the statistical significance of the quadrupole component of the skewness from hilc map ( say ) we calculate the deviation , where the mean quadrupole value is calculated from the quadrupole values of the mc gaussian maps .it is interesting to note that the deviation from gaussianity as measured by our indicators is greater for the ilc7 than for the ilc5 input map .concerning this point some words of clarification are in order here .first , we note that the details of the algorithm used to compute the ilc7 maps are the same as those of the ilc5 map . 
however , to take into account the most recent updates to the calibration and beams , the frequency weights for each of the 12 regions ( in which the sky is subdivided in the ilc method ) are slightly different in the calculation of the ilc7 map .second , the difference between the ilc7 and ilc5 maps is a map whose small - scale differences are consistent with the pixel noise , but with a large - scale dipolar component , with the large - scale differences being consistent with a change in dipole of 6.7 .thus , the resultant ilc7 map is not indistinguishable from the ilc5 map , and the differences between them have been captured by our indicators .figure [ fig6 ] shows the differential power spectra calculated from a five - year and seven year version of the foreground - reduced ilc maps with a _kq75 _ mask .this figure along with fig .[ fig5 ] show a significant reduction in the level of deviation from gaussianity when both ilc5 and ilc7 are masked . to quantify this reductionwe have calculated for these input maps with the _ kq75 _ mask , and have collected the results in table [ chi2-table - ilc5 - 7_kq75 ] .the comparison of table [ chi2-table - ilc5 - 7_full - sky ] and table [ chi2-table - ilc5 - 7_kq75 ] shows quantitatively the reduction of the level of gaussianity for the case of cmb masked maps. 999 v. acquaviva , n. bartolo , s. matarrese , and a. riotto , nucl . phys .b * 667 * , 119 ( 2003 ) ; j. maldacena , jhep * 0305 * 013 ( 2003 ) ; m. liguori , f.k .hansen , e. komatsu , s. matarrese , and a. riotto , * 73 * , 043505 ( 2006 ) .n. bartolo , e. komatsu , s. matarrese , and a. riotto , phys . rept . * 402 * , 103 ( 2004 ) .e. komatsu _ et al ._ , arxiv:0902.4759v4 [ astro-ph.co ] . b.a .bassett , s. tsujikawa , and d. wands , rev .78 * , 537 ( 2006 ) ; a. linde , lect .notes phys .* 738 * , 1 ( 2008 ) .k. koyama , s. mizuno , f. vernizzi , and d. wands , _ jcap _ 11 ( 2007 ) 024 ; e.i .buchbinder , j. khoury and b.a .ovrut , * 100 * , 171302 ( 2008 ) ; j .-lehners and p.j .steinhardt , * 77 * , ( 2008 ) 063533 ; y .- f .cai , w. xue , r. brandenberger , and x. zhang , _ jcap _ 05 ( 2009 ) 011 ; y .- f .cai , w. xue , r. brandenberger , and x. zhang , _ jcap _ 06 ( 2009 ) 037 .chiang , p.d .naselsky , o.v .verkhodanov , and m.j .way , * 590 * , l65 ( 2003 ) .p. cabella , d. pietrobon , m. veneziani , a. balbi , r. crittenden , g. de gasperis , c. quercellini , and n. vittorio , arxiv:0910.4362 .e. komatsu et al . , astrophys .* 148 * , 119 ( 2003 ) ; d.n .spergel et al . , astrophys .* 170 * , 377 ( 2007 ) ; e. komatsu et al . ,* 180 * , 330 ( 2009 ) ; p. vielva , e. martnez - gonzlez , r.b .barreiro , j.l .sanz , and l. cayn , * 609 * , 22 ( 2004 ) ; m. cruz , e. martnez - gonzlez , p. vielva , and l. cayn , mon . not .. soc . * 356 * , 29 ( 2005 ) ; m. cruz , l. cayn , e. martnez - gonzlez , p. vielva , and j. jin , * 655 * , 11 ( 2007 ) ; l. cayn , j. jin , and a. treaster , mon . not .. soc . * 362 * , 826 ( 2005 ) ; lung - y chiang , p.d .naselsky , int . j. mod .d * 15 * , 1283 ( 2006 ) ; j.d .mcewen , m.p .hobson , a.n .lasenby , and d.j .mortlock , mon . not .r. astron .soc . * 371 * , l50 ( 2006 ) ; j.d .mcewen , m.p .hobson , a.n .lasenby , and d.j .mortlock , mon . not .. soc . * 388 * , 659 ( 2008 ) ; a. bernui , c. tsallis , and t. villela , europhys .lett . * 78 * , 19001 ( 2007 ) ; l .- y .chiang , p.d .naselsky , and p. coles , * 664 * , 8 ( 2007 ) ; c .- g .park , mon . not .r. astron . soc . 
*349 * , 313 ( 2004 ) ; h.k .eriksen , d.i .novikov , p.b .lilje , a.j .banday , and k.m .grski , * 612 * , 64 ( 2004 ) ; m. cruz , m. tucci , e. martnez - gonzlez , and p. vielva , mon . not .r. astron .369 * , 57 ( 2006 ) ; m. cruz , n. turok , p. vielva , e. martnez - gonzlez , and m. hobson , science * 318 * , 1612 ( 2007 ) ; p. mukherjee and y. wang , * 613 * , 51 ( 2004 ) ; d. pietrobon , p. cabella , a. balbi , g. de gasperis , and n. vittorio , mon . not .. soc . * 396 * , 1682 ( 2009 ) ; d. pietrobon , p. cabella , a. balbi , r. crittenden , g. de gasperis , and n. vittorio , mon . not .. soc . * 402 * , l34 ( 2010 ) ; y. ayaita , m. weber , and c. wetterich , * 81 * , 023507 ( 2010 ) ; p. vielva and j.l .sanz , mon . not .astron . soc . *397 * , 837 ( 2009 ) ; b. lew , _jcap _ 08 ( 2008 ) 017 ; a. bernui and m.j .rebouas , int . j. moda * 24 * , 1664 ( 2009 ) ; m. kawasaki , k. nakayama , t. sekiguchi , t. suyama , and f. takahashi , _ jcap _ 11 ( 2008 ) 019 ; m. kawasaki , k. nakayama , and f. takahashi , _ jcap _ 01 ( 2009 ) 026 ; m. kawasaki , k. nakayama , t. sekiguchi , t. suyama , and f. takahashi , _ jcap _ 01 ( 2009 ) 042 ; m. cruz , e. martnez - gonzlez , and p. vielva , arxiv:0901.1986 [ astro - ph ] .e. martnez - gonzlez , arxiv:0805.4157 [ astro - ph ] ; y. wiaux , p. vielva , e. martnez - gonzlez , and p. vandergheynst , * 96 * , 151303 ( 2006 ) ; l.r .abramo , a. bernui , i.s .ferreira , t. villela , and c.a .wuensche , * 74 * , 063506 ( 2006 ) ; p. vielva , y. wiaux , e. martnez - gonzlez , and p. vandergheynst , new astron .* 50 * , 880 ( 2006 ) ; p. vielva , y. wiaux , e. martnez - gonzlez , and p. vandergheynst , mon . not .r. astron .soc . , * 381 * , 932 ( 2007 )copi , d. huterer , d.j .schwarz , and g.d .starkman , * 75 * , 023507 ( 2007 ) ; c.j .copi , d. huterer , d.j .schwarz , and g.d .starkman , mon . not .. soc . * 399 * , 295 ( 2009 ) ; k. land and j. magueijo , mon . not .. soc . * 378 * , 153 ( 2007 ) ; b. lew , _ jcap _ 09 ( 2008 ) 023 ; a. bernui , * 78 * , 063531 ( 2008 ) ; a. bernui , * 80 * , 123010 ( 2009 ) ; p.k .samal , r. saha , p. jain , and j.p .ralston , mon . not .. soc . * 385 * , 1718 ( 2008 ) ; p.k .samal , r. saha , p. jain , and j.p .ralston , mon . not .. soc . * 396 * , 511 ( 2009 ) ; a. bernui , b. mota , m.j .rebouas , and r. tavakol , astron .& astrophys .* 464 * , 479 ( 2007 ) ; a. bernui , b. mota , m.j .rebouas , and r. tavakol , int .d * 16 * , 411 ( 2007 ) ; t. kahniashvili , g. lavrelashvili , and b. ratra , * 78 * , 063012 ( 2008 ) ; l.r .abramo , a. bernui , and t.s .pereira , _ jcap _ 12 ( 2009 ) 013 ; f.k .hansen , a.j .banday , k.m .grski , h.k .eriksen , and p.b .lilje , * 704 * , 1448 ( 2009 ) .
a detection or nondetection of primordial non - gaussianity by using the cosmic microwave background radiation ( cmb ) data is crucial not only to discriminate inflationary models but also to test alternative scenarios . non - gaussianity offers , therefore , a powerful probe of the physics of the primordial universe . the extraction of primordial non - gaussianity is a difficult enterprise since several effects of non - primordial nature can produce non - gaussianity . given the far - reaching consequences of such a non - gaussianity for our understanding of the physics of the early universe , it is important to employ a range of different statistical tools to quantify and/or constrain its amount in order to have information that may be helpful for identifying its causes . moreover , different indicators can in principle provide information about distinct forms of non - gaussianity that can be present in cmb data . most of the gaussianity analyses of cmb data have been performed by using part - sky frequency , where the masks are used to deal with the galactic diffuse foreground emission . however , full - sky map seems to be potentially more appropriate to test for gaussianity of the cmb data . on the other hand , masks can induce bias in some non - gaussianity analyses . here we use two recent large - angle non - gaussianity indicators , based on skewness and kurtosis of large - angle patches of cmb maps , to examine the question of non - gaussianity in the available full - sky five - year and seven - year wilkinson microwave anisotropy probe ( wmap ) maps . we show that these full - sky foreground - reduced maps present a significant deviation from gaussianity of different levels , which vary with the foreground - reducing procedures . we also make a gaussianity analysis of the foreground - reduced five - year and seven - year wmap maps with a _ kq75 _ mask , and compare with the similar analysis performed with the corresponding full - sky foreground - reduced maps . this comparison shows a significant reduction in the levels of non - gaussianity when the mask is employed , which provides indications on the suitability of the foreground - reduced maps as gaussian reconstructions of the full - sky cmb .
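To make the construction of the skewness and kurtosis indicator maps and of their low multipole (differential) power spectra concrete, the following is a minimal sketch using healpy and scipy. The coarse Nside used for the cap centres, the cap radius and lmax are illustrative choices only (the values used in the analysis above are not reproduced here); the differential spectrum is the data spectrum minus the mean spectrum of the Monte Carlo Gaussian realizations, as described.

```python
import numpy as np
import healpy as hp
from scipy.stats import skew, kurtosis

def indicator_maps(cmb_map, mask=None, nside_caps=32, cap_radius_deg=90.0):
    """Skewness S and kurtosis K maps computed on spherical caps whose centres
    are the pixel centres of a coarse HEALPix grid (nside_caps and the cap
    radius are illustrative parameters, not the paper's values)."""
    nside_in = hp.get_nside(cmb_map)
    npix_caps = hp.nside2npix(nside_caps)
    S, K = np.zeros(npix_caps), np.zeros(npix_caps)
    radius = np.radians(cap_radius_deg)
    for p in range(npix_caps):
        cap = hp.query_disc(nside_in, hp.pix2vec(nside_caps, p), radius)
        vals = cmb_map[cap]
        if mask is not None:                 # e.g. drop pixels excluded by KQ75
            vals = vals[mask[cap] > 0]
        S[p], K[p] = skew(vals), kurtosis(vals)
    return S, K

def differential_spectrum(indicator_map, mc_indicator_maps, lmax=10):
    """Low-l power spectrum of an indicator map minus the mean spectrum of the
    same indicator computed from Monte Carlo Gaussian CMB realizations."""
    cl_data = hp.anafast(indicator_map, lmax=lmax)
    cl_mc = np.mean([hp.anafast(g, lmax=lmax) for g in mc_indicator_maps], axis=0)
    return cl_data - cl_mc
```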
in the uncapacitated facility location ( ufl ) problem the goal is to open facilities in a subset of given locations and connect each client to an open facility so as to minimize the sum of opening costs and connection costs . in the penalty avoiding ( prize collecting ) variant of the problem , a fixed penalty can be paid instead of connecting a client . in the -level uncapacitated facility location problem with penalties ( -level uflwp ) , we are given a set of clients and a set of facilities ( locations to potentially open a facility ) in a metric space .facilities are of different types ( levels ) , e.g. , for one may think of these facilities as shops , warehouses and factories .each set contains all facilities on level and the sets are pairwise disjoint . each client can either be connected to precisely one facility at each of levels ( via a path ) , or be rejected in which case the penalty must be paid ( can be considered as the loss of profit ) . to be more precise , for a client to be connected , it must be connected with a path , where is an open facility on level .the cost of connecting points , is the distance between and , denoted by .the cost of opening facility is ( ) .the goal is to minimize the sum of the total cost of opening facilities ( at all levels ) , the total connection cost and the total penalty cost . in the uniform version of the problemall penalties are the same , i.e. , for any two clients we have .if are big enough , -level uflwp is the -level ufl problem , for which krishnaswamy and sviridenko showed -hardness of approximation for general and -hardness for .actually , even for guha and khuller showed that the approximation ratio is at least , unless . the current best known approximation ratio for this simplest case is by li .for -level ufl problem shmoys , tardos , and aardal gave the first constant factor approximation algorithm by extending the algorithm for -level and obtaining an approximation ratio .subsequently , aardal , chudak , and shmoys used randomized rounding to get the first algorithm for general , which had approximation ratio of .ageev , ye and zhang gave a combinatorial -approximation algorithm for general by reducing the -level directly into -level problem .by recursive reduction , i.e. , reducing -level to level , they obtained an improved -approximation for and for .later , this was improved by zhang , who combined the maximization version of -level ufl problem and dual - fitting to get a -approximation algorithm for , and a -approximation for .byrka and aardal improved the ratio for to .for the ratio was recently improved by byrka and rybicki to for , for , and the ratio converges to 3 when .ufl with penalties was first introduced by charikar et al . , who gave a -approximation algorithm based on a primal - dual method .later , jain et al . indicated that their greedy algorithm for ufl could be adapted to uflwp with the approximation ratio .xu and xu proposed a -approximation algorithm based on lp - rounding and a combinatorial -approximation algorithm by combining local search with primal - dual .later , geunes et al . presented an algorithmic framework which can extend any lp - based -approximation algorithm for ufl to get an -approximation algorithm for ufl with penalties .as a result , they gave a -approximation algorithm for this problem .recently , li et al . extended the lp - rounding algorithm by byrka and aardal and the analysis by li to uflwp to give the currently best -approximation algorithm . 
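As a concrete point of reference for the problem defined above, the sketch below writes the one-level special case of UFL with penalties as a small integer program, using the PuLP modelling library. It only fixes the objective (opening costs plus connection costs plus penalties); the algorithms discussed in this paper work with the k-level LP relaxation rather than with this exact program.

```python
import pulp

def solve_uflwp(f, c, p):
    """One-level UFL with penalties as an integer program (illustrative only).
    f[i]: opening cost of facility i, c[i][j]: connection cost of client j to
    facility i, p[j]: penalty paid if client j is left unserved."""
    F, C = range(len(f)), range(len(p))
    prob = pulp.LpProblem("uflwp", pulp.LpMinimize)
    y = {i: pulp.LpVariable(f"y_{i}", cat="Binary") for i in F}
    x = {(i, j): pulp.LpVariable(f"x_{i}_{j}", cat="Binary") for i in F for j in C}
    z = {j: pulp.LpVariable(f"z_{j}", cat="Binary") for j in C}
    prob += (pulp.lpSum(f[i] * y[i] for i in F)
             + pulp.lpSum(c[i][j] * x[i, j] for i in F for j in C)
             + pulp.lpSum(p[j] * z[j] for j in C))
    for j in C:                       # every client is served or pays its penalty
        prob += pulp.lpSum(x[i, j] for i in F) + z[j] >= 1
    for i in F:
        for j in C:                   # connections only to open facilities
            prob += x[i, j] <= y[i]
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(prob.objective)

# tiny hypothetical instance: two facilities, three clients, uniform penalty 4
print(solve_uflwp(f=[3, 5], c=[[1, 2, 6], [4, 1, 5]], p=[4, 4, 4]))
```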
for multi - level uflwp , asadi et al . presented an lp - rounding based -approximation algorithm by converting the lp - based algorithm for uflwp by xu and xu to -level . to the best of our knowledge ,this is the only algorithm for multi - level uflwp in the literature .we first show that algorithms whose performance can be analysed with a linear function of certain instance parameters , like the chudak and shmoys algorithm for ufl , can be easily combined and analysed with a natural factor revealing lp .this simplifies the argument of shi li for his -approximation algorithm for ufl _ since an explicit distribution for the parameters obtained by a linear program is not necessary in our factor revealing lp_. with this tool one can easily randomize the scaling factor in lp - rounding algorithms for various variants of the ufl problem ._ we demonstrate this by randomizing the algorithm for -level uflwp . for -level uflwe can get the same approximation ratios as for -level uflwp by setting ._ note that the previously best ratio is 4 for -level uflwp ( ) and for .the following table shows how much we improve the approximation ratios of our algorithm for by involving randomization of the scaling factor ._ irrespective of the way in which we choose , deterministically or randomly , approximation ratio converges to three . _.comparison of ratios . [ cols="^,^,^,^,^,^,^,^,^,^,^",options="header " , ] note that the reduction above does not work for the non - uniform case , because then the distance from client to the penalty - facility of client could be smaller than .nevertheless we will show that lp - rounding algorithms in this paper can be easily extended to the non - uniform penalty variant ._ for non - uniform case , _ our algorithm is based on rounding a solution to the extended lp - relaxation of the problem .this extended lp may either be seen as the standard lp on a modified graph ( see appendix [ sec : modiffication ] ) as described in , or originate from the -th level of the sherali adams hierarchy , or explicitly be written in terms of paths on the original instance .here we use the explicit construction . note that in the optimal solution to -level uflwp each facilityis connected to at most one facility on the higher level .we will impose this structure on the fractional solution by creating multiple copies of the original facility , one for each path across the higher levels of facilities . to describe the linear program we have to give a few definitions .let be the set of paths which start in a client and end in a facility on level .let be the set of paths which start on level and end on the highest level , i.e. , in a root of some tree .by we denote the set of all paths , i.e. , .the cost of the path denoted by depends on the kind of path .if , then . if , then . the natural interpretation of the above lp is as follows .inequality ( [ lp : out_one ] ) states that each client is assigned to at least one path or is rejected. inequality ( [ lp : thomas_order ] ) encodes that opening a lower level facility implies opening its unique higher level facility . the most complicated inequality ( [ lp : f_open_enough ] ) for a client and a facility , imposes that the opening of must be at least the total usage of it by the client .let be an optimal solution to the above lp .the approximation algorithm presented below is parameterized by . 
formulate and solve the extended lp ( 12)-(17 ) to get an optimal solution ; scale up facility opening and client rejecting variables by , then recompute values of for to obtain a minimum cost solution + divide clients into two groups and cluster clients in ; round facility opening ( tree by tree ) ; connect each client with a closest open connection path unless rejecting it is a cheaper option .our final algorithm is as follows : run algorithm for each and select a solution with the smallest cost .clustering is based on rules described in which is generalized in for -level instances . rounding on a treewas also used in .nevertheless , for completeness we give a brief description of step 4 and 5 in the following subsections . from now on we are considering only scaled up instance . for any client ,let be the set of top - level facilities fractionally serving in . as discussed in section [ sec : one_level ] , wlog the fractional connectivity of to a set of facilities may be assumed to be the fractional opening of these facilities .sort facilities from by non - decreasing distance from client , and select the smallest subset of with volume one - this is the set of close facilities , the rest of facilities from are distant facilities . by and denote the average distances from to close , distant and all facilities in set respectively .moreover by we denote the maximal distance from to a close facility .formal definitions are as follows : using the similar arguments as in we can define and express and using . two clients are called neighbors if .select unclustered client that minimizes , form a new cluster containing and all its unclustered neighbors from , call the center of the new cluster ; the above clustering procedure ( just like in ) partitions all clients into groups called clusters .such partition has two important properties .first : there are no two neighbors from which are ( both ) centers of clusters .second : distance from any client in cluster to his cluster center is not too big .consider an arbitrary cluster center .since lp solutions have a form of a forest , we only need to focus on rounding single tree serving .for clarity , within this rounding procedure we will refer to facilities as vertices ( of a tree ) , and use to denote the fractional opening of vertex ( facility ) and to denote the extent in which the cluster center uses in , i.e , .note that for each and if is not the root of a tree . the main idea is to open exactly one path for cluster center but keep the probability of opening of each vertex equal to in the randomized procedure .in we gave a token - passing - based adaptation of the procedure by garg konjevod and ravi , that stores the output in and , and has exactly the desired properties . = x_v ] for all .it is essential that the probability of opening at least one path in a set can be lower bounded by a certain function , where is the total flow from client to all paths in and is the number of levels in the considered instance .it can be shown that and the following lemma ( from ) hold . for more details see appendix [ sec : functions of probability ] and .inequality implies ._ the high level idea is that we can consider the instance of -level uflwp as a corresponding instance of -level ufl by showing that the worst case approximation ratio is for clients in set and we can treat the penalty of client as a penalty - facility " in our analysis .that is , we can overcome penalties by solving an equivalent -level ufl without penalties . 
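The clustering step of the algorithm above can be sketched as follows. The criterion for picking the next cluster centre (average plus maximum distance to the close facilities) and the neighbour rule (overlapping sets of close facilities) follow the standard Byrka-Aardal scheme; the exact tie-breaking and the splitting of the facility at the volume-one boundary are simplified here.

```python
import numpy as np

def close_set(dist_j, y, volume=1.0):
    """Indices of the 'close' facilities of a client: the smallest prefix, in
    order of increasing distance, whose (scaled) fractional opening reaches 1."""
    acc, close = 0.0, []
    for i in np.argsort(dist_j):
        if y[i] <= 0:
            continue
        close.append(i)
        acc += y[i]
        if acc >= volume:      # the last facility would be split in the exact scheme
            break
    return close

def greedy_clustering(dist, y, clients):
    """Greedy clustering sketch: repeatedly pick the unclustered client with the
    smallest d_avg^C(j) + d_max^C(j) as a centre and absorb every unclustered
    client whose close set overlaps the centre's."""
    close = {j: close_set(dist[j], y) for j in clients}
    def score(j):
        cl = close[j]
        w = np.array([y[i] for i in cl])
        d = np.array([dist[j][i] for i in cl])
        return np.average(d, weights=w) + d.max()
    unclustered, clusters = set(clients), []
    while unclustered:
        centre = min(unclustered, key=score)
        members = {j for j in unclustered if set(close[j]) & set(close[centre])}
        clusters.append((centre, members))
        unclustered -= members
    return clusters
```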
_it is standard in uncapacitated location problems to split facilities to obtain a so called _ complete _ solution , where no facility is used less than it is open by a client ( see for details ) . for our algorithm , to keep the forest structure of the fractional solution , we must slice the whole trees instead of splitting individual facilities to obtain the following .[ completeness ] each solution of our linear program for -level uflwp can be transformed to an equivalent complete solution .we should give two copies and of tree ( instead of it ) if there is some client with a positive flow to one of the paths in the tree which is smaller than the path opening .let the opening of such problematic path be equal to flow in tree . in tree it has value equal to the opening in decreased by .in general each facility in tree ( ) has the same opening as in times ( ) .note that the value of flow from client ( and other clients which are connected with both trees now ) should be the same as before adding trees and instead of .all clients recompute " their connection values .we sort all paths in increasing connection cost for client and connect with them ( in that order ) as strong as it is possible until client has flow equal to one or it is cheaper to pay penalty instead of connecting with any open path .the important fact is that the expected connection and penalty cost of each client remain the same after above operations . in the process of coping and replacing trees we add at most new trees . because each client has at most one `` problematic '' ( not saturating ) path . for the clarity of the following analysis we will use a one - level " description of the instance and fractional solution despite its -level structure . because the number of levels will have influence only on the probabilities of opening particular paths in our algorithm .consider set of paths which start in client and end in the root of a single tree . instead of thinking about all paths from set we can now treat them as one path whose fractional opening is and ( expected ) cost is . observe that our distance function satisfy the triangle inequality .from now on we will think only about clients and facilities ( on level ) and ( unique ) paths between them .accordingly , we will now encode the fractional solution as , to denote the fractional connectivity , opening and penalty components .[ ineq_proof ] . + the second inequality holds because .moreover to justify the last equality we should observe that .[ worst_case ] the worst case approximation ratio is for clients from set .we have two types of clients divided for two sets and .lets sort facilities in nondecreasing distances from client . in that proof is number of facilities which has positive flow from in considering ( scaled up ) fractional solution .suppose the first case , then we can upper bound his connection and penalty cost like that \leq \sum_{i = 1}^{l}(f_k(\sum_{j = 1}^{i } y_j ) - f_k(\sum_{j = 1}^{i-1 } y_j ) ) d(q , j)\ ] ] inequality holds because in other case would be connected with facility in that distance instead of using penalty . 
in the second case we havethat .connection and penalty cost of client can be upper bounded in below way \leq ( f_k(\sum_{j = 1}^{l } y_j ) - f_k(\sum_{j = 1}^{l-1 } y_j ) ) d(q , j ) + ( 1 - f_k(\gamma ( 1 - g ) ) ) p_q\ ] ] note that for each client the truth is , so from lemma [ ineq_proof ] we have that the worst case approximation ratio is for clients from set .[ g_0 ] for clients we can treat its penalty as a facility . if is a cluster center , will have at least one ( real ) facility open in its set of close facilities .thus , its connection and penalty cost are independent of the value of .if is not a cluster center and we pretend its penalty as a facility , no other client will consider to use this fake facility . because only looks at facilities fractionally serving him , and the facilities which serve the center of the cluster containing .a single algorithm has expected facility opening cost \leq \gamma \cdot f^* ] ( see appendix [ sec : single_algorithm ] for a detailed proof ) . to obtain an improved approximation ratio we run algorithm for several values of and select the cheapest solution .the following lp gives an upper bound on the approximation ratio . since the number of levels has influence on connection probabilities , the values of need to be defined more carefully than for ufl .in particular , for we now have and for .the table [ improved_ratios ] summarizes the obtained ratios for a single algorithm ( run with the best choice of for particular ) and for a group of algorithms .( i.e. , distances to facilities ) for obtained from solution of the lp in section [ group_analize ] . _x - axis is volume of a considered set and y - axis represents distance to the farthest facility in that set .values of function are in one - to - one correspondence with values of in lp from section [ group_analize]._,height=215 ] in a randomized alg .( from the dual lp ) for . left figure : general view ; right figure : close - up on small probabilities.,width=226 ] in a randomized alg .( from the dual lp ) for .left figure : general view ; right figure : close - up on small probabilities.,width=226 ] 99 k. aardal , f. chudak , d. shmoys : a 3-approximation algorithm for the k - level uncapacitated facility location problem .inf . process .72(5 - 6 ) : 161 - 167 ( 1999 ) a. ageev , y. ye , j. zhang : improved combinatorial approximation algorithms for the k - level facility location problem .icalp 2003 : 145 - 156 m. asadi , a. niknafs , m. ghodsi : an approximation algorithm for the k - level uncapacitated facility location problem with penalties .csicc 2008 , ccis 6 : 41 - 49 . j. byrka , k. aardal : an optimal bifactor approximation algorithm for the metric uncapacitated facility location problem .siam j. comput .39(6 ) : 2212 - 2231 ( 2010 ) j. byrka , m. ghodsi , a. srinivasan : lp - rounding algorithms for facility - location problems .corr abs/1007.3611 ( 2010 ) j. byrka , b. rybicki : improved lp - rounding approximation algorithm for k - level uncapacitated facility location .icalp ( 1 ) 2012 : 157 - 169 m. charikar , s. khuller , d. mount , g. narasimhan : algorithms for facility location problems with outliers .soda 2001 : 642 - 651 f. chudak , d. shmoys : improved approximation algorithms for the uncapacitated facility location problem .siam j. comput .33(1 ) : 1 - 25 ( 2003 ) n. garg , g. konjevod , r. ravi : a polylogarithmic approximation algorithm for the group steiner tree problem . soda 1998 : 253259 j. geunes , r. levi , h. romeijn , d. 
shmoys : approximation algorithms for supply chain planning and logistics problems with market choice . math . program .130(1 ) : 85 - 106 ( 2011 ) s. guha , s. khuller : greedy strikes back : improved facility location algorithms .j. algorithms 31(1 ) : 228 - 248 ( 1999 ) k. jain , m. mahdian , e. markakis , a. saberi , v. vazirani : greedy facility location algorithms analyzed using dual fitting with factor - revealing lp .j. acm 50(6 ) : 795 - 824 ( 2003 ) r. krishnaswamy , m. sviridenko : inapproximability of the multi - level uncapacitated facility location problem .soda 2012 : 718 - 734 s. li : a 1.488 approximation algorithm for the uncapacitated facility location problem .comput . 222 : 45 - 58 ( 2013 ) y. li , d. du , n. xiu , d. xu : improved approximation algorithm for the facility location problems with linear / submodular penalties .cocoon 2013 : 292 - 303 .d. shmoys , e. tardos , k. aardal .approximation algorithms for facility location problems ( extended abstract ) .stoc 1997 : 265 - 274 .m. sviridenko : an improved approximation algorithm for the metric uncapacitated facility location problem .ipco 2002 : 240 - 257 g. xu , j. xu : an lp rounding algorithm for approximating uncapacitated facility location problem with penalties .94(3 ) : 119 - 123 ( 2005 ) g. xu , j. xu : an improved approximation algorithm for uncapacitated facility location problem with penalties. j. comb .17(4 ) : 424 - 436 ( 2009 ) j. zhang : approximating the two - level facility location problem via a quasi - greedy approach .math . program .108(1 ) : 159 - 176 ( 2006 )the idea is to construct a graph in which each facility on level is connected with exactly one facility on level .we will describe in a few words how to do it , but the best idea is to read section 2 in .let and be the set of facilities before and after modification respectively . for the highest level nothing change which means .for each facility we have copies each connected with a different facility in .the cardinality of set is equal to . in general : for each set has copies of each element in set and each copy is connected with a different element in the set .note that there is an optimal integral solution with the form of a forest .so we do not lose anything important for this optimal solution by modifying the graph in a way described above .lets consider set of paths which start in client and end in the root of a tree .we say that client has flow of value to tree if the total value of paths in set is equal to .byrka et al . in the following definition of function which is a lower bound for the probability of at least one path of a tree will be open as a result of rounding procedure on that tree .we use to denote , similar for . it is a product of the probability of opening the root node , and the ( recursively bounded ) probability that at least one of the subtrees has an open path , conditioned on the root being open .now we are ready to give a function to bound the probability of opening at least one path when we have flow from one client to more than one tree .let , which is one minus the biggest chance that no tree gives a route from the root to a leaf , using the previously defined function to express the success probability on a single tree .now we can upper bound the expected connection and penalty cost of single algorithm . 
as it was proved in lemma [ worst_case ]the worst case scenario is for client which is not a cluster center , so to upper bound the expected connection and penalty cost we can concentrate on clients from .moreover from lemma [ g_0 ] we can suppose that .the value of is a chance that at least one facility will be open in the set of close facilities . expresses the chance that at least one distant facility of the considered client is open , but all close facilities are closed .the remaining is the probability of connecting the considered client to the open facility by its cluster center .the cost of this connection is bounded in lemma [ cluster_close_distance ] .suppose is the cluster center of .p_c \cdot d_{av}^{c}(j ) + p_d \cdot d_{av}^{d}(j ) + p_s \cdot ( \gamma d_{av}(j ) + ( 3 - \gamma)d_{max}(j)))\ ] ] you can find the justification for above inequalities in . summing over all clients we get the lemma .
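The per-tree rounding of step 4 can be illustrated with the following simplified, marginal-preserving scheme: open the root of a tree with probability equal to its fractional opening and, conditionally on a node being open, open each child with the ratio of the two openings (well defined because the LP forces a facility to be open no more than its higher-level facility). This reproduces P[v open] = y_v, but it is not the exact token-passing procedure referred to above, which in addition guarantees exactly one open path for each cluster centre.

```python
import random

def round_tree(children, y, root, rng=random):
    """Simplified dependent rounding on one tree of the LP forest.
    children: dict mapping a node to the list of its children,
    y: fractional openings with y[child] <= y[parent]."""
    open_set = set()
    stack = [(root, y[root])]
    while stack:
        v, prob = stack.pop()
        if rng.random() < prob:
            open_set.add(v)
            stack.extend((c, y[c] / y[v]) for c in children.get(v, []))
    return open_set

# empirical check of the marginals on a toy three-level tree
children = {0: [1, 2], 1: [3]}
y = {0: 0.9, 1: 0.6, 2: 0.5, 3: 0.3}
hits, trials = {v: 0 for v in y}, 100000
for _ in range(trials):
    for v in round_tree(children, y, root=0):
        hits[v] += 1
print({v: round(hits[v] / trials, 2) for v in y})   # close to y[v]
```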
the state of the art in approximation algorithms for facility location problems consists of complicated combinations of various techniques . in particular , the currently best 1.488-approximation algorithm for the uncapacitated facility location ( ufl ) problem by shi li is obtained by a non - trivial randomization of a certain scaling parameter in the lp - rounding algorithm by chudak and shmoys , combined with a primal - dual algorithm of jain et al . in this paper we first give a simple interpretation of this randomization process in terms of solving an auxiliary ( factor - revealing ) lp . _ then , armed with this simple viewpoint , we exercise the randomization on a more complicated algorithm for the k - level version of the problem with penalties , in which the planner has the option to pay a penalty instead of connecting chosen clients , and obtain an improved approximation algorithm . _
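The factor-revealing view mentioned in the abstract can be illustrated on a toy case. Suppose each candidate value of the scaling factor yields an algorithm whose cost is bounded by a linear function alpha_l*F* + beta_l*C* of two instance parameters (here taken to be the fractional facility and connection costs); choosing value l with probability p_l, the worst coefficient of the mixture is minimised by a small linear program. The LP actually used in the paper involves more instance parameters, so the numbers below are purely illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def best_mixture(alpha, beta):
    """Minimise max( sum_l p_l*alpha_l , sum_l p_l*beta_l ) over probability
    vectors p.  Variables are (p_1, ..., p_m, t); the objective is t."""
    m = len(alpha)
    c = np.zeros(m + 1)
    c[-1] = 1.0
    A_ub = np.array([list(alpha) + [-1.0], list(beta) + [-1.0]])
    b_ub = np.zeros(2)
    A_eq = np.array([[1.0] * m + [0.0]])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[:m], res.x[-1]

# purely hypothetical bifactor guarantees for three values of the scaling factor
probs, worst = best_mixture(alpha=[1.5, 1.7, 2.0], beta=[1.9, 1.6, 1.4])
print(probs, worst)
```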
all the simplest paths of a given network were calculated in the dual space by converting the networks from the primal to the dual representation , where straight lines are mapped into nodes and the intersection between straight lines were mapped into edges .straight lines are found by using a version of the _ icn _ ( intersection continuity negotiation ) algorithm .more specifically , given an edge , we search among the adjacent edges attached to , , that one that is most aligned to . if the angle between and is smaller or equal to , we assume that these two edges belong to the same straight line .this procedure continues until no more edges are assigned to the same straight line .then , the procedure is repeated in opposite direction starting from the adjacent edges attached to node . once assigned to a straight line, an edge is removed from the network .as it is , this algorithm produces different networks depending on the choice for the initial edge . to overcome this ambiguity ,our algorithm always starts with the edge that give us the longest straight line for a given network .after this straight line is fully detected and its edges deleted , we choose the next edge that will give us the second longest straight line and so on .the algorithm ends when there are no more edges left in the network .once all straight lines have been identified , the dual representation is built by looking at the intersection between straight lines .each straight line is mapped onto a node in the dual space and two nodes are connected together if their respective straight lines intersect each other at least once . to illustrate this process, we show in fig [ fig : illus](b,1 ) an example of planar network in the primal representation where the edges are colored according the of the straight line they belong to . in fig .[ fig : illus](b,4 ) we show the dual representation of the same network .it is important to note that the longest straight lines , in this example represented by orange , red , green and magenta give rise to hubs in the dual space . in order to calculate the simplest path between nodes 1 and 2 from fig .[ fig : illus](b,1 ) , we search for the shortest path between their respective straight lines in the dual space , cyan and blue in this case .as it can be seen , there are two paths with length 4 , and .each of them define a subgraph in the primal representation , here represented by the set of magenta lines in fig .[ fig : illus](b,2 ) and red lines in fig .[ fig : illus](b,3 ) for paths a and b , respectively . then we evaluate the shortest path distance between nodes 1 and 2 over these subgraphs and we adopted the shortest one as the simplest path - green dashed in fig .[ fig : illus](b,3 ) . the black path in figs .[ fig : illus](c , d ) represent the shortest path between nodes 1 and 2 .the gini coefficient quantifies the inequalities of the lengths of straight lines , and is defined as in where is the average length of straight lines and is the number of straight lines .the gini coefficient lies in the range $ ] and when all lengths are equal . on the other hand ,if all lengths but one are very small , the gini coefficient will be close to .we thank prof . a. perna for the dataset of the leaf _ hymenanthera chathamica _ and prof .a. adamatzky for providing us the _ physarum policephalum_. mb thanks h. berestycki and m. 
gribaudi for interesting discussions . mb acknowledges funding from the eu commission through project eunoia ( fp7-dg.connect-318367 ) . mpv , es , pb and mb designed and performed the research and wrote the paper . the authors declare no competing financial interests . strano , e. , et al . urban street networks : a comparative analysis of ten european cities . arxiv:1211.0259 . barthelemy , m. , bordin , p. , berestycki , h. & gribaudi , m. self - organization versus top - down planning in the evolution of a city . sci . rep . * 3 * , 2153 ( 2013 ) .
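To accompany the methods described above, here is a minimal sketch of the dual (straight-line) representation, the turn count of a path, and the Gini coefficient of the straight-line lengths. The ICN-like detection of straight lines is not reproduced; the primal graph is assumed to already carry a 'line' attribute on every edge naming the straight line it belongs to (a hypothetical attribute used only for this sketch).

```python
import itertools
import networkx as nx

def dual_graph(G):
    """Dual representation: one node per straight line, an edge whenever two
    straight lines meet at a primal node."""
    lines_at_node = {}
    D = nx.Graph()
    for u, v, data in G.edges(data=True):
        line = data["line"]
        D.add_node(line)
        lines_at_node.setdefault(u, set()).add(line)
        lines_at_node.setdefault(v, set()).add(line)
    for lines in lines_at_node.values():
        D.add_edges_from(itertools.combinations(lines, 2))
    return D

def turns(G, path):
    """Number of switches from one straight line to another along a primal path."""
    lines = [G.edges[path[k], path[k + 1]]["line"] for k in range(len(path) - 1)]
    return sum(a != b for a, b in zip(lines, lines[1:]))

def gini(lengths):
    """Gini coefficient of the straight-line lengths (0 when all are equal)."""
    x = sorted(lengths)
    n = len(x)
    return sum((2 * (k + 1) - n - 1) * v for k, v in enumerate(x)) / (n * sum(x))
```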
shortest paths are not always simple . in planar networks , they can be very different from those with the smallest number of turns - the simplest paths . the statistical comparison of the lengths of the shortest and simplest paths provides a non trivial and non local information about the spatial organization of these graphs . we define the simplicity index as the average ratio of these lengths and the simplicity profile characterizes the simplicity at different scales . we measure these metrics on artificial ( roads , highways , railways ) and natural networks ( leaves , slime mould , insect wings ) and show that there are fundamental differences in the organization of urban and biological systems , related to their function , navigation or distribution : straight lines are organized hierarchically in biological cases , and have random lengths and locations in urban systems . in the case of time evolving networks , the simplicity is able to reveal important structural changes during their evolution . a planar network is a graph that can be drawn on the two - dimensional plane such that no edges cross each other . planar graphs pervade many aspects of science : they are the subject of numerous studies in graph theory , in combinatorics and in quantum gravity . planar graphs are also central in biology where they can be used to describe veination patterns of leaves or insect wings . in particular , the vasculature of leaves displays an interesting architecture with many loops at different scales , while in insects , the vascular network brings strength and flexibility to their wings . in city science , planar networks are extensively used to represent , to a good approximation , various infrastructure networks . in particular , transportation networks and more recently streets patterns are the subject of many studies that are trying to characterize both topological ( degree distribution , clustering , etc . ) and geometrical ( angles , segment length , face area distribution , etc . ) aspects of these networks . despite a large number of studies on planar networks , there is still a lack of global , high - level metrics allowing to characterize their structure and geometrical patterns . such a characterization is however difficult to achieve and in this article , we will discuss an important aspect of planar graphs which is intimately connected to their geometrical organization . in this respect , we will define new metrics and test them on various datasets , both artificial ( roads , highways , railways , and supply networks ) and natural ( veination patterns of leaves and wings , slime mould ) enabling us to obtain new information about the structure of these networks . we will now introduce the main metrics used in this article . generally speaking , we can define different types of paths for a given pair of nodes . a usual quantity is the shortest euclidean path of length which minimizes the distance travelled to go from to . we can however ask for another path which minimizes the number of turns - the simplest path , of length ( if there are more than one such path we choose the shortest one ) . fig . [ fig : illus]a displays an example of the shortest and simplest path for a given pair of nodes on the oxford ( uk ) street network . to identify the simplest path , we first convert the graph from the primal to the dual representation , where each node corresponds to a straight line in the primal graph . 
these straight lines are determined by a continuity negotiation - like algorithm , as described in material and methods . edges in dual space , in turn , represent the intersection of straight lines in the primal graph ( see fig . [ fig : illus]b ) . we define the number of turns of a given path as the number of switches from one straight line to another when walking along this path . this quantity is intimately related to the amount of information required to move along the path . we have computed the probability distribution for all shortest and simplest paths of several networks ( see si ) and the results show that this distribution is usually centered around a smaller values for the simplest paths than for the shortest paths , as expected . more generally , we show in the supplementary information that the average number of turns versus the number of nodes indeed displays a small - world type behavior characterized by a slow logarithmic increase with , consistently with previous analysis of the dual network . this feature is thus not very useful to distinguish different networks and shows that the distribution of the number of turns is a very partial information and tells very little about the spatial structure of the simplest paths . for navigation purposes ( neglecting all congestion effects ) and in order to understand the structure of the network , it is useful to compare the lengths of the shortest and the simplest paths with the ratio . it is then natural to introduce the _ simplicity index _ as the average the simplicity index is larger than one and exactly equal to one for a regular square lattice and any tree - like network for example . large values of indicate that the simplest paths are on average much longer than the shortest ones , and that the network is not easily navigable . we note here that we do not take into account congestion effects which can influence the path choice ( see for example ) . this new metric is a first indication about the spatial structure of simplest paths but mixes various scales , and in order to obtain a more detailed information , we define the _ simplicity profile _ where is the euclidean distance between and and where is the number of pairs of nodes at euclidean distance . this quantity is larger than one and its variation with informs us about the large scale structure of these graphs . we can draw a generic shape of this profile : for small , we are the scale of nearest neighbors and there is a large probability that the simplest and shortest paths have the same length , yielding , and increasing for small . for very large , it is almost always beneficial to take long straight lines when they exist , thus reducing the difference between the simplest and the shortest paths . as a result we expect to decrease when ( note that a similar behavior is observed for another quantity , the route - length efficiency , introduced in ) . the simplicity profile will then display in general at least one maximum at an intermediate scale for which the length differences between the shortest and the simplest path is maximum . the length thus represents the typical size of domains not crossed by long straight lines . at this intermediate scale , the detour needed to find long straight lines for the simplest paths is very large . we finally note here that these indices are actually not limited to planar networks but to all networks for which the notion of straight lines has a sense and can be computed . 
this would be the case for example for spatial networks which are not perfectly planar . we introduce a null model in order to provide a simple benchmark to further analyze the results obtained by these new metrics ( the expression null model should be understood here in the sense of the benchmark and not in the usual statistical definition ) . the goal in this study is to compare empirical results with a very simple model based on a minimal number of assumptions , but we note that it would be also interesting to compare various models generating planar networks . we start with points randomly distributed in the plane and construct the voronoi graph ( see the supplementary information for further details ) . we then add a tunable number of straight lines of length distributed according to . examples of networks generated by this model as well as many results are shown in the si . we first study static networks ( see fig . [ fig : all ] ) such as the streets of cities ( bologna , italy ; oxford , uk ; nantes , france ) , the national highway network of australia , the national uk railway system , and the water supply network of central nantes ( france ) . in the case of biological networks , we study the veination patterns of leaves ( _ ilex aquifolium _ and _ hymenanthera chatamica _ ) , and of a dragongly wing . details on these datasets can be found in the si . we also consider three datasets describing the time evolution of networks at different scales ( see fig . [ fig : evolution ] ) : at a small scale and in the biological realm we study the evolution of a slime mould network . at the city scale we present results on the road network of paris ( france ) from 1789 until now . paris was largely transformed by a central authority ( the prefect haussmann under napoleon iii ) in the middle of the 19th century and the dataset studied here displays the network before and after these important transformations , offering the possibility to study quantitatively the effect of top - down planning . at the multi - town level , we study the road network of the groane area ( italy ) ( see the si for details on these datasets ) . these networks allow us to explore different systems at very different scales from ( slime mould ) to ( australian highways ) meters . we compute the simplicity index for the various datasets and for the null model as well . the results are shown in fig . [ fig : svalues ] as a function of the density of straight lines and the gini coefficient for the length of straight lines ( see material and methods for details ) . the density of straight lines is defined as the ratio of total length of straight lines ( see fig . si2 ) , over the total length of the network , and is an indicator of the diversity of the length of straight lines . the first observation from fig . [ fig : svalues ] is that the simplicity index encodes information which is neither contained in the density nor in the gini coefficient , and reveals how the straight lines are distributed in space and participate in the flows on the network . in fig . [ fig : svalues]a , we observe that the density of straight lines is always larger for urban systems . more precisely , in the biological systems the density lies in the range ] , which can be attributed to the construction of large avenues connecting important nodes of the city . 
in addition , we observe the surprising effect that at large scales , the simplicity is degraded by haussmann s work : this however could be an artifact of the method and the fact that we considered a portion of paris only and neglected the effect of surroundings . finally , we note that differences between groane and paris might be explained in terms of a sparse , polycentric urban settlement ( groane ) versus a dense one ( paris ) . in particular , in the ` urban ' phase for groane ( after 1955 ) , the simplicity profile becomes similar to the one of a dense urban area such as paris . finally , we show the results in fig . [ fig : evolution](c ) for the _ physarum policephalum _ , a biological system evolving at the centimeter scale . _ physarum _ is a unicellular multinucleated amoeboid that during its vegetative state takes a complex shape . its plasmodium viscous body whose goal is to find and connect to food sources , crystallizes in a planar network - like structure of micro - tubes . in simple terms , physarum s foraging strategy can be summarized in two phases : i ) the exploration phase in which it grows and reacts to the environment and ii ) the crystallization phase in which it connects to food sources with micro - tubes . we inoculated active plasmodium over a single food source and observe the micro - tube network at six phases of its growth ( see si for details ) . under these conditions , we observe that the network is statistically isotropic around the food source as shown in fig . [ fig : evolution](c ) and develops essentially radially . we first observe that the simplicity profile for the physarum is relatively low ( less than ) , suggesting that simplicity could be an important factor in the evolution of this organism . a closer observation shows that during its evolution , the physarum adds new links to the previous network and also modifies the network on a larger scale , as revealed by the changes of the simplicity profile . the evolution of the profile is similar to the one obtained for the null model when the density is increased ( see si ) , suggesting that the statistics of straight lines in this case could be described as essentially resulting from the random addition of straight lines of random lengths ( with ) . we have shown that the new metrics introduced here encode in a useful way both topological and geometrical information about the global structure of planar graphs . in particular , our results highlight the structural differences between biological and artificial networks . in the former , we have a clear spatial organization of straight lines , with a clear hierarchy of lines ( midrib , veins , etc ) , leading to simplest paths that require a very small number of turns but at the cost of large detours . in contrast , there is no such strong spatial organization in urban systems , where the simplicity is usually smaller and comparable to a null model with straight lines of random length and location . these differences between biological and urban systems might be related to the different functions of these networks : biological networks are mainly distribution networks serving the purpose of providing important fluids and materials . in contrast , the role of road networks is not only to distribute goods but to enable individuals to move from one point of the city to another . 
in addition , while biological networks are usually the result of a single process , urban systems are the product of a more complex evolution corresponding to different needs and technologies . these new metrics also allow us to track important structural changes of these networks . the simplicity profile thus appears as a useful tool which could provide a quantitative classification of planar graphs and could help in constructing a typology of leaves or street patterns for example .
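For completeness, the following sketch shows how the simplicity index and the simplicity profile defined above can be evaluated once shortest-path and simplest-path lengths are available. The simplest-path lengths are assumed to be precomputed (for example via the dual representation sketched earlier); the binning of euclidean distances is an arbitrary illustrative choice.

```python
import numpy as np
import networkx as nx

def simplicity(G, pos, simplest_len, nbins=20):
    """Simplicity index S and profile S(d).  simplest_len[(u, v)] must hold the
    simplest-path length for each sampled node pair; shortest-path lengths use
    the edge attribute 'length' and d is the euclidean distance between the
    endpoints (pos[u] = (x, y))."""
    shortest = dict(nx.all_pairs_dijkstra_path_length(G, weight="length"))
    ratios, dists = [], []
    for (u, v), ls in simplest_len.items():
        ratios.append(ls / shortest[u][v])
        du = np.asarray(pos[u]) - np.asarray(pos[v])
        dists.append(np.hypot(du[0], du[1]))
    ratios, dists = np.asarray(ratios), np.asarray(dists)
    index = ratios.mean()
    bins = np.linspace(0.0, dists.max(), nbins + 1)
    profile = []
    for a, b in zip(bins[:-1], bins[1:]):
        sel = (dists >= a) & (dists < b)
        profile.append(ratios[sel].mean() if sel.any() else np.nan)
    return index, bins, np.asarray(profile)
```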
modern society features an increasing degree of interaction between cultures ( `` cultural contact '' ) owing to , e.g. , communication technologies , immigration and other socio - political forces . in many countries cultural contactis perceived as both an opportunity and a threat and is related to important public issues such as immigration management and the need to `` protect '' national culture .our understanding of these phenomena is , however , limited : we can not predict the outcome of cultural contact , nor make plausible conjectures that can be used in policy making . within this context ,the aim of this paper is twofold : we first describe a general mathematical framework for modeling social interactions , then we make specific assumptions relevant to studying immigration , i.e. , social contact between two groups that , typically , differ both in culture and relative size .for simplicity , we focus on a single _ cultural trait _ , which may represent an idea , an opinion or a behavior that has two mutually exclusive forms .a useful example to keep in mind is being in favor or against an issue such as the death penalty , or any other issue that might be the subject of a yes / no referendum vote .our framework allows to consider multiple traits without conceptual differences , although model analysis may in general be much more difficult .we consider a population of individuals , labeled by an index ranging from to .we associate to the -th individual the variable , which may take values or representing the two possible trait values .for instance , might represent a yes vote in a referendum , and a no vote .the state of the whole population is thus encoded in an array of numbers , such as .the hallmark of social interactions is that individuals may change their opinions or behavior owing to interactions with others .a given couple can in principle be in one of the four states , , and , but these outcomes , in general , do not have the same probability .which one is more likely will depend on the characteristics of individuals such as their culture and personality .our starting assumption is that individuals have no particular bias towards or opinions : what matters most is whether , by adopting one or the other value , an individual is in agreement or disagreement with others .there are two reasons for this assumptions .first , social psychologists have shown that , in most cultures , agreement or disagreement with others is a powerful determinant of individual opinions and behavior , often more important than holding a particular opinion ; we will expand on this point in our model of immigration below .second , our framework allows to introduce biases that favor a particular trait value , if needed .indeed , any model in which individuals are biased can be recast as a model with unbiased individuals , plus an additional `` force '' that orients individual opinions .thus our starting assumption of unbiased individuals does not reduce the generality of the framework .again , we will make a specific example for the case of immigration below . to formalize these notions, we assume that individuals take on the trait that minimizes a _cost function_. 
we define the cost for individual to agree or disagree with individual as where is a number that summarizes the nature of the interaction between and , as follows .when and agree ( ) we have a cost , while when and disagree ( ) we have .thus whether agreement or disagreement carries the lesser cost depends on the sign of : favors agreement while favors disagreement .the magnitude of gives how important it is for to agree or disagree with .if , for instance , then it is more important for to agree with than with , while means that agreement with is not relevant to .the signs and magnitudes of the s become important when we consider that an individual interacts with many others . in this case , we assume that the costs relative to each interaction sum up to a total cost as anticipated above , we can take into account additional factors that may influence individuals modifying equation ( [ eq : hi ] ) as follows : meaning that individual is subject to an additional `` force '' that favors if and if .the quantity may represent any factor that is not explicitly taken into account by the direct interaction with other individuals .for instance , it may summarize the influence of media , government campaigns or existing culture ( see below ) .we can now write a population - level cost function as the sum of individual cost functions : we stress that the cost function is a theoretical computational aid to track which trait values are favored by the interactions and the external forces .we do not assume that individuals explicitly compute or are aware of such costs .rather , should be designed so that its minima correspond to those trait values that are favored by the mechanisms with which individuals interact .once a cost function has been specified , it is possible to calculate population level quantities such as the average trait value using the methods of statistical mechanics , a branch of theoretical physics .statistical mechanics was originally concerned with deriving the laws of thermodynamics from the behaviour of atoms and molecules , but can actually be applied to understand the macroscopic ( population level ) properties of any system composed of many parts that interact according to given microscopic ( individual level ) rules .more recently its methods have found application in fields as diverse as biology , neuroscience , economy and finance and also social science .the starting point is to assign to each system configuration a probability according to the boltzmann - gibbs formula where the sum runs over all possible configurations of the system . by means of ( [ eq : bg ] ) a given configurationis considered more or less likely according to whether it is more or less costly : a low value of results in a high probability of , and vice - versa .assigning probabilities based on a given cost function is the heart of statistical mechanics and is inspired by the principles of thermodynamics ( see the appendix for a short discussion , and , for a fuller treatment ) . 
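The following sketch makes the cost function and the Boltzmann-Gibbs weights concrete for a system small enough to enumerate exactly. Group-dependent interactions J_ab, individual fields h_i and the 1/N scaling follow the description above; the inverse temperature is set to one and the overall normalisation of the cost is an illustrative choice.

```python
import itertools
import numpy as np

def average_trait(J, h, groups):
    """Exact Boltzmann-Gibbs average of m = (1/N) sum_i s_i for a small system.
    J[a][b]: interaction between groups a and b (mean-field: it depends only on
    group membership), h[i]: external 'cultural force' on individual i,
    groups[i] in {0, 1}.  Interactions carry the 1/N scaling; beta = 1."""
    N = len(groups)
    Z = m_avg = 0.0
    for s in itertools.product((-1, 1), repeat=N):
        s = np.asarray(s)
        cost = -sum(J[groups[i]][groups[j]] * s[i] * s[j]
                    for i in range(N) for j in range(i + 1, N)) / N
        cost -= np.dot(h, s)
        w = np.exp(-cost)                 # Boltzmann-Gibbs weight
        Z += w
        m_avg += w * s.mean()
    return m_avg / Z

# toy example: 6 residents biased towards +1, 2 immigrants biased towards -1
groups = [0] * 6 + [1] * 2
h = np.array([0.3] * 6 + [-0.3] * 2)
J = [[1.0, 1.0], [1.0, 1.0]]              # uniform interaction strength
print(average_trait(J, h, groups))
```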
once a probability is assigned to system configurations , it is possible to compute the expected values of quantities of interest and to relate them to the parameters that describe the system .for instance the average cultural trait defined by would have an expected value given by note that , while is the average trait value in a given configuration , is the average trait value over all possible system configurations , each one weighted according to its probability of occurrence .these probabilities depend on the cost function and thus on the parameters that appear in its expression , i.e. , the s and s . rather than directly attempting to computeexpected values such as ( [ eq : av ] ) , statistical mechanics aims to compute the so - called _ free energy _ of a system , defined as the rationale for this strategy is that important quantities such as ( [ eq : av ] ) can be easily computed from knowledge of the free energy function , typically by taking suitable derivatives with respect to system parameters ( see appendix ) .the basic task of statistical mechanics is thus , after the cost function has been specified , to calculate the free energy function for a given system .we stress that _ the form of the cost function is not given by statistical mechanics ; rather , it is the outcome of a modeling effort relative to a specific problem ._ we now make an example of how one may proceed .we illustrate here the potentials of our framework considering the impact of immigration on culture .we consider a large and a small population , which will be referred to , respectively , as _ residents _ ( ) and _ immigrants _ ( ) .we let be the number of residents , and of immigrants , with and the total number of individuals .we are interested in how cultural contact changes the average trait values in the two populations , with the aim of understanding the effect of one culture upon the other .our main assumption regarding how residents and immigrants interact is that people , generally speaking , tend to agree with those who are perceived as similar to oneself and to disagree with those perceived as different . 
in social psychologythis is known as the _ similarity - attraction _ hypothesis .it has received ample support , although the details of how we interact with others often depend on social context .we consider this assumption a general guideline , and in modeling a specific case it can be modified without difficulty .we formalize the similarity - attraction hypothesis by assuming that high perceived similarity corresponds to positive values of , and low perceived similarity to negative values .since residents and immigrants have generally different cultures , we may assume the following structure for the interaction coefficients .we let the interaction between any two residents be ; the similarity - attraction hypothesis suggests that this be a positive number , whose magnitude reflects how strongly residents prefer to agree among themselves .likewise , we let the interactions between immigrants be .the mutual interactions and should model whether residents prefer to agree or disagree with immigrants , and vice - versa , and how strongly so .if resident and immigrant cultures are very different , the similarity - attraction hypothesis suggests to take both and as negative , but the best choice of values depends on the specific case one intends to model .note that we are assuming that depends only on population membership and not on the particular individuals and ( the so - called _ mean field _ assumption in statistical mechanics ) .this assumption greatly simplifies mathematical analysis but is not wholly realistic .it can capture the average variation in interactions across population but not the variation that exists within each population .for instance , a more realistic assumption would be to take the s as random variables whose mean and variance depend on population membership .we plan to return on that model ( which would represent the two - population generalization of the sherrington - kirkpatrick model in statistical mechanics , ) in future studies .when modeling interactions , a technical requirement is that the value of the cost function be proportional to total population size .this guarantees that the free - energy function and important quantities such as average trait value , equation ( [ eq : av ] ) , scale appropriately with . in our casethe appropriate scaling is , hence the interactions are : before the two populations start to interact , residents and immigrants are each characterized by a given average trait value , say and , respectively .we consider and as experimental data about the beliefs or behavior of each population , which could be obtained from , say , a referendum vote on a particular issue ( e.g. , the death penalty ) or from statistical sampling of the population . that a population is characterized by a given average value means that the two forms of the trait are not equally common .specifically , the individuals with the form are , while individuals have the form .pre - existing culture , in other words , is like a bias or force that favors one trait value over the other .for modeling purposes , it is convenient to describe pre - existing culture as a `` cultural force '' that acts to orient the opinion of otherwise unbiased individuals. this is possible including a force term in the cost function , as shown in ( [ eq : hh ] ) . by standard methods of statistical mechanics ( see appendix ) it is possible to show that the force term corresponding to a particular average opinion is where is the inverse hyperbolic tangent function . 
to summarize , a model in which individuals are biased so that the average opinion is is equivalent to a model with unbiased individuals subject to a force given by ( [ eq : h ] ) .so far we have specified interaction terms to model cultural contact between two populations and we have introduced equation ( [ eq : h ] ) to represent the pre - existing culture in the two populations .the next step is to compute the average trait values and in the two populations after immigration has taken place . the same method that allows to derive equation ( [ eq : h ] ) enables us to derive the following equations for and ( see appendix ) : [ eq : m1m2 ] where is the fraction of immigrants in the total population and is the hyperbolic tangent function .the values of and predicted by ( [ eq : m1m2 ] ) depend of course on values of the and parameters , and on .we give here a qualitative description of the different regimes that one can observe varying these parameters .we refer to for a proof of the following statements , in the context of an analogous model from condensed matter physics .the two key parameters are , the fraction of immigrants , and the overall scale of the interactions , which we label .if is below a critical value , equation ( [ eq : m1m2 ] ) has always one pair of solutions , for all values of . in this casethe two populations are essentially merging into a homogeneous one , with average cultural trait in between the two initial ones more toward one or the other according to the value of .this regime is not surprising and corresponds to the nave prediction that one could have made a priori without applying statistical mechanics .if the interaction scale is large ( ) , however , model predictions are highly non - trivial , suggesting that the outcome of cultural contact can be surprising . depending on there are two critical values for : and that delimit qualitatively different behavior . for resident culture dominates dominant and the immigrant culture disappears , i.e. , is close to irrespective of the initial value .the converse happens when , i.e. , immigrant culture dominates .the most interesting case occurs when . in this regime ( [ eq : m1m2 ] ) has two distinct solutions in which either of the two cultures dominates .that is , both cultures may survive the immigration process , generally with a different probability determined by system parameters .the parameter values that favor the resident or immigrant culture , have still to be worked out and will be the topic of future work . herewe analyze the case in which the intensity of the interactions is uniform both within and between groups , .this is interpreted as two groups that do not really discriminate between themselves , so that disagreement with any particular individual carries the same cost independent of which group the individual belongs to .we assume , however , that the two groups have initially a very different average trait value : and . in figure[ fig : jalpha ] we explore this system by plotting the average trait value after the interaction , , for between 0 and 10% and for . for ( no interaction ) is simply the weighted sum of pre - existing trait values , , where each group contributes according to its size . 
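A numerical sketch of how self-consistent equations of this type can be solved is given below. The exact form of the coupled equations, and in particular the weighting of the interactions by the group fractions, is an assumption written to be consistent with the description above (the paper's own derivation is in the appendix); the forces are taken to be the inverse hyperbolic tangents of the pre-existing averages, ignoring any interaction correction.

import numpy as np

# Hedged sketch: fixed-point iteration for coupled mean-field equations of the type
# described in the text (the exact weighting by the group fractions is an assumption):
#   m1 = tanh((1 - a) * J11 * m1 + a * J12 * m2 + h1)
#   m2 = tanh((1 - a) * J12 * m1 + a * J22 * m2 + h2)
# where a is the immigrant fraction and h1, h2 encode the pre-existing cultures.
def solve_m(J11, J22, J12, a, h1, h2, m_start=(0.1, -0.1), tol=1e-12, max_iter=100000):
    m1, m2 = m_start
    for _ in range(max_iter):
        new1 = np.tanh((1 - a) * J11 * m1 + a * J12 * m2 + h1)
        new2 = np.tanh((1 - a) * J12 * m1 + a * J22 * m2 + h2)
        if abs(new1 - m1) + abs(new2 - m2) < tol:
            break
        m1, m2 = new1, new2
    return m1, m2

# Pre-existing averages m1* = 0.5 and m2* = -0.5, encoded as forces h = atanh(m*)
h1, h2 = np.arctanh(0.5), np.arctanh(-0.5)
print(solve_m(J11=1.0, J22=1.0, J12=1.0, a=0.05, h1=h1, h2=h2))

Different starting points can converge to different solutions when the interaction scale is large, which is how the coexistence of two locally stable outcomes shows up numerically. For zero interactions the iteration simply returns the weighted combination of the pre-existing trait values mentioned above.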
as a function of , this is a straight line .as the interaction increases the line slowly bends and for higher values of we see a slight exaggeration of the pre - existing opinion ( the surface in figure [ fig : jalpha ] rises slightly over the level ) .when crosses a critical value , however , a dramatic phenomenon occurs : the population undergoes a sudden change of opinion , switching to a value of that is closer to , and indeed exaggerates , the initial value in the immigrant population , .note that this sudden transition occurs for all values of , i.e. , irrespective of the proportion of immigrants .the solution with closer to is still available to the system ( not plotted in figure[fig : jalpha ] ) , but as grows past it is less and less likely that the system remains in such state ( technically , for the solution with has a higher free - energy than the solution with and thus becomes metastable , allowing fluctuations to cause a transition between the two solutions ) .thus , according to this model , to prevent dramatic changes in resident culture , it would do little to restrict immigration ( the effect of is small in the graph ) .rather , one should concentrate in reducing the scale of the interaction , i.e. , the strength of attitudes within and between groups .attempts to apply mathematical - physics methods to social sciences have appeared in the litterature since the pioneering work of . in this paperwe have focused on statistical mechanics as a tool to bridge the gap from individual - level psychology and behavior to population - level outcomes .our framework prescribes that researchers build a cost function that embodies knowledge of what trait values ( opinions , behaviors , etc . )are favored by individual interactions under given social conditions . the cost function , equation ( [ eq : h ] ) ,is defined by a choice for the interactions and the fields that represent social forces influencing individual opinions and behavior .this modeling effort , of course , requires specific knowledge of the social issue to be modeled .after a cost function has been specified , the machinery of statistical mechanics can be used to compute population - level quantities and study how they depend on system parameters .we have demonstrated our framework with an attempt to understand the possible outcomes of contact between two cultures .even the simple case we studied in some detail the model suggests that cultural contact may have dramatic outcomes ( figure [ fig : jalpha ] ) . how to tailor our framework to specific cases , and what scenarios such models predict , is open to future research .we thank f. guerra for very important suggestions .i. gallo , c. giardina , s. graffi and g. menconi are acknowledged for useful discussion .amit , d. : 1989 , _ modeling brain function _ ,vol . 1 .cambridge : cambridge university press .arbib , m. a. : 2003 , _ the handbook of brain theory and neural networks_. mit press , 2 edition .bond , r. , smith , p. b. : 1996 , ` culture and conformity : a meta - analysis of studies using asch s ( 1952b,1956 ) line judgment task ' ., 111137 .bouchaud , p. , potters , m. : 2000 , _ theory of financial risks_. cambridge university press .byrne , d. : 1997 , ` an overview ( and underview ) of research and theory within the attraction paradigm ' ., 417431 .cornelius , w. a. , p. l. martin , and j. f. hollifield ( eds . ) : 1994 , _ controlling immigration : a global perspective_. stanford , ca : stanford university press .durlauf , s. n. 
: 1999 , ` how can statistical mechanics contribute to social science ? ' . , 1058210584 .galam , s. , gefen , y. , shapir , y. : 1982 , ` sociophysics : a mean field model for the process of strike ' ., 113 .givens , t. and a. luedtke : 2005 , ` european immigration policies in comparative persepctive : issue salience , partisanship and immigrant rights ' ., 122 .cohen , e.g.d . : 1973 , _ tricritical points in metamagnets and helium mixtures_. in _ fundamental problems in statistical mechanics _ , proceedings of the 1974 wageningen summer school .north - holland / american elsevier .lull , j. : 2000 , _ media , communication , culture_. cambridge , uk : polity press .mezard , m. , g. parisi , and m. a. virasoro : 1987 , _ spin glass theory and beyond_. singapore : world scientific .michinov , e. and j .- m .monteil : 2002 , ` the similarity - attraction relationship revisited : divergence between affective and behavioral facets of attraction ' ., 485500 .thompson , c. : 1979 , _ mathematical statistical mechanics_. princeton , nj : princeton university press .it is a standard result of statistical mechanics that the free energy function of a system defined by a cost function of the form is obtained for the value of that minimizes the function the minimization of this function with respect to yields the condition ( [ eq : h ] ) which relates and and the hamiltonian parameters .the structure of the free energy ( [ eq : fcw ] ) admits the standard statistical mechanics interpretation as a sum of two contributions : the internal energy ( the average of the cost function ) minus the entropy one can indeed show that the distribution function ( [ eq : bg ] ) may be deduced from the second principle of thermodynamics i.e. as the distribution for which the entropy is minimum at an assigned value of the cost function . equation ( [ eq : m1m2 ] )is obtained similarly from the representation of the free energy of the two population system as the minimum of the function \\ & + { \alpha}[+\frac{1+m_2}{2}\log\frac{1+m_2}{2}+\frac{1-m_2}{2}\log\frac{1-m_2}{2 } ] \end{array}\ ] ] the minimum condition yields ( [ eq : m1m2 ] ) .
We introduce a general modeling framework to predict the population-level outcomes of individual psychology and behavior. The framework prescribes that researchers build a cost function that embodies knowledge of which trait values (opinions, behaviors, etc.) are favored by individual interactions under given social conditions. Predictions at the population level are then drawn using methods from statistical mechanics, a branch of theoretical physics born to link the microscopic and macroscopic behavior of physical systems. We demonstrate our approach by building a model of cultural contact between two cultures (e.g., immigration), showing that it is possible to make predictions about how contact changes the two cultures.
different viral types have phylogenetic trees exhibiting different branching properties , with influenza and hiv being two extreme examples . in the influenza treea single type dominates for a long time with other types dying out quickly until suddenly a new type completely takes over and the old type dies out .the hiv phylogeny is the complete opposite , with a large number of co existing types . in stochastic model is described and depending on the choice of parameters it can exhibit both types of dynamics .we briefly describe the model after .we only keep track of the number of different viral types at each time point .let denote the number of distinct viral types at time . in the nomenclature of phylogenetics counts the number of different species alive at time . at each time pointthe birth rate is and the death rate is . if there is only one type alive then it can not die .clearly is a markov chain with discrete state space and continuous time .each virus type is described by a fitness value that is randomly chosen at its birth . if a death event occurs the type with smallest fitness dies .this means that only the fitness ranks matter and so the exact distribution of a virus fitness will not play a role .the main result of is the asymptotic behaviour of the dominating type , whether it is expected to remain the same for long stretches of time or change often .this is summarized in theorem 1 , take .if then while if then this limit is .the proof of this theorem is based on considering successive visits to the state , in particular denote to be the ( random ) times between visits of the chain to and . in the latter random variableis represented as where are independent mean exponential random variables and are the ( independent ) hitting times of the state conditional on starting in state .descriptively is the return from state to the state of one virus type alive , since from the chain has to jump to two types .the markovian nature of the process ensures that the are independent and identically distributed for distinct . in the proof of theorem 1it is stated that the cumulative distribution function of , , satisfies , solved uniquely for , this gives the asymptotic behaviour ( lemma ) , from which the result of theorem 1 is derived when .however eq . ( [ eqligintft ] ) does not take into account that this model differs from a classical birth death model where is the absorbing state .we illustrate this in fig .we can easily re numerate the state values , but the intensity values will differ between the two models . ) would be correct .the numbers inside the circles are the states ( counting the number of virus type ) and the numbers above and below the arrows are the birth and death rates respectively of the state from which they come out.,title="fig : " ] + ) would be correct .the numbers inside the circles are the states ( counting the number of virus type ) and the numbers above and below the arrows are the birth and death rates respectively of the state from which they come out.,title="fig : " ] correcting for this difference in intensity values one will still get the same asymptotics as in eq .( [ eqlignt ] ) and hence the same result as in but with a more complicated proof .below we present a correct proof of lemma in the case of , based on lemmas [ lemrenewal ] and [ lemftasympt ] . 
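The following Gillespie-style simulation is a hedged sketch of the chain just described; since the explicit birth and death rates are not fixed here, we assume, purely for illustration, rates that are linear in the number of living types (birth rate lambda*n and death rate mu*n), with the stated modification that the last remaining type cannot die and with the least-fit type removed at every death event.

import random

# Hedged sketch of the multitype chain: linear rates are an assumption made for illustration.
def simulate(lam=1.0, mu=1.0, t_max=50.0, seed=0):
    rng = random.Random(seed)
    t, types = 0.0, [rng.random()]              # start with one type and its random fitness
    history = [(t, len(types))]
    while t < t_max:
        n = len(types)
        birth_rate = lam * n
        death_rate = mu * n if n > 1 else 0.0   # a single surviving type cannot die
        total_rate = birth_rate + death_rate
        t += rng.expovariate(total_rate)        # exponential waiting time to the next event
        if rng.random() < birth_rate / total_rate:
            types.append(rng.random())          # a new type is born with a fresh random fitness
        else:
            types.remove(min(types))            # the type with the smallest fitness dies
        history.append((t, len(types)))
    return history

print(simulate()[-3:])

Averaging such trajectories over many seeds gives an empirical view of the return times to the single-type state. With the model specified, we return to the corrected argument based on lemmas [lemrenewal] and [lemftasympt].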
from thisthe statement of theorem of follows .[ lemrenewal ] let .then solves the renewal equation in panel a of fig .[ fgbd ] we can see a representation of the studied markov chain on the state space . due to the being independent for different we can study the distribution of and treat as an absorbing state .let denote the probability of being in state at time , when one starts in state at time .the system of differential equations describing the probabilities is , with initial conditions , let denote the generating function of the sequence , i.e. taking first derivatives we get the partial differential equation , with initial conditions following and using the substitution with we arrive at , evaluating the derivative of both sides with respect to at we find that the function must satisfy the following integral equation ( we can recognize it as a renewal equation ) , [ lemftasympt ] , the solution of the renewal equation ( [ eqrenewal ] ) , has the following properties , the proof is based on tauberian theory and we refer the reader to for details on this .another approach would be to study the asymptotic behaviour of by renewal theory results ( see e.g. ) .the laplace transform of a density function , denoted is , we will use the following theorem from , theorem xiii.5 , [ tauberian ] if is slowly varying at infinity and , then each of the relations implies the other . defined as the solution to the renewal equation ( [ eqrenewal ] ) after differentiating will satisfy we calculate the laplace transforms of , and , we are interested in the behaviour of the transforms as , and for this we will use the well known property ( verifiable by the de lhspital rule ) of the exponential integral , to arrive at , as both the constant function and are slowly varying functions for the tauberian theorem allows us to conclude that , will now use lemmas [ lemrenewal ] and [ lemftasympt ] to prove lemma 3 from . define as there , with . by lemma [ lemftasympt ]we know that , we now need to check how behaves asymptotically .we do not know what is but using the tauberian theorem and eq .( [ twf ] ) from the proof of lemma [ lemftasympt ] we get that , therefore using integration by parts and eq .( [ eqt2f ] ) , and so we arrive at the rest of the proof is a direct repeat of the one in and so we get ( as in ) that tends to in probability implying theorem 1 for . if we applied the same chain of reasoning to the model of panel b in fig .[ fgbd ] starting off with the system of differential equations , the analogue of eq .( [ partdiffeq ] ) would be a homogeneous partial differential equation , with initial conditions in agreement with .it would therefore be an interesting problem to see what conditions are necessary on the nonhomogeneous part of eq .( [ partdiffeq ] ) to still get the same asymptotic behaviour of the markov chain and what underlying model properties do these conditions imply .we are grateful to wojciech bartoszek and joachim domsta for many helpful comments , insights and suggestions .k.b . was supported by the centre for theoretical biology at the university of gothenburg , stiftelsen fr vetenskaplig forskning och utbildning i matematik ( foundation for scientific research and education in mathematics ) , knut and alice wallenbergs travel fund , paul and marie berghaus fund , the royal swedish academy of sciences , wilhelm and martina lundgrens research fund and stersjsamarbete scholarship from svenska institutet ( 00507/2012 ) .
Birth-and-death models are now a common mathematical tool for describing branching patterns observed in real-world phylogenetic trees. Liggett and Schinazi (2009) is one such example. The authors propose a simple birth-and-death model that is compatible with phylogenetic trees of both influenza and HIV, depending on the birth rate parameter. An interesting special case of this model is the critical case, where the birth rate equals the death rate. This is a nontrivial situation, and to study its asymptotic behaviour we employ the Laplace transform. With this we correct the proof of Liggett and Schinazi (2009) in the critical case.
we present a numerical study of the semi - classical solutions to the following nonlinear schrdinger equations with , when a caustic ( a point or a cusp ) is formed , that is to say , beyond _ breakup time_. since the nonlinearity is homogeneous , the change of unknown function shows that is equivalent to : so that we can always consider initial data of order .there are several motivations to study the behavior of when a caustic is formed .first , on a purely academic level , we recall that the description of the caustic crossing is complete in the case of linear equations ; see . for nonlinear equations ,very interesting formal computations were proposed in ( we recall the main idea in section [ sec : analyt ] below ) . for _ dissipative _ nonlinear wave equations , joly , mtivier and rauch have proved that the amplification of the wave near the caustic can ignite the dissipation phenomenon in such a way that the oscillations ( that carry highest energy ) are absorbed .the above nonlinear schrdinger equation is the simplest model of a _ conservative , nonlinear equation_. the mass and the energy of the solution are independent of time ( see below ) .therefore , different nonlinear mechanisms are expected .we recall in section [ sec : analyt ] some results that have been established rigorously , and give heuristic arguments to extend these results .this serves as a guideline for the numerical experiments proposed after .second , may be considered as a simplified model for bose einstein condensation , which may be modeled ( see e.g. ) by : with if , and if or .the power in front of the nonlinearity depends on the rgime considered , and in particular on the respective scales of different parameters ( see e.g. and references therein ) .the role of the harmonic potential is to model a magnetic trap . in the semi - classical limit for the linear equation , this potential causes focusing at the origin for solutions whose data are independent of .this is to be compared with the case of with initial quadratic oscillations as considered below : the initial quadratic oscillations force the solution to concentrate at one point in the limit .the parallel between and was extended and justified in for these nonlinear equations . from both points of view , when a caustic point is formed , the caustic crossing may be described in terms of the scattering operator associated to this aspect is recalled in section [ sec : analyt ] . for this reason, we also pay a particular attention to this operator , independently of the above semi - classical limit .note that besides the existence of this operator , very few of its properties ( dynamical , for instance ) are known . in this paper , we always assume : one of the reasons is that when , instability occurs , see .suppose for instance that solves , and that solves , where replaced by , where is a sequence of real numbers going to zero as .then there are some choices of for which for some sequence of time ( see , and for a similar phenomenon with different initial data ) .therefore , producing reliable numerical tests in the case ( which is super - critical as far as wkb analysis is concerned ) seems to be a very delicate issue , that we leave out in the present paper .the rest of this paper is structured as follows . 
in section [ sec :analyt ] , we recall the general approach of wkb analysis for the schrdinger equation , the arguments of , and the rigorous results available for the semi - classical limit of when a caustic reduced to a point is formed .we then recall the definition of the scattering operator .we also give heuristic arguments to tackle the case of a `` supercritical focal point '' , and to guess what the critical indices are when a cusp caustic is formed , instead of a focal point . in section [ sec :numgen ] , we present the different strategies that have been followed in the literature to study numerically the semi - classical limit for nonlinear schrdinger equations .numerical experiments on the semi - classical limit for in the presence of a focal point appear in section [ sec : foc ] , and the scattering operator is simulated in section [ sec : scattnum ] .we present the numerical experiments of the semi - classical limit for in the presence of a cusp caustic in section [ sec : cusp ] , and make conclusive remarks in section [ sec : concl ] .consider the initial value problem , for : the aim of wkb methods is to describe the asymptotic behavior of as .for instance , can be related to the planck constant , and the asymptotic behavior of is expected to yield a good description of when is fixed , but small compared to the other parameters .more precisely , seek of the form plugging this expansion into and canceling the term , we see that the phase must solve the eikonal equation : to cancel the term , the leading order amplitude solves the transport equation : the eikonal equation is solved thanks to hamilton - jacobi theory : is constructed locally in space and time ( see e.g. for a discussion on this aspect ) .even if is smooth , develops singularities in finite time in general : the locus where is singular is called _ caustic _ ( see e.g. the second volume of ) . when becomes singular, all the terms may become singular as well .one easily observes that ( [ eq : transport ] ) admits a divergence form " : . to illustrate this general discussion , we consider two examples that will organize the rest of this paper .+ _ example ( quadratic phase ) ._ let . then andcan be solved explicitly : this shows that as , and become singular : the wave focuses at the origin .this example can be viewed as the smooth counterpart of the cauchy problem fourier analysis shows that , the dirac measure at the origin .+ of course , the solution of can be represented as an oscillatory integral : the caustic set is exactly the locus where the critical points for the phase are degenerate . outside the caustic ,an approximation of is given by the stationary phase theorem ( that we recalled as simply as possible in ) .this leads us to the second example we shall consider numerically : + _ example ( cusp ) ._ let and .the set of degenerate critical points for ( caustic ) is given implicitly by : as soon as , a caustic is formed ( see figure 2 in ) .+ when considering the asymptotic behavior of beyond the caustic , two main features must be considered : the creation of other phases , and goes to infinity as - rapid oscillations .the wavelength may be proportional to , or , say , to .] , and the maslov index ( see for more general linear equations ) . in the case of a focal point, the first aspect does not exist : there is no creation of phase , and one phase is enough to describe past the focal point .one can prove easily the following result : [ lem1 ] let and . 
if , then the asymptotic behavior ( in ) of the solution to is given by : in this example , the maslov index is . in the case of the cusp, three phases must be considered to describe the asymptotic behavior of beyond the caustic ( see e.g. ) .+ for future discussion on the numerical results , we state the following more precise result , which follows from the stationary phase theorem : [ lem2 ] let and .if , then the asymptotic behavior of the solution to at time is given by : consider now the perturbation of with a nonlinear term : the sign of the nonlinearity is chosen so that no finite time blow - up occurs. the following two important quantities are formally independent of time : we refer to for a justification .fix the power of the nonlinearity , and consider different values for .two notions of criticality arise : for the wkb methods on the one hand , and for the caustic crossing on the other hand .this discussion is presented in for conservation laws , and we summarize it in the case of . plugging an expansion of the form into , we see that the value is critical for the wkb methods : if , then the nonlinearity does not affect the transport equation ( `` linear propagation '' ) , while if , then the nonlinearity appears in the right hand side of ( `` nonlinear propagation '' ) .recall that in this paper , we always assume .therefore , the eikonal equation is not altered : the geometry of the propagation remains the same as in the linear wkb approach , and we have to face the same caustic sets .the idea presented in consists in saying that according to the geometry of the caustic , different notions of criticality exist , as far as is concerned , near the caustic . in the linear setting, the influence of the caustic is relevant only in a neighborhood of this set ( essentially , in a boundary layer whose size depends on and the geometry of ) .view the nonlinearity in as a potential , and assume that the nonlinear effects are negligible near the caustic : then near .view the term as a ( nonlinear ) potential .the average nonlinear effect near is expected to be : where is the region where caustic effects are relevant , and the factor is due to the integration in time ( recall that there is an in front of the time derivative in ) .the idea of this heuristic argument is that when the nonlinear effects are negligible near ( in the sense that the uniform norm of is small compared to that of near ) , the above approximation should be valid . on the other hand, it is expected that it ceases to be valid precisely when nonlinear effects can no longer be neglected near the caustic : is of the same order of magnitude as in , or even larger . practically , assume that in the linear case , has an amplitude in a boundary layer of size ; then the above quantity is the value is then critical when the above cumulated effects are not negligible : when , the nonlinear effects are expected to be negligible near the caustic : resuming the terminology of , we speak of `` linear caustic '' .the case is called `` nonlinear caustic '' . to conclude this paragraph, we examine this approach in the case of our two examples . in the case of a focal point, we have and .this leads us to the value : in the case of the cusp in dimension one , we have and ( which can be viewed thanks to the airy function and its asymptotic expansion , see e.g. or ) , which yields : one aspect of the numerical experiments presented below is to test this notion of criticality in those two examples . 
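For reference, the leading-order WKB relations underlying the discussion above take the following standard form for the free equation; this is a hedged reconstruction consistent with the surrounding text, and the paper's exact scaling conventions may differ slightly.

\begin{align*}
u^\varepsilon(t,x) &\approx a(t,x)\,\mathrm{e}^{i\phi(t,x)/\varepsilon},\\
\partial_t \phi + \tfrac12\,|\nabla \phi|^2 &= 0 && \text{(eikonal equation)},\\
\partial_t a + \nabla\phi\cdot\nabla a + \tfrac12\,a\,\Delta\phi &= 0 && \text{(transport equation)},\\
\partial_t\!\left(a^2\right) + \nabla\cdot\!\left(a^2\,\nabla\phi\right) &= 0 && \text{(divergence form)}.
\end{align*}

With this notation in place, the criticality discussion above amounts to asking whether the nonlinearity enters the transport equation and, separately, whether it can be neglected in the boundary layer around the caustic.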
in this paragraph , we assume .a complete justification of the above discussion is available : + [ cols="<,^,^",options="header " , ] consider ] . * if , then the nonlinearity is negligible near the focal point , but not away from it . * if , then nonlinear effects are relevant near the focal point , and only near the focal point . * if , then the nonlinearity is never negligible .we give some precisions in some cases of interest for the numerics presented below .+ in the one - dimensional case , the following pointwise estimate is proved in when or : setting , we see that the usual energy estimate yields , for : using , we infer , for ] and , if moreover , then there holds under the same assumptions : this result is concerned with splitting errors only and relies on the knowledge of the exact solution operators and . in order to stick to this framework in the context of smooth solutions , it is rather natural to approximate by means of a fourier scheme taking advantage of optimized fft routines , as proposed in the paper . moreover, this will guarantee that the norm ( mass ) of the numerical solution will be conserved up to round - off errors .unfortunately , the hamiltonian is generally not preserved ; a method conserving both quantities exists ( see the so called mcn algorithm , page 253 of ) but it would nt be efficient in the semiclassical regime because of the results in .this is the main purpose of the paper to illustrate the ( surprising ) fact that in semi - classical regime , usual finite - difference schemes for can deliver very wrong approximations without any particular sign of instability in case very restrictive meshing constraints turn out to be bypassed. this can be quite easily checked through the location of caustics , for instance .the analysis of those standard schemes has been carried out by means of wigner measures , so the conclusions hold essentially for the quadratic observables coming out of the wave function itself .this class of schemes became popular after the publication of , mainly because treating the differential part of ( [ eq : nls ] ) by means of a discrete fourier transform looked very much like being the best possible compromise in terms of meshing constraints .indeed , in the linear case where ( [ eq : schrodlibre ] ) is supplemented with a potential on its right - hand side , it was shown that the time - step could be chosen independent of whereas the space discretization has to satisfy .this was already much better when compared to finite - differences ; moreover , the method is naturally -conservative . in , these authors extended their fourier framework " to the weakly nonlinear schrdinger equations of the form ( [ eq : nls ] ) .however , and despite the fact we do believe these `` fft time - split schemes '' realize the best numerical strategy in terms of gridding , we shall point out some shortcomings of the method in the next section .we present here a preliminary result about truncation errors in lebesgue spaces for fourier schemes ; its proof follows directly from the strichartz estimates on the torus due to j. bourgain ( see also ) , and from the study of fft by m. taylor . 
its derivation is not obvious though as it applies directly to widely - used schemes like the one recalled in the forthcoming section .we restrict our attention to the 1d free schrdinger equation with , and periodic boundary conditions : .hence we start from .\ ] ] we have explicitly : in order to investigate the behavior of the fft - scheme involving a finite even number of modes , we introduce the discrete fourier transform of a continuous function on ] : the second estimate looks more attractive as it sees " the finite number of modes .we therefore deduce that the first term can be controlled in by means of : for instance , if is a finite superposition of fourier modes , then it is clear that this term cancels for large enough as in the summation ; obviously , the second term vanishes too .the general case is nt completely clear yet .this section aims at visualizing the asymptotics previously recalled ; namely we shall compare numerical approximations of and in 1d ( ) for various values of the parameters and .the initial wave function is rather simple : .\ ] ] numerical results have been obtained through the time - splitting fft schemes recalled in the previous section ; we used 1024 modes and fixed .it is convenient to observe results in since .this case corresponds to and ; we expect to observe a decay of the absolute errors between and for values .this is indeed the case , but fig .[ pf1 ] shows even a bit more , namely it compares pointwise the following quantities : ( recall lemma [ lem1 ] ) on the left in fig .[ pf1 ] , we obviously observe that the absolute errors are slightly bigger when considering the solution of the nonlinear equation , .however , even for the free solution , one sees that the error does nt vanish despite the fact no time - splitting algorithm is needed . as the way of discretizing the solution reveals itself important , we include here the corresponding scilab routine for the free equation : clear;deff([y]=phase(x),[y=-0.5*(x-) ; ] ) deff([y]=position(x),[y = exp(-2*(x-)) ] ) deff([y]=az(x),[y = position(x).*exp(i*phase(x)./epsilon) ] ) nmax=2;n=-(nmax)/2:(nmax/2)-1;epsilon=1.0/150 ; xstart=0;xstop=2*;dx=(xstop - xstart)/nmax;xstop = xstop - dx ; a = xstart : dx : xstop , initialdata = az(a ) ; vepsilon = fftshift(fft(initialdata,-1 ) ) ; vepsilon = exp(-i*epsilon*()).*vepsilon ; vepsilon = fft(fftshift(vepsilon),1 ) ; + clearly , its outcome is in agreement with lemma [ lem1 ] since is already quite small .the maslov index is visible , up to an error around for 1024 fourier modes .we now put and the outcome is displayed on fig .[ pf2 ] ; we still compare the same quantities .absolute errors on wave functions ( left side ) are much bigger for in this case .in particular , no new frequencies appear in the numerical solutions .the nonlinear effect boils down to a small change on the modulus of .we close this first series of tests by considering and as shown in fig .[ pf3 ] . of course , as no pointwise convergence is expected in this case , absolute errors are even bigger for both wave functions ( left side ) and moduli ( right side ) . of course , the size of the error on the modulus is much bigger too , and one should be extremely careful about the credit to give to the numerical simulations in the supercritical case . 
indeed , this is a regime where a small error can be amplified at leading order ( see ) .we aim now at illustrating the results on scattering theory through numerical computations still achieved through time - splitting fft schemes .the algorithm we used for the approximation of the scattering operator is based on a nonlinear time - splitting routine flanked by two free evolution steps ( implemented the way recalled in the previous section ) : with , standing for the solution operators of equations and in 1-d with respectively .we used in the computations hereafter .as no small parameter is present in the problem , one may think that no major obstacle exists in carrying out this program ; this is nt correct as the free evolutions can ( and do ! ) dramatically increase the size of the computational domain for large . it is interesting to notice that , in case one wants to use fft - based schemes , both the direct computation for small and the scattering operator approximation lead to a large computational domain difficulty " : in the fourier space for the first case , in the usual space for the second . a way to understand the scattering operator is to visualize the average effects of the nonlinearities appearing in equations of the form for various values of and .intuitively , as increases , the nonlinearity becomes shorter range .similarly , as increases , the nonlinearity becomes stronger , and it should take a larger amount of time before we can consider it has become negligible .in all the tests we performed , it was somehow surprising to observe how fast the algorithm converges : one does not have to consider `` very large '' values of so that becomes stable and visually independent of .the parameter controls in some sense the strength of the nonlinearity inside , as can be seen on fig .this figure displays the position density of the initial data , the scattered solution for and a mixed state " . as our time - splitting / fft algorithm preserves only the norm , but not the hamiltonian , we first restricted ourselves to moderate values of ( defocusing case ) .however , as a numerical experiment , we wanted to display the outcome of our scheme for the stronger case on fig .[ run2 ] : notice the change of shape in the scattered solution . moreover , on this figure , we also tried to show what happens for , that is to say for the focusing case despite there may be finite time blow - up ( but there is scattering for small data ) .we checked that the energy associated to this data is ( and remains ) positive , a case where the virial identity , , does not imply blow - up .the computational domain for these runs was $ ] and fourier modes were used .now let s observe the effects of lowering the value while keeping other parameters equal , see fig .it is interesting to see that the change of shape appearing for is stronger than in the preceding case . on the contrary ,the increase of the numerical solution s support is slightly less important .this hints that increasing the value tends to expand the support of the scattered solution whereas increasing the value ( defocusing case ) leads to an oscillatory behavior .however , we stress that since the energy , of the numerical solution changes more with a bigger ( its mass being always kept constant ) , these oscillations might be spurious . we actually do nt know how this fact can be decided ; our profiles have been checked to be stable on a finer grid . 
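For completeness, the structure of a Strang-splitting Fourier step of the kind used throughout these computations can be sketched as follows. This is a generic illustration, not the code used to produce the figures: the equation is taken to be i*eps*u_t + (eps^2/2)*u_xx = lam * eps^alpha * |u|^(2*sigma) * u on a periodic interval, and all numerical values and the initial datum are placeholders.

import numpy as np

# Hedged sketch of a Strang time-splitting / FFT step for the 1-D semiclassical NLS
#   i*eps*u_t + (eps**2/2)*u_xx = lam * eps**alpha * |u|**(2*sigma) * u
# on a periodic domain; parameter values and the initial datum are placeholders.
def split_step_evolution(eps=1.0/150, lam=1.0, alpha=1.0, sigma=1.0,
                         n_modes=1024, L=2*np.pi, t_final=0.5, dt=1e-4):
    x = np.linspace(0.0, L, n_modes, endpoint=False)
    k = 2*np.pi*np.fft.fftfreq(n_modes, d=L/n_modes)               # Fourier wavenumbers
    u = np.exp(-2*(x - L/2)**2) * np.exp(-0.5j*(x - L/2)**2/eps)   # WKB-type initial datum
    half_free = np.exp(-0.5j*eps*k**2*(dt/2))                      # exact free flow over dt/2
    for _ in range(int(round(t_final/dt))):
        u = np.fft.ifft(half_free * np.fft.fft(u))                 # half step: free Schrodinger flow
        u = u * np.exp(-1j*lam*eps**(alpha - 1)*np.abs(u)**(2*sigma)*dt)  # full nonlinear step
        u = np.fft.ifft(half_free * np.fft.fft(u))                 # second half step of the free flow
    return x, u

Mass is conserved up to round-off because each sub-flow is an exact multiplication by a unimodular factor, in physical or in Fourier space; the Hamiltonian, on the other hand, is generally not preserved, as noted above.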
in order to get some numerical evidence about the dependence of the scattered solution on , we display on fig .[ run4 ] the outcome for .it is quite clear that the scattered solutions for both values of are less peaked .their support is bigger and the oscillations for are weaker , their frequency remained the same though .this agrees with the behavior we sketched in the preceding subsection as and vary .let us now go back to comparing the quadratic observables generated by numerical approximation of equations and endowed with a small parameter in 1-d . in this section we fixed .figure [ run - cusp ] displays the position density of the initial data for both equations , i.e. ,\ ] ] together with the position density of the numerical approximations of , in .the point here is to investigate what happens for the case of such a self - interfering gaussian pulse , since no scattering theory is known for this problem .what we would like to check is whether the theoretical results on the focus point recalled and visualized in the preceding sections can be thought of as a guideline for this more complex case involving a non - trivial caustic. we shall observe position densities for the unique value of as a similar behavior has been seen to hold for different nonlinearities with convenient values of . fourier modes have been used in order to produce these results .this case could be referred to as subcritical since it is noticeable on the top of fig .[ run - cusp ] that the free and the nonlinear numerical solutions do agree for this reasonably small value of .in particular , the frequencies of oscillations are identical .this is very similar compared to the behavior investigated in .the parameter is now in a critical range " as we observe that both solutions differ much more , but the frequency of the oscillations looks like being still the same in both cases . in order to establish this fact , we display on the left of fig.[fft - cusp ] the fft of the position densities : a peak at the same frequency is clearly noticeable .the nonlinear effect manifests itself through a change of order zero in the moduli , as we already observed on the right side of fig .[ pf2 ] ; notice also the similarity with the scattering state shown on fig .[ run2 ] ( right ) .this does agree with the value derived in section [ sec : heur ] in this last case , there is no similarity no more between the approximate solutions of , , as seen on both fig .[ run - cusp ] and [ fft - cusp ] . especially , the right side of fig .[ fft - cusp ] reveals that new frequencies show up inside the position density of the nonlinear solution .we have therefore a change of order zero in the moduli and in the frequency .this is of course reminiscent of fig .[ pf3 ] in which a frequency doubling seems to show up in the supercritical regime .we have presented the semi - classical limit for the nonlinear schrdinger equation in the presence of a caustic . when the caustic is reduced to a point ,the numerical experiments are in good agreement with the analytical results as far as the notion of criticality is concerned .however in the critical case , described by a nonlinear scattering operator , the leading order nonlinear effects are rather hard to visualize in the semi - classical limit .this is why we simulated the scattering operator in a separate way .our numerical tests give encouraging evidence of new phenomena concerning the phase of the wave in the supercritical case when a focal point is formed ( appearance of new frequencies ) . 
in the presence of a cusp caustic , the numerical experiments are in good agreement with the heuristic arguments that we presented here , for which no rigorous justification is available so far ., _ an introduction to nonlinear schrdinger equations _ , in : nonlinear waves ( sapporo , 1995 ) , eds .r. agemi , y. giga , and t. ozawa , gakuto international series , math .sciences and appl . , gakktosho , tokyo , 1997 , pp .85133 . , _ a case study on the reliability of multiphase wkb approximation for the one - dimensional schrdinger equation _ , in : numerical methods for hyperbolic and kinetic problems , vol . 7 of irma lect .soc . , zrich , 2005 , pp .
The aim of this text is to study the asymptotics of some 1-D nonlinear Schrödinger equations, from both the theoretical and the numerical point of view, when a caustic is formed. We review rigorous results in the field and give heuristics in cases where justification is still needed. The theory of the scattering operator is recalled. Numerical experiments are carried out on the focal-point singularity, for which several results have been proven rigorously. Furthermore, the scattering operator is studied numerically. Finally, experiments on the cusp caustic are displayed, and similarities with the focal point are discussed.
the outbreak of ebola virus disease ( evd ) in west africa caused 28,646 cases and 11,323 deaths as of march 30 , 2016 .the current outbreak is of the ebov ( zaire ebola virus ) strain , the most fatal strain .the largest previous outbreaks of the ebov strain occurred in the democratic republic of the congo in 1976 and 1995 , causing 318 and 315 cases .clinical progression includes two broad stages of infection , often characterized as early and late . in the first stage ,approximately five to seven days , symptoms include fever , weakness , headache , muscle / joint pain , diarrhea , and nausea . in some patients, the disease progresses to a second stage , with symptoms including hemorrhaging , neurological symptoms , tachypnea , hiccups , and anuria .mortality rates are higher among those exhibiting second - stage symptoms .evd is transmitted through direct contact with an infected individual .transmission risk factors include contact with bodily fluids , close contact with a patient , needle reuse , and contact with cadavers , often prepared for burial by the family .this outbreak was primarily in the contiguous countries of guinea , sierra leone , and liberia , which experienced widespread and intense transmission .interventions have included quarantine , case isolation , additional treatment centers , border closures , and lockdowns , restricting travel within a region ( as in a military - enforced curfew ) .public health authorities must allocate resources effectively , focusing personnel and funds to respond best to outbreaks .however , not all response measures are equally beneficial or cost - effective , and testing the relative benefits of each is often impossible or unethical .modeling thus provides a valuable tool for comparing interventions and identifying areas where interventions are most effective .modeling analyses can inform public health policy regarding ongoing and future outbreaks .dynamic spatial modeling has been proposed as a useful approach to understand the spread of evd and evaluate response strategies benefits .the recent outbreak has sparked an increase in evd modeling .the spread between contiguous countries in the current evd outbreak highlights the spatial element to its proliferation ; however , as previous evd outbreaks were more localized than the 2014 - 2015 epidemic , there is little historical data on the geospatial spread of ebola .mobility data , which may help inform spatial evd modeling , is limited , although some studies have highlighted its usefulness and extrapolated based on mobility data from other regions . to address the need for spatial evd models , models have been developed to examine local spatial spread within liberia and evaluate the potential risk of international spread using data such as airline traffic patterns .our models add to existing spatial models by incorporating intervention comparison and an exploration of the dynamics between different regions , using an uncomplicated model structure . 
between - country mobilityis important in this epidemic because the borders between countries are porous ; borders are drawn across community or tribal lines , resulting in frequent border - crossing to visit family , conduct trade , or settle disputes .in addition , santermans and coauthors recently demonstrated that the outbreak is heterogeneous : transmission rates differ between locales .our district - level model offers a mechanistic explanation of observed spatial heterogeneities , adding to the existing literature by explicitly simulating the interactions between districts . in this study , we present spatial models of evd transmission in west africa , using a gravity model framework , which captures dynamics of local ( within - region ) and long - range ( inter - region ) transmission .gravity models are used in many population - mobility applications , especially to relate spatial spread of a disease to regional population sizes and distances between population centers .gravity models have been applied to other diseases , including influenza in the us and cholera in haiti , and have been used to examine general mobility patterns in west africa .we demonstrate that the gravity modeling approach fits and forecasts case and death trends in each country , and describes transmission between countries .the district - level model successfully simulates local geospatial spread of evd , indicating that the gravity model framework captures much of the local spatial heterogeneity . using the model structures , we examined interventions at country and district scales , evaluating the relative success of intervention types as well as the most responsive regions .we developed a compartmental gravity model using ordinary differential equations to model spatiotemporal progression of evd ( figure [ fig : modelmap ] , model equations and details given in the supplementary information ) .each country s compartmental structure includes susceptible ( ) , latent ( ) , two - stage infection ( , ) , funeral ( ) , recovered ( r ) , and deceased ( d ) , based on previous compartmental models . 
patients in stage can recover or transition to , where they may recover with lower probability or transition to , based on clinical observations of symptom progression and mortality .the latent period and two - stage infection are based on the clinical progression of evd , wherein patients become contagious in , with increasing contagiousness in .funerals play a role in transmission because of the high viral load of the deceased and frequent contact with susceptible persons due to cultural burial practices .the precise relative magnitude of this contribution is unknown ; previous models have shown that the relative contributions of each stage are unidentifiable from early data .thus , patients in were assumed to be as contagious as those in .the spatial component of the model is a three - patch gravity model .each country is one patch , with a compartmental model within it and the capital as the population center , similar to previous gravity models , which used political capitals / large cities as centers .after evd appeared in western guinea , subsequent cases were in the capital , conakry , suggesting that the capital acts as the central population hub .similar progression occurred in each country : after evd cases appeared in border regions , cases soon appeared in the capital .the force - of - infection term for patch consists of transmission from within the region and transmission from other patches into patch .each long - range transmission term is determined by a `` gravity '' term , proportional to the population sizes , and inversely proportional to the squared distance between them .cumulative local cases were measured by the number of infections in a patch due to local transmission .cumulative long - range cases were measured by number of cases due to long - range transmission .we developed a more detailed gravity model of west africa incorporating the 14 districts / areas of sierra leone , 15 counties of liberia , and 34 prefectures of guinea .this model is structured the same way as the country - level model : a compartmental model in each administrative unit , linked to all other patches by gravity terms .transmission is separated into local ( within - district ) and long - range ( between - district ) .case and death incidence in each country was collected from world health organization ( who ) situation reports on evd from march 29 to october 31 , 2014 .the country - model simulations used data from may 24 through september 30 for fitting model parameters .data from october was used for validation , to compare model projections to data unused in fitting .road distance between centers was used for the gravity component .we evaluated direct distance between capitals in the country - level model , yielding similar results .for further verification of the model , we also tested the model s ability to fit the outbreak when data was incorporated from march 29 .the fits and forecasts from these simulations were similar to those incorporating data from may onward ( supplemental figures 1 - 2 ) .the district model was simulated from march 31 , 2014 to january 31 , 2015 .the model was compared to data on the outbreak s local geospatial progression from each district , using incidence of cases and deaths from who updates and the un ocha database .model parameters were determined from clinical literature and fitting to available data , similarly to previous models . 
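The force of infection used in the gravity coupling can be sketched as follows. This is a hedged reconstruction of the term described above, not the exact formulation fitted in the paper: the relative weights of the infectious stages and the normalisation of the gravity term are assumptions.

import numpy as np

# Hedged sketch of a gravity-type force of infection for patch i:
#   lambda_i = beta_i * (I1_i + I2_i + F_i) / N_i                       (local transmission)
#              + sum_{j != i} theta_i * N_i * N_j / d_ij**2 * (I1_j + I2_j + F_j) / N_j
# The stage weights and the exact normalisation are assumptions, not the paper's fitted form.
def force_of_infection(beta, theta, N, d, I1, I2, F):
    beta, theta, N = map(np.asarray, (beta, theta, N))
    prevalence = (np.asarray(I1) + np.asarray(I2) + np.asarray(F)) / N   # infectious prevalence per patch
    lam = beta * prevalence                                              # local (within-patch) term
    n_patches = len(N)
    for i in range(n_patches):
        for j in range(n_patches):
            if i != j:
                lam[i] += theta[i] * N[i] * N[j] / d[i, j]**2 * prevalence[j]  # long-range gravity term
    return lam

Plugging such a force of infection into the within-patch compartmental equations gives the coupled ODE system that was integrated numerically; we now turn to how its parameters were fitted.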
in the country - level model, nine parameters were fitted to the data : transmission ( ) and death ( ) as well as three gravity - term constants , , were separately fitted for each country . .the parameter ( ) adjusts the `` gravity '' terms to reflect the balance of local and long - range transmission in each country .parameters , definitions , units , ranges , and sources are given in supplementary table s1 . in the country - level model ,500 sets of initial values for all parameters were selected with latin hypercube ( lh ) sampling from realistic ranges for parameters , determined from who , cdc , and literature data , given in supp . . for each of the 500 parameter sets , only transmission rates , death rates , and fitted by least - squares , using nelder - mead optimization in matlab .all other parameters were held constant to the values from the lh sample . in the district - level model ,best - fit parameters from the country - level model were used for all parameters except .a different was fitted for each district to reflect differing at - risk populations and reporting rates . since fitting 63 parametersis highly complex , slow , and possibly unidentifiable , parameters were calculated according to final size , then adjusted by hand to generate curves matching the progression of the outbreak , both overall and in each district .while all other initial conditions were determined from early data and , in conakry , guinea , the initial conditions were fitted independently from , reflecting possible reporting rate differences between initial situation reports and subsequent data .interventions were compared by the number of cumulative cases the model forecasted on october 31 for different types and levels of intervention .reduction of local transmission represents interventions that limit contact of an evd patient with others in his or her home country , including quarantine , isolation , hospitalization , and local lockdowns .reduction of long - range transmission represents border closures , large - scale lockdowns , or other intervention measures that reduce the transmission between countries . in the district - level model, interventions were simulated by eliminating the outbreak in one patch ( both local and long - range intervention ) , and simulating to january 31 .this is analogous to effective case isolation and quarantine measures within a district , which would limit the transmission of evd within and from that district .the effectiveness of intervention in one district was measured by calculating the cumulative number of cases reduced in all other districts .the percent reduction from intervention in a district and percent reduction relative to size of the outbreak in that district were calculated .the best - fit accurately fit and forecasted the outbreak data and trends for each country ( figure [ fig : modelfits ] ) .the best - fit model forecast of cases on october 29 , 2014 , was 14,070 ; the who case data on that date was 13,540 .cumulative cases in each country due to local and long - range transmission are shown in figure [ fig : local_longrange ] .liberia had more local than long - range transmission .early on , guinea had more local transmission , due to initial local cases . as the epidemic progressed , the transmission ranges overlapped , although the best - fit trajectory for long - range transmission remained smaller than the local - transmission contribution . 
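The fitting strategy just described (Latin hypercube starting points followed by Nelder-Mead least squares) can be sketched in a few lines. The model() function, the parameter ranges and the data array below are placeholders rather than the actual model and data; in the paper only the transmission rates, death rates and theta were refined at each start, whereas this sketch refines every parameter for simplicity.

import numpy as np
from scipy.stats import qmc
from scipy.optimize import minimize

# Hedged sketch of the fitting loop: Latin hypercube starting values, then Nelder-Mead
# minimisation of the sum of squared errors against the observed cumulative counts.
def fit(data, model, lower, upper, n_starts=500, seed=0):
    sampler = qmc.LatinHypercube(d=len(lower), seed=seed)
    starts = qmc.scale(sampler.random(n_starts), lower, upper)        # scale samples to parameter ranges
    best = None
    for theta0 in starts:
        objective = lambda theta: np.sum((model(theta) - data)**2)    # least-squares objective
        result = minimize(objective, theta0, method="Nelder-Mead")
        if best is None or result.fun < best.fun:
            best = result                                             # keep the best fit over all starts
    return best

Returning to the fitted model, the balance between local and long-range transmission differed from country to country.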
in sierra leone ,the best - fit trajectory of long - range transmission was more significant than local transmission , although forecasted ranges were similar .the district model was successful in fitting outbreak data for cases and deaths in each district .the model captured the final size of the outbreak in each patch with an value of 0.96 ( figure [ fig : districtr2 ] ) as well as matching the progression of the outbreak ( figure [ fig : districtmapprog ] ) .the model captured the temporal dynamics of districts with negligible case counts or high levels of transmission .the time - course plots of all 63 district - level fits are included in the supplement , as is a scatter plot of all data vs model values ( = 0.83 ) .the district - level model accurately captured the trend of spread of evd through second - level administrative units .while overall patterns at the district level are captured and show the correct ordering , the actual speed of disease spread in the model was faster in some districts than in the data , likely due to a combination of stochastic introductions and reporting delays .the district - level model was able to forecast local spread of evd , matching data on spatial progression of evd to different locales as the outbreak intensified ( figure [ fig : districtmapprog ] ) .an animation of the district - level simulation is given in the web supplement ( video s1 ) .according to our intervention simulations ( figure [ fig : interventions ] ) , reduction in local transmission , from interventions such as isolation or improved case - finding and ebola treatment unit ( etu ) capacity , is most effective in liberia . in the model , eliminating local transmission in liberia reduced the outbreak by up to 11,000 cases in all countries by october 31 , 2014 , a 76% reduction .reduction of long - range transmission was most effective in sierra leone . eliminating long - range transmission into sierraleone reduced the outbreak by up to 9,600 cases in all countries by october 31 , a 66% reduction .intervention analysis identified regions in which intervention was most successful ( figure [ fig : districtinterventions ] ) .overall , the most successful regions for intervention were conakry , coyah , and dubreka , guinea , and lofa , and monrovia / montserrado , liberia . to evaluate whether intervention effects were disproportionately significant compared to cases within the target region , we defined the intervention - amplifying region ( iar ) level by the ratio of cases prevented in other patches to cases within the patch , for regions with more than 0.05% of the epidemic s cases .dubreka , coyah , and conakry were the most effective iars ( figure [ fig : districtinterventions ] ) .our results demonstrate that outbreak dynamics in all three countries can be accurately captured using a gravity - model approach . moreover , country - level model forecasts successfully predicted cumulative cases and deaths one month ahead ( figure [ fig : local_longrange ] ) .the models were able to capture simultaneously the dynamic interactions within and between each region at both country and district levels .the model simulations suggest a form of `` spatial herd protection '' , wherein interventions in one region benefit surrounding regions as well , by reducing the spatial transmission between them . 
in the country - level model , we compared the effects of local and long - range interventions in each country . local interventions include stricter quarantine and isolation procedures , increased case - finding and etu capacity , safer burial practices , and other interventions that reduce contact of susceptible persons with infected persons in the local community , as well as behavioral changes that reduce local transmission . the model indicates that these interventions were most effective in liberia , both in reducing the outbreak in liberia and in mitigating the whole epidemic . indeed , although liberia s outbreak had the fastest initial growth rate , it was the first to turn over and end . our results suggest that liberian interventions may have had significant indirect effects on the epidemic dynamics in guinea and sierra leone as well . the most effective long - range transmission reduction was in sierra leone , with a 66% case reduction across all countries when long - range transmission was completely eliminated . this reflects the porous borders of sierra leone . based on the district - level intervention analysis , no single district in sierra leone acts as a source of long - range transmission to other locales ; rather , all districts play some combined role . thus , the country - level impact of long - range transmission is possibly a summative combination of two factors across all districts : long - range transmission into sierra leone and early introduction of cases from sierra leone into liberia . in the district - level model , the capital districts of montserrado , liberia , and conakry , guinea , were highly effective intervention sites . their large population size makes them important in dictating the dynamics of the outbreak in surrounding regions . dubreka and coyah , guinea , near conakry , were effective intervention sites despite low numbers of cases . this indicates that they influenced the larger outbreaks surrounding them , especially conakry . intervention in these districts was especially significant in reducing the scope of the outbreak , not just within the district but also in surrounding regions .
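the comparison of local and long - range interventions described above amounts to re - running the model with the corresponding transmission rates scaled down and recording the change in forecast cumulative cases . the sketch below shows only the bookkeeping ; run_model is a hypothetical stand - in for the fitted gravity model , and the baseline case counts are invented for illustration .

import numpy as np

def run_model(local_scale=1.0, longrange_scale=1.0):
    # stand-in for the gravity model: cumulative cases per country on the
    # forecast date, shrinking as either transmission route is reduced
    base = np.array([3800.0, 5900.0, 4400.0])        # guinea, sierra leone, liberia (made up)
    return base * (0.3 + 0.4 * local_scale + 0.3 * longrange_scale)

baseline = run_model().sum()
for pct in (25, 50, 75, 100):
    scale = 1.0 - pct / 100.0
    local_cases = run_model(local_scale=scale).sum()
    long_cases = run_model(longrange_scale=scale).sum()
    print(f"{pct:3d}% reduction -> local intervention: "
          f"{100.0 * (baseline - local_cases) / baseline:4.1f}% fewer cases, "
          f"long-range intervention: {100.0 * (baseline - long_cases) / baseline:4.1f}% fewer cases")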
in some regions, intervention produced unexpectedly significant reductions in the overall outbreak : these districts amplify the effects of intervention to reduce disproportionately many cases outside the region these are denoted intervention - amplifying regions ( iars ) .the most significant iars include conakry , coyah , and dubreka in guinea and montserrado in liberia ( figure [ fig : districtinterventions ] ) .for instance , dubreka made up only 0.13% of the outbreak but intervention in dubreka alone reduced the size of the outbreak in all regions by 21% , drastically higher than its case contribution would suggest .coyah and dubreka were the most dramatic iars because they had relatively small numbers of cases .their proximity to conakry , which had a large number of cases and also functioned as an iar , implies that coyah and dubreka could have influenced outbreak dynamics in conakry .the outsize importance of these districts in the dynamics of the current outbreak makes them a possible target for increased intervention during future outbreaks in west africa .several papers have noted that guinea s peculiar outbreak curve is difficult to fit using simple models , due to plateaus in cumulative incidence between growth periods .however , by including spatial interaction between countries , we successfully captured guinea s outbreak dynamics ( figure [ fig : modelfits ] ) .this suggests that spatial interactions , such as local die - outs of evd followed by long - range reintroductions , may be responsible for the unusual incidence patterns in guinea .the country - level model s ability to capture the singular epidemic progression in guinea , due to its spatial transmission component , demonstrates that long - range dynamics do not just affect introduction , but also later stages of the outbreak .the district - level model , which separates the affected countries into 63 patches based on secondary administrative units , was successful in capturing the overall dynamics of spatial transmission .it captured the spread of evd from the initial sites of the outbreak to other locales in the region in an order and magnitude similar to the actual progression of the epidemic .this suggests that the county - level model captures the underlying transmission patterns that led to spread of evd between different locales in the affected countries ( figure [ fig : districtmapprog ] ) .these results are perhaps surprising given that the district - level model uses the same transmission parameters for all patches within the same country this suggests that even though there are likely to be significant local heterogeneities from patch to patch , most of these variations can be captured using the relatively simple framework afforded by the gravity model .in fact , a model that incorporates local heterogeneities often requires a large amount of case data at the local scale .this gravity model provided a reasonable approximation of the early dynamics of the outbreak , based simply on initial conditions from march 30 , soon after surveillance began : it can capture local differences in transmission without complex ( difficult - to - simulate ) models or extensive datasets .thus , it could be applicable for early - outbreak forecasting by providing an indication of the areas at greatest risk for ebola cases , in order to guide resources and aid to those locations .our models focus on early proliferation throughout west africa : interactions between regions , which led to the spatial spread of ebola . 
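the intervention - amplifying region score discussed above can be illustrated with a short calculation : cases prevented in all other patches divided by cases within the patch , evaluated only for patches holding more than 0.05% of the epidemic s cases . the totals below are round illustrative numbers chosen to mimic the dubreka example , not the fitted district counts .

def iar_score(cases_in_patch, cases_prevented_elsewhere, total_cases, threshold=0.0005):
    # patches below the 0.05% case-share threshold are not scored
    if cases_in_patch < threshold * total_cases:
        return None
    return cases_prevented_elsewhere / cases_in_patch

total_cases = 20000.0
patch_cases = 0.0013 * total_cases                         # a patch holding ~0.13% of the outbreak
prevented_elsewhere = 0.21 * total_cases - patch_cases     # ~21% overall reduction, minus its own cases
score = iar_score(patch_cases, prevented_elsewhere, total_cases)
print(f"patch share: {patch_cases / total_cases:.2%}, IAR score: {score:.1f}")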
regional evd introductions are inherently stochastic ; thus the model did not capture the precise temporal pattern in west africa in every district . stochastic model simulations might better characterize the randomness of regional introduction , but a stochastic implementation of the 63-patch regional structure would be computationally demanding . therefore , a deterministic model offers an elegant method to provide insight into the role of spatial dynamics . in addition , we considered the time scales that allowed us to capture the spatial dynamics of interest . in the country - level model , the turnover and re - ignition of the epidemic in guinea was of particular interest , since simple compartmental models provide little insight into its mechanism . our time - course included the first ignition , turnover , and re - ignition ; since simpler compartmental models fail to capture the dynamics of the outbreak , our model s highly accurate fit implies that the spatial structure captures the dynamics affecting guinea . our time - course ends in november - december 2014 , at which time interventions were increasing throughout west africa . this massive intervention scale - up affected the parameters for the outbreak : in order to simulate the progression of the outbreak after this time , time - variant parameters could be used . the late dynamics were not the focus of this research ; however , the model could be implemented with time - variant parameters , potentially providing insight into the late dynamics of the outbreak . while this model captures epidemic dynamics without incorporating explicit movement patterns , further mobility data ( e.g. from cell phone carriers ) could elucidate dominant movement patterns in west africa . these data could provide a validation of the gravity model s success , if the gravity model accounts for these patterns . the model does not account for a detailed population structure : an organization of individuals into households , villages , or other communities . the gravity model could be applied with a structured - population ( network ) model , such as kiskowski s model . this model has been applied to study community - based interventions ; it could provide a granular simulation of ebola dynamics , with possible insights into community structure and differing levels of local intervention .
during the course of the outbreak , parameters such as reporting rates , etu availability , and intervention rates changed .models that use a broader time - scale than our model see more benefits to incorporating time - variant parameter .in addition , transmission changes as behavior of infected and susceptible individuals changes .thus , models that incorporate behavioral changes could be successful in capturing that aspect of the epidemic .we developed dynamic transmission models that account for spatial spread by considering the `` gravitational pull '' of larger populations and shorter distances in west africa .the country - level model accurately captures epidemic dynamics and successfully forecasts cases and deaths for all three countries simultaneously .the district - level model captures the progression of evd between and within districts in west africa .our models suggest that reduction of local transmission in liberia and reduction of long - range transmission in sierra leone were the most effective interventions for the outbreak .the models also reveal differences in transmission levels between the three countries , as well as different at - risk population sizes .our gravity spatial model framework for evd provides insight into the geographic spread of evd in west africa and evaluate the relative effectiveness of interventions on a large , heterogeneous spatial scale .ultimately , gravity spatial models can be applied by public health officials to understand spatial spread of infectious diseases and guide interventions during disease epidemics .this work was supported by the national institute of general medical sciences of the national institutes of health under award number u01gm110712 ( supporting mce ) , as part of the models of infectious disease agent study ( midas ) network .the content is solely the responsibility of the authors and does not necessarily represent the official views of the national institutes of health ._ country - level model_. as discussed above , the model consists of , , , , , and compartments in each country .the model equations are given by : where , , and indicates each patch ( guinea , sierra leone , liberia ) , and and the other two patches ( with ) .the risk of new infections from within each patch was represented by the term for the `` home '' patch .tthe risk terms for evd transmission from outside the patch , and , include , the gravity term .the distance exponent of the gravity term , , is fixed at two .a range of values for were tested , however we found that changes in were compensated by changes in the fitted value of .thus , was fixed for clarity , based on values in previous gravity models .there are separate for each country , so the rate of transmission into each country is different .this reflects the differing border porosities and rates of travel between countries .+ _ district - level model_. 
similarly , the district - level model equations for patch are given by : this model structure is similar to that of the country - level model .the main difference is that the long - range transmission is grouped into one term , which sums the long - range transmission of every other patch into patch .the model parameters were estimated by simulating total cumulative cases , defined as the integral of the incidence ( ) multiplied by , a correction factor for underreporting and the fraction of the population at risk , among other factors .initial conditions for both models were determined based on the initial values of the data as follows : the number of susceptible persons in each patch was estimated based on the population size scaled by .the total population of each patch was determined using data from the world bank and national censuses .the number of exposed persons was determined as twice the initial number of infected ( and ) persons , based on the for evd , which has been estimated to be approximately 2 .the number of infected persons in i1 was determined based on the number of new cases in the previous nine days , based on the incubation period for evd , and the number of infected persons in i2 was determined by subtracting the number of deaths within the next four days from the number of currently infected persons at the starting date .the number of infected persons from local versus long - range transmission was estimated based on the origin of the outbreak and the location of cases up until the starting point for the data : since the outbreak began in guinea , the initial cases in guinea were considered from local transmission , while the initial cases in sierra leone and liberia were considered to be from long - range transmission from guinea .the number of recovered persons was estimated based on the number of infected persons who did not die within the nine - day period that preceded the starting date .the number of persons in the f class was based on a fraction of the number of persons who died within the period before the starting date , estimated using the burial rate in table 1 .the initial conditions were all scaled to fractions of the population using the parameter .we tested the country - level model s ability to fit the outbreak when data was incorporated using a start date of march 30 . in the simulations starting on march 30 , local transmission in liberiawas turned off until may ( when the first case appeared in liberia ) , as the ode framework used here can not capture the stochastic process which leads to emergence of an outbreak in a new locale .the resulting fits and forecasts from these simulations were similar to those incorporating data from may onward ( supplemental figures [ fig : modelfits_sup ] , [ fig : local_longrange_sup ] ) . 
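since the model equations themselves are not legible in this text , the following sketch , under stated assumptions , shows one way to write a gravity - coupled patch model with the compartment structure described above ( s , e , i1 , i2 , f , r ) , a local force of infection , and a long - range term proportional to n_i n_j / d_ij^2 with the distance exponent fixed at two . all parameter values , populations , and distances are illustrative , not the fitted values from the tables .

import numpy as np
from scipy.integrate import solve_ivp

N = np.array([11.8e6, 6.1e6, 4.3e6])             # approximate populations of the three patches
d = np.array([[np.inf, 900.0, 1000.0],
              [900.0, np.inf, 500.0],
              [1000.0, 500.0, np.inf]])           # rough inter-capital distances in km

beta = np.array([0.14, 0.10, 0.16])               # local transmission rates (illustrative)
kappa = np.array([1.0e-9, 2.0e-9, 1.5e-9])        # gravity coupling constants (illustrative)
sigma, gamma1, gamma2 = 0.106, 0.2, 0.18          # E->I1, I1->I2, I2 exit rates (1/day)
delta, burial = 0.6, 0.74                         # case fatality fraction and burial rate

def gravity_foi(infectious_fraction):
    # long-range force of infection: kappa_i * sum_j (N_i * N_j / d_ij^2) * I_j / N_j
    coupling = (N[:, None] * N[None, :]) / d ** 2  # diagonal is zero because d_ii = inf
    return kappa * (coupling @ infectious_fraction)

def rhs(t, y):
    S, E, I1, I2, F, R = y.reshape(6, -1)
    foi = beta * (I1 + I2 + F) / N + gravity_foi((I1 + I2) / N)
    dS = -foi * S
    dE = foi * S - sigma * E
    dI1 = sigma * E - gamma1 * I1
    dI2 = gamma1 * I1 - gamma2 * I2
    dF = delta * gamma2 * I2 - burial * F          # deceased awaiting burial remain infectious
    dR = (1.0 - delta) * gamma2 * I2
    return np.concatenate([dS, dE, dI1, dI2, dF, dR])

S0 = N - np.array([10.0, 0.0, 0.0])                # seed ten exposed/infected people in patch 0
y0 = np.concatenate([S0, [5.0, 0.0, 0.0], [5.0, 0.0, 0.0], np.zeros(3), np.zeros(3), np.zeros(3)])
sol = solve_ivp(rhs, (0.0, 240.0), y0, rtol=1e-6)
I_total = sol.y.reshape(6, 3, -1)[2:4].sum(axis=0)
print("active infections per patch on day 240:", np.round(I_total[:, -1], 1))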
for the district - level model , parameterswere determined using sampling from within plausible ranges and from the best - fits of the three - patch models .the 1 transmission rates were determined at the country level , reflecting the differing risks of transmission in the patches in each country .each was calculated to match final outbreak size , then adjusted by hand .the initial conditions were determined based on data from the who reports and from data compiled by the un .the initial conditions were determined similarly to the country - level model , based on data from the who reports and from data compiled by the un .the district capitals were considered to be the population centers for all districts except montserrado , liberia , and bonthe , sierra leone .montserrado county is the location of the national capital and largest city , monrovia , which was considered the population center .the capital of bonthe is on a coastal island , so yagoi , the closest city to bonthe on the coast , was considered the population center . in the country - level model, interventions were simulated by reducing transmission parameters for either local or long - range transmission .local transmission reductions were simulated by reducing 1 , the rate of local infections , in increments of one percent of the original value .long - range transmission reductions were simulated by reducing 2 and 3 , the rates of infections from outside the patch , in one - percent increments . in the district - level model , interventions were simulated by eliminating the outbreak in one district ( local transmission , long - range transmission into the district , and long - range transmission out of the district ) .the effect on the 62 surrounding patches was measured ( difference in number of cases in those 62 patches in the normal model simulation versus number of cases in the 62 patches in the intervention simulation ) .in addition , the iar score was determined as follows : best fit & range & sources + & transition rate from to & days & 0.1059 & 0.1 - 0.125 & + & transmission rate for in guinea & persons days & 0.0950 & initial : 0 - 0.5 , fitted : 0.0000 to 0.0676 & fitted to data + & transmission rate for in sierra leone & persons days&0.0504&initial : 0 - 0.5 , fitted : 0.0005 to 0.1858&fitted to data + & transmission rate for in liberia&persons days&0.1555&initial : 0 - 0.5 , fitted : 0.0727 to 0.1656&fitted to data + &ratio of transmission to and transmission&none&2.4565&1.5 - 3.0 & + & transmission rate for and in guinea&persons days&0.2335&calculated from and &see , + &transmission rate for and in sierra leone&persons days&0.1238&calculated from and &see , + & transmission rate for and f in liberia&persons days&0.3820&calculated from and & see , + & mortality for infected persons in guinea&none&0.6643&initial : 0.5 - 0.9 , fitted : 0.3601 to 1.4023&fitted to data + & mortality for infected persons in sierra leone&none&0.3910&initial : 0.5 - 0.9 , fitted : 0.2547 to 0.6159&fitted to data + & mortality for infected persons in liberia&none&0.6710&initial : 0.5 - 0.9 , fitted : 0.5453 to 0.8236&fitted to data + &fraction of persons in that dies&none&0.9732&0.9- 1.0& + &death rate for persons in &days&0.7597&calculated from and &see and + &burial rate&days&0.7425&0.3333 - 1.0 & + &1/duration of &days&0.7806&0.3 - 0.8 & + &1/duration of &days&0.1822&0.1429 -0.2& + &distance exponent&unitless&2&&fixed + &between - patch coupling strength for guinea&km persons&&initial : -,fitted : - &fitted to data + &between - patch 
coupling strength for sierra leone&km persons&&initial : -,fitted : -&fitted to data + &between - patch coupling strength for liberia&km persons&&initial:-,fitted : -&fitted to data + & recovery rate of persons in in guinea&days&0.0578&calculated from , , &fitted to data ( see , , ) + & recovery rate of persons in in sierra leone&days&0.1090&calculated from , , &fitted to data ( see , , ) + & recovery rate of persons in in liberia&days&0.0566&calculated from , , &fitted to data (see , , ) + & recovery rate of persons in & days&0.0209&calculated from , & + &rate and proportion of movement of persons from to in guinea & days&0.1243&calculated from , , &fitted to data ( see , , ) + & rate and proportion of movement of persons from to in sierra leone&days&0.0732&calculated from , , &fitted to data (see , , ) + &rate and proportion of movement of persons from to in liberia&days&0.1256&calculated from , , &fitted to data (see , , ) + & transmission rate from s to for locally transmitted cases of evd&days&varies&calculated from , , , , &see , + & transmission rate from s to for long - range transmitted cases of evd&days&varies&calculated from , , , , &see , + &influence of long range cases on home patch&unitless & & calculated from , , , &see , , , + & population in patch &persons&&fixed for each country& + & distance between patch and & km & &fixed for each combination of patches& + & correction factor for reporting rate , population at risk , and other factors & unitless&0.0030&0.001 - 0.1&sampled + sources + & transition rate from to & days & 0.1059 & + & transmission rate for in guinea & persons days & 0.0950 & fitted to data + & transmission rate for in sierra leone & persons days&0.0504 & fitted to data + & transmission rate for in liberia&persons days&0.1555&fitted to data + &ratio of transmission to and transmission&none&2& + & mortality for infected persons in guinea&none&0.6643&fitted to data + & mortality for infected persons in sierra leone&none&0.3910&fitted to data + & mortality for infected persons in liberia&none&0.6710&fitted to data + &fraction of persons in that dies&none&0.97& + &burial rate&days&0.9 & + &1/duration of &days&0.8 & + &1/duration of &days&0.2& + &rate and proportion of movement of persons from to in guinea & days&0.1370&fitted to data ( see , , ) + & rate and proportion of movement of persons from to in sierra leone&days&0.0806&fitted to data (see , , ) + &rate and proportion of movement of persons from to in liberia&days&0.1384&fitted to data (see , , ) + &distance exponent&unitless&2 & fixed + &between - patch coupling strength & km persons & & fitted to data + &influence of long range cases on home patch&unitless & calculated from , , , &see , , , + & population in patch &persons&fixed for each patch& + & distance between patch and & km & fixed for each combination of patches& + & correction factor for reporting rate , population at risk , and other factors & unitless&varies by patch&fitted to data + caption for supplementary animation : + * video s1 . * a gif representing the progression of ebola according to the model from march 30 , 2014 , to january 31 , 2015 . color intensity of district represents number of cumulative cases in that district : darker color represents higher number of cases .
the 2014 - 2015 ebola virus disease ( evd ) epidemic in west africa was the largest ever recorded , representing a fundamental shift in ebola epidemiology with unprecedented spatiotemporal complexity . we developed spatial transmission models using a gravity - model framework to explain spatiotemporal dynamics of evd in west africa at both the national and district - level scales , and to compare the effectiveness of local interventions ( e.g. local quarantine ) and long - range interventions ( e.g. border closures ) . incorporating spatial interactions , the gravity model successfully captures the multiple waves of epidemic growth observed in guinea . model simulations indicate that local - transmission reductions were most effective in liberia , while long - range transmission was dominant in sierra leone . the model indicates the presence of spatial herd protection , wherein intervention in one region has a protective effect on surrounding regions . the district - level intervention analysis indicates the presence of intervention - amplifying regions , which provide above - expected levels of reduction in cases and deaths beyond their borders . the gravity - modeling approach accurately captured the spatial spread patterns of evd at both country and district levels , and helped to identify the most effective locales for intervention . this model structure and intervention analysis provide information that can be used by public health policymakers to assist planning and response efforts for future epidemics . + + * keywords * : ebola virus disease , transmission modeling , spatial modeling , interventions + * abbreviations * : evd , ebola virus disease ; etu , ebola treatment unit ; who , world health organization ; iar , intervention - amplifying region
quantum bit commitment ( qbc ) is a two - party cryptography including two phases . in the commit phase , alice ( the sender of the commitment )decides the value of the bit ( or ) that she wants to commit , and sends bob ( the receiver of the commitment ) a piece of evidence , e.g. , some quantum states .later , in the unveil phase , alice announces the value of , and bob checks it with the evidence .an unconditionally secure qbc protocol needs to be both binding ( i.e. , alice can not change the value of after the commit phase ) and concealing ( bob can not know before the unveil phase ) without relying on any computational assumption .qbc is recognized as an essential primitive for quantum cryptography , as it is the building block for quantum multi - party secure computations and more complicated post - cold - war eramulti - party cryptographic protocols .unfortunately , it is widely accepted that unconditionally secure qbc is impossible - , despite of some attempts toward secure ones ( e.g. , hejpa , hepra , hearxiv , hearxiv2,qbc75,qi92 and the references therein ) .this result , known as the mayers - lo - chau ( mlc ) no - go theorem , was considered as putting a serious drawback on quantum cryptography .( note that cheat - sensitive qbc is not covered , as its security goal is defined differently from that of the qbc studied in the current paper . )very recently , we proposed a qbc protocol using orthogonal states hejpa , where the density matrices do not satisfy a crucial condition on which the mlc no - go theorem holds .the protocol is based on the goldenberg - vaidman ( gv ) quantum key distribution ( qkd ) scheme , which makes use of the mach - zehnder interferometer involving symmetric beam splitters .koashi and imoto qi917 pointed out that the gv qkd scheme can be simplified by replacing the symmetric beam splitters with asymmetric ones . herewe will apply the same idea to propose a simplified version of our qbc protocol , so that it can be more feasible and efficient . in the next section ,we introduce some basic notations and settings used throughout the paper .the koashi - imoto ( ki ) qkd scheme will be briefly reviewed in section iii .then we propose our qbc protocol in section iv , and analyze its security in section v. the feasibility will be discussed in section vi . section vii summarizes the conclusion and gives some remarks .an example of the technical attack mentioned in section vi will be provided in the appendix .generally , in both qkd and qbc the two participants are called alice and bob . but similar to , in our current qbc protocol , the behavior of bob is more like that of the eavesdropper rather than the bob in qkd . to avoid confusion , here we use the names in the following way . in qkd, the sender of the secret information is called alice , the receiver is renamed as charlie instead of bob , and the external eavesdropper is called eve . in qbc , the sender of the commitment is alice , the receiver is bob , and there is no eve since qbc merely deals with the cheating from internal dishonest participants , instead of external eavesdropping . as our main interest is focused on the theoretical possibility of secure qbc, we will only consider the ideal case where no transmission error occurs in the communication channels , nor there are detection loss or dark counts , etc .our qbc proposal is inspired by the ki qkd scheme , which makes use of the mach - zehnder interferometer illustrated in fig .1 . 
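to make the commit / unveil structure described above concrete , the toy sketch below implements a classical hash - based commitment with the same two - phase interface . it is only an illustration of the interface : a hash commitment is binding and concealing only against computationally bounded parties , which is precisely the assumption the quantum protocols discussed here aim to remove .

import os
import hashlib

def commit(bit):
    # commit phase: alice fixes b, draws a random nonce, and sends the hash as evidence
    nonce = os.urandom(32)
    evidence = hashlib.sha256(nonce + bytes([bit])).hexdigest()
    return evidence, nonce

def verify(evidence, bit, nonce):
    # unveil phase: alice announces (b, nonce); bob recomputes the hash and compares
    return hashlib.sha256(nonce + bytes([bit])).hexdigest() == evidence

evidence, nonce = commit(1)          # only `evidence` is sent to bob in the commit phase
print("bob accepts honest unveil:", verify(evidence, 1, nonce))
print("bob accepts altered bit:  ", verify(evidence, 0, nonce))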
let and denote the reflectivity and transmissivity of the asymmetric beam splitters and , with and .alice encodes the bit values and she wants to transmit to charlie , respectively , using two orthogonal states is the photon fock state for the arm .that is , each or is split into two localized wave packets , and sent to charlie separately in quantum channels and , respectively ; thus single photon nonlocality is presented .this is done by sending a single photon either from the source ( sending ) or from the source ( sending ) , then splitting it with the beam splitter made of a half - silvered mirror ( note that polarizing beam splitters are not recommended due to the security problem addressed at the end of section 6 of ) . to ensure the security of the transmission , the wave packet in channel is delayed by the storage ring , which introduces a sufficiently long delay time so that this wave packet will not leave alice s site until the other wave packet in channel already entered charlie s site .thus the two wave packets of the same photon are never present together in the transmission channels .this makes it impossible for eve to prepare and send charlie a perfect clone _ on time _ if she waits to intercept and measure both wave packets , even though are orthogonal , because she has to decide what to resend to channel before she can receive anything from channel . on the other hand ,when no eavesdropping occurs , charlie can distinguish and unambiguously by adding a storage ring to channel whose delay time is also , while introducing a phase shift to channel .the two wave packets of the same photon will then recombine and interfere on the beam splitter , which is identical to .thus the complete apparatus of alice and charlie forms a mach - zehnder interferometer , so that ( ) will always make the detector ( ) click with certainty , allowing charlie to decode the transmitted bit correctly .any mismatched result between alice s transmitted state and charlie s measurement will immediately reveals the presence of eve . comparing with the gv qkd scheme , the key difference is that and in the ki scheme are asymmetric beam splitters , while the gv scheme uses symmetric ones .the advantage of this modification is that the sending time of each photon can be fixed and publicly known beforehand , while in the gv scheme it has to be random and kept secret from eve until the security check .an important fact that will be useful for our qbc protocol is that , since the ki scheme is unconditionally secure , it is clear that eve s intercept - resend attack will always be detected with a nontrivial probability .that is , if she intercepts a state sent from alice and decodes a nontrivial amount of information , then the state she resends to charlie can not always make charlie s correct detector click at the right time with the probability , no matter what is eve s strategy on preparing the resent state .there will always be a nontrivial probability that the resent state will be detected by the wrong detector ( the one that should not click when no eavesdropping presented ) , or at the wrong time ( which is earlier or later than the correct arrival time when the state is not intercepted ) .please see for the rigorous proof , as well as some examples showing why eve s strategies do not work .as illustrated in fig . 
2 , to build a qbc protocol upon the above ki qkd scheme , we treat charlie s site as a part of alice s , so that the two parties merge into one .alice sends out a bit - string encoded with the above orthogonal states , whose value is related to the bit she wants to commit .then she receives the states herself . meanwhile , let bob take the role of eve .his action shifts between two modes . in the _ bypass _ mode, he simply does nothing so that the corresponding parts of the states return to alice intact . in the _ intercept _ mode , he applies the intercept - resend attack .that is , he intercepts the state and decodes the corresponding bit ( which can be done using the same device as that of charlie s ) , and prepares a fake state to resend to alice on time .while there could be many strategies for bob to prepare the resent state , we use to denote the lower bound of the average probability for his resent state to be caught in alice s check . as we mentioned above, the unconditional security of the ki qkd scheme guarantees that can not always equal exactly to zero for both and even when bob uses the optimal strategy .therefore , alice is able to estimate the upper bound of the frequency of the presence of the intercept mode , to limit bob from intercepting the whole string , so that the value of the committed bit can be made concealing .meanwhile , since , at the end of the commit phase there will be some bits of the string become known to bob , while alice does not know the exact position of all these bits .thus she can not alter string freely at a later time , making the protocol binding .the complete qbc protocol is described below .the _ commit _ protocol : \(1 ) bob chooses a binary linear -code and announces it to alice , where , , are agreed on by both alice and bob .\(2 ) alice chooses a nonzero random -bit string and announces it to bob .this makes any -bit codeword in sorted into either of the two subsets and . here .\(3 ) now alice decides the value of the bit that she wants to commit .then she chooses a codeword from randomly .\(4 ) alice encodes each bit of this specific as where the state is defined by equation ( [ eqpsi ] ) .she sends bob the two wave packets of the same state separately in channels and , with the storage ring on channel introducing a delay time known to bob .\(5 ) for each of alice states ( ) , bob chooses the intercept mode with probability ( ) and the bypass mode with probability .if bob chooses to apply the bypass mode , he simply keeps channels and intact so that the state sent from alice will be returned to her detectors as is .else if bob chooses to apply the intercept mode , he uses the same measurement device as that of alice s , to measure so that he can decode with certainty .meanwhile , he prepares another state and sends it back to channels and at the right time , i.e. , the time which can ensure that it reaches alice s detectors at a time that looks as if bob were applying the bypass mode .there are many different strategies how bob sends this state ( thus we left this part of bob s device as a black box in fig .one of the simplest ways is to use the same device that alice uses for sending her state .more strategies will be discussed in section v.a .it will be shown there that all these strategies can not hurt the security of the protocol , so that bob are allowed to use any of them here . 
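a small numerical sketch of the interferometric statistics behind steps ( 4 ) - ( 6 ) is given below . the asymmetric beam splitter is modelled as a 2x2 unitary with transmission t and reflection r ( t^2 + r^2 = 1 ) , and the phase shift in channel b is taken to be pi , an assumption chosen so that the honest run is deterministic ; the crude resend strategy shown ( pushing the whole photon into one channel ) is only one example of an intercept strategy .

import numpy as np

def beam_splitter(r):
    # single-photon (dual-rail) beam splitter with reflection amplitude i*r
    t = np.sqrt(1.0 - r ** 2)
    return np.array([[t, 1j * r],
                     [1j * r, t]])

def detector_probs(input_port, r, intercept=False):
    # amplitude across channels (a, b) after alice's sending beam splitter
    amp = beam_splitter(r)[:, input_port]
    if intercept:
        # one crude resend strategy: bob sends the whole photon down channel a
        amp = np.array([1.0 + 0j, 0.0 + 0j])
    amp = amp * np.array([1.0, np.exp(1j * np.pi)])   # pi phase shift in channel b
    out = beam_splitter(r) @ amp                      # recombination at the second splitter
    return np.abs(out) ** 2                           # click probabilities at (D0, D1)

r = 0.3                                               # a strongly asymmetric splitter
print("honest psi_0 ->", np.round(detector_probs(0, r), 3))             # all weight on D0
print("honest psi_1 ->", np.round(detector_probs(1, r), 3))             # all weight on D1
print("intercepted psi_0 ->", np.round(detector_probs(0, r, True), 3))  # wrong detector fires with prob r^2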
as stated above , in any strategythere is a nonvanishing probability that bob s resent state does not equal to alice s original since he has to send it before completing the measurement on ( to be further explained in section v.a too ) .let be the lower bound of this probability for all these strategies .\(6 ) alice uses the same device that charlie used in the ki qkd scheme , to measure the output of the quantum channels from bob .she counts how many times her measurement results do not match the states she sent , and denotes it as . from step ( 5 )it can be shown that . thus alice can estimate the upper bound of the probability of bob choosing the intercept mode as .alice agrees to continue with the protocol if she finds , which means that the number of s known to bob is .otherwise she concludes that bob is cheating .the _ unveil _ protocol : \(7 ) alice announces the values of and the specific she chose .\(8 ) bob accepts the commitment if and is indeed a codeword from , and every agrees with the state he detected in the intercept mode .for easy understanding , here we will first give a heuristic explanation of the security of the protocol . a more general theoretical proof will be provided in section v.b .the binary linear -code used in the protocol can simply be viewed as a set of classical -bit strings .each string is called a codeword .this set of strings has two features .\(a ) among all the possible choices of -bit strings , only a particular set of the size ( ) is selected to form this set .\(b ) the distance ( i.e. , the number of different bits ) between any two codewords in this set is not less than ( ) .feature ( a ) puts a limit on alice s freedom on choosing the initial state , since each ( ) can not be chosen independently . instead, the string formed by the indices s can only be chosen among the codewords .meanwhile , feature ( b ) guarantees that if alice wants to change the string from one codeword into another so that the value of her committed can be altered , she needs to change at least bits of the codeword .but among the states she sent to bob , there are only states which she knows for sure that bob has applied the intercept mode . for each of the rest states, her measurement result always matches the state she sent , so that she can not distinguish unambiguously whether bob has applied the bypass mode or the intercept mode .if it was the intercept mode , bob already knew the corresponding bit of the codeword , so that alice s altering it will be caught inevitably .rigorously speaking , among these states , the probability that bob has applied the intercept mode is alice s altering one bit of the codeword will stand the probability to be detected , and her probability of altering bits without being detected will be . alternatively , alice may prepare initially in a state where is not a codeword .instead , it is half - way between two codewords , so that changing will be sufficient to turn it into one of the codewords . 
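a toy sketch of the classical coding layer and of the binding argument sketched above follows . the [ 7 , 4 , 3 ] hamming code stands in for the binary linear code , the two subsets are formed by the parity of the inner product with alice s announced string ( one natural reading of step ( 2 ) ) , and the per - bit detection probability q is an assumed illustrative value ; the point is only that the escape probability falls geometrically with the number of bits that must be altered .

import itertools
import random
import numpy as np

# generator matrix of the [7,4,3] hamming code, used here purely as an example
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

codewords = [tuple(np.mod(np.array(m) @ G, 2))
             for m in itertools.product([0, 1], repeat=4)]

s = np.array([1, 0, 1, 1, 0, 0, 1])      # alice's nonzero string (random in the protocol)
C0 = [c for c in codewords if np.dot(c, s) % 2 == 0]   # codewords unveiling b = 0
C1 = [c for c in codewords if np.dot(c, s) % 2 == 1]   # codewords unveiling b = 1

b = 1
committed = random.choice(C1 if b else C0)
print("subset sizes:", len(C0), len(C1), " committed codeword:", committed)

# binding argument: if each altered bit is caught independently with probability q,
# changing d (or d/2) bits undetected becomes exponentially unlikely as d grows
q = 0.2                                   # assumed per-bit detection probability
for d in (8, 16, 32, 64):
    print(f"d = {d:2d}: escape prob {(1 - q) ** d:.2e} (full distance), "
          f"{(1 - q) ** (d // 2):.2e} (half-way state)")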
in this case, her probability of escaping the detection will be increased to .nevertheless , in either way the probability drops exponentially as increases .thus it can be made arbitrarily close to zero .on the other hand , feature ( a ) also guarantees that the number of different codewords having less than bits in common increases exponentially with .this is the key that makes our protocol secure against bob , as alice can bound the frequency that bob applies the intercept mode , which in turns bounds the number of the bits known to bob so that he can not know alice s actual choice of the codeword .the reason is that in step ( 5 ) of the protocol , no matter what is the strategy that bob used in his intercept mode to prepare the resent state , it can be shown that his resent state can not _always _ equal to the he received from alice , as long as he wants to ensure that the time it reaches alice s detectors will show no difference than the case where he were applying the bypass mode .that is , either bob s resent state in the intercept mode will inevitably make alice s detectors click earlier or later than the time it does when the bypass mode were used instead , or the resent state will be different from with a nonvanishing probability . to see why this is the case ,let us first consider the strategy where bob prepares the resent state using the same device that alice uses for sending her .suppose that the first wave packet of alice s enters bob s site via channel ( before entering his storage ring ) at time ( which should be agreed on by both alice and bob beforehand ) .then the second wave packet of will enter bob s site via channel at time due to the existence of the storage ring in alice s sending device ( see fig .2 ) . as there is also another storage ring in alice s measurment device, bob must send the first wave packet of his resent state into channel at time , and send the second wave packet into channel at time , so that they can reach alice s measurement device at a time that looks like bob is running the bypass mode . here for simplicity , we assume that the time for the wave packets to travel the rest part of the channels ( other than the storage rings ) is negligible . that is , the first wave packet of bob s state needs to be sent before the second wave packet of alice s reaches him .otherwise his resent state will reach alice s detectors later than the expected arrival time when no interception occurs .therefore , by the time bob prepares the resent state , he has not received alice state completely yet so he does not know the form of .thus his resent state can not match alice s state with probability , and he can not make them match by his local operations acting solely on the second wave packet of his resent state ( the one to be sent via channel ) after the first wave packet in channel was already sent .on the other hand , suppose that bob waits until so that alice s enters his measurement device completely , and prepares the resent state following the measurement result .then even though the form of the states will match perfectly , the resent state will reach alice s detectors much later than it should when bob uses the bypass mode , because the storage ring in channel of alice s measurement device will delay the corresponding wave packet .this will make alice aware that bob is using the intercept mode too .alternatively , there can be another strategy where bob simply sends all wave packets of his state simultaneously via one of the channels alone , e.g. 
, through channel at time or through channel at time .but he will not be able to guarantee which of alice s detectors will click with certainty .though here we have only analyzed the above few strategies as examples , the result is common .that is , in any strategy potentially existed , there will be a nonvanishing probability ( let denote the lower bound of the probability for all strategies ) that alice will find a mismatched result between the she sent and her measurement on the state she received from bob , or the arrival time of bob s state is different from what it should be in the bypass mode .this is because , as mentioned in section iv , the role of bob in our protocol is actually the same as that of eve in the ki qkd scheme .if there is a strategy which is not bounded by the above result , then it can be utilized to fulfill a successful eavesdropping to the ki qkd scheme , which will conflict with the existing proof of the unconditional security of the scheme . as a consequence ,whenever bob applies the intercept mode , alice can distinguish it from the bypass mode with the probability . then as described in step ( 6 ) , by counting the number of the mismatched results , alice can estimate the upper bound of the probability of bob choosing the intercept mode as .as she agrees to continue only when , it is guaranteed that during the commit phase , bob knows only bits of . since feature ( a ) of the binary linear -code ensures that the number of different codewords having bits in common increases exponentially with , the potential choices for are too much for bob to determine whether belongs to the subset or .thus the amount of information bob gained on the committed bit before the unveil phase can be made arbitrarily close to zero by increasing .taking both alice s and bob s cases studied above into consideration , we can see that fixing and while increasing will then result in a protocol secure against both sides .while it is hard to find a completely general proof like those for qkd qi70 showing that the protocol is secure against all cheating strategies that may potentially exist , here it can be shown that our protocol is at least secure against the specific cheating strategy proposed in the mlc theorem . as pinpointed out in section 4 of , while there are many variations and extensions of the mlc no - go theorem - , their proofs all have the following common features .\(i ) the reduced model .according to the no - go proofs , any qbc protocol can be reduced to the following model .alice and bob share a quantum state in a given hilbert space .each of them performs unitary transformations on the state in turns .all measurements are performed at the very end .( ii ) the coding method .the quantum state corresponding to the committed bit has the form the systems and are owned by alice and bob respectively .\(iii ) the concealing condition . to ensure that bob s information on the committed bit is trivial before the unveil phase, any qbc protocol secure against bob should satisfy is the reduced density matrix corresponding to alice s committed bit , obtained by tracing over system in equation ( [ coding ] ) .note that in some presentation of the no - go proofs ( e.g. , ) , this feature was expressed using the trace distance or the fidelity instead of the reduced density matrices , while the meaning remains the same .\(iv ) the cheating strategy .as long as equation ( [ eqconcealing ] ) is satisfied , according to the hughston - jozsa - wootters ( hjw ) theorem ( a.k.a. 
the uhlmann theorem , etc . ) , there exists a local unitary transformation for alice to map into successfully with a high probability .thus a dishonest alice can unveil the state as either or at her will with a high probability to escape bob s detection .consequently , a concealing qbc protocol can not be binding . following the method of the mlc theorem to reduce our protocol into an equivalent form where alice and bob perform unitary transformations in turns on the quantum state they share, we can see that our commit protocol is essentially the following 3-stage process .\(i ) alice prepares and sends bob a system containing qubits ( ) .\(ii ) bob prepares an -qubit system and an -qutrit system .then he performs an operation on the combined system .\(iii ) bob sends system to alice . in this process, the initial state of each qubit in is chosen at bob s choice , and kept secret from alice .the system is for storing the result of bob s measurement on .the three orthogonal states of each qutrit in are denoted as , and , where ( ) means that the measurement result of is ( ) , and means that is not measured so that the result remains unknown .all the qutrits in system are initialized in .the operation is defined by the mode that bob chooses for each qubit in , where each ( ) is a -particle operator acting on the -th qubits / qutrit of , and .if bob chooses the bypass mode for the -th qubit of , then is a -qubit permutation operator which interchanges the states of the -th qubits of and , and is the identity operator that keeps the -th qutrit of unchanged . or if bob chooses the intercept mode for the -th qubit of , then is the identity operator on the -th qubit of , ( ) is an unitary transformation on the -th qutrit of which can map the state into ( ) .that is , applying is equivalent to measuring the -th qubit of and storing the result in the -th qutrit of , while keeping the -th qubit of unchanged . among all the possible forms of the operation , bob chooses one of those that can pass the security check in step ( 6 ) of our original protocol in section iv ( i.e. , the number of in should be ) , and keeps his choice secret from alice .now we will show why the specific cheating strategy in the mlc theorem does not apply to our protocol . with the above equivalent description , we can see that after the commit phase , the density matrix of the systems shared between alice and bob is in the form , where , and are the density matrices for the systems , and , respectively .note that here the first qubits are now owned by alice , i.e. , they serve like the system in equation ( [ coding ] ) even though system was prepared by bob .meanwhile , the rest qubits and qutrits now belong to bob and serve like the system in equation ( [ coding ] ) , even though system was prepared by alice .let ( ) denote the initial density matrix of system prepared by alice , that can unveil the committed bit as ( ) . 
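the argument developed in the next paragraphs turns on when a local unitary on alice s side can map one committed state to the other . the numerical sketch below , with arbitrary toy dimensions and states , illustrates both situations : two purifications sharing the same bob - side reduced state are connected by an alice - side unitary ( the hjw / uhlmann statement ) , while states whose bob - side reduced states are orthogonal , as for the two committed values here , can not be , since an alice - side unitary leaves bob s reduced state untouched .

import numpy as np

rng = np.random.default_rng(0)

def random_unitary(d):
    # unitary Q from the QR decomposition of a random complex matrix
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return np.linalg.qr(z)[0]

dA = dB = 3
basis_A0, basis_A1, basis_B = random_unitary(dA), random_unitary(dA), random_unitary(dB)

def purification(spectrum, basis_A):
    # |psi> = sum_i sqrt(p_i) |a_i>_A |b_i>_B, stored as a dA x dB amplitude matrix
    return sum(np.sqrt(spectrum[i]) * np.outer(basis_A[:, i], basis_B[:, i]) for i in range(dA))

# hjw situation: same bob-side spectrum and schmidt vectors, different alice bases
p = np.array([0.5, 0.3, 0.2])
psi0, psi1 = purification(p, basis_A0), purification(p, basis_A1)
U_A = basis_A1 @ basis_A0.conj().T          # alice-side unitary mapping |a_i^0> -> |a_i^1>
print("hjw case, ||(U_A x I)psi0 - psi1|| =", np.round(np.linalg.norm(U_A @ psi0 - psi1), 12))

# protocol situation: the two committed states have orthogonal bob-side marginals,
# which no alice-side unitary can change, so no such U_A exists
p0, p1 = np.array([0.7, 0.3, 0.0]), np.array([0.0, 0.0, 1.0])
M0, M1 = purification(p0, basis_A0), purification(p1, basis_A1)
rho_B0, rho_B1 = M0.conj().T @ M0, M1.conj().T @ M1    # bob's marginals (up to conjugation)
print("marginal overlap tr(rho_B0 rho_B1) =", np.round(np.abs(np.trace(rho_B0 @ rho_B1)), 12))
print("U_A leaves bob's marginal unchanged:", np.allclose((U_A @ M0).conj().T @ (U_A @ M0), rho_B0))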
according to the mlc theorem ,now the goal for an dishonest alice is to find a local unitary transformation acting on the qubits at her side alone , that can map into .that is , should satisfy{a}^{\dagger } = u_{b}(\rho ^{\beta } \otimes \rho _ { 1}^{\alpha } \otimes \rho ^{\gamma } ) u_{b}^{\dagger } .\]]applying on both sides of this equation , we yield that was kept secret from alice , thus she has to choose an which is independent of .meanwhile , the right side of equation ( [ ua ] ) is independent of too .therefore , equation ( ua ) can not be satisfied in general , unless always commutes with the specific s that bob may choose in the protocol . in this case , and equation ( [ ua ] ) becomes in many previous qbc protocols ( e.g. ) there is .then the hjw theorem guarantees that such a local exists , so that alice can alter the value of her committed bit . this is why the cheating strategy in the mlc theorem is always successful in such protocols .however , as shown from equation ( eqpsi ) , in our protocol and are orthogonal .thus the state corresponding to a specific codeword is orthogonal to the state corresponding to any other codeword .consequently , our qbc protocol satisfies , which is a crucial difference from previous insecure protocols . in this casethe hjw theorem does not apply , making alice unable to find an local transformation acting on her system ( i.e. , system ) alone while satisfying equation ( [ ua2 ] ) . for this reason ,our protocol is immune to the specific cheating strategy proposed in the mlc no - go theorem . as the system alice sent to bob in our qbc protocol satisfies , at the first glance it seems that bob can simply perform a measurement to project the state of into the hilbert spaces supported by ( ) and thus learn the value of alice s committed .but this is not true , because as shown above , the density matrix of the systems shared between alice and bob after the commit phase is .that is , bob is required to perform the operation on system .this is incommutable with the measurement on for distinguishing and , since equation ( [ u bypass ] ) indicates that contains permutation operators which act on not only system , but also other systems .therefore , and can not be both applied on .bob can only choose one of them .the timing of the sending of the states in the protocol also prevents bob from keeping system unsent until he receives the entire system .thus he needs to decide on the fly which operation to apply .once he applied , then the state of will be disturbed so that bob will lose the chance to apply on it for decoding .moreover , the difference between and is detectable to alice . according to step ( 5 ) of our protocol, an is considered legitimate if it includes the operator ( i.e. , bob is using the bypass mode ) for times . on the contrary , because the minimal distance between the codewords is , the number of in must be less than . 
otherwise as a basic property of the binary linear -code, the number of the possible codewords having less than bits in common will be at the order of magnitude of so that such an can provide only trivial information on the value of alice s committed .but as elaborated in section v.a , the unconditional security of the ki qkd scheme ensures that , for each of alice s state which bob applies the intercept mode , his resent state can equal to with the probability only , where there is always no matter which resend strategy bob uses .when the number of the applied bypass mode is less than , alice will find that the number of mismatched result between her received state and the original she sent will be .if she takes as stated in step ( 6 ) of the protocol , then she will find that .thus she knows that bob was attempting to apply the operation instead of a legitimate .namely , while alice s committed state satisfies so that it is distinguishable to bob , alice s security check in the protocol requires bob to perform an operation , which is incommutable with the operation for distinguishing and .the protocol thus becomes concealing against bob .in the above we focused only on the theoretical possibility of evading the mlc no - go theorem .but we can see from fig .2 that our protocol is also very feasible , as the required experimental technology is already available today . also ,as the secret random sending time is no longer required , the commit phase will take less time than that of our previous protocol , and the total number of photons that bob needs to send in the intercept mode will be significantly reduced . more rigorously ,suppose that the minimal time for bob to shift between the intercept and bypass modes is .when both protocols choose the same as the length of the codewords , our previous protocol requires bob to send ( .note that was denoted as in ) photons in total , and the duration time of the commit phase is .but in the current protocol , the photon number is reduced to , and the duration time is . as it was suggested in section 3 of that a typical choice of the parameters is , we can see that the current protocol can be times more efficient than our previous one .there is another advantage of removing the need to keep the sending time secret at first .that is , now alice and bob can determine the binary linear -code and decide on the sending time of the states ( ) beforehand , so that no more classical communication is needed at all during the commit phase , unless one of them finds the other participant cheating and announces to abort .therefore , it not only reduces the communication cost significantly , but also makes it easier for security analysis and comparison with the mlc theorem , which was generally deduced in a theoretical model where classical communication is not presented directly . nevertheless , under practical settings , some more security checks should be added against technical attacks .especially , the physical systems implementing the states may actually have other degrees of freedom , which leave rooms for alice s cheating .for example , she may send photons with certain polarization or frequency , so that she can distinguish them from the photons bob sends in the intercept mode .in this case , bob and alice should discuss at the beginning of the protocol , to limit these degrees of freedom to a single mode . 
in step ( 5 )when bob chooses the intercept mode , he should also measure occasionally these degrees of freedom of some of alice s photons , instead of performing the measurement in the original step ( 5 ) . then if alice wants to send distinguishable photons with a high probability so that they are sufficient for her cheating , she will inevitably be detected .moreover , when bob applies the bypass mode , he should add phase shifters to both channels and to introduce the same phase shift in both channels so that an honest alice will not be affected ( to be explained in the appendix ) , while the amount of this phase shift is randomly chosen and kept secret from alice , so that the counterfactual attack described in the appendix can be defeated .note that all these technical attacks and the corresponding modifications are due to the imperfection of the physical systems implementing the protocol .they should not be messed with the theoretical possibility of evading the mlc no - go theorem .in conclusion , inspired by the ki simplified version of the gv qkd scheme , we improved our previously proposed qbc protocol , and proved that it is immune to the specific cheating strategy used in the mlc no - go theorem of unconditionally secure qbc .the key reason is that the density matrices of alice s states corresponding to the committed values and are orthogonal to each other , making her unable to find the local unitary transformation for the cheating by using the hjw theorem . meanwhile , the protocol remains concealing against bob because he is required to perform another operation , which is incommutable with the measurement for distinguishing the density matrices .however , there may potentially exist innumerous cheating strategies other than the one in the mlc theorem .it is natural to ask whether our protocol can be unconditionally secure against all these strategies .a rigorous evaluation of the upper bound of the cheating probability is also needed .for example , our protocol involves a probability which denotes the lower bound of the average probability for bob s resent state in step ( 5 ) to be caught in alice s check .the exact value of should be determined by the rigorous quantitatively security analysis of the ki qkd scheme .unfortunately , such a value was not yet provided in the literature . in turns ,the parameters and in the binary linear -code used in our protocol need to be chosen according to .therefore it remains unknown whether the cheater can have a nontrivial probability of success if these parameters are improperly chosen .we would like to leave these questions open for future researches .though the current qbc protocol and the one in have similarities in many ways , the underlying origins of their security against bob are somewhat different .while both protocols are immune to bob s cheating because they are based on unconditionally secure qkd schemes , as pointed out in , the gv qkd scheme can actually be viewed as utilizing three orthogonal states two photon states and one vacuum state .its security is provided by the random sending time of the photons . 
on the contrary , the ki qkd scheme does not require the vacuum state and the secret sending time . the security is guaranteed by the fact that the eavesdropper can not fake the states with certainty owing to the use of the asymmetric beam splitters . similarly , the security of the qbc protocol in against bob is based on alice s random sending time that remains secret before the last step of the commit phase , while in our current qbc proposal , it is because bob can not fake the states with certainty when he runs the intercept mode . therefore our current protocol is more than merely a simplification of the presentation . the work was supported in part by the nsf of china , the nsf of guangdong province , and the foundation of zhongshan university advanced research center . as we mentioned in sec . 6 , under practical settings our qbc protocol may need some modifications against technical attacks . here we give such an example . recently a cheating strategy against counterfactual qkd protocols qi801,qi1026 was proposed . unlike general intercept - resend attacks in which measurements are performed on the quantum states carrying the secret information , in this strategy the cheater makes use of the quantum counterfactual effect to detect the working modes of the devices of other participants . thus it was named the counterfactual attack . here we will skip how it applies to qkd protocols , and focus only on its impact on our qbc protocol . ( f2 ) when the paths and are adjusted correctly , two wave packets coming from paths and , respectively , will interfere and combine together , and enter path with certainty . an ideal fbs that can realize these functions faithfully does not exist in principle . thus it is called fictitious . for example , devices with the functions ( f2 ) and ( f3 ) may not accomplish the function ( f1 ) perfectly , i.e. , a photon coming from path could pass the devices with a nontrivial probability , making the attack detectable . however , an fbs can be implemented approximately by using an infinite number of ordinary bs . in practice , the number of bs involved in the implementation has to be finite . but if the deviation from an ideal fbs is too small to be detected within the capability of available technology , then the attack could become a real threat . suppose that an ideal fbs is available to a dishonest alice in our qbc protocol . when she is supposed to send bob a state in step ( 4 ) , she runs both the fbs system in fig . 3 and the apparatus in the honest protocol ( i.e. , the one shown in fig . 2 ) simultaneously in parallel , with path of the fbs system connecting to both the input and output of bob s channel ( or both the input and output of bob s channel ) . the apparatus in fig . 2 works as usual so that the protocol can be executed as if she were honest , while the fbs system serves as a probe to detect bob s mode . according to the function ( f2 ) of the fbs , whenever bob applies the bypass mode in step ( 5 ) , the wave packets of a photon alice sent to the fbs will be returned from both paths and so that the detector will click with certainty . on the other hand , whenever bob applies the intercept mode , an ideal fbs can guarantee that will never click as path is actually blocked . therefore alice can learn bob s mode unambiguously .
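the numerical sketch below illustrates why the phase - randomization countermeasure described in the next paragraph defeats this probe : applying the same unknown phase to both channels is only a global phase for the honest interferometer , so alice s detector statistics are unchanged , while a probe that interferes one returned channel against a local reference tuned for zero phase no longer clicks deterministically . the balanced probe interferometer used here is a schematic stand - in for the fbs , and the pi phase in channel b is the same assumption as in the earlier sketch .

import numpy as np

def beam_splitter(r):
    t = np.sqrt(1.0 - r ** 2)
    return np.array([[t, 1j * r], [1j * r, t]])

def honest_probs(input_port, r, theta):
    # bob applies the same phase theta to BOTH channels in the bypass mode
    amp = beam_splitter(r)[:, input_port] * np.exp(1j * theta)
    amp = amp * np.array([1.0, np.exp(1j * np.pi)])     # alice's usual pi shift in channel b
    return np.abs(beam_splitter(r) @ amp) ** 2

def probe_click_prob(theta):
    # schematic probe: interfere the channel-b return against a local reference
    # on a balanced splitter, with path lengths tuned for theta = 0
    reference, returned = 1.0 / np.sqrt(2.0), np.exp(1j * theta) / np.sqrt(2.0)
    out = beam_splitter(np.sqrt(0.5)) @ np.array([reference, returned])
    return np.abs(out[0]) ** 2

r = 0.3
for theta in (0.0, 1.0, 2.5):
    print(f"theta = {theta:3.1f}: honest psi_0 -> {np.round(honest_probs(0, r, theta), 3)}, "
          f"probe click prob = {probe_click_prob(theta):.3f}")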
since bobdoes not know the state alice sends when he applies the bypass mode , alice can lie about the value of the corresponding freely , thus alters her committed in the unveil phase .nevertheless , it is easy to defeat this counterfactual attack . as pointed out in ref . , bob s randomizing the optical length of path is sufficient to destroy the interference effect in the fbs system .therefore in our protocol , bob can simply add extra phase shifters ( other than the one shown in fig .2 ) to both channels and when he applies the bypass mode , to introduce the same phase shift in both channels , where the value of is randomly chosen and kept secret from alice , and can vary for different . in this case , after passing bob s apparatus , alice s initial states and will become , respectively , we can see that ( ) differs from ( ) merely by a global factor .it is well known that such a global factor is not detectable .in fact , in our case the interference pattern of the two wave packets meeting at the beam splitter of alice s measurement device is determined by their relative phase difference . no change will be detected when they both receive a phase shift .thus an honest alice who uses the original apparatus described in fig . 2 will still detect ( ) as ( ) , even though she does not know the value of .on the other hand , if alice wants to apply the above counterfactual attack , without knowing she can not know how to adjust path to ensure in fig .3 clicking with certainty . consequently, there will be times that alice does not know which mode bob is running . then the number of s that she can alter will be limited , which is insufficient to change the committed as long as the value of in our qbc protocol is properly chosen .99 bennett , c.h . , brassard , g. : quantum cryptography : public key distribution and coin tossing . in : proceedings of ieee international conference on computers , systems , and signal processing , pp .ieee , new york ( 1984 ) brassard , g. , crpeau , c. , jozsa , r. , langlois , d. : a quantum bit commitment scheme provably unbreakable by both parties . in proceedings of 34th annual ieee symposium on foundations of computer science , pp .ieee , los alamitos ( 1993 ) crpeau , c. : what is going on with quantum bit commitment ? in : proceedings of pragocrypt 96 : 1st international conference on the theory and applications of cryptology .czech technical university publishing house , prague ( 1996 ) dariano , g.m . : the quantum bit commitment : a complete classification of protocols ( shortened version of quant - ph/0209149 ) .in : proceedings of qcm&c .rinton press , boston . also available as .e - print .quant - ph/0209150v1 ( 2002 ) dariano , g.m . ,kretschmann , d. , schlingemann , d. , werner , r.f . : reexamination of quantum bit commitment : the possible and the impossible .a * 76 * , 032328 ( 2007 ) .e - print .quant - ph/0605224v2 ( 2006 ) chiribella , g. , dariano , g.m ., perinotti , p. , schlingemann , d.m . ,werner , r.f . : a short impossibility proof of quantum bit commitment .a * 377 * , 1076 ( 2013 ) .e - print .arxiv:0905.3801v1 ( 2009 ) li , q. , li , c.q . ,long , d .- y . , chan , w.h ., wu , c .- h . : on the impossibility of non - static quantum bit commitment between two parties .quantum inf . process . * 11 * , 519 ( 2012 ) .e - print .arxiv:1101.5684v1 ( 2011 )
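to make the role of the common phase shift concrete , the following minimal numerical sketch may help . it assumes a dual - rail ( two - path ) description of alice s photon , two illustrative orthogonal states psi0 and psi1 , and an ordinary 50/50 beam splitter in alice s measurement device ; none of these specifics are taken from the protocol itself , they only illustrate the argument above that a phase applied equally to both channels is a global factor and leaves an honest alice s detection statistics unchanged .

```python
import numpy as np

def beam_splitter_5050():
    # A standard symmetric 50/50 beam splitter acting on the two path amplitudes.
    return np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

def detection_probs(state, phi):
    # Bob's bypass mode with the proposed defence: the same (secret) phase
    # shift phi applied to BOTH channels before Alice's measurement.
    shifted = np.exp(1j * phi) * state          # a global phase factor
    out = beam_splitter_5050() @ shifted
    return np.abs(out) ** 2                     # click probabilities of the two detectors

# Two illustrative orthogonal dual-rail states (assumptions, not the protocol's states).
psi0 = np.array([1, 1j]) / np.sqrt(2)
psi1 = np.array([1, -1j]) / np.sqrt(2)

for phi in (0.0, 0.7, 2.1):
    print(phi, detection_probs(psi0, phi), detection_probs(psi1, phi))
# The detection probabilities do not depend on phi, so an honest Alice is
# unaffected; Alice's fictitious-beam-splitter probe, by contrast, relies on
# tuning an interference condition that the unknown phi destroys.
```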
we simplified our previously proposed quantum bit commitment ( qbc ) protocol based on the mach - zehnder interferometer , by replacing symmetric beam splitters with asymmetric ones . this replacement eliminates the need for the random sending time of the photons ; thus , the feasibility and efficiency are both improved . the protocol is immune to the cheating strategy in the mayers - lo - chau no - go theorem of unconditionally secure qbc , because the density matrices of the committed states do not satisfy a crucial condition on which the no - go theorem holds .
in the study of population dynamics , it turns out to be very useful to classify individual interactions in terms of evolutionary games . early mathematical theories of strategic interactions were based on the assumption of rational choice : an agent s optimal action depends on its expectations on the actions of others , and each of the other agents actions depend on their expectations about the focal agent . in evolutionary game theory , successful strategies spread by reproduction or imitation in a population .evolutionary game theory not only provides a platform for explaining biological problems of frequency dependent fitness and complex individual interactions such as cooperation and coordination . in finite populations , it also links the neutral process of evolution to frequency dependence by introducing an intensity of selection .evolutionary game theory can also be used to study cultural dynamics including human strategic behavior and updating .one of the most interesting open questions is how do individuals update their strategies based on the knowledge and conception of others and themselves ?two fundamentally different mechanisms can be used to classify strategy updating and population dynamics based on individuals knowledge about their strategic environment or themselves : imitation of others and self - learning based on one s own aspiration . in imitation dynamics , players update their strategies after a comparison between their own and another individual s success in the evolutionary game . for aspiration - driven updating, players switch strategies if an aspiration level is not met , where the level of aspiration is an intrinsic property of the focal individual . in both dynamics, novel strategies can not emerge without additional mechanisms such as spontaneous exploration of strategy space ( similar to mutation ) .the major difference is that the latter does not require any knowledge about the payoffs of others .thus aspiration level based dynamics , a form of self - learning , require less information about an individual s strategic environment than imitation dynamics .aspiration - driven strategy - update dynamics are commonly observed in studies of animal and human behavioral ecology .for example , fish would ignore social information when they have relevant personal information , and experienced ants hunt for food based on their own previous chemical trials rather than imitating others .furthermore , a form of aspiration - level - driven dynamics play a key role in the individual behaviors in rat populations .these examples clearly show that the idea behind aspiration dynamics , i.e. , self - evaluation , is present in the animal world . in behavioral sciences ,such aspiration - driven strategy adjustments generally operate on the behavioral level .however , it can be speculated that self - learning processes can have such an effect that it might actually have a downward impact on regulatory , and thus genetic levels of brain and nervous system .this , in turn , could be seen as a mechanism that alters the rate of genetic change .whereas such wide reaching systemic alterations are more speculative , it is clear that aspiration levels play a role in human strategy updating .we study the statistical mechanics of a simple case of aspiration - driven self - learning dynamics in well - mixed populations of finite size .deterministic and stochastic models of imitation dynamics have been well studied in both well - mixed and structured populations . 
for aspiration dynamics ,numerous works have emerged studying population dynamics on graphs , but its impact in well - mixed populations a basic reference case , one would think is far less well understood .although deterministic aspiration dynamics , i.e. , a kind of win - stay - lose - shift dynamics , in which individuals are perfectly rational have been analyzed , it is not clear how processes with imperfect rationality influence the evolutionary outcome . here, we ask whether a strategy favored under pairwise comparison driven imitation dynamics can become disfavored under aspiration - driven self - learning dynamics . to this end , in our analytical analysis , we limit ourselves to the weak selection , or weak rationality approximation , where payoffs via the game play little role in the decision - making .as it has been shown that under weak selection , the favored strategy is invariant for a wide class of imitation processes .we show that for pairwise games , the aspiration dynamics and the imitation dynamics always share the same favored strategies . for multi - player games ,however , the weak selection criterion under aspiration dynamics that determines whether a strategy is more abundant than the other differs from the criterion under imitation dynamics .this paves the way to construct multi - player games , for which aspiration dynamics favor one strategy whereas imitation dynamics favor another .furthermore , in contrast to deterministic aspiration dynamics , if the favored strategy is determined by a global aspiration level , the average abundance of a strategy in the stochastic aspiration dynamics is invariant with respect to the aspiration level , provided selection is weak .we also extrapolate our results to stronger selection cases through numerical simulation .we consider evolutionary game dynamics with two strategies and players . from these , the more widely studied games emerge as a special case . in individual encounters ,players obtain their payoffs from simultaneous actions .a focal player can be of type , or , and encounter a group containing other players of type , to receive the payoff , or .for example , a player , which encounters individuals of type , obtains payoff .an player in a group of one other player and thus players obtains payoff .all possible payoffs of a focal individual are uniquely defined by the number of in the group , such that the payoff matrix reads for any group engaging in a one - shot game , we can obtain each member s payoff according to this matrix . in a finite well - mixed population of size , groups of size are assembled randomly , such that the probability of choosing a group that consists of another players of type , and of players of type , is given by a hypergeometric distribution .for example , the probability that an player is in a group of other is given by , where ( ) is the number of players in the population , and is the binomial coefficient . the expected payoffs for any or in a population of size , with players of type and players of type , are given by in summary , we define a -player stage game , shown in eq .( [ payoff_matrix ] ) , from which the evolutionary game emerges such that each individual obtains an expected payoff based on the current composition of the well - mixed population . in the following , we introduce an update rule based on a global level of aspiration . 
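to make the sampling scheme concrete , the sketch below computes the expected payoffs of an a player and a b player from the hypergeometric group composition described above ; the convention that a focal player samples d - 1 co - players from the remaining N - 1 individuals without replacement is the standard one assumed here , and the lists a and b hold the payoff - matrix entries for a focal a ( respectively b ) facing k co - players of type a .

```python
from math import comb

def expected_payoffs(i, N, d, a, b):
    """Expected payoffs pi_A(i), pi_B(i) in a well-mixed population with
    i players of type A and N - i of type B.  Groups of size d are formed
    by sampling without replacement, so the number k of A co-players of a
    focal individual follows a hypergeometric distribution.
    a[k] / b[k]: payoff of an A / B player facing k co-players of type A."""
    denom = comb(N - 1, d - 1)
    pi_A = sum(comb(i - 1, k) * comb(N - i, d - 1 - k) / denom * a[k]
               for k in range(d)) if i >= 1 else 0.0
    pi_B = sum(comb(i, k) * comb(N - i - 1, d - 1 - k) / denom * b[k]
               for k in range(d)) if i <= N - 1 else 0.0
    return pi_A, pi_B
```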
this allows us to define a markov chain describing the inherently stochastic dynamics in a finite population : probabilistic change of the composition of the population is driven by the fact that each individual compares its actual payoff to an imaginary value that it aspires .note here that we are only interested in the simplest way to model such a complex problem and do not address any learning process that may adjust such an aspiration level as the system evolves . for a sketch of the aspiration - driven evolutionary game , see fig . [ model ] .players is chosen randomly from the finite population to play the game .according to this , game players calculate and obtain their actual payoffs .they are more likely to stochastically switch strategies if the payoffs they aspire are not met . on the other hand ,the higher the payoffs compared to the aspiration level are , the less likely they switch their strategies . besides , strategy switchingis also determined by a selection intensity . for vanishing selection intensity ,switching is entirely random irrespective of payoffs and the aspiration level . for increasing selection intensity, the self - learning process becomes increasingly more `` optimal '' in the sense that for high , individuals tend to always switch when they are dissatisfied , and never switch when they are tolerant .we examine the simplest possible setup , where the level of aspired payoff is a global parameter that does not change with the dynamics .we show that , however , statements about the average abundance of a strategy do not depend on under weak selection ( ) ., scaledwidth=95.0% ] in addition to the inherent stochasticity in finite populations , there is randomness in the process of individual assessments of one s own payoff as compared to a random sample of the rest of the population ; even if an individual knew exactly what to do , he might still fail to switch to an optimal strategy , e.g. , due to a trembling hand . herewe examine the simplest case of an entire population having a certain level of aspiration .players need nt see any particular payoffs but their own , which they compare to an aspired value .this level of aspiration , , is a variable that influences the stochastic strategy updating .the probability of switching strategy is random when individuals payoffs are close to their level of aspiration , reflecting the basic degree of uncertainty in the population .when payoffs exceed the aspiration , strategy switching is unlikely . at high values of aspirationcompared to payoffs , switching probabilities are high .the level of aspiration provides a global benchmark of tolerance or dissatisfaction in the population .in addition , when modeling human strategy updating , one typically introduces another global parameter that provides a measure for how important individuals deem the impact of the actual game played on their update , the intensity of selection , .irrespective of the aspiration level and the frequency dependent payoff distribution , vanishing values of refer to nearly random strategy updating . for large values of ,individuals deviations from their aspiration level have a strong impact on the dynamics .note that although the level of aspiration is a global variable and does not differ individually , due to payoff inhomogeneity there can always be a part of the population that seeks to switch more often due to dissatisfaction with the payoff distribution . 
in our microscopic update process, we randomly choose an individual , , from the population , and assume that the payoff of the focal individual is . to model stochastic self - learning of aspiration - driven switching , we can use the following probability function which is similar to the fermi - rule , but replaces a randomly drawn opponent s payoff by one s own aspiration .the wider the positive gap between aspiration and payoff , the higher the switching probability .reversely , if payoffs exceed the level of aspiration individuals become less active with increasing payoffs . the aspiration level , ,provides the benchmark used to evaluate how `` greedy '' an individual is .higher aspiration levels mean that individuals aspire to higher payoffs .if payoffs meet aspiration , individuals remain random in their updates .if payoffs are below aspiration , switching occurs with probability larger than random ; if they are above aspiration , switching occurs with probability lower than random .the selection intensity governs how strict individuals are in this respect . for , strategy switchingis entirely random ( neutral ) .low values of lead to switching only slightly different from random but follow the impact of . for increasing ,the impact of the difference between payoffs and the aspiration becomes more important . in the case of , individuals are strict in the sense that they either switch strategies with probability one if they are not satisfied , or stay with their current strategy if their aspiration level is met or overshot .the spread of successful strategies is modeled by a birth and death process in discrete time . in one time step ,three events are possible : the abundance of , , can increase by one with probability , decrease by one with probability , or stay the same with probability .all other transitions occur with probability zero .the transition probabilities are given by } } , \label{tc_i+1 } \\t_{i}^{-}&=\frac{i}{n}\ , \frac{1}{1+e^{-\omega\,[\,\alpha-\pi_{a}(i)\ , ] } } , \label{tc_i-1}\\ t_{i}^{0}&=1-t_{i}^{+}-t_{i}^{- } \label{tc_i}.\end{aligned}\ ] ] in each time step , a randomly chosen individual evaluates its success in the evolutionary game , given by eqs .( [ payoff_c ] ) , ( [ payoff_d ] ) , compares it to the level of aspiration , and then changes strategy with probability lower than if its payoff exceeds the aspiration .otherwise , it switches with probability greater than , except when the aspiration level is exactly met , in which case it switches randomly ( note that this is very unlikely to ever be the case ) . compared to imitation ( pairwise comparison ) dynamics , our self - learning process , which is essentially an ehrenfest - like markov chain , has some different characteristics . without the introduction of mutation or random strategy exploration ,there exists a stationary distribution for the aspiration - driven dynamics . 
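written out as code , the self - learning rule and the resulting birth - death transition probabilities take the following form ; the expected payoffs pi_A ( i ) and pi_B ( i ) can be supplied , for instance , by the expected_payoffs helper sketched earlier .

```python
import numpy as np

def switch_prob(payoff, alpha, omega):
    """Fermi-like self-learning rule: probability that a player switches its
    own strategy, given its payoff, the global aspiration level alpha and the
    selection intensity omega (purely random switching at omega = 0)."""
    return 1.0 / (1.0 + np.exp(-omega * (alpha - payoff)))

def transition_probs(i, N, pi_A, pi_B, alpha, omega):
    """One-step transition probabilities of the birth-death chain on the
    number i of A players (reflecting at i = 0 and i = N)."""
    t_plus = (N - i) / N * switch_prob(pi_B, alpha, omega)   # a dissatisfied B becomes A
    t_minus = i / N * switch_prob(pi_A, alpha, omega)        # a dissatisfied A becomes B
    return t_plus, t_minus, 1.0 - t_plus - t_minus
```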
even in a homogenous population, there is a positive probability that an individual can switch to another strategy due to the dissatisfaction resulting from payoff - aspiration difference .this facilitates the escape from the states that are absorbing in the pairwise comparison process and other moran - like evolutionary dynamics .hence there exists a nontrivial stationary distribution of the markov chain satisfying detailed balance .specifically , for the case of ( neutral selection ) , the dynamics defined by eqs .( [ tc_i+1])([tc_i ] ) are characterized by linear rates , while these rates are quadratic for the neutral imitation dynamics and moran process . in the following analysis and discussion, we are interested in the limit of weak selection , , and its ability to aptly predict the success of cooperation in commonly used evolutionary games .the limit of weak selection , which has a long standing history in population genetics and molecular evolution also plays a role in social learning and cultural evolution .recent experimental results suggest that the intensity with which human subjects adjust their strategies might be low .although it has been unclear to what degree and in what way human strategy updating deviates from random , the weak selection limit is of importance to quantitatively characterize the evolutionary dynamics . in the limiting case of weak selection , we are able to analytically classify strategies with respect to the neutral benchmark , .we note that a strategy is favored by selection , if its average equilibrium frequency under weak selection is greater than one half . in order to come to such a quantitative observation, we need to calculate the stationary distribution over frequencies of strategy . the markov chain given by eqs .( [ tc_i+1])([tc_i ] ) is a one dimensional birth and death process with reflecting boundaries .it satisfies the detailed balance condition , where is the stationary distribution over frequencies of in equilibrium . considering , we find the exact solution by recursion , given by where is the probability of successive transitions from to . the analytical solution eq .( [ stationary distribution ] ) allows us to find the exact value of the average abundance of strategy , for any strength of selection .it has been shown that imitation processes are similar to each other under weak selection . thus in order to compare the essential differences between imitation processes and aspiration process , we consider such selection limit . to better understand the effects of selection intensity , aspiration level , and payoff matrix on the average abundance of strategy ,we further analyze which strategy is more abundant based on eq .( [ stationary distribution ] ) . for a fixed population size , under weak selection ,i.e. , the stationary distribution can be expressed approximately as {\omega=0 } , \label{stationary distribution approximation}\end{aligned}\ ] ] where the neutral stationary distribution is simply given by , and the first order term of this taylor expansion amounts to {\omega=0 } = \frac{c_{n}^{j}}{2^{n+1}}\ , \left\ { \sum\limits_{k=1}^{j}[\,\pi_{a}(k)-\pi_{b}(k-1)\ , ] -\frac{1}{2^{n}}\,\sum\limits_{k=1}^{n}\,c_{n}^{k}\sum\limits_{l=1}^{k}[\,\pi_{a}(l)-\pi_{b}(l-1)\ , ] \right\}. \label{psi'_j(0)}\end{aligned}\ ] ] interestingly , in the limiting case of weak selection , the first order approximation of the stationary distribution of does not depend on the aspiration level . 
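the detailed - balance recursion and the resulting average abundance of strategy a can be evaluated exactly for any selection intensity ; a short sketch , reusing expected_payoffs and transition_probs from the sketches above :

```python
import numpy as np

def stationary_distribution(N, d, a, b, alpha, omega):
    """Exact stationary distribution over i = 0..N obtained from the
    detailed-balance recursion psi_j proportional to
    prod_{i=0}^{j-1} T_i^+ / T_{i+1}^-."""
    log_psi = np.zeros(N + 1)
    for j in range(1, N + 1):
        pA, pB = expected_payoffs(j - 1, N, d, a, b)
        t_plus_prev, _, _ = transition_probs(j - 1, N, pA, pB, alpha, omega)
        pA, pB = expected_payoffs(j, N, d, a, b)
        _, t_minus_j, _ = transition_probs(j, N, pA, pB, alpha, omega)
        log_psi[j] = log_psi[j - 1] + np.log(t_plus_prev) - np.log(t_minus_j)
    psi = np.exp(log_psi - log_psi.max())       # work on a log scale for stability
    return psi / psi.sum()

def average_abundance_A(N, d, a, b, alpha, omega):
    psi = stationary_distribution(N, d, a, b, alpha, omega)
    return float((np.arange(N + 1) / N) @ psi)
```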
for higher order terms of selection intensity , however, does depend on the aspiration level . in the following we discuss the condition under which a strategy is favored andcompare the predictions for stationary strategy abundance under self - learning and under imitation dynamics .thereafter we consider three prominent examples of games with multiple players through analytical , numerical and simulation methods , the results of which are detailed in figs .[ fig : pgg][fig : sdg ] and appendix [ appendix ii ] .all three examples are social dilemmas in the sense that the nash equilibrium of the one - shot game is not the social optimum .first , the widely studied public goods game represents the class of games where there is only one pure nash equilibrium .next , the public goods game with a threshold , a simplified version of the collective risk dilemma , represents the class of coordination games with multiple pure nash equilibria , depending on the threshold .last , we consider the -player volunteer s dilemma , or snowdrift game , which has a mixed nash equilibrium .based on the approximation ( [ stationary distribution approximation ] ) , for any symmetric multi - player game with two strategies of normal form ( [ payoff_matrix ] ) , we can now calculate a weak selection condition such that in equilibrium is more abundant than .since for neutrality , holds and thus , it is sufficient to consider positivity of the sum of {\omega=0}/n ] with , here is strictly increasing with increasing . denoting , we have .then , for , , and eq .( [ psi_j(0 ) ] ) can be rewritten in a more general form {\omega=0 } = \frac{g'(0)}{g(0)}\,\frac{c_{n}^{j}}{2^{2n}}\ , \left\ { 2^{n}\,\sum_{k=1}^{j}[\,\pi_{a}(k)-\pi_{b}(k-1)\ , ] -\sum_{k=1}^{n}\,c_{n}^{k}\sum_{i=1}^{k}[\,\pi_{a}(i)-\pi_{b}(i-1)\ , ] \right\}. \label{general_psi'_j(0)}\end{aligned}\ ] ] since is a positive constant , eq .( [ criterion condition ] ) is still valid for any such probability function , see appendix [ appendix ] .public goods games emerge when groups of players engage in the sustenance of common goods .cooperators pay an individual cost in form of a contribution that is pooled into the common pot .defectors do not contribute .the pot is then multiplied by a characteristic multiplication factor and shared equally among all individuals in the group , irrespective of contribution .if the multiplication factor is smaller than the size of the group , each cooperator recovers only a fraction of the initial investment .switching to defection would always be beneficial in a pairwise comparison of the two strategies .the payoff matrix thus reads where is typically assumed .since is a negative constant for any number of cooperators in the group , we find that is always negative .cooperation can not be the more abundant strategy in the well - mixed population ( see fig .[ fig : pgg ] ) .however , if the self - learning dynamics are driven by a sufficiently high aspiration level , individuals are constantly dissatisfied and switch strategy frequently , even as defectors , such that cooperation can break even if selection is strong enough , namely for all values . on the other hand , if the aspiration level is low , cooperators switch more often than defectors such that the average abundance of assumes a value closer to the evolutionary stable state of full defection , which depends on . in the extreme case of very low and strong selection ,defectors fully dominate , thus the stationary measure retracts to the all defection state . 
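the weak - selection expansion can be checked numerically . the sketch below evaluates the first - order term quoted above for a linear public goods game and compares psi_j ( 0 ) + omega * psi_j ' ( 0 ) with the exact stationary distribution ; the payoff convention ( a cooperator with k cooperating co - players receives r c ( k + 1 ) / d - c , a defector receives r c k / d ) is an assumption consistent with the verbal description , and the helpers from the previous sketches are reused .

```python
import numpy as np
from math import comb

def linear_pgg_payoffs(d, r, c=1.0):
    # Assumed linear public goods payoffs; cooperators are strategy A.
    a = [r * c * (k + 1) / d - c for k in range(d)]
    b = [r * c * k / d for k in range(d)]
    return a, b

def first_order_term(j, N, d, a, b):
    """Closed-form psi'_j(0) quoted in the text."""
    def diff_sum(m):   # sum_{l=1}^{m} [ pi_A(l) - pi_B(l-1) ]
        return sum(expected_payoffs(l, N, d, a, b)[0]
                   - expected_payoffs(l - 1, N, d, a, b)[1]
                   for l in range(1, m + 1))
    correction = sum(comb(N, k) * diff_sum(k) for k in range(1, N + 1)) / 2 ** N
    return comb(N, j) / 2 ** (N + 1) * (diff_sum(j) - correction)

N, d, r, alpha, omega = 12, 4, 2.5, 0.5, 0.02
a, b = linear_pgg_payoffs(d, r)
psi_exact = stationary_distribution(N, d, a, b, alpha, omega)
psi_weak = np.array([comb(N, j) / 2 ** N + omega * first_order_term(j, N, d, a, b)
                     for j in range(N + 1)])
print(np.max(np.abs(psi_exact - psi_weak)))   # of order omega^2, hence small here
```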
, population size , and cost of cooperation .in both panels , the group sizes are ( dark shaded ) , and ( light shaded ) .panel * a * shows the mean fraction of cooperators as a function of selection intensity for , the inset shows a detail for lower selection intensities .panel * b * shows the mean fraction of cooperators as a function of selection intensity for .the inset shows the stationary distribution for , and ., scaledwidth=95.0% ] here we consider the following public goods game with a threshold in the sense that the good becomes strictly unavailable when the number of cooperators in a group is below a critical threshold , . this threshold becomes a new strategic variable .here , is an initial endowment given to each player , which is invested in full by cooperators .whatever the cooperators manage to invest is multiplied by and redistributed among all players in the group irrespective of strategy , if the threshold investment is met .defectors do not make any investment , and thus have an additional payoff of , as long as the threshold is met .once the number of cooperators is below , all payoffs are zero , which compares to the highest risk possible ( loss of endowment and investment with certainty ) in what is called the collective - risk dilemma .the payoff matrix for the two strategies , cooperation , and defection , reads we can examine when the self - learning process favors cooperation .we can also seek to make a statement about whether under self - learning dynamics , cooperation performs better than under pairwise comparison process .for self - learning dynamics , we find while the equivalent statement for pairwise comparison processes based on the same payoff matrix would be .thus , the criterion of self - learning dynamics can be written as whereas positivity of the imitation processes condition , , simply leads to . comparing the two conditions, we find since the first factor on the right hand side of eq .( [ tpgg_03 ] ) is always positive , the factor determines the relationship between self - learning dynamics and pairwise comparison processes : for sufficiently large threshold , expression ( [ tpgg_04 ] ) is positive . in conclusion , the aspiration - level - driven self - learning dynamics can afford to be less strict than the pairwise comparison process .namely , it requires less reward for cooperators contribution to the common pool ( lower levels of ) in order to promote the cooperative strategy .the amount of cooperative strategy depends on the threshold : higher thresholds support cooperation , even for lower multiplication factors ( see fig . [fig : tpgg ] ) . for fixed ,our self - learning dynamics are more likely to promote cooperation in a threshold public goods game , if the threshold for the number of cooperators needed to support the public goods is large enough , i.e. , not too different from the total size of the group . for small thresholds , andthus higher temptation to defect in groups with less cooperators , we approach the regular public goods games , and the conclusion may be reversed . under such small cases , imitation - driven ( pairwise comparison ) dynamicsare more likely to lead to cooperation than aspiration dynamics . 
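the threshold game can be explored with the same machinery . the payoff convention below ( all payoffs are zero when fewer than T group members cooperate ; otherwise every member receives r e ( number of cooperators ) / d and defectors keep their endowment e on top ) is one consistent reading of the description above , and the parameter values are hypothetical examples .

```python
def threshold_pgg_payoffs(d, r, T, e=1.0):
    # Assumed threshold public goods payoffs; cooperators are strategy A.
    a = [r * e * (k + 1) / d if k + 1 >= T else 0.0 for k in range(d)]
    b = [r * e * k / d + e if k >= T else 0.0 for k in range(d)]
    return a, b

N, d, r, alpha, omega = 20, 5, 3.0, 0.5, 0.5
for T in (2, 5):
    a, b = threshold_pgg_payoffs(d, r, T)
    print(T, average_abundance_A(N, d, a, b, alpha, omega))
# Compare how the stationary abundance of cooperators responds to the
# threshold T, as in the discussion of fig. [fig:tpgg].
```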
, population size , group size , and cost of cooperation .in both panels , the threshold sizes are ( dark shaded ) , and ( light shaded ) .panel * a * shows the mean fraction of cooperators as a function of selection intensity for .the inset shows the critical multiplication factor above which cooperation is more abundant as a function of the threshold .high thresholds lower the critical multiplication factor of the public good such that cooperation can become more abundant than defection .panel * b * shows the mean fraction of cooperators as a function of selection intensity for .the inset shows the stationary distribution for , , , and ., scaledwidth=95.0% ] evolutionary games between two strategies can have mixed evolutionary stable states .strategy can invade and can invade ; a stable coexistence of the two strategies typically evolves . in the replicator dynamics of the snowdrift game , cooperators can be invaded by defectors as the temptation to defect is still larger than the reward of mutual cooperation .in contrast to the public goods game , cooperation with a group of defectors now yields a payoff greater than exclusive defection .the act of cooperation provides a benefit to all members of the group , and the cost of cooperation is equally shared among the number of cooperators .hence , the payoff matrix reads cooperation can maintain a minimal positive payoff from the cooperative act , then cooperation and defection can coexist .the snowdrift game is a social dilemma , as selection does not favor the social optimum of exclusive cooperation .the level of coexistence depends on the amount of cost that a particular cooperator has to contribute in a certain group . evaluating the weak selection condition , ( [ criterion condition ] ) in case of the -player snowdrift game leads to the condition in order to observe in aspiration dynamics under weak selection . for imitation processes , on the other hand , we find .note that , except for , holds for any other .because of this , the different nature of these two conditions , given by the positive coefficients for any , reveals that self - learning dynamics narrow down the parameter range for which cooperation can be favored by selection . in the snowdrift game ,self - learning dynamics are less likely to favor cooperation than pairwise comparison processes .larger group size hinders cooperation : the larger the group , the higher the benefit of cooperation , , has to be in order to support cooperation ( see fig . [fig : sdg ] ) . , population size , and cost of cooperation . in both panels , the group sizes are ( dark shaded ) , and ( light shaded ) .panel * a * shows the mean fraction of cooperators as a function of selection intensity for . the inset shows the cooperation condition as a function of group size for benefits . only for high benefit and low group size , cooperation can be more abundant than defection .panel * b * shows the mean fraction of cooperators as a function of selection intensity for .the inset shows the stationary distribution for , , and ., scaledwidth=95.0% ]previous studies on self - learning mechanism have typically been investigated on graphs via simulations , which often employ stochastic aspiration - driven update rules .although results based on the mean field approximations are insightful , further analytical insights have been lacking so far .thus it is constructive to introduce and discuss a reference case of stochastic aspiration - driven dynamics of self - learning in well - mixed populations . 
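before moving on , the d - player snowdrift game described above can be treated in the same way . the payoffs below ( every group member receives the benefit as soon as at least one player cooperates , and the cost is split equally among the cooperators ) are the standard convention matching the verbal description ; average_abundance_A is reused from the earlier sketch and the parameter values are illustrative .

```python
def snowdrift_payoffs(d, benefit, cost):
    # Assumed d-player snowdrift payoffs; cooperators are strategy A.
    a = [benefit - cost / (k + 1) for k in range(d)]        # cooperator
    b = [benefit if k >= 1 else 0.0 for k in range(d)]      # defector
    return a, b

N, alpha, omega, benefit, cost = 20, 0.5, 0.5, 1.0, 0.8
for d in (3, 9):
    a, b = snowdrift_payoffs(d, benefit, cost)
    print(d, average_abundance_A(N, d, a, b, alpha, omega))
# Illustrates the group-size effect discussed above for fig. [fig:sdg].
```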
to this end , here we introduce and discuss such an evolutionary process .our weak selection analysis is based on a simplified scenario that implements a non - adaptive self - learning process with global aspiration level .probabilistic evolutionary game dynamics driven by aspiration are inherently innovative and do not have absorbing boundaries even in the absence of mutation or random strategy exploration .we study the equilibrium strategy distribution in a finite population and make a weak selection approximation for the average strategy abundance for any multi - player game with two strategies , which turns out to be independent of the level of aspiration .this is different from the aspiration dynamics in infinitely large populations , where the evolutionary outcome crucially depends on the aspiration level .thus it highlights the intrinsic differences arising from finite stochastic dynamics of multi - player games between two strategies .based on this we derive a condition for one strategy to be favored over the other .this condition then allows a comparison of a strategy s performance to other prominent game dynamics based on pairwise comparison between two strategies .most of the complex strategic interactions in natural populations , ranging from competition and cooperation in microbial communities to social dilemmas in humans , take place in groups rather than pairs .thus multi - player games have attracted increasing interest in different areas .the most straightforward form of multi - player games makes use of the generalization of the payoff matrix concept .such multi - player games are more complex and show intrinsic difference from games .hence , as examples here we have studied the dynamics of one of the most widely studied multi - player games the linear public goods game , a simplified version of a threshold public goods game that requires a group of players to coordinate contributions to a public good , as well as a multi - player version of the snowdrift game where coexistence is possible .our analytical finding allows a characterization of the evolutionary success under the stochastic aspiration - driven update rules introduced here , as well as a comparison to the well known results of pairwise comparison processes . while in coordination games , such as the threshold public goods game , the self - learning dynamics support cooperation on a larger set in parameter space ; the opposite is true for coexistence games , where the condition for cooperation to be more abundant becomes more strict. it will be interesting to derive analytical results that either hold for any intensity of selection , or at least for the limiting case of strong selection in finite populations .on the other hand , the update rule presented here does not seem to allow a proper continuous limit in the transition to infinitely large populations , which might give rise to interesting rescaling requirements of the demographic noise in the continuous approximation in self - learning dynamics .our simple model illustrates that aspiration - driven self - learning dynamics in well - mixed populations alone may be sufficient to alter the expected strategy abundance . on previous studies of such processes in structured populations , this effect might have been overshadowed by the properties of the network dynamics studied _ in silico_. 
our analytical results hold for weak selection , which might be a useful framework in the study of human interactions , where it is still unclear to what role model individuals compare their payoffs and with what strength players update their strategies .although weak selection approximations are widely applied in the study of frequency dependent selection , it is not clear whether the successful spread of behavioral traits operates in this parameter regime .thus , by numerical evaluation and simulations we show that our weak selection predictions also hold for strong selection .models such as the one presented here may be used in attempts to predict human strategic dynamics .such predictions , likely to be falsified in their simplicity , are essential to our fundamental understanding of complex economic and social behavior and may guide statistical insights to the effective functioning of the human mind .this work is supported by the national natural science foundation of china ( nsfc ) under grants no .61020106005 and no .61375120 . b.w . gratefully acknowledges generous sponsorship from the max planck society .greatefully acknowledges support from the deutsche akademie der naturforscher leopoldina , grant no .lpds 2012 - 12 .in this appendix , we detail the deducing process of the criterion of for general -player game . we consider the first order approximation of stationary distribution , , and get the criterion condition ( shown in sec .[ sec : res ] ) , as follows : {\omega=0}\,\omega>0 .\label{a1}\end{aligned}\ ] ] inserting eq .( [ stationary distribution ] ) , we have denoting , the above equation can be simplified as where \ , ( \prod_{i=1}^{j}t_{i}^{- } ) - ( \prod_{i=0}^{j-1}t_{i}^{+})\ , [ \sum_{i=1}^{j}(t_{i}^{-})'\,(\prod_{k=1,k\neq i}^{j } t_{k}^{- } ) ] } { ( \prod_{i=1}^{j}t_{i}^{-})^{2 } } , \label{a4 } \\ \psi_{d } ' & = ( 1+\sum_{k=0}^{n-1}\frac{\prod_{i=0}^{k}t_{i}^{+}}{\prod_{i=1}^{k+1}t_{i}^{- } } ) ' = \sum_{k=0}^{n-1 } \frac { ( \prod_{i=0}^{k}t_{i}^{+})'\ , ( \prod_{i=1}^{k+1}t_{i}^{- } ) -(\prod_{i=0}^{k}t_{i}^{+})\ , ( \prod_{i=1}^{k+1}t_{i}^{- } ) ' } { ( \prod_{i=1}^{k+1}t_{i}^{-})^{2 } } \nonumber \\ & = \sum_{k=0}^{n-1 } \frac { [ \sum_{i=0}^{k}(t_{i}^{+})'\ , ( \prod_{s=0,s\neq i}^{k}t_{s}^{+})]\ , ( \prod_{i=1}^{k+1}t_{i}^{- } ) - ( \prod_{i=0}^{k}t_{i}^{+})\ , [ \sum_{i=1}^{k+1}(t_{i}^{-})'\ , ( \prod_{s=1,s\neq i}^{k+1}t_{s}^{- } ) ] } { ( \prod_{i=1}^{k+1}t_{i}^{-})^{2}}. \label{a5}\end{aligned}\ ] ] we have }}\right\ } ' = \frac{n - i}{n}\ , \frac { \left\{e^{-\omega\,[\alpha-\pi_{b}(i)]}\right\}\ , [ \alpha-\pi_{b}(i ) ] } { \{1+e^{-\omega\,[\alpha-\pi_{b}(i)]}\}^{2 } } , \label{a6 } \\ ( t_{i}^{- } ) ' & = \frac{i}{n}\ , \left\{\frac{1}{1+e^{-\omega\,[\alpha-\pi_{a}(i)]}}\right\ } ' = \frac{i}{n}\ , \frac { \left\{e^{-\omega\,[\alpha-\pi_{a}(i)]}\right\}\ , [ \alpha-\pi_{a}(i ) ] } { \left\{1+e^{-\omega\,[\alpha-\pi_{a}(i)]}\right\}^{2}}. 
\label{a7}\end{aligned}\ ] ] since , , \label{a8 } \\ & \left.(t_{i}^{-})'\right|_{\omega=0}=\frac{i}{4n}\,[\alpha-\pi_{a}(i ) ] , \label{a9 } \\ & \left.(\prod_{i=0}^{j-1}t_{i}^{+})\right|_{\omega=0 } = \prod_{i=0}^{j-1}\frac{n - i}{2n } = \frac{n!}{(n - j)!\,(2n)^{j } } , \label{a10 } \\ & \left.(\prod_{i=1}^{j}t_{i}^{-})\right|_{\omega=0 } = \prod_{i=1}^{j}\frac{i}{2n } = \frac{j!}{(2n)^{j } } , \label{a11}\end{aligned}\ ] ] \right|_{\omega=0 } = \frac{n!\,\sum_{i=0}^{j-1}[\alpha-\pi_{b}(i)]}{2\,(n - j)!\,(2n)^{j } } , \label{a12 } \\ & \left.[\sum_{i=1}^{j}(t_{i}^{-})'\,(\prod_{k=1,k\neq i}^{j } t_{k}^{-})]\right|_{\omega=0 } = \frac{j!\,\sum_{i=1}^{j}[\alpha-\pi_{a}(i)]}{2\,(2n)^{j}}. \label{a13}\end{aligned}\ ] ] then , inserting eqs .( [ a10])([a13 ] ) into eq .( [ a4 ] ) , \right\}\ , \frac{j!}{(2n)^{j } } - \frac{n!}{(n - j)!\,(2n)^{j}}\ , \frac{j!}{2\,(2n)^{j}}\ , \sum_{i=1}^{j}[\alpha-\pi_{a}(i ) ] } { [ \frac{j!}{(2n)^{j}}]^{2 } } \nonumber \\ & = \frac{n!}{2\,j!\,(n - j)!}\ , [ \,-\sum_{i=0}^{j-1}\pi_{b}(i)+\sum_{i=1}^{j}\pi_{a}(i)\ , ] \nonumber\\ & = \frac{c_{n}^{j}}{2}\ , \sum_{i=1}^{j}\left[\,\pi_{a}(i)-\pi_{b}(i-1)\,\right ] .\label{a14}\end{aligned}\ ] ] similarly , we can get \ , \frac{(k+1)!}{(2n)^{k+1 } } } { [ \frac{(k+1)!}{(2n)^{k+1}}]^{2 } } \nonumber \\ & - \frac { \frac{n!}{(n - k-1)!\,(2n)^{k+1}}\ , \frac{(k+1)!}{2\,(2n)^{k+1}}\ , \sum_{i=1}^{k+1}[\alpha-\pi_{a}(i ) ] } { [ \frac{(k+1)!}{(2n)^{k+1}}]^{2 } } \,\ }\nonumber\\ & = \sum_{k=1}^{n}\frac{c_{n}^{k}}{2}\ , \sum_{i=1}^{k}\left[\,\pi_{a}(i)-\pi_{b}(i-1)\,\right ] .\label{a15}\end{aligned}\ ] ] and therefore , inserting eqs .( [ a14])([a17 ] ) into eq .( [ a3 ] ) , {\omega=0 } & = \frac { c_{n}^{j}\ , \{\sum_{i=1}^{j}[\pi_{a}(i)-\pi_{b}(i-1)]\}\,2^{n } } { 2\,(2^{n})^{2 } } \nonumber \\ & -\frac { c_{n}^{j}\ , \sum_{k=0}^{n-1 } \ { c_{n}^{k+1}\,\sum_{i=1}^{k+1}[\pi_{a}(i)-\pi_{b}(i-1)]\ } } { 2\,(2^{n})^{2}}. \label{a18}\end{aligned}\ ] ] combined with eq .( [ a1 ] ) , the criterion is rewritten as \}\ , 2^{n } } { 2\,(2^{n})^{2 } } \nonumber \\ & -\frac { c_{n}^{j}\ , \sum_{k=0}^{n-1 } \ { c_{n}^{k+1}\ , \sum_{i=1}^{k+1}[\pi_{a}(i)-\pi_{b}(i-1 ) ] \ } } { 2\,(2^{n})^{2 } } \,)>0 , \label{a19}\end{aligned}\ ] ] where and refer to eqs .( [ payoff_c ] ) , and ( [ payoff_d ] ) .hence , therefore the criterion equals to >0 \nonumber \\ \longleftrightarrow & \frac{\omega}{2\,n\,(2^{n})^{2}}\ , [ \,\sum_{j=1}^{n}j\,c_{n}^{j}\,\sum_{i=1}^{j } \sum_{k=0}^{d-1}\frac{c_{i-1}^{k}\,c_{n - i}^{d-1-k}}{c_{n-1}^{d-1}}\,(a_{k}-b_{k})\,2^{n } \nonumber \\ -&\sum_{j=1}^{n}j\,c_{n}^{j}\,\sum_{m=1}^{n}c_{n}^{m}\,\sum_{i=1}^{m}\sum_{k=0}^{d-1}\frac{c_{i-1}^{k}\,c_{n - i}^{d-1-k}}{c_{n-1}^{d-1}}\,(a_{k}-b_{k})\,]>0 \nonumber \\ \longleftrightarrow & \frac{\omega}{4\,n\,(2^{n})}\ , [ \,\sum_{j=1}^{n}2\,j\,c_{n}^{j}\,\sum_{i=1}^{j } \sum_{k=0}^{d-1}\frac{c_{i-1}^{k}\,c_{n - i}^{d-1-k}}{c_{n-1}^{d-1}}\,(a_{k}-b_{k})\ , ] \nonumber \\ -&\frac{\omega\,n\,2^{n-1}}{2\,n\,(2^{n})^{2}}\ , [ \,\sum_{m=1}^{n}c_{n}^{m}\,\sum_{i=1}^{m}\sum_{k=0}^{d-1}\frac{c_{i-1}^{k}\,c_{n - i}^{d-1-k}}{c_{n-1}^{d-1}}\,(a_{k}-b_{k})\,]>0 \nonumber \\ \longleftrightarrow & \frac{\omega}{4\,n\,(2^{n})}\ , [ \,\sum_{j=1}^{n}(2j - n)\,c_{n}^{j}\ , \sum_{i=1}^{j } \sum_{k=0}^{d-1}\frac{c_{i-1}^{k}\,c_{n - i}^{d-1-k}}{c_{n-1}^{d-1}}\,(a_{k}-b_{k})\,]>0 . \label{a21}\end{aligned}\ ] ] we can prove that the above inequality leads to a general criterion as follows >0 . \label{a22}\end{aligned}\ ] ] this is the result we want to show . 
for this, we only need to demonstrate = \frac{\omega}{4\,(2^{d})}\ , \left[\,\sum_{k=0}^{d-1 } c_{d-1}^{k}\,(a_{k}-b_{k})\,\right ] .\label{a23}\end{aligned}\ ] ] this equals to since such equation should hold for any choice of ( )s , thus using the identity , we can simplify the equivalent condition as this can be easily proved through mathematical induction .it is found that for the examples we discussed , namely the linear public goods game , the threshold collective risk dilemma and a multi - player snowdrift game , our result under weak selection can be generalized for a wide range of parameters ( higher values of , small and large populations ) .
studying strategy update rules in the framework of evolutionary game theory , one can differentiate between imitation processes and aspiration - driven dynamics . in the former case , individuals imitate the strategy of a more successful peer . in the latter case , individuals adjust their strategies based on a comparison of their payoffs from the evolutionary game to a value to which they aspire , called the level of aspiration . unlike imitation processes of pairwise comparison , aspiration - driven updates do not require additional information about the strategic environment and can thus be interpreted as being more spontaneous . recent work has mainly focused on understanding how aspiration dynamics alter the evolutionary outcome in structured populations . however , the baseline case for understanding strategy selection is the well - mixed population case , which is still insufficiently understood . we explore how aspiration - driven strategy - update dynamics under imperfect rationality influence the average abundance of a strategy in multi - player evolutionary games with two strategies . we analytically derive a condition under which one strategy is more abundant than the other in the limit of weak selection . this approach has a long standing history in evolutionary game theory and is mostly applied for its mathematical tractability . hence , we also explore strong selection numerically , which shows that our weak selection condition is a robust predictor of the average abundance of a strategy . the condition turns out to differ from that of a wide class of imitation dynamics , as long as the game is not dyadic . therefore a strategy favored under imitation dynamics can be disfavored under aspiration dynamics . this does not require any population structure and thus highlights the intrinsic difference between imitation and aspiration dynamics .
let be a fixed probability distribution on the real line such that and for any let denote the associated hermitian wigner matrix of size .this means that where is a collection of real random variables with distribution .the second - order correlation function of the characteristic polynomial of the random matrix is defined by where .we are interested in the asymptotic behaviour of as , for certain sequences , which will be specified below .furthermore , the correlation coefficient of the characteristic polynomial of the random matrix is defined by in the special case where is the gaussian distribution with mean and variance , the distribution of the random matrix is the so - called gaussian unitary ensemble ( gue ) ( see forrester or mehta but note that we work with a different variance ) . in this case, it is well - known that where and the are the monic orthogonal polynomials with respect to the weight function ( see chapter 4.1 in forrester ) .thus , up to scaling , the coincide with the hermite polynomials ( as defined in szeg ) , and it is possible to derive the asymptotics of the second - order correlation function from the well - known asymptotics of the hermite polynomials ( see theorem 8.22.9 in szeg ) .more precisely , one obtains the following ( well - known ) results ( see also chapter 4.2 in forrester ) : for and any , where , , and for and any , where , and denotes the airy function ( see abramowitz and stegun ) . by symmetry ,a similar result holds for . the functions in ( [ sine - kernel ] ) and ( [ airy - kernel ] ) are also called the sine kernel and the airy kernel , respectively .furthermore , it is well - known that the eigenvalues of a random matrix from the gue are distributed roughly over the interval ] . then, for , and $ ] , we have and therefore thus , since , the integral under consideration is bounded by obviously , this upper bound can be made arbitrarily small by picking and large enough .this proves claim 2 .we start from the following well - known integral representation of the airy function : a standard application of cauchy s theorem shows that the contour can be deformed into the contour , .thus , we obtain observe that the resulting integral exists in the lebesgue sense , since we have for any .it follows from ( [ ai - int ] ) that substituting and doing a small calculation , we find that using the well - known relation ( where ) , it follows that or by another application of cauchy s theorem , the contour , , may be deformed back into the contour . replacing the contour in lemma [ airyproduct ] by the contour and substituting , we have by means of abbreviation , write for the numerator inside the integral . then , for any , we have and therefore the assertion of proposition [ theproposition3 ] now follows by induction .fix , and put , . 
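for small matrix sizes , the second - order correlation function can also be estimated by direct monte carlo simulation . the sketch below assumes gue - type gaussian entries and the normalisation W = X / sqrt ( N ) , so that the spectrum fills roughly the interval [ -2 , 2 ] ; the paper s exact variance convention may differ , so this is only an illustrative ( and rather noisy ) check .

```python
import numpy as np

rng = np.random.default_rng(0)

def hermitian_wigner(N, rng):
    """One draw of a Hermitian Wigner matrix with GUE-type Gaussian entries,
    normalised as W = X / sqrt(N) (an assumed convention)."""
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    X = (A + A.conj().T) / 2
    return X / np.sqrt(N)

def correlation_fn(N, x1, x2, reps=2000):
    """Monte Carlo estimate of E[ det(x1 - W) det(x2 - W) ]."""
    vals = []
    for _ in range(reps):
        W = hermitian_wigner(N, rng)
        vals.append((np.linalg.det(x1 * np.eye(N) - W)
                     * np.linalg.det(x2 * np.eye(N) - W)).real)
    return float(np.mean(vals))

print(correlation_fn(10, 0.1, -0.1))   # two points in the bulk
print(correlation_fn(10, 2.0, 2.0))    # near the spectral edge
```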
using well - known results about the asymptotic properties of the hermite polynomials ( see theorem 8.22.9(c ) in szeg ) , we find that the function given by ( ) satisfies for any .setting , we therefore obtain where we have used proposition [ theproposition2 ] as well as the assumptions , .this completes the proof of proposition [ theproposition5 ] .first of all , note that the definition ( [ idefinition ] ) may be extended to the case and that for any by lemma [ airyproduct ] .thus , for any , with strict inequality for ( since it is well - known that the airy function does not have any zeroes on the positive half - axis ) .moreover , note that for any , for any .since , this implies that for any , for any .proposition [ theproposition6 ] now follows by a straightforward induction on .
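the contour representation can be checked numerically . the sketch below uses the absolutely convergent form Ai ( x ) = ( 1 / 2 pi ) int exp ( i ( z^3 / 3 + x z ) ) dz over a horizontal line Im z = c > 0 ( a standard deformation of the real - axis integral , chosen here for numerical convenience rather than the exact contour of the proof ) and compares it with scipy s airy function .

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import airy

def airy_via_contour(x, c=1.0):
    """Ai(x) from the absolutely convergent contour integral over Im z = c."""
    def integrand(t):
        z = t + 1j * c
        # The imaginary part integrates to zero by symmetry, so the real
        # part alone recovers the (real) value of the integral.
        return np.real(np.exp(1j * (z ** 3 / 3 + x * z))) / (2 * np.pi)
    val, _ = quad(integrand, -np.inf, np.inf, limit=200)
    return val

for x in (-2.0, 0.0, 1.5):
    print(x, airy_via_contour(x), airy(x)[0])   # the two columns agree
```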
we investigate the asymptotic behaviour of the second - order correlation function of the characteristic polynomial of a hermitian wigner matrix at the edge of the spectrum . we show that the suitably rescaled second - order correlation function is asymptotically given by the airy kernel , thereby generalizing the well - known result for the gaussian unitary ensemble ( gue ) . moreover , we obtain similar results for real - symmetric wigner matrices .
the likelihood function for a complex multivariate model may not be available or very difficult to evaluate , and a composite likelihood function constructed from low - dimensional marginal or conditional distributions has become a popular alternative ( varin , 2008 ; varin , reid & firth , 2011 ) .suppose is a -dimensional random vector with probability density function , with a -dimensional parameter vector .given a set of likelihood functions , , defined by the joint or conditional densities of some sub - vectors of , the composite likelihood function ( lindsay , 1988 ) is defined as where the s are nonnegative weights and the component likelihood might depend only on a sub - vector of .the choice of and the weights is critical for improving the efficiency of the resulting statistical inference ( lindsay , 1988 ; joe & lee , 2009 ; lindsay , yi & sun , 2011 ) . in this paperwe focus on the two most commonly used composite likelihood functions in literature , independence likelihood and pairwise likelihood , which are defined as and , respectively . given a random sample , where each is a -dimensional vector , the composite log - likelihood function is and the maximum composite likelihood estimator ( mcle ) is .in addition to the computational simplicity , the composite likelihood function has many appealing theoretical properties . in particular , under some regularity conditions , is consistent and asymptotically normally distributed with variance equal to the inverse of the godambe information matrix : ( lindsay , 1988 ; varin , 2008 ; xu & reid , 2011 ) . here is the sensitivity matrix , and is the variability matrix , with the composite score function . throughout this paperwe use to denote the fisher information matrix of the full likelihood function . given two composite likelihood functions and , said to be _ more efficient _ than if has a greater godambe information matrix than in the sense of matrix inequality .it is well known that the full likelihood function is more efficient than any other composite likelihood function under regularity conditions ( godambe , 1960 ; lindsay , 1988 ) , i.e. is non - negative definite .in general , the second bartlett identity does not hold for composite likelihood functions , i.e. . after lindsay ( 1982 ) , we call a composite likelihood _ information - unbiased _ if , and _ information - biased _ , otherwise. composite likelihood - based inferential tools have been developed for hypothesis testing ( chandler & bate , 2007 ; pace , salvan , & sartori , 2011 ) and model selection ( varin & vidoni , 2005 ; gao & song , 2010 ) .information bias of a composite likelihood can make the resulting inference more difficult .for example , if the composite likelihood is information - unbiased , the likelihood ratio statistic has the same asymptotic chi - square distribution as its full likelihood counterpart . on the other hand, if it is information - biased the likelihood ratio statistic converges in distribution to a weighted sum of some independent random variables ( kent , 1982 ) .adjustments have been proposed to the information - biased composite likelihood ratio statistic such that the adjusted statistic has an asymptotic chi - square distribution ( e.g. 
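to make these definitions concrete , the sketch below writes down the pairwise log - likelihood for the zero - mean equicorrelated multivariate normal model ( which reappears in section 2 ) and estimates the sensitivity matrix , the variability matrix and the godambe sandwich G = H J^-1 H from simulated data by numerical differentiation ; the unconstrained parameterisation , the sample size and the use of finite differences are convenience choices for the sketch , not part of the paper .

```python
import numpy as np
from scipy.optimize import approx_fprime

rng = np.random.default_rng(1)

def pairwise_loglik(theta, x):
    """Pairwise log-likelihood of one observation x from the zero-mean
    equicorrelated normal model; theta = (log sigma2, arctanh rho)."""
    sigma2, rho = np.exp(theta[0]), np.tanh(theta[1])
    p, ll = len(x), 0.0
    for r in range(p):
        for s in range(r + 1, p):
            q = x[r] ** 2 - 2 * rho * x[r] * x[s] + x[s] ** 2
            ll += (-np.log(2 * np.pi) - np.log(sigma2)
                   - 0.5 * np.log(1 - rho ** 2) - q / (2 * sigma2 * (1 - rho ** 2)))
    return ll

def godambe_info(theta, sample, eps=1e-5):
    """Monte Carlo sandwich: H = E[-d(score)/d(theta)], J = Var(score)."""
    scores = np.array([approx_fprime(theta, pairwise_loglik, eps, x) for x in sample])
    J = np.cov(scores.T)
    def mean_score(t):
        return np.mean([approx_fprime(t, pairwise_loglik, eps, x) for x in sample], axis=0)
    H = -np.column_stack([(mean_score(theta + e) - mean_score(theta - e)) / (2 * 1e-4)
                          for e in 1e-4 * np.eye(len(theta))])
    return H @ np.linalg.inv(J) @ H.T

p, sigma2, rho = 4, 1.0, 0.3
Sigma = sigma2 * ((1 - rho) * np.eye(p) + rho * np.ones((p, p)))
sample = rng.multivariate_normal(np.zeros(p), Sigma, size=500)
print(godambe_info(np.array([np.log(sigma2), np.arctanh(rho)]), sample))
```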
, chandler & bate , 2007 ; pace , salvan , & sartori , 2011 ) .the full likelihood function is information - unbiased , but an information - unbiased composite likelihood is not necessarily fully efficient .in fact , any component likelihood function is information - unbiased .more generally , any composite likelihood function as the product of component likelihoods with mutually uncorrelated score functions is information - unbiased . as an example , consider a -dimensional vector with density function , and defining .it is easy to show that the covariance between the score function of and the score function of is zero for any .hence any composite likelihood of the form where , is information - unbiased .conversely , an information - biased composite likelihood function can be fully efficient .the pairwise likelihood function for the equal - correlated multivariate normal model in section 2 is fully efficient when estimating the common variance and the correlation coefficient ( cox and reid , 2004 ) , but it is not information - unbiased ( pace , salvan , & sartori , 2011 ) .a sufficient and necessary condition for a composite likelihood to be fully efficient is given in the following theorem .theorem 1.suppose the full likelihood function has the score function and fisher information .then , for any composite likelihood function with the score function , sensitivity matrix , variability matrix and godambe information , if and only if with probability for a constant vector with respect to the random vector . proofit is easy to show that ( lindsay , 1988 ) .as and , the result follows as the difference is the covariance matrix of .theorem 1 with gives a sufficient condition for the maximum composite likelihood estimator to coincide with the mle ( kenne pagui , salvan & sartori , 2014 ) , which is satisfied by the pairwise likelihood for closed exponential family models ( mardia et al ., 2009 ) . in particular, the equicorrelated multivariate normal model with unknown variance and correlation coefficient belongs to the closed exponential family , and it has been shown that and or equivalently , for the pairwise likelihood function ( pace , salvan , & sartori , 2011 ) .its application in more general exponential family models has been studied in kenne pagui , salvan , & sartori ( 2014b ) . in this paperwe explore the impact of information bias on the composite likelihood inference in more detail . 
in section 2we show through the equicorrelated multivariate normal model that an information - biased composite likelihood may lead to less efficient estimates of the parameters of interest when the nuisance parameters are known .a sufficient condition is also provided for the occurrence of such a paradoxical phenomenon .we would expect that a more efficient composite likelihood can be obtained by incorporating additional independent component likelihoods or using higher dimensional component likelihoods .however such strategies do not always work for information - biased composite likelihood functions , as shown in section 3 .we conclude with a discussion in section 4 .in the presence of nuisance parameters , it is well known that the maximum likelihood estimator of the parameter of interest will have a smaller asymptotic variance when the nuisance parameters are known .it is easy to check that this also holds for information - unbiased composite likelihood functions .suppose the -dimensional parameter vector is partitioned as , where is a -dimensional parameter vector of interest and is a -dimensional nuisance parameter vector , .the godambe information matrix of a information - unbiased composite likelihood is , and where is the submatrix of pertaining to , and the sub - matrix of pertaining to .when is unknown , the asymptotic variance of the mcle of is given by ; when is known , the asymptotic variance of the mcle of can be shown to be .since is a nonnegative matrix , we have .however , the reverse relationship may be observed for an information - biased composite likelihood , which is illustrated through the equicorrelated multivariate normal model in the rest of this section . from previous section we know that the pairwise likelihood is information - biased for this model .* example 1 .* suppose are independent observations from the same -dimensional multivariate normal distribution with zero mean and covariance matrix , where is identity matrix and is a matrix with all entries equal to .the common correlation coefficient is the parameter of interest .the equicorrelated multivariate normal model has been well studied to compare the efficiency of pairwise likelihood and full likelihood in different settings ( arnold & strauss , 1991 ; cox & reid , 2004 ; mardia et al . , 2009 ) : when is unknown , the maximum pairwise likelihood estimator of , denoted as , is identical to the mle of and hence fully efficient ; when is known , the maximum pairwise likelihood estimator , denoted as , is less efficient than the maximum likelihood estimator of .here we are interested in comparing the asymptotic variances of and .the asymptotic variance of is ( cox & reid , 2004 ) where .the asymptotic variance of can be shown to be comparing the equations ( [ eq : known ] ) and ( [ eq : unknown ] ) , we find that as approaches its lower bound , decreases to zero while does not .the ratio of the asymptotic variances , avar , as a function of is plotted in figure [ fig2 ] for .we can see that when is positive , is more efficient than ; when , the opposite phenomenon is observed , and when approaches the lower bound , this ratio diverges to infinity .we performed the comparisons for different and observed the same phenomenon .= avar at . 
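the paradox in example 1 can also be seen in a small simulation . the sketch below fits rho by maximising the pairwise likelihood once with sigma^2 treated as known and once with it treated as unknown , and compares the empirical variances of the two estimators at a negative rho close to the lower bound -1/( p - 1 ) ; the sample sizes , optimiser settings and parameter values are illustrative choices only .

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

rng = np.random.default_rng(2)

def pairwise_loglik_sample(sigma2, rho, X):
    """Pairwise log-likelihood of an n x p sample X from the zero-mean
    equicorrelated normal model of example 1."""
    n, p = X.shape
    ll = 0.0
    for r in range(p):
        for s in range(r + 1, p):
            q = X[:, r] ** 2 - 2 * rho * X[:, r] * X[:, s] + X[:, s] ** 2
            ll += np.sum(-np.log(2 * np.pi) - np.log(sigma2)
                         - 0.5 * np.log(1 - rho ** 2)
                         - q / (2 * sigma2 * (1 - rho ** 2)))
    return ll

def fit_rho(X, sigma2_known=None):
    if sigma2_known is not None:       # sigma^2 known: maximise over rho only
        res = minimize_scalar(lambda r: -pairwise_loglik_sample(sigma2_known, r, X),
                              bounds=(-0.999, 0.999), method="bounded")
        return res.x
    res = minimize(lambda t: -pairwise_loglik_sample(np.exp(t[0]), np.tanh(t[1]), X),
                   x0=np.zeros(2), method="Nelder-Mead")
    return np.tanh(res.x[1])

p, sigma2, rho, n, reps = 4, 1.0, -0.3, 200, 200
Sigma = sigma2 * ((1 - rho) * np.eye(p) + rho * np.ones((p, p)))
est_known, est_unknown = [], []
for _ in range(reps):
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    est_known.append(fit_rho(X, sigma2_known=sigma2))
    est_unknown.append(fit_rho(X))
print(np.var(est_known), np.var(est_unknown))
# For rho this close to the lower bound -1/(p-1), knowing sigma^2 tends to
# make the pairwise likelihood estimator of rho noticeably less precise.
```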
the vertical and horizontal dashed line denotes and respectively.,height=283 ] to see that information - biasedness is not a sufficient condition for the paradox to occur , we consider another information - biased composite likelihood function , the full conditional likelihood for the same model : where denotes the random vector excluding .when is unknown , the maximum full conditional likelihood estimator of , is identical to and fully efficient ( mardia et al . , 2009 ); when is known , the maximum full conditional likelihood estimator , is less efficient than the maximum likelihood estimator for . using the formula in mardia , hughes , & taylor ( 2007 ) ,the ratio of the asymptotic variances , avar , as a function of is plotted in figure [ fig3.3 ] for .we can see that the ratio is less than for all ] respectively .we can compare the variances of the two maximum composite likelihood estimators directly .for example if , the variance of is which is smaller than if and only if .note that if this result is expected as determines exactly with ; but the dependence on of the range of over which degrades the inference is surprising ; as increases this range approaches .intuitively , a composite likelihood with higher dimensional component likelihoods should achieve a higher efficiency , although it usually demands more computational cost . in this subsectionwe focus on comparing the independence likelihood and the pairwise likelihood . can be written as the product of and some pairwise conditional likelihood functions . under independence , is identical to the full likelihood , and , which is also fully efficient . for a multivariate normal model with continuous responses ,zhao & joe ( 2005 ) proved that the maximum pairwise likelihood estimator of the regression coefficient has a smaller asymptotic variance than the maximum independence likelihood estimator .in fact , within the family of information - unbiased composite likelihood functions pairwise likelihood is at least as efficient as independence likelihood : each bivariate density has a larger information matrix than and the total number of the bivariate densities in is when .however , the two composite likelihoods are information - biased in general especially for complex dependent data and we may observe the reverse relationship .a bivariate binary model was used in arnold & strauss ( 1991 ) to show that the pairwise conditional likelihood could be less efficient than the independence likelihood . for a bivariate modelthe pairwise likelihood is the full likelihood and hence fully efficient .here we consider a four dimensional binary model which has a complex dependence structure but also allows us to compare the ( asymptotic ) variances of different composite likelihood estimators analytically .* example 3 .* suppose follows a multinomial , where is a positive constant and .the parameter controls both the mean and covariance structures , and we can change the value of to adjust the strength of dependence .the value of is completely determined by . given a random sample of size from this model , we estimate based on the independent triplets , . the full likelihood for the model of is solving the score equation we get the maximum likelihood estimator of , .the exact variance of is the independence likelihood function for the model of is and we can calculate its sensitivity matrix and variability matrix as the pairwise likelihood function is and we can calculate its sensitivity matrix and variability matrix as where , . 
for , the asymptotic variances of the maximum composite likelihood estimators for ( [ eq : eg2full ] ) , ( [ eq : eg2ind ] ) and ( [ eq : eg2pair ] ) multiplied by are plotted as a function of in figure [ fig3.1 ] .we can see that when , the three estimators perform almost equally well ; when , the full likelihood becomes more efficient than the independence likelihood , and the independence likelihood estimator is more efficient than the pairwise likelihood estimator .we also carried out the comparisons for different values of and found that at , both the independence likelihood and the pairwise likelihood are fully efficient , but when , the independence likelihood is more efficient than the pairwise likelihood and the ratio of asymptotic variances approaches when .when , the pairwise likelihood estimator is more efficient than the independence likelihood estimator and the ratio of asymptotic variances approaches when .this example suggests that in practical applications of composite likelihood inference , where the models will usually have more complex dependence structure and incomplete data ( e.g. , yi , zeng , & cook , 2011 ) , some care is required for the use of higher dimensional composite likelihood to obtain more efficient estimators .a hybrid composite likelihood combining lower dimensional marginal and conditional likelihoods with different weights is suggested to guarantee the improvement of efficiency ( cox & reid , 2004 ; kenne pagui , salvan , & sartori , 2014a ) .as a complement to the discussion on composite likelihood inference in reid ( 2012 ) , in this paper we explored the impact of information bias on the composite likelihood based inference in different scenarios .an information - unbiased composite likelihood behaves somewhat like the ordinary likelihood but it can be very inefficient . on the other hand an information - biased likelihood brings extra difficulty to the computation and is more likely to exhibit undesirable inferential properties , although the information loss can be minimized with a set of carefully selected weights .one way to avoid the paradoxical phenomenon in section 2 is to convert the composite score function to an unbiased estimating function by projecting ( henmi & eguchi , 2004 ; lindsay , yi , & sun , 2011 ) : where is the score function of full likelihood , ranges over all matrices , and are the sensitivity matrix and variability matrix .it is easy to check that is information - unbiased .since and are constant matrices , this projection does not change the point estimator of , and has the same godambe information as . in the equicorrelated multivariate normal model with ,the score function of the pairwise likelihood is ( pace , salvan & sartori , 2011 ) . from equation ( [ eq : project ] ) , the projected estimating funtion of is equal to the score function of full likelihood , . in complex models ,the required computation for the projected estimating function can be intractable and it may be a better idea to design a nuisance - parameter - free composite likelihood function carefully for practical use . as an example , a pairwise difference likelihood that eliminates nuisance parameters in a neyman scott problem is described in hjort & varin ( 2008 ) .
does the asymptotic variance of the maximum composite likelihood estimator of a parameter of interest always decrease when the nuisance parameters are known ? will a composite likelihood necessarily become more efficient by incorporating additional independent component likelihoods , or by using component likelihoods of higher dimension ? in this note we show through illustrative examples that the answer to both questions is no , and indeed the opposite behaviour may be observed . the role of information bias is highlighted in understanding the occurrence of these paradoxical phenomena . + * key words : * pairwise likelihood ; estimating function ; bartlett s second identity ; godambe information matrix ; nuisance parameter .
the spectral resolving power is perhaps the most important single property of a spectrograph .the wavelength increment is the minimum separation for two spectral lines to be considered as just resolved .the problem is that the definition of is arbitrary , and inconsistent between various usages .classically the rayleigh criterion was used , while in recent years by far the most common practice has been to use the full - width at half maximum , i.e. .it is clear that there can be no fundamental definition of the minimum resolvable wavelength difference , because with arbitrarily high signal / noise ratio , sufficiently fine sampling and a perfectly known instrumental response function ( here abbreviated as the line spread function , lsf ) an observed spectrum could be deconvolved to any desired spectral resolution .what spectroscopists understand by the ` resolution ' of an instrument is the smallest which does not require ( significant ) deconvolution to obtain spectral line strengths and locations ( wavelengths ) .lines of this separation can be distinguished at moderate signal / noise levels .this arbitrariness in the definition of has always been recognised , from the early use of the rayleigh criterion .there is in principle no problem with an arbitrary definition of and hence , provided it is consistent between various systems that are to be compared .thus meaningful comparisons could be made using _ provided _ that the lsf has the same form in each case .but the problem arises because this is not true : a diffraction - limited slit spectrograph gives a profile , a projected multi - mode circular fibre feed gives a boxy profile ( a half ellipse ) , a fabry - perot etalon with high finesse gives a lorentzian profile , a single - mode fibre or waveguide will give a gaussian profile , and a lsf with significant aberrations may resemble a gaussian but in general will have its own unique form .it is when comparing resolving power between instruments with different forms of lsf that inconsistency arises , and as shown below the inconsistency can exceed a factor of two in resolving power .this is a significant error in the context of scientific requirements for resolution , e.g. in stellar abundance studies .moreover , resolving power is typically one of the formal specifications of a spectrograph , yet without a description of the lsf and the way is to be measured , any requirement on is necessarily imprecise in its meaning .likewise , the concept of signal / noise per resolution element is vague because the ` resolution element ' is not well defined. inconsistencies also occur between the well - known formulas for theoretical resolving power : \a ) for a diffraction - limited slit spectrograph with uniform illumination of all grating lines ( = diffraction order , = number of illuminated lines ) assumes a lsf and the rayleigh criterion , i.e. the maximum of a spectral line of wavelength occurs at the same position on the detector as the first zero of the line at .\b ) for a fabry - perot instrument ( = order of interference , = etalon finesse ) assumes separation of the two lorentzian lsfs by their fwhm .\c ) for a slit - limited spectrograph used in littrow configuration ( = collimated beam diameter , = grating incidence angle , = telescope diameter , = slit width in angular measure on the sky ) assumes rectangular lsfs ( i.e. 
perfect images of a uniformly illuminated slit ) and two lines are regarded as just resolved when the two slit images just touch .there is thus a need to provide a more consistent definition of resolving power , so that comparisons can be made with better precision . in this paperi first illustrate the problem by comparing various lsf forms with two lines separated according to the various criteria that have been proposed .i then attempt to provide a consistent definition of resolution across different lsf forms .the influence of sampling of spectra into discrete pixels is important in practice , but will be considered separately in a later work . for the present paper , sampling issues will be avoided by using a sufficiently large number of pixels so that profiles are effectively continuous .this will keep the discussion focused on the issue of resolution itself .the discussion here will be confined to 1-dimensional spectra , e.g. after processing to integrate over the spatial direction of a raw 2-dimensional data set .figure [ fig : bigplot ] compares the different lsf profiles used in this work and the various resolution criteria .there are a number of points to note from this figure . taking the rows in order : \1 ) the top row shows a single spectral emission line of each lsf form .the , rectangular and lorentizian lsfs were introduced above .the gaussian is often used as a general form of smooth profile , perhaps caused by many small errors and aberrations smoothing the ideal profile and combining via the central limit theorem to give a gaussian distribution .the projected circle profile in column d applies to the case of a multimode fibre , which presents a uniformly illuminated circular image at the spectrograph entrance . when integrated over the spatial direction and presented as a profile along the wavelength axis , it has the form of a half - ellipse .( this is an abel transform ; see e.g. bracewell 1995 p 367 . )\2 ) this and the subsequent rows show a pair of identical lines separated according to various criteria .the three numbers towards the right hand side of each panel show the separation / fwhm , the local minimum and the value of the autocorrelation at the separation shown .panel 2a shows the classical rayleigh criterion separation of two profiles .the local minimum between the peaks is 81.1% of the peak height . to many spectroscopiststhis does indeed represent what is meant by two lines being just resolved .but the separation is 1.129 fwhm , illustrating the inconsistency of the two criteria .the rayleigh criterion , where one peak is placed over the zero of the other profile , can not be used for the gaussian or lorentzian profiles which do not have a zero . for the projected circle the boxy profile , with slope increasing as the edge of the profile is approached , produces the central spike in the sum as seen in all of panels 2d to 6d . in practice aberrations and pixelisationwill remove this to some extent , but its effects must still be considered .\3 ) the rayleigh criterion can be generalised by taking its local minimum of 81.1% as the defining criterion .this can be applied to all except the projected circle , due to its central spike .\4 ) the fwhm is the most - used criterion nowadays .but as panels 4a and 4b show , for the and gaussian profiles the resulting blended profile is not well resolved . 
for the profile ( 4a )the local minimum is 97% of the peak , which does not accord with the common understanding of resolution .a gaussian profile ( 4b ) is only a little better .the projected circle ( 4d ) has an overall flux deficit between the peaks but a central spike at the midpoint . in practicethe result will depend on the degree of smoothing and pixelisation .for the lorentzian profile ( 4e ) the relative minimum is well seen but only with good signal / noise , due to the substantial overlap of the line wings ( note the high autocorrelation of 0.498 ) .\5 ) again using the profiles separated at the rayleigh criterion as a standard , this row takes the resulting autocorrelation value of 0.151 and uses it as a criterion for two lines to be just resolved . due to the high wings of the lorentzian, it requires a separation of 2.366 fwhm to meet this criterion ( panel 5e ) .\6 ) the equivalent width ( area / height ) has been proposed to meet some of the above objections ( e.g. jones et al .1995 , 2002 ) . for the profiles ,a separation of 1.0 equivalent width is extremely close to the rayleigh criterion . for other profilesit also gives reasonable results .the conclusion from figure [ fig : bigplot ] is that none of the separation criteria shown is clearly superior for all lsf forms , and in particular the fwhm is a poor indicator of resolution for the important cases of smooth sinc or gaussian profiles . for reference ,the main properties of the lsf functional forms discussed in this paper are given in table [ tab : props ] .lccccc & formula & peak & fwhm & ew & z + + & & & & & + gaussian & & & & & + projected circle & [ & & & & - + lorentzian & & & & & + + formulas are normalised to unit area under the profile .this paper aims to present criteria by which resolving power can be more meaningfully compared across lsfs of different functional forms .two approaches have been taken . 
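The two numbers quoted above for the diffraction-limited (sinc-squared) profile, a Rayleigh separation of 1.129 FWHM and a local minimum of about 81% of the peak, are easy to reproduce numerically. A minimal check, in units where the first zeros of the profile sit at plus and minus one:

```python
import numpy as np
from scipy.optimize import brentq

# sinc^2 (diffraction-limited) profile with first zeros at x = +/-1 and unit peak
sinc2 = lambda x: np.sinc(x) ** 2

hwhm = brentq(lambda x: sinc2(x) - 0.5, 1e-6, 1.0)        # half width at half maximum
fwhm = 2 * hwhm
print(f"Rayleigh separation / FWHM = {1.0 / fwhm:.3f}")   # ~1.129

x = np.linspace(-3.0, 4.0, 70001)
blend = sinc2(x) + sinc2(x - 1.0)                  # two equal lines at the Rayleigh separation
mid = blend[np.argmin(np.abs(x - 0.5))]            # the local minimum sits at the midpoint
print(f"local minimum / peak = {mid / blend.max():.3f}")  # ~0.811
```

The equivalent-width criterion can be illustrated in the same spirit for the four analytic profile forms; the width parameters below are arbitrary unit choices, only the EW/FWHM ratios matter, and the Lorentzian area is slightly truncated by the finite grid.

```python
import numpy as np

x = np.linspace(-60.0, 60.0, 600001)
profiles = {
    "sinc^2":           np.sinc(x) ** 2,                            # first zeros at +/-1
    "gaussian":         np.exp(-0.5 * x ** 2),                      # sigma = 1
    "lorentzian":       1.0 / (1.0 + x ** 2),                       # HWHM = 1
    "projected circle": np.sqrt(np.clip(1.0 - x ** 2, 0.0, None)),  # half-width = 1
}

for name, f in profiles.items():
    ew = np.trapz(f, x) / f.max()            # equivalent width = area / peak height
    above = x[f >= 0.5 * f.max()]
    fwhm = above.max() - above.min()
    print(f"{name:16s}  EW = {ew:6.3f}   FWHM = {fwhm:6.3f}   EW/FWHM = {ew/fwhm:.3f}")
```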
in section [ sec : wavelength ]a criterion based on wavelength accuracy will be given .however the first criterion , to be discussed in this section , is developed by recognising that what an astronomer means by two close spectral lines being resolved is that the two can be seen separately and can have their strengths and positions ( wavelengths ) measured without undue influence of one on the other .there will still be an arbitrary definition of what constitutes ` undue ' influence , but the aim is to ensure that there is only _one _ arbitrary definition and that all other measures are consistent with it .the influence of one spectral line on another is measured by its effect in increasing the noise in measurement of the flux of the line .the procedure to use this method was to generate lsfs of various functional forms , with two equal strength peaks at separations varying from 0.8 to 2.0 fwhm , add noise to them and then perform least squares fits to extract the positions and strengths of the two peaks .importantly , the width of each peak was treated as known rather than as a further variable to fit .this was done for two reasons : ( 1 ) at the ultimate closest resolvable approach of two spectral lines it is recognised that the issue is to separate two unresolved lines .it is well known that if the line width is itself resolved then lines would have to be further apart to be properly resolved .this is not what ` spectral resolving power ' is taken to mean .( 2 ) once two lines begin to blend , in the presence of noise the fitting process would be likely to result in one broadened line rather than two partly blended lines .figure [ fig : example_dual_peak ] shows an example of two lines , with added noise , and the least squares fits .the simulations were performed using the same noise power within the fwhm for each lsf form .this is an unavoidably arbitrary choice of noise power normalisation , but it does not influence the results to be derived from these simulations .the different lsfs were normalised to the same total area i.e. flux ( not peak ) .this reflects the fact that total signal power in the spectral line is the quantity of importance to astronomers .fwhm , with 62.6 samples over a fwhm , and subject to independent gaussian distributed noise with standard deviation 1.0 in each sample .red curve : the least squares fit to two gaussians .this plot shows one of 4000 realisations at one of 25 peak spacings . ]figure [ fig : sig_flux_sepn ] shows the results of this process . for each of the five lsfs shown ( , gaussian , lorentzian ,projected circle and projected circle convolved with a gaussian ) , a large number of trials ( 4000 ) was done at each of 25 separations from 0.8 to 2.0 the fwhm . from each set of 4000 trials the standard deviation of the least squaresfitted flux was found .the smooth curves shown are semi - empirical model fits to the data of standard deviation versus peak separation and are used to smooth out irregularities due to random fluctuations .the functional form fitted was : where is the lsf function , being the independent variable along the dispersion axis .two free parameters , and , were adjusted to fit the simulation results for each lsf and in all cases gave a very good fit , within the residual fluctuations .the values of were obtained using equation [ eqn : sig_flux ] below . at large separationsthe standard deviations approach the value obtained for an isolated peak , i.e. 
by this criterion the lines are not influencing each other , and are fully resolved .however the lorentz profile has such broad wings that it has not yet reached a constant level at the separation of 2.0 fwhm .the lorentz profile shows the effect of one peak disturbing another ( i.e. increasing its noise ) at substantially larger separations than the other lsfs , when measured in multiples of the fwhm .the and gaussian lsfs show very similar curves in figure [ fig : sig_flux_sepn ] , consistent with the fact that both are peaked functions which drop smoothly and rapidly towards zero .the projected circle lsf has a very different curve of vs separation .there is no effect at all of one peak on the other until they begin to touch , at fwhm = 1.1547 fwhm . at smaller separationsthere is some interaction but it is very small because the profiles are convex with such steep sides .figure [ fig : sig_flux_sepn ] also includes a curve for a projected circle lsf convolved with a gaussian of width such that the final fwhm is a minimum ( see section [ sec : proj_circ ] ) . figure [ fig : sig_flux_sepn ] makes clear that different lsf functional forms do indeed have very different properties as regards the mutual effects of two lines , and to simply use the fwhm as a resolution criterion is a poor indicator of spectral resolution as it affects line finding and fitting .it is also clear that the lorentzian profile will give poor resolution at a given separation in fwhms , while the projected circle is exceptionally good .the data in figure [ fig : sig_flux_sepn ] can be used to derive scaling factors to quantitatively compare different lsfs .the method used here was to take a profile separated according to the rayleigh criterion as the standard of ` just resolved ' spectral lines .this leads to a value increased by a factor of 1.0514 compared with its limiting value at large separations ( i.e. for isolated peaks ) .other lsf forms will thus be considered to be just resolved when their values are likewise increased by over the value at large separations . defining a resolving power according to this criterion : where is the separation / fwhm required to achieve the above criterion , and is the standard deviation of a flux measurement for an isolated peak ( equation [ eqn : sig_flux ] ) , the values given in table [ tab : scaling_factors ] are obtained .although the profile was used as the standard for resolution , its value of is not unity because the rayleigh criterion corresponds to a peak separation of 1.129 fwhm .the values show how much the resolving powers determined by the present criterion of equal disturbance in peak fitting due to an adjacent line differ from those based simply on the fwhm . as expected , the lorentzian is the worst , with an only 59% of its while the projected circle is the best , with exceeding by 20% .the convolved projected circle is a more realistic case ( to be discussed in section [ sec : proj_circ ] ) and its resolving power , while less than the exact projected circle , is still substantially greater than a gaussian or .vs separation of two peaks , for five different lsf forms .from highest to lowest at peak separation = 1.0 the curves are : black - lorentzian ; green - gaussian ; blue - sinc ; magenta - projected circle convolved with a gaussian ( see section [ sec : proj_circ ] ) ; red - projected circle .the blue square on the sinc curve indicates the rayleigh criterion separation . 
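A reduced version of the simulation described above, restricted to a Gaussian LSF with far fewer trials, is sketched below. The amplitude, grid and trial count are illustrative rather than the paper's settings, and only the qualitative rise of the flux scatter at small separations is expected to be reproduced reliably; locating the 1.0514 threshold precisely needs many more trials or the semi-empirical curve fit described above.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
fwhm = 1.0
s_lsf = fwhm / (2 * np.sqrt(2 * np.log(2)))      # Gaussian sigma for unit FWHM
x = np.arange(-4.0, 6.0, fwhm / 62.6)            # ~62.6 samples per FWHM, as in the text
noise, amp = 1.0, 40.0                           # channel noise and peak amplitude (illustrative)

def two_peaks(x, a1, x1, a2, x2):
    g = lambda xc: np.exp(-0.5 * ((x - xc) / s_lsf) ** 2)
    return a1 * g(x1) + a2 * g(x2)               # widths are known and held fixed in the fit

def flux_scatter(sep, trials=300):
    est = []
    for _ in range(trials):
        y = two_peaks(x, amp, 0.0, amp, sep) + rng.normal(0.0, noise, x.size)
        popt, _ = curve_fit(two_peaks, x, y, p0=[amp, 0.0, amp, sep])
        est.append(popt[0])                      # fitted amplitude of the first peak (~ its flux)
    return np.std(est)

for sep in [0.8, 1.0, 1.2, 1.6, 2.0, 4.0]:       # 4.0 FWHM ~ effectively isolated peaks
    print(f"separation {sep:3.1f} FWHM : sigma(flux) = {flux_scatter(sep):.3f}")
# the 'just resolved' separation of the text is where sigma(flux) has risen to 1.0514 times
# the isolated-peak value; resolving such a 5% change requires thousands of trials per point.
```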
]lcc lsf form & & + + [ -0.2 cm ] & 1.129 & 1.129 + gaussian & 1.21 & 1.127 + lorentzian & 1.70 & 1.605 + projected circle & 0.83 + projected circle ( conv ) & 0.95 & 0.943 + figure [ fig : medplot ] shows profiles presented in the same style as figure [ fig : bigplot ] but with row 2 showing various lsf types with two peaks separated according to the criterion . these show the separations which are regarded as ` just resolved ' according to the criterion introduced here .before introducing a second method of quantifying resolving power , it is necessary to review the formulas for uncertainties in the flux and position ( wavelength ) of a single spectral line peak .when the width of the peak is known and only the amplitude ( flux ) and position ( wavelength ) are fitted by least squares , and assuming a symmetrical lsf form , clarke et al ( 1969 ) give the formulas : in these formulas is the rms noise in each wavelength channel and is assumed to be the same for all channels .the summation is over all wavelength channels contributing to the profile .the lsf function is , and denotes its derivative with respect to wavelength .note that in these equations is normalised to a peak of 1.00 , and the ` pk ' in eqn [ eqn : sig_x ] is the peak flux of the response whose is to be found .these formulas have been verified by monte carlo tests and show that the precision in finding the strength of a peak depends most on the values where the intensity is greatest , while the precision in location of the peak depends on the regions of greatest slope .it is not appropriate in this paper to consider a detailed noise model where one would take into account shot noise from both the object and the background sky , as well as read - out noise and dark noise .instead , it will suffice to use the above assumption of constant noise in all channels .the results are thus most directly applicable to spectra that are background ( or read - out noise ) limited but serve as a guide for other noise models as well .they can also be applied to absorption lines , especially those that do not depress the continuum by a large fraction . in the present worka large number of channels ( pixels ) have been used , e.g. 62.5 or 100 across the fwhm , to avoid the issue of sampling effects . however , for the projected circle the gradient becomes infinite as the intensity drops to zero , and the sum in equation [ eqn : sig_x ] would always be dominated by the edge pixels ( see section [ sec : proj_circ ] ) .hence this case is omitted here .the more realistic convolved projected circle avoids this problem .the second method to be considered originates from a somewhat independent property of high resolving power , namely the ability to measure accurate positions ( wavelengths ) of unresolved spectral lines .thus two spectrographs can be considered as having equal resolving power if they give the same wavelength accuracy despite their different lsf forms , assuming the noise power per wavelength interval remains constant and equal total fluxes are received in both cases . to compare resolving powers using this criterion , there is no need to perform noise simulations as in section [ sec : consistent ] but instead equation [ eqn : sig_x ] can be used as follows .define is a type of width measure of a lsf , which will be referred to as the ` noise width ' , given its role in calculating . 
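As I read them, the quoted formulas reduce, for a peak-normalized symmetric LSF with known width and constant channel noise, to sigma_flux^2 = sigma^2 / sum_i f_i^2 and sigma_x^2 = sigma^2 / ( pk^2 sum_i f'_i^2 ); since the displayed equations did not survive here, that normalization is an assumption. The sketch below checks it against a small Monte Carlo for a Gaussian LSF.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
sigma_lsf, amp, noise = 0.5, 30.0, 1.0
x = np.linspace(-5.0, 5.0, 1001)

def lsf(x, a, x0):
    # peak-normalized Gaussian shape of known width, scaled by the fitted amplitude
    return a * np.exp(-0.5 * ((x - x0) / sigma_lsf) ** 2)

# analytic predictions (shape normalized to unit peak, width known, symmetric profile)
f = np.exp(-0.5 * (x / sigma_lsf) ** 2)
fp = np.gradient(f, x)
sig_flux_pred = noise / np.sqrt(np.sum(f ** 2))
sig_x_pred = noise / (amp * np.sqrt(np.sum(fp ** 2)))

fits = []
for _ in range(1000):
    y = lsf(x, amp, 0.0) + rng.normal(0.0, noise, x.size)
    popt, _ = curve_fit(lsf, x, y, p0=[amp, 0.0])
    fits.append(popt)
fits = np.array(fits)

print(f"sigma(flux): predicted {sig_flux_pred:.4f}, simulated {fits[:, 0].std():.4f}")
print(f"sigma(pos) : predicted {sig_x_pred:.4f}, simulated {fits[:, 1].std():.4f}")
```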
for empirically determined lsfs generally be calculated numerically as where is the channel width in the summation .values of for the lsf types discussed here are included in table [ tab : props ] .equation [ eqn : sig_x ] can now be written as where again is the rms noise in the channel of width and the subscript ` iso ' has been omitted because all profiles considered in this section are single .the basis of this second resolution criterion is that of any lsf will be equated to , with the condition that the two profiles have equal total fluxes ( not equal peak values ) .the condition for equal total fluxes is simply where stands for the equivalent width .equating the s for the given lsf and for sinc and using the values of and for sinc from table [ tab : props ] there follows this is the fwhm of a sinc profile which would have the same wavelength noise error as the actual lsf being examined .if the value is large it means that a wide sinc could give accuracy equal to the lsf i.e. the lsf is poor ( e.g. a lorentzian ) .if the fwhm is narrow it means that a high resolution sinc is needed to equal the accuracy of a good lsf , e.g. the convolved projected circle .the final step is to form the ratio of this calculated fwhm with that of the actual lsf and scale it by a factor 1.129 which will make the final scaled resolving powers consistent with the rayleigh criterion for sinc profiles .this gives values of for the standard lsf forms are included in table [ tab : scaling_factors ] , except for the projected circle where the infinite gradient limit makes the calculation invalid .values are quite similar to the scaling factors derived in section [ sec : consistent ] .the interpretation of is that is the effective which should be used in place of the fwhm in order to calculate resolution on a scale consistent with the rayleigh criterion for a sinc profile .thus is the resolving power on this consistent scale. this criterion will be easier to use in practice than the -based criterion of section [ sec : consistent ] .for an empirically determined lsf , for example resulting from ray tracing of a spectrograph design , one would need to interpolate the lsf to a fine sampling interval , and smooth out any fine structure artefacts from the lsf calculation ( e.g. from a finite number of traced rays ) , then use equation [ eqn : z_sum ] to find the noise width and also find the fwhm and equivalent width ( area / peak ) .then equation [ eqn : beta ] can be used to find the scaling factor which is finally applied in equation [ eqn : r ] . in the case of an asymmetric lsf the more general form of equation [ eqn : sig_x ] given by clarke et al .( 1969 ) eqn ( a7 ) should be replaced by should be used , although the corrections for asymmetry are small .the projected circle lsf is important in practice and has very different properties compared with other forms , and so warrants further discussion .the use of multi - mode fibres to feed images to a pseudo - slit in a spectrograph is increasingly common . 
taking the fibre exit face as a uniformly illuminated circle ( a good approximation given the spatial scrambling produced by transmission along the fibre ), when its image has been integrated over the spectrograph s spatial direction , the result will be the projected circle as illustrated in panel 1d of figure [ fig : bigplot ] .it differs markedly from the sinc , gaussian and lorentzian forms in that the projected circle lsf approaches the x - axis with infinite slope .this convex - outwards form results in the formation of a central spike when two such lsfs overlap , as in figure [ fig : bigplot ] .interestingly , the projected circle line profile also results from doppler broadening of an intrinsically narrow line in a rapidly rotating star .this is because the radial velocity is constant along strips parallel to the rotation axis , and the flux at any one wavelength is due to an integration along such a strip , i.e. a projection .the effects of the very steep sides of such a profile have been noted , and dravins ( 1992 ) drew attention to the sharp spectral features which could appear at wavelengths where no spectral line is present , i.e. the central spikes as seen in figure [ fig : bigplot ] .he also noted that information about the true stellar spectrum could be obtained regarding features considerably narrower than the fwhm of the full broadened profile - this is again due to the steep sides , which lead to the central spike being narrow and easily smoothed out ( in this case by intrinsic line width in a stellar spectrum ) .as shown in section [ sec : consistent ] the lack of wings of the projected circle lsf result in minimal noise interaction of two close lines , i.e. its effective resolving power is substantially higher than its fwhm would suggest .the pure projected circle lsf can not be directly compared with other lsfs as regards wavelength uncertainties , because of the infinite slopes .this means that however fine the sampling may be , the value will still depend on the sampling interval .this is illustrated in figure [ fig : proj_circle_beta ] which shows dropping approximately logarithmically with increasing sampling frequency .the values of shown are all substantially less than any of those in table [ tab : scaling_factors ] .even with some blurring due to aberrations a well - sampled lsf resembling the projected circle will have much higher wavelength accuracy than a gaussian - like peak of the same fwhm . for a projected circle lsf as a function of the number of samples across the full width to zero intensity ( fwzi ) .] 
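The sampling dependence just mentioned is easy to demonstrate: for a half-ellipse profile the sum of squared gradients has no finite continuum limit, so any wavelength-accuracy measure built from it keeps changing as the channels get finer. A minimal illustration (profile width and sample counts are arbitrary):

```python
import numpy as np

def grad_sq_sum(n_samples):
    """Integral of the squared gradient of a projected-circle (half-ellipse) LSF,
    full width to zero intensity = 1, approximated on n_samples channels."""
    x = np.linspace(-0.5, 0.5, n_samples)
    f = np.sqrt(np.clip(1.0 - (2 * x) ** 2, 0.0, None))
    fp = np.gradient(f, x)
    return np.sum(fp ** 2) * (x[1] - x[0])

for n in [32, 128, 512, 2048, 8192]:
    print(f"{n:5d} samples across FWZI : integral of f'^2 ~ {grad_sq_sum(n):.2f}")
# the value keeps growing (roughly logarithmically) with the sampling frequency,
# which is why the pure projected circle is excluded from the beta comparison.
```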
one of the peculiarities produced by the convex boxy shape of this lsf is that the fwhm is _ reduced _ by convolution with a gaussian of moderate width .this effect was noted in the design of the aaomega spectrograph ( saunders et al .this is another illustration of the inadequacy of fwhm as a measure of resolution , since one would not claim that convolution of the lsf by spectrograph aberrations increases the resolving power .figure [ fig : conv_width ] illustrates this behaviour , using a projected circle lsf of fwhm = 1.00 convolved with gaussians of various fwhms up to 0.7 .the resulting fwhm drops by as much as 5% , when the gaussian fwhm = 0.3259 , before rising again as the gaussian convolving function is further broadened .figure [ fig : three_conv ] shows three of the profiles : the pure projected circle ; the case of the minimum final fwhm , and the case of gaussian fwhm = 0.595 which restores the final fwhm to 1.00 , albeit with a very different lsf form compared with the initial projected circle .the case of the minimum final fwhm was used as the example of a convolved projected circle in table [ tab : scaling_factors ] and figures [ fig : sig_flux_sepn ] and [ fig : medplot ] . .the curves from highest to lowest at the peak are : black : pure unconvolved projected circle ; blue : gaussian fwhm = 0.3259 gives the minimum final fwhm of 0.9494 ; red : gaussian fwhm = 0.595 results in a final fwhm of 1.00 . ]the analysis above has shown that characterising the resolution of a spectrograph by its instrumental fwhm is a poor measure because it fails to take fully into account the variation among different line spread function forms of the quantities which matter most in spectroscopy - namely the disturbance which a spectral line causes to a near neighbour , or the accuracy with which a single line s wavelength can be measured .using these two criteria , a very different picture emerges , as shown by the and scaling factors in table [ tab : scaling_factors ] . there is more than a factor of two difference in resolving power between the best and worst lsfs ( with identical fwhm ) when resolving power is measured on a consistent scale . comparing the various resolution criteriashown in figure [ fig : bigplot ] with the -based criterion of figure [ fig : medplot ] shows that the equivalent width is the one that comes closest to matching the consistent resolution scale introduced here .but the match is not exact , with a significant difference in the case of the gaussian lsf .the lorentzian lsf s broad wings greatly increase its effective and hence reduce the resolving power of an instrument with this lsf well below the value given by the fwhm .it is well known by users of imaging fabry - perot instruments , for example , that this lsf makes the instrument unsuitable for absorption line studies , because a line core is influenced by convolution with continuum fluctuations over a substantial wavelength range . 
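The FWHM-reduction effect can be reproduced by direct numerical convolution. The sketch below uses a projected-circle profile of unit FWHM and the Gaussian widths quoted above, and should show a shallow minimum in the convolved FWHM near a Gaussian FWHM of about 0.33; the grid spacing and integration range are arbitrary choices.

```python
import numpy as np
from scipy.signal import fftconvolve

dx = 2e-4
x = np.arange(-2.0, 2.0, dx)
a = 1.0 / np.sqrt(3.0)                 # half-width so that the projected circle has FWHM = 1
circ = np.sqrt(np.clip(1.0 - (x / a) ** 2, 0.0, None))

def fwhm(profile):
    above = x[profile >= 0.5 * profile.max()]
    return above.max() - above.min()

def convolved_fwhm(gauss_fwhm):
    if gauss_fwhm == 0.0:
        return fwhm(circ)
    s = gauss_fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    g = np.exp(-0.5 * (x / s) ** 2)
    conv = fftconvolve(circ, g, mode="same") * dx   # normalization is irrelevant for the FWHM
    return fwhm(conv)

for w in [0.0, 0.2, 0.3259, 0.45, 0.595, 0.7]:
    print(f"gaussian FWHM = {w:.4f} -> convolved FWHM = {convolved_fwhm(w):.4f}")
```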
here, this influence has been quantified and the lorentzian s low relative resolving power explicitly demonstrated .conversely , the projected circle , even after smoothing by significant aberrations , has a steep - sided form which gives substantially higher resolving power than its fwhm would suggest .gaussian and sinc profiles have properties intermediate between these two extremes .but even they have ambiguities at the 10 - 15% level , with a pair of gaussian profiles requiring a separation of 1.129 fwhm to achieve the 81% relative minimum of a generalised rayleigh criterion .either of the two resolution element scaling factors can serve as a quality indicator for any given lsf profile .it is notable that the and scaling factors in table [ tab : scaling_factors ] are quite similar for a given lsf type , despite the former being based on the additional error in fitting the flux of a line caused by a near neighbour , while the latter is based on accuracy of wavelength determination for isolated lines .this agreement strengthens the case for using one of these resulting scaling factors to bring resolving power of any spectrograph on to a consistent scale . in principle , the ` ' factor , based on mutual disturbance in fitting a line is the more appropriate in low to moderate signal / noise spectra , while the ` ' factor , based on wavelength accuracy , is the more appropriate for high - resolution , high signal / noise work . but given the similarity of the two factors and that the factor is much easier to calculate for a general empirically - determined instrumental profile , the factor is recommended as a suitable standard measure for comparison of resolving power between different spectrographs . + * references * + bracewell , r.n ._ two dimensional imaging , _ prentice hall 1995 .+ clarke , t.w . ,frater , r.h . , large , m.i . , munro , r.e.b . andmurdoch , h.s . aust .10 , 3 , 1969 .+ dravins , d. _ high resolution spectroscopy with the vlt , _ ed .ulrich , m .- h . ,eso workshop no .40 , 55 , 1992 .+ jones , a.w . , bland - hawthorn , j. and shopbell , p.l .77 , 503 , 1995 .+ jones , d.h . , shopbell , p.l . and bland - hawthorn , j. mnras 329 , 759 , 2002 .+ saunders , w. aao newsletter no . 108 , 8 , august 2005 .+ spronck , j.f.p . ,fischer , d.a . ,kaplan , z.a . , schwab , c. and szymkowiak , a. arxiv 1303.5792 2013 .
the spectral resolving power is a key property of any spectrograph , but its definition is vague because the ` smallest resolvable wavelength difference ' does not have a consistent definition . often the fwhm is used , but this is not consistent when comparing the resolution of instruments with different forms of spectral line spread function . here two methods for calculating resolving power on a consistent scale are given . the first is based on the principle that two spectral lines are just resolved when the mutual disturbance in fitting the fluxes of the lines reaches a threshold ( here equal to that of sinc profiles at the rayleigh criterion ) . the second criterion assumes that two spectrographs have equal resolving powers if the wavelength error in fitting a narrow spectral line is the same in each case ( given equal signal flux and noise power ) . the two criteria give similar results , and give rise to scaling factors which can be applied to bring resolving power calculated using the fwhm on to a consistent scale . the differences among commonly encountered line spread functions are substantial , with a lorentzian profile ( as produced by an imaging fabry - perot interferometer ) being a factor of two worse than the boxy profile from a projected circle ( as produced by integration across the spatial dimension of a multi - mode fibre ) when both have the same fwhm . the projected circle has a larger fwhm in comparison with its true resolution , so using fwhm to characterise the resolution of a spectrograph which is fed by multi - mode fibres significantly underestimates its true resolving power if it has small aberrations and a well - sampled profile . astronomical instrumentation spectrographs spectral resolving power data analysis and techniques spectral resolution
conventional logic and memory devices are built out of stable deterministic units such as standard mos ( metal oxide semiconductor ) transistors , or nanomagnets with energy barriers in excess of 40 kt . the objective of this paper is to introduce the concept of what we call `` p - bits '' representing unstable , stochastic units which can be interconnected to create robust correlations that implement precise logical functions .remarkably this `` probabilistic spin logic '' or psl can also be used to implement the inverse function with multiple solutions which are all visited with equal probability . any random signal generatorwhose randomness can be tuned with a third terminal should be a suitable building block for the kind of probabilistic spin logic ( psl ) proposed in this paper .the icon in fig.1b represents our generic building block whose input controls the output according to the equation ( fig.1a ) where rand(,+1 ) represents a random number uniformly distributed between and + 1 , and is assumed large enough that memory of past history has been lost .if the input is zero , the output takes on a value of or + 1 with equal probability , as shown in the middle panel of fig .[ fi : fig1 ] . a negative input makes negative values more likely ( left panel ) while a positive input makes positive values more likely ( right panel ) .[ fi : fig1]c shows as the input is ramped from negative to positive values . also shown is the time - averaged value of which equals . a possible physical implementation of p - bits could use stochastic nanomagnets with low energy barriers whose retention time : is very small , of the order of which is a material dependent quantity called the attempt time and is experimentally found to be among different magnetic materials .such stochastic nanomagnets can be pinned to a given direction with spin - currents that are a factor of 40 less than those needed to switch 40 kt magnets .we have performed detailed simulation of such magnets using the stochastic landau - lifshitz - gilbert ( s - llg ) equation for both perpendicular ( pma ) and in - plane ( i m a ) low barrier nanomagnets and the results are in good agreement with those presented in this paper using the simple generic model in eq .( [ eq : sigmoid ] ) .these magnet - specific results will be presented separately , .all results in this paper are based on eq .( [ eq : sigmoid ] ) in order to emphasize the generality of the concept of p - bits which need not necessarily be nanomagnet - based .the sigmoidal tuning curve in fig.1c represents the essence of a p - bit , and it seems like a natural feature of nanomagnets driven by spin currents , but it is not too different from those discussed in the context of cmos . to ensure that individual p - bits can be interconnected to produce robust correlations , it is also important to have separate terminals marked w and r for writing ( more correctly for biasing ) and for reading as shown in fig .[ fi : fig1]b . 
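A minimal simulation of the generic p-bit is given below. Because the displayed equation is garbled here, two readings are assumed: the random number is uniform on (-1, +1), and the activation is tanh, so that the time-averaged output approaches tanh of the input; both are consistent with the sigmoidal tuning curve described above.

```python
import numpy as np

rng = np.random.default_rng(3)

def p_bit(I, n_steps=20000):
    """Generic p-bit: output is +/-1, biased by the input I.
    Assumed form: m = sgn( tanh(I) + rand(-1, +1) )."""
    r = rng.uniform(-1.0, 1.0, n_steps)
    return np.sign(np.tanh(I) + r)

for I in [-2.0, -0.5, 0.0, 0.5, 2.0]:
    m = p_bit(I)
    print(f"I = {I:+.1f} : time-averaged m = {m.mean():+.3f},  tanh(I) = {np.tanh(I):+.3f}")
```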
with im a nanomagnets this could be accomplished following existing experiments using the giant spin hall effect ( gshe ) .recent experiments using a built - in exchange bias could make this approach applicable to pma as well .note , however , that these experiments have all been performed with stable free layers , and have to be carried out with low barrier magnets in order to establish their suitability for the implementation of p - bits .as the field progresses , one can expect the bias terminal to involve voltage control instead of current control , just as the output could invove quatities other than magnetization . _ ensemble - average versus time - average : _ a sigmoidal response was presented in for the ensemble - averaged magnetization of large barrier magnets biased along a neutral state . thiswas proposed as a building block for both ising computers and directed belief networks and a recent preprint describes a similar approach applied to a graph coloring problem .by contrast low barrier nanomagnets provide a sigmoidal response for the time - averaged magnetization and a suitably engineered network of such nanomagnets cycle through the collective states at ghz rates , with an emphasis on the `` low energy states '' which can encode the solution to the combinatorial optimization problems , like the traveling salesman problem ( tsp ) as shown in .once the time - varying magnetization has been converted into a time - varying voltage through a read circuit , a simple rc circuit can be used to extract the answer through a moving time average .for example , in fig .[ fi : fig1]c the red trace was obtained from the rapidly varying blue trace using an rc circuit in a spice simulation .the central feature underlying both implementations is the _ p - bit _ that acts like a tunable random number generator , providing an intrinsic sigmoidal response for the ensemble - averaged or the time - averaged magnetization as a function of the spin current .it is this response that allows us to _ correlate _ the fluctuations of different p - bits in a useful manner by interconnecting them according to where provides a local bias to magnet and defines the effect of bit j to bit i , and sets a global scale for the strength of the interactions like an inverse `` pseudo - temperature '' .equations ( [ eq : sigmoid]-[eq : weight ] ) are essentially the same as the defining equations for boltzmann machines introduced by hinton and his collaborators which have had enormous impact in the field of machine learning , but they are usually implemented in software that is run on standard cmos hardware .equation ( [ eq : sigmoid ] ) arises naturally from the physics of low barrier nanomagnets as we have discussed above .equation ( [ eq : weight ] ) represents the `` weight logic '' for which there are many candidates such as memristors , floating - gate based devices , domain - wall based devices , standard cmos .the suitability of these options will depend on the range of j values and the sparsity of the j - matrix ._ outline of paper : _ the objective of this paper is to show that p - bits can provide the building block for a probabilistic spin logic ( psl ) framework that is suitable not just for heuristic non - boolean logic but _ even for precise boolean logic _ by implementing relatively small gates as bidirectional boltzmann machines ( bm , fig .[ fi : fig2]a ) which are then interconnected in a directed fashion to implement complex functionalities ( fig .[ fi : fig2]b ) . 
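A sketch of the coupled dynamics implied by the two equations, with the interconnection read as I_i = I0 ( h_i + sum_j J_ij m_j ) and the p-bits updated one at a time in random order using the already-updated values of the others, as described in the next section; all numerical choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def run_network(J, h, I0, n_sweeps=5000):
    """Asynchronous simulation of interconnected p-bits:
    I_i = I0 * (h_i + sum_j J_ij m_j), then m_i = sgn( tanh(I_i) + rand(-1, +1) ).
    Returns the visited states, one row per sweep."""
    n = len(h)
    m = rng.choice([-1.0, 1.0], size=n)
    history = np.empty((n_sweeps, n))
    for t in range(n_sweeps):
        for i in rng.permutation(n):                 # random update order (asynchronous)
            I = I0 * (h[i] + J[i] @ m)
            m[i] = np.sign(np.tanh(I) + rng.uniform(-1.0, 1.0))
        history[t] = m
    return history
```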
in section ii we will briefly describe how a given truth table can be implemented reliably and reconfigurably ( fig .[ fi : fig3 ] ) using bidirectional networks of p - bits for which we can define an energy functional such that the boltzmann law accurately describes the long - time average of the individual probabilities for the interconnected network to be in different states : fig .[ fi : fig4 ] shows the close agreement between the probabilities obtained from a direct numerical solution of eq .( [ eq : sigmoid][eq : weight ] ) over long times and those obtained from eq .( [ eq : bl ] ) for an and gate , consistent with its designation as a bm .unlike standard transistor - based implementations , these bms are reversible like the memcomputers proposed in .they not only implement boolean functions providing the correct output for a given input , but also the _ inverse function _ providing the correct input(s ) for a given output , thus providing solutions to the boolean satisfiability problem ( fig .[ fi : fig5 ] ) . in section iiiwe show that full adders implemented as bms can be interconnected to implement relatively large logic operations like 32-bit adders .this is particularly surprising since we do not expect a large collection of stochastic units to get correlated precisely enough to converge on the one correct answer out of possibilities !note that the 32-bit adder is not fully bidirectional , even though it comprises full adders that are individually bidirectional .the carry bits connect unidirectionally from the less significant to the more significant bit as dictated by the logic of addition .interestingly , despite this deviation from complete bidirectionality , the overall adder seems to work in reverse : when we fix the sum bits and let the input bits `` float '' , the input bits fluctuate randomly , but in a correlated manner such that their sum is always precisely equal to the fixed sum as shown in fig .[ fi : fig7 ] . finally ,by combining individual full adders ( fa ) and and gates as a directed network of bms , we construct a 4-bit multiplier that performs integer factorization ( fig .[ fi : fig8 ] ) when operated in reverse similar to memcomputing based on deterministic memristors . note , however , that the building blocks and operating principles of stochastic p - bits and memcomputing are very different .) which ensures that the truth table corresponds to the low energy states of the boltzmann machines according to eq .( [ eq : bl ] ) .a handle bit of + 1 is introduced to each line of the truth table which can be biased to ensure that the complementary truth table does not appear along with the desired one .this bit also allows a truth table to be electrically reconfigured into its complement . 
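For small networks the Boltzmann law can be evaluated by brute-force enumeration and compared with a long time average of the simulated network. The energy used below is the usual Ising-type choice, E(m) = -I0 ( sum over pairs of J_ij m_i m_j + sum_i h_i m_i ); the exact form of the equation did not survive extraction here, so this normalization is an assumption.

```python
from itertools import product
import numpy as np

def boltzmann_probabilities(J, h, I0):
    """State probabilities from the Boltzmann law for a symmetric, zero-diagonal J,
    assuming E(m) = -I0 * ( sum_{i<j} J_ij m_i m_j + sum_i h_i m_i )."""
    n = len(h)
    states = np.array(list(product([-1, 1], repeat=n)), dtype=float)
    E = -I0 * (0.5 * np.einsum("si,ij,sj->s", states, J, states) + states @ h)
    w = np.exp(-(E - E.min()))            # shift by E.min() for numerical stability
    return states, w / w.sum()

# a long time average of run_network(J, h, I0) (sketched above) can be binned into a
# histogram over the 2^n states and compared with these probabilities.
```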
]we now present examples showing how any given truth table can be implemented reliably using boltzmann machines .our approach , pictorially described in fig .[ fi : fig3 ] , begins by transforming a given truth table from binary to bipolar variables .the lines of the truth table are then required to be eigenvectors each with eigenvalue + 1 , all other eigenvectors are assumed to have 0 eigenvalues .this leads to following prescription for j as shown in fig .[ fi : fig3 ] : = \sum_{i , j}\mathrm{[s^{-1}]_{ij } u_i u_j^\dagger}\\ \label{eq : amit } \rm s_{ij } = u_i^\dagger u_j \end{aligned}\ ] ] where are the eigenvectors corresponding to lines in the truth table of a boolean operation and s is a projection matrix that accounts for the non - orthogonality of the vectors defined by different lines of the truth table .note that the resultant j - matrix is always symmetric ( ) with diagonal terms that are subtracted in our models such that .the number of p - bits in the system is made greater than the number of lines in a truth table through the addition of hidden layers ( fig .[ fi : fig3 ] ) to ensure that the number of conditions we impose is less than the dimension of the space defined by the number of p - bits .another important aspect is that an eigenvector implies that its complement is also a valid eigenvector .however only one of these might belong to a truth table .we introduce a `` handle '' bit to each that is biased to distinguish complementary eigenvectors .these handle bits provide the added benefit of reconfigurability .for example , and and or gates have complementary truth tables , and a given gate can be electrically reconfigured as an and or an or gate using the handle bit . note that this prescription for [ j ] is based on the principles developed originally for hopfield networks ( , and eq .( 4.20 ) in ) .however , other approaches are possible along the lines described in the context of ising hamiltonians for quantum computers .we have tried some of these other designs for [ j ] and many of them lead to results similar to those presented here . for practical implementations , it will be important to evaluate different approaches in terms of their demands on the dynamic range and accuracy of the weight logic .once a j - matrix and the h - vector are obtained for a given problem , the system is initialized by randomizing all at time , .first , the current ( voltage ) that a given p - bit ( ) feels due to all the other is obtained from eq .( [ eq : weight ] ) , and the value is updated according to eq .( [ eq : sigmoid ] ) . next the procedureis repeated for the remaining p - bits by finding the current they feel due to all other using the _ updated _ values of .for this reason , the order of updating was chosen randomly in our models and we found that the order of updating has no effect in our results .this is similar to the `` asynchronous '' updating of hopfield networks .[ fi : fig4 ] shows the time evolution of an and gate designed as outlined in fig .[ fi : fig3 ] with a total of 8 p - bits comprising 5-auxiliary bits in addition to the 3 input / output bits of an and gate .initially for the interaction strength , making the pseudo - temperature of the system infinite , and the network produces uncorrelated noise , visiting each state with equal probability . 
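The linear-algebra part of this prescription is easy to sketch. The example below applies it to a two-bit "copy" truth table (B = A) with a handle bit, which is small enough that no auxiliary p-bits are needed; the paper's AND gate adds hidden bits precisely so that the truth-table lines do not span the whole space (with four lines plus a handle bit in four dimensions, the projection would collapse to the identity and the off-diagonal couplings would vanish).

```python
import numpy as np

# truth-table lines of a two-bit "copy" gate in bipolar form, each with a handle bit of +1
truth_lines = np.array([
    [-1, -1, 1],     # A = -1, B = -1, handle = +1
    [ 1,  1, 1],     # A = +1, B = +1, handle = +1
], dtype=float)

U = truth_lines.T                      # columns are the truth-table vectors u_i
S = U.T @ U                            # overlap matrix S_ij = u_i^T u_j
J = U @ np.linalg.inv(S) @ U.T         # J = sum_ij [S^-1]_ij u_i u_j^T
np.fill_diagonal(J, 0.0)               # diagonal terms are subtracted
print(np.round(J, 3))                  # ferromagnetic A-B coupling; handle bit decoupled
```

Feeding this J (with a suitable handle bias) into the Boltzmann enumeration sketched earlier shows that the states with matching A and B carry essentially all of the probability, which is the desired truth table.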
in the second phase ( ) , the interaction strength is suddenly increased to , effectively `` quenching '' the network by reducing the temperature .this correlates the system such that only the states corresponding to the truth table of the and gate are visited , each with equal probability when a long time average is taken .the average probabilities in each phase are reproduced quantitatively by the boltzmann law defined by eq .( [ eq : bl ] ) . in fig .[ fi : fig5 ] , we show how a correlated network that produces a given truth table can be used to do directed computation analogous to standard cmos logic . by `` clamping '' the input bits of an or gate ( ) through their bias terminals , to ( a , b)=(+1,+1 ) ,the system is forced to only one of the peaks of the truth table , effectively making c=1 .the psl gates however exhibit a remarkable difference with standard logic gates , in that inputs and outputs are on an equal footing .not only do clamped inputs give the corresponding output , _ a clamped output gives the corresponding input(s ) ._ in the second phase ( ) the output of the or gate is clamped to + 1 , that produces three possible peaks for the input terminals , corresponding to various possible input combinations that are consistent with the clamped output ( a , b)=(0,1),(1,0 ) and ( 1,1 ) .the probabilistic nature of psl allows it to obtain multiple solutions at the same time ( fig .[ fi : fig5]c ). it also seems to make the results more resilient to _ unwanted _ noise due to `` stray fields '' that are inevitable in physical implementations as shown in fig .[ fi : fig6 ] . ) as indicated in the figure .the noise is assumed to be uniformly distributed over all p - bits in a given network , and centered around zero with magnitude .each gate is simulated 50,000 times for t=100 time steps to produce an error probability for a given noise value , and the maximum peak produced by the system is assumed to be an output that can be read with certainty .the system shows robust behavior even in the presence of large levels of noise . ]when constructing larger circuits composed of individual boltzmann machines , the reciprocal nature of the boltzmann machine often interferes with the directed nature of computation that is desired .it seems advisable to use a hybrid approach .for example in constructing a 32-bit adder we use full - adders ( fa ) that are individually bms with symmetric connections , .but when connecting the carry bit from one fa to the next , the coupling element is non - zero in only one direction from the less significant to the more significant bit .this directed coupling between the components distinguishes psl from purely reciprocal boltzmann machines . indeedeven the full adder could be implemented not as a boltzmann machine but as a directed network of more basic gates ._ adder : _ fig .[ fi : fig7 ] shows the operation of a 32-bit adder that sums two 32-bit numbers a and b to calculate the 33-bit sum s. in the initial phase ( ) we have corresponding to infinite temperature so that the system fluctuates among 8 billion possibilities . at increase to 70 , and the system unerringly settles down to the one unique solution within 10 to 15 time steps , as we have observed in many different runs , even while adding two binary numbers that propagates a carry all the way from the first bit to the 32 bit , as in this example .interestingly , although the overall system includes several unidirectional connections , it seems to be able to perform the inverse function as well . 
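Clamping through the bias terminals can be mimicked in the generic model by a bias large enough to saturate the tanh response. A small illustration, reusing the copy-gate coupling constructed above and the Boltzmann enumeration; the bias and I0 values are arbitrary, and the copy gate stands in for the OR gate of the figure.

```python
from itertools import product
import numpy as np

# copy-gate coupling from the construction sketched above (bits: A, B, handle)
J = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
h = np.array([5.0, 0.0, 0.0])          # large positive bias on A acts as a clamp to +1
I0 = 5.0

states = np.array(list(product([-1, 1], repeat=3)), dtype=float)
E = -I0 * (0.5 * np.einsum("si,ij,sj->s", states, J, states) + states @ h)
p = np.exp(-(E - E.min())); p /= p.sum()
for s, pi in zip(states, p):
    if pi > 0.01:
        print(s, round(pi, 3))         # only states with A = +1 (and hence B = +1) carry weight
```

Clamping the other terminal instead makes the unclamped bit follow it, which is the inverse-mode operation described in the text.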
witha and b clamped it calculates s = a+b as noted above .conversely with s clamped , the input bits a and b fluctuate in a correlated manner so as to make their sum sharply peaked around s ! fig .[ fi : fig7 ] shows the time evolution of the input bits that have broad distributions spanning a wide range .their sum too shows a broad distribution for when , but once steps up at the distributions of a and b get strongly correlated making the distribution of a+b sharply peaked around the fixed value of s. _ factorizer : _ fig .[ fi : fig8 ] shows how the reversibility of psl logic blocks can be used to perform integer factorization using a multiplier in reverse similar to memcomputing devices .normally , the factorization problem requires specific algorithms and hardware to be performed in cmos - like hardware , but here we simply use a digital 4-bit multiplier working in _ reverse _ to achieve this operation , which would not be possible in standard cmos implementations . specifically with the output of the multiplier clamped to a given integer from 0 to 15, the input bits float to the correct factors . as with other examples ,the interconnection strength is stepped up from zero at .if the output is set to 9 , both inputs float to 3 as shown in fig .[ fi : fig8]b . with the output set to 6, both inputs fluctuate between two values , 2 and 3 .note that factors like do not show up , since encoding 9 in binary requires 4-bits ( 1001 ) and the input terminals only have 2-bits .we have checked other cases where factorizing 3 shows both and , and factorizing zero shows all possible peaks since there are many solutions such that and so on .we also kept the same directed connections between the full adders for the carry bits , making them a directed network of boltzmann machines , similar to the 32-bit adder .moreover , we kept a directed connection _ from _ the full adders _ to _ the and gates as shown in fig .[ fi : fig8]a since the information needs to flow from the output to the input in the case of factorization .the input bits that go to multiple and gates are `` tied '' to each other with a positive exchange ( ) value much like 2-spins interacting ferromagnetically , however in psl we envision these interactions to be controlled purely electrically .we have presented a framework for using probabilistic units or p - bits as a building block for deterministic computation by establishing robust correlations among them .this requires the randomness of individual units to be tunable with an input signal , with a clear separation of input and output terminals as shown in fig .[ fi : fig1 ] .low barrier nanomagnets driven by spin currents generated through the spin hall effect could be used to provide a physical realization of a p - bit , but this is only one of many possibilities . to emphasize this generality we use a generic hardware - agnostic model defined by eqs .( [ eq : sigmoid]-[eq : weight ] ) whose predictions are in good agreement with more specific simulations based on the s - llg equation for stochastic nanomagnets . *it shows that unstable , stochastic units can be interconnected to create robust correlations that can even implement precise boolean functions .a 32-bit adder is shown to converge unerringly to the one correct answer by sorting through possibilities at ghz speeds . 
* it demonstrates that the proposed probabilistic spin logic ( psl ) can implement not just a function but also its inverse , which in general has multiple solutions .a 4-bit multiplier working in the inverse mode is shown to function as a factorizer . * it identifies an electrically tunable random bit generator as the building block which we call a p - bit that can be interconnected in large numbers to implement psl .low barrier nanomagnets are suggested as a natural realization of p - bits , but not necessarily the only one . * it defines a new direction for the field of spintronic and nanomagnetic logic by shifting the emphasis from stable high barrier magnets to stochastic low barrier magnets .* it introduces the intriguing possibility of p - bits emerging as a robust version of qubits that could tackle some of the same challenging problems like factorization , but could be achieved at room temperature with present - day technology .it is a real pleasure to acknowledge many helpful discussions with behtash behin - aein ( globalfoundries ) , ernesto e. marinero ( purdue university ) and with jaijeet roychowdhury ( uc berkeley ) .this work was supported in part by c - spin , one of six centers of starnet , a semiconductor research corporation program , sponsored by marco and darpa , in part by the nanoelectronics research initiative through the institute for nanoelectronics discovery and exploration ( index ) center , and in part by the national science foundation through the ncn - needs program , contract 1227020-eec .
conventional logic and memory devices are built out of stable deterministic units such as standard mos ( metal oxide semiconductor ) transistors , or nanomagnets with energy barriers in excess of 40 kt . in this paper we show that unstable , stochastic units can be interconnected to create robust correlations that implement precise functions . we use a generic model for such p - bits " to illustrate some of the functionalities that can be implemented with this `` probabilistic spin logic ( psl ) '' . remarkably psl can also be used to implement the inverse function with multiple solutions which are all visited with equal probability . the results from the simple generic model agree well with those obtained for low barrier magnets from the stochastic landau - lifshitz - gilbert ( sllg ) equation , but p - bits need not be magnet - based : any tunable random bit generator should be suitable . we show that such p - bits can be interconnected symmetrically to construct hardware boltzmann machines ( bm ) that are designed to implement any given truth table . such bms can be used to implement boolean functions by fixing the inputs to produce a given output . they can also be run in reverse by fixing the outputs to produce _ all the consistent inputs_. we further show that bm units like individual full adders can be interconnected in a directed manner to implement large logic operations such as 32-bit binary addition , that precisely correlate hundreds of stochastic p - bits to pick out a single state out of possibilities ! interestingly some directed networks of bms can also perform the inverse function . for example , a 4-bit multiplier acts as a factorizer in reverse : with the output clamped to an integer value , we show that the inputs float to the only factors consistent with that integer .
the flow of two immiscible fluids through porous media arises in many important industrial and natural situations such as secondary oil recovery , ground water remediation , and geological storage .such flows are known to be potentially unstable , especially when the displacing fluid is more viscous than the displaced one . thereexist some similarities between porous media and hele - shaw flows ( i.e. flow in a hele - shaw cell , see below ) ; for example the pressure drop in both such flows are governed by darcy s law for single fluid flow . due to this and the fact that it is significantly easier to study hele - shaw flows theoretically , numerically , and experimentally , there have been numerous theoretical and numerical studies even for hele - shaw flow of two immiscible fluids since the early 1950s , starting with the work of saffman and taylor .there are many review articles on such studies , for example see .these studies were originally motivated by displacement processes arising in secondary oil recovery , even though these studies have much wider appeal in the sciences and engineering . in the late 1970s , tertiary displacement processes involved in chemicalenhanced oil recovery generated interest in three - layer and multi - layer hele - shaw flows ( see ) . in this paper , we first briefly derive the non - standard eigenvalue problem . this eigenvalue problem has been derived earlier by the first author and his collaborators ; for example see .but the difference is that the derivation presented here is more general and shows how to generate higher order correction terms if necessary in order to study the effect of nonlinear terms that may dominate the dynamics , particularly in view of the sensitivity of fingering problems to finite amplitude perturbations .however , we do not study or discuss such nonlinear effects in this paper which will be taken up in the future as it falls outside the scope of this paper .we then analytically study this non - standard eigenvalue problem using non - linear transformation for the case when the viscous profile of the middle layer is linear .we will see below that this case is relatively hard to study in comparison to the case when the viscous profile is exponential which we have recently addressed in . the physical set - up consists of rectilinear motion of three immiscible fluids in a hele - shaw cell which is a device separating two parallel plates by a distance ( see fig . [fig : hs_rect_linearprofile ] ) .the fluid in the extreme left layer with viscosity extends up to , the fluid in the extreme right layer with viscosity extends up to , and the fluid in the middle - layer of finite length has a smooth viscous profile with viscosity increasing in the direction of displacement .the interfacial tensions of the leading and the trailing interfaces are given by and respectively .it is well established that this hele - shaw flow is similar to flow in homogeneous porous media with equivalent permeability . without any loss of generality ,we take this to be one below .the mathematical model considered here consists of conservation of mass , darcy s law and advection equation for viscosity .thus we have due to the continuity equation , we can define the stream function such that and .this then implies that since when , we consider a small perturbation of the basic scalar fields and of the form substituting into the original equations , we get the following and equations. 
equations : these equations provide the basic solution given by where is an arbitrary function of , meaning the viscous profile is fixed with respect to a moving frame moving at a constant velocity . equations : now , introducing the moving frame change of variables , namely , we get the following system of equations . taking cross derivatives of the first two equations with respect to and respectively and then subtracting the resulting equations from each other gives this combined with the equation leads to using the ansatz in the above equation together with the appropriate boundary conditions ( see ) give the following eigenvalue problem . + \frac{sk^4}\sigma\right\}\\ & \mu^-_0(0)f_\xi(0 ) = f(0 ) \left\{-\mu_2k + \frac{u_0k^2}\sigma [ \mu_2-\mu^-_0(0 ) ] - \frac{tk^4}\sigma\right\}\end{aligned}\ ] ] where the viscous profile of the middle layer , namely , is an arbitrary function .this is a non - standard eigenvalue problem in that the spectral parameter appe ars in the equation as well as in the boundary conditions .recently , this problem has been numerically solved by daripa for a constant viscous profile and by daripa & ding for non - constant viscous profiles to determine the most optimal profile , i.e. , the least unstable profile .this problem has been too difficult to solve analytically for non - constant profiles .progress made in this direction for the linear viscous profile is presented below . in this paperwe consider a linear viscous profile for the intermediate fluid region given by where and are jump discontinuity values at the interfaces and , respectively . in the left regionthe problem reduces to which has solution . in the right regionthe problem reduces to which has solution in the intermediate region the problem reduces to where is the spectral parameter and introduce the nonlinear transformation and change of variables given by after some manipulation of the eigenvalue problem using the above transformation , we obtain the following eigenvalue problem for the kummer s equation . where a prime denotes derivative , the eigenvalue problem is a regular two point boundary value problem for each wave number .one solution of the kummer s equation is given by where and .this is an analytic solution .it is easily seen that the derivative of this solution which we will need below for the dispersion relation is given by the linearly independent second solution is easily constructed by the method of frobenius .avoiding all the details , the second solution is given by where .its derivative which we will need below is then given by the general solution of the kummer s equation is then given by where and are arbitrary constants .substituting the general solution into the two boundary conditions of the eigenvalue problem , we obtain the following linear system of equations for and . therefore , for a non - trivial solution we have this formally gives the dispersion relation in terms of the problem data : and . 
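the dispersion relation defined implicitly by the vanishing determinant above can be evaluated numerically . the sketch below is schematic only : kummer's function m ( a , b , z ) , which solves z w'' + ( b - z ) w' - a w = 0 , is available as scipy.special.hyp1f1 , but the actual entries of the 2x2 boundary - condition matrix , and the dependence of a , b and z on the growth rate and the wavenumber , follow from the elided expressions above and are replaced here by placeholder formulas .

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import hyp1f1          # kummer's function M(a, b, z)

def bc_matrix(lam, k):
    """2x2 boundary-condition matrix; the entries are placeholders standing in for
    the fundamental solutions of kummer's equation and their derivatives at the
    two interfaces (the true expressions depend on lam and k through the problem data)."""
    a, b, z = 0.5 + 1.0 / lam, 2.0, k          # hypothetical parameters, illustration only
    m1, dm1 = hyp1f1(a, b, z), (a / b) * hyp1f1(a + 1, b + 1, z)   # M and dM/dz
    m2, dm2 = np.exp(-z), -np.exp(-z)          # crude stand-in for the second (frobenius) solution
    return np.array([[m1 + lam * dm1, m2 + lam * dm2],
                     [dm1 - lam * m1, dm2 - lam * m2]])

def dispersion(lam, k=1.0):
    return np.linalg.det(bc_matrix(lam, k))

lams = np.linspace(0.05, 5.0, 200)
vals = np.array([dispersion(l) for l in lams])
brackets = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
print("growth-rate candidates at k=1:",
      [round(brentq(dispersion, lams[i], lams[i + 1]), 4) for i in brackets])
```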
in terms of the original variables and , the fundamental solutions and are then given by ( see ) where also , recall that , and .noticing that both series are centered at and applying the ratio test for series for above , we have for any fixed .thus the series for converges absolutely , but the series _ evaluated _ at reduces to 1 .this implies that the radius of this series is .hence and since it is an alternating singular series , the error in approximating by terms up to is smaller than the last neglected term , namely for a fixed .similarly , applying the ratio test to the series for , for any .thus the series inside the brackets converges absolutely , but the series inside the brackets evaluated at reduces to 1 .therefore , the radius of convergence of the series within the brackets is .hence notice that has a branch point at . in any case , since the series within the brackets is an alternating sign series , if we truncate it , the error is smaller than the last neglected term , i.e. , the general solution of the ode is then given by .the boundary values of follow from and whi ch are now given by substituting these in the boundary conditions , we obtain the following system of equations for the constants and . where and . for the existence of nontrivial solutions, we then have which gives us the dispersion relation in the form : . because of the nature of the series solutions given above , it is not possible to give this dispersion relation explicitly .there are an infinite number of eigenvalues ( recall ) which can be ordered : . we know that these infinite number of eigenvalues should reduce to ( i ) only two in the limit corresponding to the constant viscosity of the intermediate layer fluid ( see daripa ) ; ( ii ) only one in the limit ( see saffman & taylor , daripa ) and ( iii ) only two in the limit of ( see daripa ) .in fact , we also know the eigenvalues in these limiting cases from the pure saffman - taylor growth rate of individual interfaces . these results by no means are transparent from the solutions of the eigenvalue problem given in the previous section .below , we show how to recover these limit solutions ( eigenvalues ) from the infinite number of eigenvalues for the linear viscous profile . in this case , the eigenvalue problem reduces to in this case , the change of variable introduced previously , namely , which converts the equation to kummer s equation , i s not well - defined .therefore we work with the boundary value problem .now , consider the general solution of the ode in ( [ slp3 ] ) such that therefore , we get we search for a solution of the boundary value problem ( [ slp3 ] ) of the form to find a solution of ( [ slp3 ] ) of this form , we start determining the coefficients using the shooting technique such that the boundary condition at is satisfied . obviously , the coefficients and depend on the parameter .then , we find in such a way that the solution satisfies the boundary condition at .hence , we look for and such that then it follows directly from ( [ cond1 ] ) that and therefore satisfies the ode in and the boundary condition at . 
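the shooting construction just described is straightforward to prototype numerically . in the sketch below the middle - layer ode is taken to be f'' = k^2 f and the two interface conditions are schematic placeholders ( the true coefficients involve the viscosities , surface tensions and the spectral parameter through the elided expressions above ) ; the point is only to show how eigenvalue candidates are located by integrating from one interface and scanning the residual of the condition at the other .

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

k, L = 1.0, 1.0                             # wavenumber and middle-layer length (illustrative)
mu1, mu2, muL, muR = 2.0, 5.0, 3.0, 4.0     # hypothetical viscosities entering the interface conditions

def shoot(lam):
    """integrate f'' = k^2 f across the middle layer, starting from data that satisfy a
    schematic trailing-interface condition, and return the residual of a schematic
    leading-interface condition."""
    rhs = lambda x, y: [y[1], k**2 * y[0]]          # y = (f, f')
    f0, df0 = 1.0, (mu1 * k + lam) / muL            # placeholder left boundary condition
    sol = solve_ivp(rhs, (-L, 0.0), [f0, df0], rtol=1e-9, atol=1e-12)
    fR, dfR = sol.y[0, -1], sol.y[1, -1]
    return muR * dfR - (mu2 * k - lam) * fR         # placeholder right boundary condition

lams = np.linspace(0.01, 10.0, 400)
res = np.array([shoot(l) for l in lams])
idx = np.where(np.sign(res[:-1]) != np.sign(res[1:]))[0]
print("eigenvalue candidates:", [round(brentq(shoot, lams[i], lams[i + 1]), 4) for i in idx])
```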
from theseit follows that the spectrum of problem ( [ slp3 ] ) can be studied using the following algebraic equation ( see ) where is the function defined in .evaluating and from and substituting directly in one obtains then , taking one obtains which is equivalent to the equation since ( or ) , it follows that now , using the definition of the coefficient given in ( [ coef ] ) we have \lambda =- k(\mu_{1}+\mu_{2})\ ] ] from which it follows that which is the formula for the growth rate of an interface with surface tension , which is what should be expected in this limit .thus we recover the classical formula for the growth rate in this limit .to take the limit when , we go back to equation ( [ explicitall ] ) and write it as follows now , taking the limit when , we obtain using and expressions for the coefficients from , we obtain finally , since ( or ) it follows that which gives the classical formula for saffman - taylor instability of the leading interface . similarly , we can recover the the classical formula for saffman - taylor instability of the trailin g interface by reversing the shooting technique ( see after ) , i.e. , first find the solution which is analogous to but satisfies the boundary condition at instead and then shoot to satisfy the boundary condition at ( i.e. , replace by a similar formula derived from the boundary condition at and follow the procedure ) . in this section , we study asymptotic limits ( and ) of the solutions to the eigenvalue problem . to this end, we consider the following form of two linearly independent solutions of kummer s equation .these are convenient for the asymptotic analysis presented below . where is euler s digamma function ( see abramowitz , chapter 13 ) . to this end, we follow the steps presented in the previous section [ subsection : constant - viscosity ] for the particular case ( or ) . from the transformation in, it follows that is the general solution of the ode where and are chosen such that where . substituting in the boundary conditions , we obtain the following linear systems of equations solving the above two systems and using the relations ( see abramowitz , chapter 13 ) we obtain where is the determinant of the coefficient matrix of the system .similar to the procedure of the previous section [ subsection : constant - viscosity ] , we find and so that and . therefore , it follows from and that and therefore and . substituting these constants in the function defined by , we obtain a solution of the ode that satisfies the boundary condition at of the eigenvalue problem .since and depend on the spectral parameter , it follows that the eigenvalues of the problem can be obtained by studying the following algebraic equation which is a reformulation of the boundary condition at of the eigenvalue problem . since the right - hand side of the above equation does not depend on , we need to study the asymptotic limits ( and ) of the lefthand side of ( [ al2 ] ) .notice that the expression above is given by ( see ) therefore , we first find the asymptotic approximations for , and in both cases below before estimating the ratio using for its use in .below , we write and where and are given by ( see ) , 0.1truein ( when ) : it follows from abramowitz and stegun that using the identities from ( [ relspecialfunction ] ) we obtain using , and the relation in the expression for , we obtain which can be written as where ( see ) . 
using similar arguments it follows that for ,see ( [ coef2 ] ) for the dependence of and of the coefficient and .thus , using the above asymptotic results for the coefficient and and the asymptotic results for the confluent hypergeometric functions given in ( [ asymphypergeometric1 ] ) and ( [ asymphypergeometric2 ] ) , we get substituting this in , we obtain therefore , equation ( [ al2 ] ) becomes . using and expressions for the coefficients from , we obtain which is the classical formula for saffman - taylor instability of the leading interface . similarly , we can also recover the the classical formula for saffman - taylor instability of the trailing interface by reversing the shooting technique as discussed at the end of section [ subsection : constant - viscosity ] .0.1truein ( when ) : similar to the previous case , we will first need to get asymptotic approximations for , and in this limit .notice that in this case , singularities of the confluent hypergeometric function of the second kind will arise .now , we give the following asymptotic results from abramowit z and stegun where we recall that and are defined by . similar to the calculations of the previous case , we present the dominant terms of the left hand side of ( [ al2 ] ) .it is worth pointing out that due to , the derivative of the confluent hypergeometric function of the second kind is dominant . from the definition of the coefficients and given in and the asymptotic results presented in , we obtain we remark that and therefore the asymptotic result for follows from the definition of the coefficient and , see ( [ coef2 ] ) . from the forms of and , we get therefore similarly , we obtain using , , and , it follows that from the definition of and given in we obtain and therefore ,\ ] ] where and .it then follows that \lambda+\alpha_{2}(k).\ ] ] using this in equation ( [ al2 ] ) , we obtain \lambda+\alpha_{2}(k)=\beta_{1}(k)\lambda-\beta_{2}(k)\ ] ] which is equivalent to after substituting the values of , , and and simplifying we obtain therefore , which is the formula for the growth rate of an interface with surface tension , which is what should be expected in this limit . thus we recover the classical formula for the growth rate in this limit .we converted a non - standard eigenvalue problem arising in the linear stability analysis of a three - layer hele - shaw model of enhanced oil recovery to a boundary value problem for kummer s equation when the middle layer has a linear viscous profile .we presented the general solution in terms of frobenius series and discussed the convergence properties of these series solutions .we also formally gave the dispersion relation implicitly through the existence criterion for non - trivial solutions . 
in order to recover the well - known physical solutions for some limiting cases , we rewrote the general solutions using a different set of fundamental solutions and analyzed these for those limiting cases : ( i ) when the viscous profile of the middle layer approaches a constant viscosity , both in the case of a fixed - length middle layer and also as the length of the middle layer approaches infinity ; and ( ii ) when the length of the middle layer approaches zero . we showed that we were thus able to recover the correct physical solutions . this paper was made possible by an nprp grant # 08 - 777 - 1 - 141 to one of the authors ( prabir daripa ) from the qatar national research fund ( a member of the qatar foundation ) . the second author ( oscar orellana ) acknowledges financial support through this grant for travel to tamuq , qatar , for the two - day `` international workshop on enhanced oil recovery and porous media flows '' organized by the first author ( prabir daripa ) during july 31 and august 1 of 2013 . the work of the second author ( oscar orellana ) was also supported in part by fondo nacional de desarrollo cientifico y tecnologico ( fondecyt ) under grant 1141260 and universidad tecnica federico santa maria , valparaiso , chile . the statements made herein are solely the responsibility of the authors . kummer s equation has the general form where and the two linearly independent solutions are and , where the general expression for is given by , while the linearly independent second solution is similarly given by a series which can be easily constructed by the method of frobenius .
we present a non - standard eigenvalue problem that arises in the linear stability of a three - layer hele - shaw model of enhanced oil recovery . a nonlinear transformation is introduced which allows reformulation of the non - standard eigenvalue problem as a boundary value problem for kummer s equation when the viscous profile of the middle layer is linear . using the existing body of works on kummer s equation , we construct an exact solution of the eigenvalue problem and provide the dispersion relation implicitly through the existence criterion for the non - trivial solution . we also discuss the convergence of the series solution . it is shown that this solution reduces to the physically relevant solutions in two asymptotic limits : ( i ) when the linear viscous profile approaches a constant viscous profile ; or ( ii ) when the length of the middle layer approaches zero . * mathematics subject classification ( 2010 ) : * 76e17 , 34l10 , 34l15 * keywords : * hele - shaw flows , non - standard eigenvalue problem , kummer s equation , linear stability
increasingly often , researchers are confronted with monitoring the states of nodes in large computer , social , or power networks where these states dynamically change due to viruses , rumors , or failures that propagate according to the graph topology . this class of network dynamics has been extensively modeled as a percolation phenomenon , where nodes on a graph can randomly `` infect '' their neighbors .percolation across networks has a rich history in the field of statistical physics , computer science , and mathematical epidemiology . here, researchers are typically confronted with a network , or a distribution over the network topology , and extract fixed point attractors of node configurations , thresholds for phase transitions in node states , or distributions of node state configurations . in the field of fault detection , the nodes or edges can `` fail '' , and the goal is to activate a subset of sensors in the network which yield high quality measurements that identify these failures . while the former field of research concerns itself with extracting _offline _ statistics about properties of the percolation phenomenon on networks , devoid of any measurements , the latter field addresses _ online _ measurement selection tasks . here, we propose a methodology that actively tracks a causal markov process across a complex network ( such as the one in figure [ sf ] ) , where measurements are adaptively selected .we extract conditions such that the updated posterior probability of all nodes `` infected '' is driven to one in the limit of large observation time .in other words , we derive conditions for the existence of an epidemic threshold on the updated posterior distribution over the states . the proposed percolation threshold should more accurately reflect the true conditions that cause a phase transition in a network , e.g. , node status changing from healthy / normal to infected / failed , than traditional thresholds derived from conditions on predictive distributions which are devoid of observations or controls . since most practical networks of interest are large , such as electrical grids , it is usually infeasible to sample all nodes continuously , as such measurements are either expensive or bandwidth is limited . given these , or other resource constraints, we present an information theoretic sampling strategy that selectively targets specific nodes that will yield the largest information gain , and thus , better detection performance . the proposed sampling strategy balances the trade - off between trusting the _ predictions _ from the known model dynamics ( from percolation theory ) and expending precious resources to select a set of nodes for measurement . we present the adaptive measurement selection problem and give two tractable approximations to this subset selection problem based upon the joint and marginal posterior distribution , respectively .a set of decomposable bayesian filtering equations are presented for this adaptive sampling framework and the necessary tractable inference algorithms for complex networks are discussed .we present analytical worst case performance bounds for our adaptive sampling performance , which can serve as sampling heuristics for the activation of sensors or trusting predictions generated from previous measurements . 
to the author s knowledge ,this is the first attempt to extract a percolation threshold of an actively monitored network using the updated posterior distribution instead of the observation independent predictive distributions .the objective of actively monitoring the node network is to recursively update the posterior distribution of each hidden node state given various measurements .specifically , the next set of measurement actions ( nodes to sample ) , , at next discrete time are chosen such that they yield the highest quality of _ information _ about the hidden states .the condition on simulates the reality of fixed resource constraints , where typically only a small subset of nodes in a large network can be observed at any one time . here , the hidden states are discrete random variables that correspond to the states encoded by the percolation process on the graph . here , the graph , with representing the set of nodes and corresponding to the set of edges .formally , we will assume a state - space representation of a discrete time , finite state , partially observed markov decision process ( pomdp ) .here , represents the joint hidden states , e.g. , healthy or infected represents the observed measurements obtained at time , e.g. , biological assays or pinging an ip address , and represents the actions taken at time , i.e. , which nodes to sample . here , , continuous / categorical valued vector of measurements , which is induced by action , , with confined to be the set of all individuals in the graph , and .since the topology of encodes the direction of `` flow '' for the process , the state equations may be modeled as a decomposable partially observed markov process : here , is the neighborhood of , is a non - random vector - valued function , is measurement noise , and is a stochastic equation encoding the transition dynamics of the markov process ( see figure [ 2tbn ] for a two node graphical model representation ) . and for ,width=168 ] in our proposed framework for actively monitoring the hidden node states in the network , the posterior distribution is the sufficient statistic for inferring these states .the general recursion for updating the joint posterior probability given all past and present observations is given by the standard bayes update formula : with the chapman - kolmogorov equations provide the connection between the posterior update ( [ intract ] ) and the distribution resulting from the standard percolation equations .in the former , the updates are conditional probabilities that are conditional on past observations , while in the latter , the updates are not dependent on observations .the local interactions in the graph imply the following conditional independence assumptions : where the likelihood term is defined in ( [ y ] ) and the transition dynamics are defined in ( [ z ] ) .this decomposable structure allows the belief state ( posterior excluding time observations ) update , for the node in , to be written as : with the parent set , .unfortunately , for highly connected nodes in , this marginal update becomes intractable .it thus must be approximated . in most real world situations ,acquiring measurements from all nodes at any time is unrealistic , and thus , a sampling policy must be exploited for measuring a subset of nodes . 
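a minimal sketch of the two steps of the decomposable filter — a mean - field prediction through the transition dynamics ( [ z ] ) followed by a bayes measurement update through the sensor model ( [ y ] ) — is given below . the reed - frost style infection term , the recovery probability and the gaussian sensor parameters are assumptions chosen for illustration , not the exact model of the paper .

```python
import numpy as np

def predict(belief, A, beta, delta):
    """mean-field prediction of the marginal infection probabilities: a node stays
    infected with probability (1 - delta), or becomes infected unless every
    neighbour fails to transmit (reed-frost style coupling); assumed form."""
    no_infection = np.prod(1.0 - beta * A * belief[None, :], axis=1)
    return (1.0 - delta) * belief + (1.0 - belief) * (1.0 - no_infection)

def update(belief, sampled, y, mu0=0.0, mu1=1.0, sigma=0.5):
    """bayes measurement update for the sampled nodes under a 1-d gaussian sensor model."""
    post = belief.copy()
    for i, obs in zip(sampled, y):
        l1 = np.exp(-(obs - mu1) ** 2 / (2 * sigma ** 2))   # likelihood if infected
        l0 = np.exp(-(obs - mu0) ** 2 / (2 * sigma ** 2))   # likelihood if susceptible
        post[i] = l1 * belief[i] / (l1 * belief[i] + l0 * (1 - belief[i]))
    return post

# one filtering cycle on a toy 4-node chain
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
belief = np.full(4, 0.1)
belief = predict(belief, A, beta=0.3, delta=0.2)
belief = update(belief, sampled=[2], y=[0.9])
print(belief)
```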
since we are concerned with monitoring the states of the nodes in the network , an appropriate reward is the expected information gain between the _ updated _ posterior , , and the belief state , : \ ] ] with -divergence \right)\ ] ] for distributions and with identical support . the reward in ( [ ig ] ) has been widely applied to multi - target , multi - sensor tracking for many problems including , sensor management and surveillance .note that , where is the kullback - leibler divergence between and .the expectation in ( [ ig ] ) is taken with respect to the conditional distribution given the previous measurements and actions . in practice , the expected information divergence in ( [ ig ] ) must be evaluated via monte - carlo methods .also , the maximization in ( [ ig ] ) requires enumeration over all actions ( for subsets of size ) , and therefore , we must resort to greedy approximations .we propose incrementally constructing the set of actions at time , , for , according to : .\ ] ] both ( [ ig ] ) and ( [ approxig ] ) are selecting the nodes to sample which yield maximal divergence between the percolation prediction distribution ( belief state ) and the updated posterior distribution , averaged over all possible observations .thus ( [ ig ] ) provides a metric to assess whether to trust the predictor and defer actions until a future time or choose to take action , sample a node , and update the posterior . since the expected -divergence in ( [ ig ] )is not closed form , we could resort to numerical methods for estimating this quantity .alternatively , one could specify an analytical lower - bound that could be used in - lieu of numerically computing the expected information gain in ( [ ig ] ) or ( [ approxig ] ) .we begin by noting that the expected divergence between the updated posterior and the predictive distribution ( conditioned on previous observations ) differ only through the measurement update factor , ( ( [ ig ] ) re - written ) : \nonumber\\ = \mathbb{e}_{g_{k|k-1 } } \left [ \frac{1}{\alpha-1 } \mbox{log } \\mathbb{e}_{p_{k|k-1 } } \left [ \left ( \frac { f_k } { g_{k|k-1 } } \right)^{\alpha } \right ] \right]\label{alphadiv}.\end{gathered}\ ] ] so , if there is significant overlap between the likelihood distributions of the observations , the expected divergence will tend to zero , implying that there is not much value - added in taking measurements , and thus , it is sufficient to use the percolation predictions for inferring the states. it would be convenient to interchange the order of the conditional expectations in ( [ alphadiv ] ) .it is easily seen that jensen s inequality yields the following lower bound for the expected information gain \nonumber\\ \ge \frac{1}{\alpha-1 } \mbox{log } \\mathbb{e}_{p_{k|k-1 } } \left [ \mathbb{e}_{g_{k|k-1 } } \left [ \left ( \frac { f_k } { g_{k|k-1 } } \right)^{\alpha } \right ] \right].\end{gathered}\ ] ] here , the inner conditional expectation can be obtained from , which has a closed form for common distributions ( e.g. , multivariate gaussians ) .for tracking the percolation process across , we have discussed recursive updating of the belief state . however , computing these updates exactly is in general intractable . 
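the greedy selection in ( [ approxig ] ) can be approximated node by node with a monte - carlo estimate of the expected kullback - leibler gain ( the limiting case of the divergence above ) . the binary sensor with flip probability eps used below is a simplifying assumption that keeps the sketch short ; in the paper the expectation is over the continuous sensor model .

```python
import numpy as np

rng = np.random.default_rng(2)

def expected_gain(p, eps=0.1, n_mc=200):
    """monte-carlo estimate of E[ KL(posterior || belief) ] for one node with belief p,
    under an illustrative binary sensor that flips the true state with probability eps."""
    gain = 0.0
    for _ in range(n_mc):
        x = rng.random() < p                        # simulated true state drawn from the belief
        y = x if rng.random() > eps else (not x)    # noisy observation
        l1, l0 = (1 - eps, eps) if y else (eps, 1 - eps)
        post = l1 * p / (l1 * p + l0 * (1 - p))
        gain += post * np.log(post / p) + (1 - post) * np.log((1 - post) / (1 - p))
    return gain / n_mc

belief = rng.uniform(0.01, 0.99, size=50)           # current marginal beliefs for 50 nodes
budget = 5
chosen = np.argsort([-expected_gain(p) for p in belief])[:budget]
print("nodes selected for measurement:", chosen)
```

as one would expect , the largest expected gains are obtained for nodes whose beliefs are most uncertain , so the greedy rule concentrates the measurement budget where the predictions are least trustworthy .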
for the remainder of the paper, we will use ( [ y ] ) and ( [ z ] ) to directly update the marginal posterior distribution using the following matrix representation : with updated marginal posterior ^t ] with .note that for , , and .given that we can find an efficient way of updating , according to the transition dynamics ( [ z ] ) , we can solve a modified version of ( [ approxig ] ) , for : \ ] ] one interesting property of the bayesian filtering equations is that the updated posterior can be written as a perturbation of the predictive percolation distribution through the following relationship ( omitted for clarity ) : hence , when the sensors do a poor job in discriminating the observations , , we have .it is of interest to determine when there is significant difference between the posterior update and the prior update specified by the standard percolation equations .recall that the updated posterior is , in the mean , equal to the predictive distribution , = \textbf{p}_{k|k-1} ] ( at element ) . without loss of generality, we can assume the eigenvalues of are listed in decreasing order , . now rewriting ( [ bs ] ), we have where and the variables corresponds to the higher order terms . inserting ( [ spectral ] ) into ( [ recursion ] ) , and matching the largest eigenvalues of with we obtain thus , at large , the dominant mode of the posterior goes as ( the modes in decay faster than the dominant mode presented above ) .we can see that if the spectral radius of is less than one , , then for large , , which is the unique absorbing state of the system .this epidemic threshold condition on has been previously established for unforced -percolation processes .however , in the tracking framework , the rate at which the posterior decays to the _ susceptible _ state is perturbed by an additional measurement dependent factor , .this measurement - dependent dominant mode of the posterior should more accurately model the true dynamic response of the node states better than that in since the posterior better tracks the truth than the unforced predictive distribution .additionally , this dominant mode of the updated posterior distribution allows one to simulate the response of the percolation threshold to intervention and control actions which are designed to increase the threshold , such that the probability of epidemics is minimized .markov chain for node interacting with the infected states of its neighbors , width=172 ] here , we present results of simulations of our adaptive sampling for the active tracking of a causal markov ground truth process across a random 200 node , scale - free network ( figure [ sf ] ) . since the goal in tracking is to accurately classify the states of each node , we are interested in exploring the detection performance as the likelihood of an epidemic increases through the percolation threshold for this graph. one would expect different phase transitions ( thresholds ) in detection performance for various sampling strategies , ranging from the lowest threshold for unforced percolation distributions to highest for a continuous monitoring of all nodes .we will present a few of these detection surfaces that depict these phase transitions for the unforced percolation distribution , random node sampling , and our proposed information theoretic adaptive sampling of . 
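the spectral condition above is easy to evaluate for any given graph : the threshold is the reciprocal of the spectral radius of the adjacency matrix . a short sketch ( the barabasi - albert graph below is only a stand - in for the 200 - node scale - free network of figure [ sf ] ) :

```python
import numpy as np
import networkx as nx

# percolation / epidemic threshold of a graph: tau = 1 / spectral_radius(A);
# infection intensities above tau tend to produce epidemics, below tau they die out.
G = nx.barabasi_albert_graph(200, 2, seed=0)
A = nx.to_numpy_array(G)
rho = max(abs(np.linalg.eigvals(A)))
print("spectral radius:", rho, "  threshold tau = 1/rho:", 1.0 / rho)
```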
here , we will restrict our simulations to the two - state model of mathematical epidemiology described above . the sensor models ( [ y ] ) are two - dimensional multivariate gaussians with common covariance and shifted mean vector . the transition dynamics of the individual ( [ z ] ) , for the model , is given by : where is the indicator function of being infected at time . the transmission term between and is known as the reed - frost model . since the tail of the degree distribution of our synthetic scale - free graph contains nodes with degree greater than 10 , updating ( [ marginal ] ) exactly is unrealistic and we must resort to approximate algorithms . here , we will assume the mean field approximation used by for this model , resulting in the marginal belief state update ( [ mfapprox ] ) for the probability that the node is infected . equation ( [ mfapprox ] ) allows us to efficiently update the marginal belief state directly for all nodes , which are then used for estimating the best measurements using ( [ ig_marg ] ) . ( figures [ aur_perc ] , [ aur_rnd ] and [ aur_adaptive ] show the aur detection surfaces as functions of the percolation intensity and time for the unforced , random and adaptive sampling strategies , respectively . ) as we are interested in detection performance as a function of time and epidemic intensity , the area under the roc curve ( aur ) is a natural statistic to quantify the detection power ( detection of the infected state ) . the aur is evaluated at each time , each percolation intensity parameter and over 500 random initial states of the network . for the model , is the single parameter ( aside from the topology of the graph ) that characterizes the intensity of the percolation / epidemic . it is useful to understand how the detection performance varies as a function of epidemic intensity , as it indicates how well the updated posteriors are playing `` catch - up '' in tracking the true dynamics on the network . for this model , the percolation threshold is defined as , where is the spectral radius of the graph adjacency matrix , . values of greater than imply that any infection tends to become an epidemic , whereas values less than imply that small epidemics tend to die out . for the network under investigation ( figure [ sf ] ) , . we see from figure [ aur_perc ] that a phase transition in detection power ( aur ) for the unforced percolation distribution does indeed coincide with the epidemic threshold . while the epidemic threshold for the random and adaptive sampling policies is still , the measurements acquired allow the posterior to better track the truth , but only up to their respective phase transitions in detection power ( see figures [ aur_rnd ] and [ aur_adaptive ] ) . figure [ aur_adaptive ] confirms that the adaptive sampling better tracks the truth than randomly sampling nodes , while pushing the phase transition in detection performance to higher percolation intensities , . we see that the major benefit of the adaptive sampling is apparent when conditions of the network are changing moderately , at medium epidemic intensities . beyond a certain level of percolation intensity , more resources will need to be allocated to sampling to maintain a high level of detection performance . a heuristic sampling strategy based on the topology of was also explored ( results not shown ) by sampling the `` hubs '' ( highly - connected nodes ) .
however , detection performance was only slightly better than random sampling and poorer than our adaptive sampling method .-axis ) of a given degree ( -axis ) over time ( -axis ) for adaptive sampling strategy : a. ) , b. ) , and c. ) ,width=345 ] it is often useful for developing sampling heuristics and offline control / intervention policies to inspect what _ type _ of nodes , topologically speaking , is the adaptive sampling strategy targeting , under various network conditions ( different values of ) . in figure [ sampling ] , the relative frequency of nodes sampled with a particular degree is plotted against time ( under the adaptive sampling strategy ) for three different values of ( over 500 random initial conditions of the network ) . for the larger of the three values explored ( )we see that the sampling is approximately uniform across the nodes of each degree on the graph ( figure [ sampling](c ) ) .therefore , under extremely intense epidemic conditions , the adaptive sampling strategy is targeting all nodes of each degree equally , and therefore , it is sufficient to perform random sampling .for the two lower values of , figure [ sampling](a ) and figure [ sampling](b ) ( near ) , we see that adaptive policy targets highly connected nodes more frequently than those of lesser degree and thus , it is more advantageous to exploit such a strategy , as compared to random sampling ( see aur surface in figure [ aur_adaptive ] ) .in this paper , we have derived the conditions for a network specific percolation threshold using expressions for the updated posterior distribution resulting from actively tracking the process .these conditions recover the unforced percolation threshold derived in but with an additional factor involving sensor likelihood terms due to measurements obtained throughout the monitoring . a term of the form ( derived in ( [ bound ] ) )was shown to be the dominant mode of the updated posterior dynamic response to active intervention of immunizing the nodes ( holding node states constant ) .the conditions of the percolation using the updated posterior should more accurately model the phase transition corresponding to a particular disease trajectory and therefore , enable a better assessment of immunization strategies and any subsequent observations resulting from such actions .the framework presented above , along with the new posterior percolation threshold , should provide additional insight into active monitoring of large complex networks under resource constraints .in case ( 2 ) , when , we can re - write ( [ bayes_geo ] ) in terms of an _ alternating geometric series _ : p_{k|{k-1 } } \nonumber \\ & \le & \frac { f^{(1)}_k } { f^{(0)}_k } \left [ 1 + \frac { |\delta f_k| } { f^{(0)}_k } p_{k|{k-1 } } \right ] p_{k|{k-1 } } \end{aligned}\ ] ] where we have used the fact that . recalling that for , we have p_{k|{k-1}}.\ ] ] p_{k|{k-1 } } \nonumber \\ = \frac { f^{(1)}_k } { f^{(0)}_k } \left [ 1 + \frac { |\delta f_k| } { f^{(0)}_k } p_{k|{k-1 } } + \sum_{l=2}^{\infty } \left(\frac { |\delta f_k| } { f^{(0)}_k } p_{k|{k-1 } } \right)^l \right ] p_{k|{k-1}}. \end{gathered}\ ] ]
percolation on complex networks has been used to study computer viruses , epidemics , and other casual processes . here , we present conditions for the existence of a network specific , observation dependent , phase transition in the updated posterior of node states resulting from actively monitoring the network . since traditional percolation thresholds are derived using observation independent markov chains , the threshold of the posterior should more accurately model the true phase transition of a network , as the updated posterior more accurately tracks the process . these conditions should provide insight into modeling the dynamic response of the updated posterior to active intervention and control policies while monitoring large complex networks .
mercury is the target of two current space missions ( see e.g. ) .the first one , messenger ( nasa ) , already performed three flybys on january 14 , october 6 , 2008 , and september 29 , 2009 , before orbit insertion in march 2011 .the second one , bepicolombo ( esa / jaxa ) , is planned to be launched in 2014 and to reach mercury in 2020 .the preparation of these two missions motivated an in - depth study of the rotation of mercury .the rotation of mercury is a unique case in the solar system because of its spin - orbit resonance , mercury performing exactly 3 rotations during 2 revolutions about the sun .it corresponds to an equilibrium state known as cassini state 1 .recently , radar earth - based measurements by detected a 88-day longitudinal libration of mercury with an amplitude of arcsec .this amplitude being nearly twice too high to be consistent with a rigid mercury , it is the signature of an at least partially molten core .if we consider mercury as a 2-layered body with a rigid mantle and a spherical liquid core that does not follow the short - period ( days ) excitations and does not interact with the mantle , we can derive from this amplitude the inertia of the mantle plus crust .in particular , naming the inertial polar momentum of the mantle and the inertial momenta of mercury , we have : where is a second - degree coefficient of the gravitational potential of mercury , its mass , its radius , and its orbital eccentricity .it leads to with .if we take and we get .recent studies in one and two degrees of freedom have theoretically estimated the longitudinal librations of mercury .they highlighted in particular the possibility of a resonance with the jovian perturbation , whose period is years , that could potentially raise the amplitude of a long - term ( years ) libration .other periodic terms of a few arcsec have been estimated .this model also predicts that the latitudinal motion of mercury should adiabatically follow the cassini state 1 ( , ) , with short - period librations of about 10 milli - arcsec . in all these studies ,the core - mantle interactions are neglected .recently , explored the dynamics of the rotation of mercury , including core - mantle interactions in the sonyr model .we here propose an alternative study , starting from the hamiltonian formulation of and highlighting the dynamical implications of core - mantle interactions , by considering mercury as composed of a rigid mantle and a triaxial ellipsoidal cavity filled with inviscid fluid of constant uniform density and vorticity .the differential equations ruling the motion of a 2-layered body with a rigid mantle and a liquid non - spherical core have been derived by and .more recently , gave a hamiltonian formulation of this problem , that applied to the rotational dynamics of io , assuming that the core and the mantle were aligned and proportional .here , we generalize the model of henrard , allowing the core to be non - proportional and non - spherical . [ cols="^,^ " , ] we see on this table the overwhelming predominance of the -d contribution , i.e. 
the rotational period .as expected , it is excited by the proximity of the secondary resonance with the proper frequency .we can see that the amplitude associated is roughly the mean value of as can be read from figure [ fig : polarcore ] ( arcsec = arcmin ) .we also note some similarities with the precession of the rotation axis of the mantle ( table [ tab : polarmantle ] ) , the frequencies involved being the same ones .in this paper we have investigated the 4-degree of freedom behaviour of a rotating mercury composed of a rigid mantle and a fluid ellipsoidal core , using both analytical and numerical tools with good agreement .we have emphasized the influence of the proximity of a resonance with the spin of mercury , that can raise the velocity field of the fluid constituting the core .we can not exclude a possibility of indirect detection of this effect by measuring mercury s magnetic field .we have also shown the variations of the behaviour of the obliquity of the mantle with respect to the polar flattening of the core , this flattening being linked with the distance of the system from the resonance .these variations should be negligible unless the core is trapped into the resonance with the spin of mercury .we have also shown that neither the observations of the longitudinal motion of mercury , nor of its polar one , could be inverted to get the shape of the core .however , they will give information on its size ( i.e. the parameter ) .future works should take the viscosity of the fluid into account .it is assumed to alter the response of the core of mercury to slow ( i.e. -year period ) excitations , the planet being therefore assumed as rigid . as a consequence ,a study of the spectral response of the rotation of mercury on periodic solicitations with respect to the viscosity is worth studying .this study benefited from the financial support of the contract prodex c90253 `` romeo '' from belspo .benot noyelles was also supported by the agenzia spaziale italiana ( asi grant `` studi di esplorazione del sistema solare '' ) , and thanks alessandra celletti and luciano iess for their reception in rome .we also thank nicolas rambaux , marie yseboodt and tim van hoolst for fruitful discussions .seidelmann p.k . ,archinal b.a . ,ahearn m.f . ,conrad a. , consolmagno g.j . , hestroffer d. , hilton j.l . , krasinsky g.a . , neumann g. , oberst j. , stooke p. , tedesco e.f . , tholen d.j . ,thomas p.c . , williams i.p ., 2007 , celes .astron . , 98 , 155
mercury is the target of two space missions : messenger ( nasa ) which orbit insertion is planned for march 2011 , and esa / jaxa bepicolombo , that should be launched in 2014 . their instruments will observe the surface of the planet with a high accuracy ( about 1 arcsec for bepicolombo ) , what motivates studying its rotation . mercury is assumed to be composed of a rigid mantle and an at least partially molten core . we here study the influence of the core - mantle interactions on the rotation perturbed by the solar gravitational interaction , by modeling the core as an ellipsoidal cavity filled with inviscid fluid of constant uniform density and vorticity . we use both analytical ( lie transforms ) and numerical tools to study this rotation , with different shapes of the core . we express in particular the proper frequencies of the system , because they characterize the response of mercury to the different solicitations , due to the orbital motion of mercury around the sun . we show that , contrary to its size , the shape of the core can not be determined from observations of either longitudinal or polar motions . however , we highlight the strong influence of a resonance between the proper frequency of the core and the spin of mercury that raises the velocity field inside the core . we show that the key parameter is the polar flattening of the core . this effect can not be directly derived from observations of the surface of mercury , but we can not exclude the possibility of an indirect detection by measuring the magnetic field . [ firstpage ] planets and satellites : individual : mercury planets and satellites : interior
diverse physical phenomena such as traffic flow , force propagation in granular media , clustering of buses , aggregation and fragmentation of clusters , phase separation dynamics , shaken granular gases and sandpile dynamics share one common feature their microscopic dynamics involves stochastic transport of ` mass ' , or some conserved quantity , from one point in space to another . to simplify analysis ,continuous space is typically replaced by ( or `` binned '' into ) discrete sites .several such lattice models of stochastic mass transport have been introduced and studied , most notably the zero - range process ( zrp ) , and the asymmetric random average process ( arap ) .these models are defined by specifying the microscopic dynamics , i.e. the basic stochastic rules for mass transport .given the dynamics , there are two principal theoretical issues : ( i ) to identify the steady state if there is any , i.e. to find the invariant measure and ( ii ) once the steady state is known , to understand various physical properties in the steady state , e.g. the phenomenon of ` condensation ' that happens when a finite fraction of the total mass condenses onto a single site .it turns out that the step ( i ) itself is often very difficult and the exact steady state is known in only very few cases . in many of these known cases , the steady state is _factorisable_. this means that the steady state probability of finding the system with mass at site 1 , mass at site 2 etc is given by a product of ( scalar ) factors one factor for each of the sites of the system e.g. for a homogeneous system where all sites have equivalent connectivities where is a normalisation which ensures that the integral of the probability distribution over all configurations containing total mass is unity , hence \,\delta \!\left ( \sum_{i=1}^l m_i - m\right ) \;. \label{z}\ ] ] here , the -function has been introduced to guarantee that we only include those configurations containing mass in the integral .the single - site weights , are determined by the details of the mass transfer rules and for a heterogeneous system may depend on the site .the advantage of having a factorisable steady state is that the step ( ii ) mentioned above is often easier to carry out explicitly .this has been demonstrated recently by an exact analysis of the condensation phenomenon that occurs in a general class of mass transport models .this raises a natural question : when does the steady state in these mass transport models factorise ?this issue was recently addressed in the context of a sufficiently general ` mass transport model ' in one dimension , that includes , as special cases , the previously studied zrp , arap and the chipping model . in this modela mass resides at each site of a one dimensional lattice with periodic boundary conditions . at each time , a stochastic portion , of the mass at site , chosen from a distribution , is chipped off to site . 
the distribution was called the chipping kernel and by choosing its form appropriately one can recover the zrp , the arap and the chipping model of . even though the model above is defined in discrete time where all sites are updated in parallel , by appropriately choosing the chipping kernel it is easy to study the continuous time limit , which corresponds to a random sequential update sequence , as a special case . similarly , one can also recover , as a special case , the model with only discrete masses as in the zrp . thus the discrete time dynamics generalises continuous time dynamics , while a continuous mass variable generalises discrete mass . a natural question , first addressed in , is what should be the form of the chipping kernel for the final steady state to be factorisable . in that study , it was proved that the _ necessary and sufficient _ condition for a factorised steady state in the one dimensional directed case defined above is that the chipping kernel is of the form where and are arbitrary non - negative functions . then the single - site weight in eq . ( [ prob ] ) is given by . furthermore , given a chipping kernel , sometimes it is hard to verify explicitly that it is of the form ( [ phi1 ] ) and thereby to identify the functions and in order to construct the weight in eq . ( [ ffact ] ) . this problem was circumvented by devising a test to check if a given explicit satisfies the condition ( [ phi1 ] ) or not . further , if it `` passes this test , '' the weight can be found explicitly by a simple quadrature . finally , for any desired function , one can construct dynamical rules ( i.e. , ) that will yield in a factorised steady state . it was further demonstrated in ref . that the corresponding necessary and sufficient condition in the case of random sequential dynamics in continuous time can be easily obtained , by taking a suitable limit , from the condition for the parallel dynamics manifest in eq . ( [ phi1 ] ) . this is done by choosing the chipping kernel as \phi(\tilde m|m ) = \left [ 1 - dt \int_0^m \alpha(\sigma|m)\ , d\sigma \right ] \ , \delta(\tilde m ) + \alpha(\tilde m|m)\ , dt \label{ct1} for a small time increment , where is the dirac delta function . the function denotes the ` rate ' at which a mass is transferred from a site with mass to its right neighbour . note that the form in eq . ( [ ct1 ] ) ensures the normalization , . then , the necessary and sufficient condition for a factorisable steady state , derived from the more general condition in eq . ( [ phi1 ] ) , is that the rate must be of the following form where and are arbitrary non - negative functions . the corresponding steady state weight is then simply , . the condition ( [ phi1 ] ) for factorisability in the mass transport model was derived only in one dimension and also only for unidirectional mass transport ( from site to site ) . a natural question , therefore , is whether one can generalise this condition to higher dimensional lattices , or to arbitrary graphs where mass transport can take place , in general , between any pair of sites and .
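before moving to arbitrary graphs , the one - dimensional condition above is easy to check in simulation . the sketch below uses the simplest factorisable choice in which both arbitrary functions are set to one , so that the chipping kernel is uniform on [ 0 , m ] and the single - site weight is f ( m ) = m ; at unit density the corresponding grand - canonical single - site distribution is p ( m ) = 4 m exp ( -2 m ) , and the empirical histogram from a parallel - update simulation on a ring should approach it ( parameters are illustrative ) .

```python
import numpy as np

rng = np.random.default_rng(3)

L, density, T = 1000, 1.0, 4000
m = np.full(L, density)                        # flat initial mass profile at unit density

samples = []
for t in range(T):
    chipped = rng.uniform(0.0, m)              # uniform kernel: x(.) = y(.) = 1
    m = m - chipped + np.roll(chipped, 1)      # parallel update, mass moves one site to the right
    if t > T // 2 and t % 10 == 0:
        samples.append(m.copy())
samples = np.concatenate(samples)

hist, edges = np.histogram(samples, bins=60, range=(0.0, 5.0), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
predicted = 4.0 * centres * np.exp(-2.0 * centres)   # p(m) for f(m) = m at unit density
print("max deviation from predicted p(m):", np.abs(hist - predicted).max())
```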
a natural question ,therefore , is whether one can generalise this condition to higher dimensional lattices , or to arbitrary graphs where mass transport can take place , in general , between any pair of sites and .recently , greenblatt and lebowitz were able to derive a sufficiency condition for factorisability in the mass transport model with nearest neighbour mass transport on a regular -dimensional lattice with periodic boundary conditions , but considered only the case of random sequential dynamics .they showed that if is the probability of mass being chipped off a site with mass to a nearest neighbour in the direction ( there being nearest neighbours on a hypercubic lattice in dimensions ) , then the sufficient condition for factorisability is a direct generalization of the condition in eq .( [ ct2 ] ) , namely that the rate function must be of the form for each , where for each and are arbitrary functions .the steady state weight is simple , .however , it was not possible to prove that the condition ( [ gl1 ] ) is also necessary , except in the case of generalized zero - range processes .the purpose of this paper is to derive a more general sufficiency condition , valid for arbitrary graphs where the mass transport takes place not necessarily between nearest neighbours and for the more general case of parallel dynamics .our results boil down to equations ( [ phi ] ) and ( [ cond ] ) .the former yields a sufficiency condition and the latter an additional consistency condition which must be satisfied . in the special case of random sequential dynamics on a regular hypercubic lattice with nearest neighbour mass transport , our sufficiency condition reduces to the one derived by greenblatt and lebowitz .our results , however , are considerably more general . the paper is organized as follows . in section 2, we define the mass transport model on an arbitrary graph . in section 3 , we derive a sufficient condition on the chipping kernels , valid for parallel dynamics on an arbitrary graph , that would gaurantee that the steady state is factorisable .we show that there are some additional consistency conditions that need to be satisfied in general and we demonstrate explicitly how these conditions are satisfied in several examples . in section 4 , we extend our approach to random sequential dynamics on an arbitrary graph .we conclude with a summary and discussion in section 5 .we consider a fixed arbitrary graph consisting of nodes labelled and a set of directed links or channels between certain pairs of nodes . at a given time ,a node has mass where are continuous variables .we consider discrete time dynamics where at each step the masses at all the nodes are updated in parallel according to the following rules .we first define a mass - transfer matrix ] is identically zero at all times if there is no directed link from site to site on .if there is a directed link from to , then is a non - negative stochastic variable that represents the mass transferred from site to site during one update . in fig .1 we give an example which we refer to for illustrative purposes throughout this section . the diagonal element represents the mass that stays at site at the end of the single update .we assume that the dynamics of mass transport conserves the total mass .thus if represents the masses before the update , by virtue of mass conservation , the row sums of the matrix ] are permanently zero ( when there is no directed channel available for mass transfer between a pair of sites ) . 
on the other hand ,when there is an available channel from to , the matrix element is a stochastic variable which is chosen in the following way .for each node , we define a generalized ` chipping kernel ' that represents the joint distribution of the masses transported from site in a single update . here the set runs over only those sites which are connected to via a directed link , and in addition it includes the diagonal element . in other words , is simply the set of non - zero elements in the -th row of the mass - transfer matrix ] is shown , whose element denotes the stochastic mass transferred from site to site in one single update , provided there is a directed link between the two sites .if there is no directed link , the corresponding matrix element is always identically zero .the diagonal element is the amount of mass that stays at site during the update .the row sum and the column sum associated with any node , and , represent respectively the mass at before and after the update.,width=384 ] the chipping kernels thus specify the dynamics , i.e. the mass update rules .given these kernels , we next ask what is the steady state joint probability distribution of masses , i.e. .in particular , our goal is to determine the properties of the chipping kernels required in order to guarantee that the steady state joint probability distribution is factorisable , i.e. of the form \ ,\delta\left(\sum_{i=1}^l m_i -m\right ) \label{gpm1}\ ] ] where the normalization constant is given by \ , \delta \left(\sum_{i=1}^l m_i -m\right ) . \label{gpt1}\ ] ] besides , if the steady state factorises as in eq .( [ gpm1 ] ) , we would also like to know the single - site weights in terms of the prescribed chipping kernels .note that on an arbitrary graph , the single - site weights are , in general , different from site to site .hence there is an additional subscript in . on a homogeneous graph , where all sites have equivalent connectivities ,the weight function does not depend on the site explicitly as in eq .( [ prob ] ) .since the dynamics conserves the total mass , at any time we can write where is the total mass and is the unnormalized weight at time .below , we will first write down the general evolution equation of the weight under the mass transport rules prescribed by the chipping kernels . while the notations that we will use for a general graph may seem a bit complicated , it is instructive to keep the simple example in fig . 1 in mind and use it as a guide to the general notations .let us consider a single update from time to time .let denote the masses at time before the update and denote the masses at time after the update . in terms of the elements of the mass - transfer matrix ] . as in the general case , this will then lead to the sufficiency condition ( [ phi ] ) . 
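as a concrete illustration of the update rule just described ( a mass - transfer matrix whose row i is drawn from the chipping kernel of site i and whose column sums give the updated masses ) , here is a minimal python sketch of one parallel update on an arbitrary directed graph ; the dirichlet split used below is an arbitrary placeholder kernel and is not the factorisable kernel discussed in the text .

    import numpy as np

    rng = np.random.default_rng(0)

    def parallel_update(masses, out_neighbours):
        # masses: dict node -> current mass m_i
        # out_neighbours: dict node -> nodes j reachable through a directed link i -> j
        nodes = list(masses)
        mu = {i: {} for i in nodes}            # mass-transfer matrix elements mu[i][j]
        for i in nodes:
            channels = [i] + out_neighbours.get(i, [])   # diagonal element mu_ii included
            # placeholder chipping kernel: split m_i at random (dirichlet) over the
            # available channels; the row sum automatically equals m_i (mass conservation)
            for j, frac in zip(channels, rng.dirichlet(np.ones(len(channels)))):
                mu[i][j] = frac * masses[i]
        # the new mass at node j is the column sum of the transfer matrix
        return {j: sum(mu[i].get(j, 0.0) for i in nodes) for j in nodes}

    # small illustrative graph: 1 -> 2, 1 -> 3, 2 -> 3, 3 -> 1
    masses = {1: 2.0, 2: 1.0, 3: 0.5}
    out = {1: [2, 3], 2: [3], 3: [1]}
    masses = parallel_update(masses, out)
    print(masses, sum(masses.values()))        # total mass is conserved (= 3.5)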
to prove that is also the most general form of the solution that one can write down for eq .( [ cg1 ] ) , we take logarithm on both sides of eq .( [ cg1 ] ) and then take derivatives with respect to and with .this gives for any .it is then easy to see that the most general solution of the partial differential equation ( [ cg2 ] ) is of the form , where are arbitrary functions .since the graph is homogeneous , we also have independent on the site index .thus with being an arbitrary function , is the only solution of eq .( [ cg1 ] ) that respects homogeneity .since this solution finally leads to the sufficiency condition , the uniqueness of its form guarantees then that eq .( [ phi ] ) is both necessary and sufficient .note that the consistency condition ( [ cond ] ) is automatically satisfied in this case .the sufficiency condition in eq .( [ phi ] ) and the associated consistency condition in eq .( [ cond ] ) , derived above for parallel update dynamics in discrete time , can be easily extended to the case of random sequential dynamics .this can be achieved by letting the probability of the chipping event in a time step so that , to leading order in for small , at most one chipping event can occur in the whole graph per update , i.e. the chipping events occur sequentially one per update . in addition , taking one can obtain the continuous time limit where chipping events occur with ` rates ' per unit time .thus , the corresponding sufficiency condition for the random sequential dynamics will be specified in terms of the chipping ` rate ' kernels , rather than the chipping probablity kernels in parallel dynamics . to translate the sufficiency condition in eq .( [ phi ] ) valid for the probability kernels into one that is valid for ` rate ' kernels , we first redefine the functions , for all , in the following way where are arbitrary functions .the diagonal functions are left unchanged . with this re - definition of , the steady state weight in eq .( [ f ] ) becomes + o(dt^2 ) \label{wrs1}\ ] ] where the sum over in the second term runs over all sites that feed into site on . using the re - defined in eq .( [ vrs1 ] ) one can similarly rewrite the chipping kernels in eq .( [ phi ] ) .since the diagonal elements play a special role , it is convenient to redefine the chipping kernel only in terms of non - diagonal elements , i.e. , where denotes the set of matrix elements in the row without the diagonal element .we are allowed to get rid of the diagonal element using the row sum , .for example , for the graph in fig . 1, we will rewrite , . substituting eq .( [ vrs1 ] ) in eq .( [ phi ] ) and taking the limit , one gets (m_i)\right]\ , \prod_{j\ne i } \delta(\mu_{ij } ) + \nonumber \\ & + & dt\ , \left[\sum_{j\ne i } \frac{x_{ij}(\mu_{ij})\ , v_{ii}(m_i-\mu_{ij})}{v_{ii}(m_i ) } \prod_{k\ne j}\delta(\mu_{ik})\right ] + o({dt}^2 ) \label{phirs1}\end{aligned}\ ] ] where (m)= \int_0^m x(\sigma ) y(m-\sigma ) d\sigma ] is automatically satisfied due to the translational invariance , . 
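for readability , the random sequential condition just obtained can be restated , as far as it can be reconstructed from the surviving fragments ( the exact index placement should be checked against the original ) , as

\[
  \alpha_{ij}(\mu|m) \,=\, \frac{x_{ij}(\mu)\, v_{ii}(m-\mu)}{v_{ii}(m)}\,,
  \qquad
  [x*y](m) \,=\, \int_0^m x(\sigma)\, y(m-\sigma)\,\mathrm{d}\sigma\,,
\]

with the single - site weights then built from the v_{ii} and the x_{ij} as in eq . ( [ wrs1 ] ) .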
it is easy to verify that the consistency condition in eq . ( [ consrs1 ] ) is automatically satisfied in the four example cases of section [ sec : spec ] ( as it should be since random sequential dynamics is just a limit of the discrete time case ) . let us do this explicitly for the case where the graph is a homogeneous hypercubic lattice with periodic boundary conditions and mass transfer takes place only between nearest neighbours . for a hypercubic lattice , from each site there are outgoing links to the nearest neighbours of . similarly , there are exactly incoming links to site from its nearest neighbours . then the sufficiency condition in eq . ( [ condf1 ] ) can be written in a simplified notation where the index runs over the directions , denotes the rate of transfer of mass from site with mass in the direction and denotes the function associated with the link . similarly , the consistency condition in eq . ( [ consrs1 ] ) can be rewritten as (m_i ) = \sum_{j\in { \rm neighbours\,\,\ , of}\,\,\ , i } [ x_{j ,- q}*v_{ii}](m_i ) .\label{conscubic}\ ] ] we next use the fact that the lattice is homogeneous , i.e. it is translationally invariant in all directions . clearly then due to translational invariance in the -th direction . in that case the condition in eq . ( [ conscubic ] ) is clearly satisfied at all . also , due to the translational invariance , the rate function does not depend on the site index . by the same requirement , . thus , the sufficiency condition in eq . ( [ condf1 ] ) simply reads , where and are arbitrary non - negative functions . the consistency conditions are automatically satisfied as proved above . the steady state single site weight is simply , , and naturally it does not depend on the site index explicitly . the condition in eq . ( [ conscubic2 ] ) is precisely that derived by greenblatt and lebowitz . thus , our general sufficiency condition , valid for an arbitrary graph , recovers this special case when is a homogeneous hypercubic lattice with nearest neighbour mass transport . we can also check that the steady state for the continuous time zero - range process on an arbitrary graph is recovered . the zero - range process involves discrete masses and is specified by the rates for a unit of mass to hop from site to . in the case where these rates are given by where is the total rate for the unit mass leaving site and is the probability that the random destination for a hop from site is , the steady state factorises with single - site weight where is the steady state probability of a single random walker moving from site to with probability . to make the connection between the forms ( [ zrpag ] ) and ( [ condf1 ] ) we identify inverting the latter equality to express in terms of yields , by virtue of ( [ wrs2 ] ) , the single - site weight ( [ zrpagf ] ) . finally the consistency condition ( [ consrs1 ] ) becomes and is satisfied with since we have .
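the zrp weights just described can be made concrete . in the standard zero - range conventions ( assumed here , since the formulas above were stripped ) the hopping rates are u(m) p_{ij} and the single - site weight is f_i(m) = w_i^m / prod_{k=1..m} u(k) , with w the stationary measure of a single random walker with kernel p_{ij} . a small python sketch :

    import numpy as np

    def zrp_weights(p, u, mmax):
        # p[i][j]: probability that a departing unit of mass at site i hops to site j
        # u(m)   : total hop rate out of a site holding m units of mass
        # standard zrp form assumed for the single-site weight:
        #     f_i(m) = w_i**m / prod_{k=1..m} u(k),
        # with w the stationary measure of a single random walker with kernel p
        p = np.asarray(p, dtype=float)
        evals, evecs = np.linalg.eig(p.T)
        w = np.real(evecs[:, np.argmax(np.real(evals))])
        w = w / w.sum()                        # stationary distribution: w p = w
        f = np.ones((len(p), mmax + 1))
        for m in range(1, mmax + 1):
            f[:, m] = f[:, m - 1] * w / u(m)
        return f

    # asymmetric three-site example; u(m) = m gives poisson-like weights
    p = [[0.0, 0.7, 0.3],
         [0.2, 0.0, 0.8],
         [0.5, 0.5, 0.0]]
    print(zrp_weights(p, u=lambda m: m, mmax=4))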
in this work we have derived the sufficient condition ( [ phi ] ) along with a consistency condition ( [ cond ] ) for factorisation of the general ( discrete time , continuous mass ) mass transport model on an arbitrary graph . in this case the single - site weight is given by ( [ f ] ) . we gave in section [ sec : spec ] specific examples of geometries where the additional consistency condition associated with the sufficient condition is automatically fulfilled . moreover on a complete graph with permutationally invariant chipping functions we showed that the sufficient condition is , in fact , also necessary . of course a significant improvement would be to generalize condition ( [ phi],[cond ] ) to a necessary and sufficient condition . to illustrate the challenges involved in accomplishing this task , we offer another simple example where we can derive a condition that is both necessary and sufficient . this example , a seemingly trivial generalization of the one in , involves a one - dimensional lattice with chipping to both nearest neighbour sites and we also assume that the chipping kernel is translationally invariant ( in example 1 of section [ sec : spec ] ) . then , due to the translational invariance , ( [ lt ] ) becomes using a similar procedure to that employed in example 4 of section [ sec : spec ] taking the logarithm of ( [ nesex ] ) then successive derivatives with respect to and we find that the general solution to ( [ nesex ] ) is where , , and are arbitrary functions , independent of . inverting the laplace transforms to give would yield a considerably more complicated form for the chipping kernel than ( [ phi ] ) . in principle , therefore , there could be a whole family of chipping kernels , generated by the choice of , which give rise to the same steady state ( i.e. the same single - site weight ) as that for . thus , could be thought of as a `` gauge function '' . in addition , for each choice of , one has to ensure the consistency condition , namely that is correctly normalized to unity . on the face of it , it may appear that this imposes a formidable constraint on the choice of . however , we now show that this consistency condition does not impose any additional constraint on . in other words , if the consistency condition is ensured for , then it is automatically satisfied for all other choices of . to see this , consider the expression of the chipping kernel ( [ phiex ] ) . now , the denominator is , of course , independent of the choice of . so , to prove that the consistency condition of normalization of does not impose any additional constraint on , one has to prove that the integral is independent of the choice of . now , taking the laplace transform of eq . ( [ h1 ] ) one obtains where we have used definition ( [ lt ] ) in going from the first line to the second and eq . ( [ exgen ] ) in going from the second line to the third . but this expression on the rhs does not contain the `` gauge function '' , thereby proving that the integral of is independent of the choice of . thus the consistency condition is automatically fulfilled for arbitrary choices of as long as it is ensured for , which is indeed the case as shown in sec . [ sec : spec ] example 1 . a more serious constraint on is that the inverse laplace transform of ( [ exgen ] ) must be _ non - negative_. the case in ( [ exgen ] ) obviously imposes the trivial constraint that the inverse laplace transform of the s be non - negative . it remains to be shown whether there is also a class of which lead to valid chipping kernels . if it could be shown that this class is non - empty , the one - dimensional example we have discussed would show explicitly that it is _ not _ necessary for to be of the form ( [ phi ] ) .
mappingthe extent of this class of ( if indeed it is non - empty ) remains a daunting task .finally we note that having a factorised steady state opens the door for the study of condensation as in . thus one should be able to analyse condensation in various geometries or even on scale - free networks .
we study a general mass transport model on an arbitrary graph consisting of nodes each carrying a continuous mass . the graph also has a set of directed links between pairs of nodes through which a stochastic portion of mass , chosen from a site - dependent distribution , is transported between the nodes at each time step . the dynamics conserves the total mass and the system eventually reaches a steady state . this general model includes as special cases various previously studied models such as the zero - range process and the asymmetric random average process . we derive a general condition on the stochastic mass transport rules , valid for arbitrary graph and for both parallel and random sequential dynamics , that is sufficient to guarantee that the steady state is factorisable . we demonstrate how this condition can be achieved in several examples . we show that our generalized result contains as a special case the recent results derived by greenblatt and lebowitz for -dimensional hypercubic lattices with random sequential dynamics .
the problem of finding a highly connected subset of vertices in a large graph arises in a number of applications across science and engineering . within social network analysis ,a highly connected subset of nodes is interpreted as a community .many approaches to data clustering and dimensionality reduction construct a ` similarity graph ' over the data points .a highly connected subgraph corresponds to a cluster of similar data points .a closely related problem arises in the analysis of matrix data , e.g. in microarray data analysis . in this context ,researchers are often interested in a submatrix whose entries have an average value larger ( or lower ) than the rest .such an anomalous submatrix is interpreted as evidence of association between gene expression levels and phenotypes ( e.g. medical conditions ) . if we consider the graph adjacency matrix , a highly connected subset of vertices corresponds indeed to a principal submatrix with average value larger than the background .the special case of finding a completely connected subset of vertices ( a clique ) in a graph has been intensely studied within theoretical computer science .assuming p , the largest clique in a graph can not be found in polynomial time .even a very rough approximation to its size is hard to find . in particular , it is hard to detect the presence of a clique of size in a graph with vertices .such hardness results motivated the study of random instances .in particular , the so - called ` planted clique ' or ` hidden clique problem ' requires to find a clique of size that is added ( planted ) in a random graph with edge density .more precisely , for a subset of vertices ] is kept fixed in this limit , together with ] to denote the set of first integers , and to denote the size ( cardinality ) of set . ) for a set , we write to indicate that runs over all unordered pairs of distinct elements in . for instance , for a symmetric function , we have } f(i , j ) \equiv \prod_{1\le i < j\len } f(i , j)\ , . \ ] ] if instead is a set of edges over the vertex set ( unordered pairs with elements in ) we write to denote elements of .we use to denote the gaussian distribution with mean and variance .other classical probability distributions are denoted in a way that should be self - explanatory ( bernoulli , poisson , and so on ) .we consider a random graph with vertex set \equiv \{1,\dots , n\} ] ( if we are interested in maximizing ) or ( if we are interested in minimizing the expected number of incorrectly assigned vertices ) . fixing the depth parameter ,the distribution of converges ( as ) to for , and to for .mathematically , for any fixed in particular , for any fixed , the success probability \big ) + \p_1^{(t),\fr}\big(\xi\ge \log[\kappa/(1-\kappa)]\big)-1\ , \label{eq : psuccfree } \ ] ] is the maximum asymptotic success probability achieved by any test that is -local ( in the sense of being a function of depth- neighborhoods ) .it follows immediately from the definition that is monotone increasing in .its limit is the maximum success probability achieved by any local algorithm .this quantity was computed through population dynamics in the previous section , see figure [ fig : sparsepsucc ] . * plus boundary condition . *let be the complement of , i.e. 
the set of vertices of that have distance at least from .then has the interpretation of being the log - likelihood ratio , when information is revealed about the labels of vertices in .namely , if we define then we have in particular , is an upper bound on the performance of any estimator . in the previous section we computed this quantity numerically through population dynamics .let us finally comment on the relation ( [ eq : symmetry ] ) between and .this is an elementary consequence of bayes formula : consequences of this relation have been useful in statistical physics under the name of ` nishimori property ' .it is also known in coding theory as ` symmetry condition ' .consider the general setting of two random variables , with , , and let ] , we let denote the number of edges with both endpoints in .exhaustive search maximizes this quantity among all the sets that have the ` right size . 'namely , it outputs }\big\{\ , e(r)\ , : |r| = \lfloor\kappa n \rfloor\big\}\ , . \ ] ] ( if multiple maximizers exist , one of them is selected arbitrarily . ) we can also define a test function by letting for and otherwise .note that , for growing with , this algorithm is non - polynomial and hence can not be used in practice .it provides however a useful benchmark ..we have the following result showing that exhaustive search reconstructs accurately , for any constant and small .we refer to section [ sec : proofexhaustive ] for a proof .[ propo : exhaustive ] let be the asymptotic success probability of exhaustive search and assume . then in particular , we have the following large degree asymptotics as with fixed and as for any fixed .we next give a formal definition of -local algorithms .let is the space of unlabeled rooted graphs , i.e. the space of graphs with one distinguished vertex ( see for instance for more details ) .formally , an estimator for the hidden set problem is a function . since the pair is indeed a graph with one distinguished vertex ( and the vertices labels clearly do not matter ), we can view as a function on : the following definition formalizes the discussion in section [ sec : treeinterpretation ] ( where the definition of is also given ) .the key fact about this definition is that ( the ` locality radius ' ) is kept fixed , while the graph size can be arbitrarily large . given a non - negative integer , we say that a test is _-local _ if there exists a function such that , for all , we say that a test is local , if it is -local for some fixed .we denote by and the sets of -local and local tests .the next lemma is a well - known fact that we nevertheless state explicitly to formalize some of the remarks of section [ sec : treeinterpretation ] .recall that denotes the success probability of test , as per eq .( [ eq : psucct ] ) , and let be defined as in eq .( [ eq : psuccfree ] ) , with , , the laws of random variables , .we have in particular further , the maximal local success probability can be achieved using belief propagation with respect to the graphical model ( [ eq : approxmodel ] ) in time .we will therefore valuate the fundamental limits of local algorithms by analyzing the quantity .the following theorem establishes a phase transition for this quantity at .[ thm : main ] consider the hidden set problem with parameters , and let . 
then: * if , then all local algorithms have success probability uniformly bounded away from one .in particular , letting to be the smallest positive solution of , we have * if , then local algorithms can have success probability arbitrarily close to one .in particular , considering the large degree asymptotics with fixed we have as a useful technical tool in proving part of this theorem , we establish a normal approximation result in the spirit of eqs .( [ normalapprox1 ] ) , ( [ normalapprox2 ] ) .in order to state this result , we recall the definition of wasserstein distance of order , between two probability measures , on , with finite second moment , .namely , denoting by the family of couplings if it is a probability distribution on such that and for all . ] of and ,we have given a sequence of probability measures with finite second moment , we write if . [ lemma : gaussianapprox ] for , let be the random variables defined by the distributional recursion ( [ eq : generalcavity1 ] ) , ( [ eq : generalcavity2 ] ) , with initial condition ( [ eq : freebc ] ) , and denote by , the corresponding laws .further let be defined recursively by letting and then , considering the limit with fixed and , we have the proof of this lemma is presented in section [ sec : proofgaussian ] .as mentioned in the introduction , the problem of identifying a highly connected subgraph in an otherwise random graph has been studied across multiple communities . within statistical theory ,arias - castro and verzelen established necessary and sufficient conditions for distinguishing a purely random graph , from one with a hidden community . with the scaling adopted in our paper , this ` hypothesis testing ' problem requires to distinguish between the following two hypotheses : note that this problem is trivial in the present regime and can be solved for instance by counting the number of edges in .the sparse graph regime studied in the present paper was also recently considered in a series of papers that analyzes community detection problems using ideas from statistical physics .the focus of these papers is on a setting whereby the graph contains non - overlapping communities , each of equal size . using our notation , vertices within the same communityare connected with probability and vertices belonging to different communities are connected with probability .interestingly , the results of point at a similar phenomenon as the one studied here for .namely , for a range of parameters the community structure can be identified by exhaustive search , but low complexity algorithms appear to fail .let us mention that the very same phase transition structure arises in other inference problem , for instance in decoding sparse graph error correcting codes , or solving planted constraint satisfaction problems .a unified formalism for all of these problems is adopted in .all of these problems present a regime of model parameters whereby a large gap separates the optimal estimation accuracy , from the optimal accuracy achieved by known polynomial time algorithms .establishing that such a gap can not be closed under standard complexity - theoretic assumptions is an outstanding challenge .( see for partial evidence in this direction albeit in a different regime . 
)one can nevertheless gain useful insight by studying classes of algorithms with increasing sophistication .local algorithms : : are a natural starting point for sparse graph problems .the problem of finding a large independent set in a sparse random graph is closely related to the one studied here .indeed an independent set can be viewed as a subset of vertices that is ` less - connected ' than the background ( indeed is a subset of vertices such that the induced subgraph has no edge ) .+ the largest independent set in a uniformly random regular graph with vertices of degree has typical size where , for large bounded degree , .hatami , lovsz and szegedy conjectured that local algorithms can find independent sets of almost maximum size up to sublinear terms in .gamarnik and sudan recently disproved this conjectured and demonstrated a constant multiplicative gap for local algorithms .roughly speaking , for large degrees no local algorithm can produce an independent set of size larger than of the optimum .this factor of was later improved by rahman and virag to .this gap is analogous to the gap in estimation error established in the present paper .we refer to for a broader review of this line of work .+ as mentioned before , belief propagation ( when run for an arbitrary _ fixed _ number of iterations ) is a special type of local algorithm .further it is basically optimal ( among local algorithms ) for bayes estimation on locally tree like graphs .the gap between belief propagation decoding and optimal decoding is well studied in the context of coding .spectral algorithms .: : let be the adjacency matrix of the graph ( for simplicity we set for , and for ) .we then have this suggests that the principal eigenvector of should be localized on the set .indeed this approach succeeds in the dense case ( degree of order ) , allowing to reconstruct with high probability .+ in the sparse graph setting considered here , the approach fails because the operator norm is unbounded as .concretely , the sparse graph has large eigenvalues of order localized on the vertices of largest degree .this point was already discussed in several related problems .several techniques have been proposed to address this problem , the crudest one being to remove high - degree vertices .+ we do not expect spectral techniques to overcome the limitations of local algorithms in the present problem , even in their advanced forms that take into account degree heterogeneity .evidence for this claim is provided by studying the dense graph case , in which degree heterogeneity does not pose problems . in that casespectral techniques are known to fail for , and hence are strictly inferior to ( local ) message passing algorithms that succeed in the present paper correspond to in ( * ? ? ?* ; * ? ? ?] for any .semidefinite relaxations .: : convex relaxations provide a natural class of polynomial time algorithms that are more powerful than spectral approaches .feige and krauthgamer studied the lovsz - schrijver hierarchy of semidefinite programming ( sdp ) relaxations for the hidden clique problem . 
in that setting , each round of the hierarchy yields a constant factor improvement in clique size , at the price of increasing complexity .it would be interesting to extend their analysis to the sparse regime .it is unclear whether sdp hierarchies are more powerful than simple local algorithms in this case .let us finally mention that the probability measure ( [ eq : approxmodel ] ) can be interpreted as the boltzmann distribution for a system of particles on the graph , with fugacity , and interacting attractively ( for ) .statistical mechanics analogies were previously exploited in .( see also for the general community detection problem . )i am grateful to yash deshpande for carefully reading this manuscript and providing valuable feedback .this work was partially supported by the nsf grants ccf-1319979 and dms-1106627 , and the grant afosr fa9550 - 13 - 1 - 0036 .for the sake of simplicity , we shall assume a slightly modified model whereby the hidden set is uniformly random with size , with .recall that , under the independent model ( [ eq : kappadef ] ) and hence is tightly concentrated around its mean .hence , the result the independent model follows by a simple conditioning argument .let .by exchangeability of the graph vertices , we have where the last inequality follows since , without loss of generality , . setting , we will prove that for any there exists such that the claim the follows by using the inequality ( [ eq : proofpropo - simple ] ) together with the fact that . for two sets , ] satisfying the following conditions : * .* . * .indeed is such a set .this immediately implies eq .( [ eq : tobeexplained ] ) by noticing that and . by a union bound ( setting ) : in the last inequality we used union bound and the fact that edges contributing to and are independent . using chernoff bound on the tail of binomial random variables ( with the kullback - leibler divergence between two bernoulli random variables ), we get \cap \naturals } \prob\big(\binom(m;a / n)\le j\big ) \,\,\prob\big(\binom(m;b/ n)\ge j\big)\\ & \le ( m+1)\binom{k}{\ell}\binom{n - k}{k-\ell}\ , \exp\big\{-m\ , \min_{j\in [ bm / n , am / n ] } \big[d(j / m|| a / n)+d(j / m|| b / n)\big]\big\ } , .\label{eq : boundwkl } \ ] ] here , the first inequality follows because both probabilities are increasing for and decreasing for .we further note that , and therefore , for ] we let whence we therefore get for , the argument in parenthesis is smaller than and therefore summing over , we get which implies the claim ( [ eq : basicproofpropo ] ) , after eventually adjusting , since .recall that convergence in distance is equivalent to weak convergence , plus convergence of the first two moments ( * ? ? ?* theorem 6.9 ) .we will prove by the following by induction over : these claims obviously hold for . next assuming that they hold up to iteration , we need to prove them for iteration . 
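the binomial tail bound invoked in the proof of the exhaustive - search proposition above is the standard chernoff bound written with the kullback - leibler divergence between bernoulli distributions ; for completeness ( a textbook fact , not a claim specific to this paper ) :

\[
  \prob\big(\binom(m;p)\ge j\big)\,\le\,\exp\{-m\, d(j/m || p)\}\ \ \text{for } j/m\ge p ,
  \qquad
  \prob\big(\binom(m;p)\le j\big)\,\le\,\exp\{-m\, d(j/m || p)\}\ \ \text{for } j/m\le p ,
\]
\[
  d(q || p)\,=\,q\log\frac{q}{p}+(1-q)\log\frac{1-q}{1-p}\,.
\]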
for the sake of brevity, we will only present this calculation for , since the derivation for is completely analogous .let us start by considering eq .( [ eq : limitexpxi0 ] ) .first notice that the absolute value of right - hand side of eq .( [ eq : generalcavity1 ] ) is upped bounded by and hence follows from the induction hypothesis i and the fact that are poisson .next to prove eq .( [ eq : limitexpxi0 ] ) , we take expectation of eq .( [ eq : generalcavity1 ] ) , and let , for simplicity , : where the last equality follows from bounded convergence , since , for all , .note that the laws of and satisfy the symmetry property ( [ eq : symmetry ] ) .hence , for any measurable function such that the expectations below make sense , we have in particular applying this identity to and ^ 2 $ ] , we get substituting in eq .( [ eq : exiexpansion ] ) , and expressing in terms of we get where denotes a quantity vanishing as .the last equality follows from induction hypothesis iii and the fact that is bounded continuous , with . this yields the desired claim ( [ eq : limitexpxi0 ] ) after comparing with eq .( [ eq : mutrecursion ] ) .consider next eq .( [ eq : limitvarxi0 ] ) .the upper bound on the right - hand side of eq .( [ eq : generalcavity1 ] ) given by eq .( [ eq : upperboundrhs ] ) immediately imply that . in order to estabilish eq .( [ eq : limitvarxi0 ] ) , we recall an elementary formula for the variance of a poisson sum . if is a poisson random variable and are i.i.d . with finite second moment , then applying this to eq .( [ eq : generalcavity1 ] ) , and expanding for large thanks to the bounded convergence theorem , we get ^ 2\right\}+\kappa b \ , \e\left\{\big[\log\left(1+(\rho-1)\frac{e^{\xi_1^{(t ) } } } { 1+e^{\xi_1^{(t)}}}\right)\big]^2\right\}\\ & = ( 1-\kappa ) \frac{(a - b)^2}{b } \e\left\{\left(\frac{e^{\xi_0^{(t)}}}{1+e^{\xi_0^{(t)}}}\right)^2\right\ } + \kappa\ , \frac{(a - b)^2}{b}\e\left\{\left(\frac{e^{\xi_1^{(t)}}}{1+e^{\xi_1^{(t)}}}\right)^2\right\ } + o(b^{-1/2})\\ & = \kappa \frac{(a - b)^2}{b}\e\left(\frac{e^{\xi_1^{(t)}}}{1+e^{\xi_1^{(t)}}}\right ) + o(b^{-1/2})\ , , \ ] ] where the last equality follows by applying again eq .( [ eq : secondidentity ] ) . by using the induction hypothesis iii and the fact that is bounded lipschitz , which is eq .( [ eq : mutrecursion ] ) .we finally consider eq .( [ eq : limitweakp0 ] ) . by subtracting the mean , we can rewrite eq .( [ eq : generalcavity1 ] ) as where , .note that , have zero mean and , by the calculation above , they have variance . denoting the right hand side by : because ( for instance ) is a sum of order independent random variables with zero mean and variance of order .note that + \kappa b\e [ f(\xi^{(t)}_{1,1})^2 ] \big\ } = \mu^{(t+1)}\ , , \ ] ] where the last equality follows by the calculation above .hence , by applying the central limit theorem to each of the four terms in eq .( [ eq : sb ] ) and noting that they are independent , we conclude that converges in distribution to .define the event , and write for . from eq .( [ eq : psuccfree ] ) we have using eq .( [ eq : symmetry ] ) , and the fact that , we get call . by the initialization ( [ eq : freebc ] ) , . taking exponential moments of eq .( [ eq : generalcavity1 ] ) , we get + \kappa b \,\e\left[\left(\frac{1+\rho\ , e^{\xi_1^{(t ) } } } { 1 + e^{\xi_1^{(t ) } } } \right)^2\right ] \right\}\ , . 
\ ] ] note that by eq .( [ eq : symmetry ] ) , for any measurable function such that the expectations below make sense , we have applying this to , we get \right\}\ , . \ ] ] now we claim that , for , we have this can be checked , for instance , by multiplying both sides by and simplifying . using and , we get let be the solution of the above recursion with equality , i.e. and it is a straightforward exercise to see that is monotone increasing in and . further , for , the smallest positive solution of , and .hence which , together with eq . ([ eq : basictvbound ] ) finishes the proof .note that by monotonicity , and hence it is sufficient to lower bound the limit of the latter quantity . by lemma [ lemma : gaussianapprox ] , we have where is the gaussian distribution , and is defined recursively by eq .( [ eq : mutrecursion ] ) with .hence for all it is therefore sufficient to prove that now by monotone convergence , we have further increases monotonically towards its limit as .furthermore , is increasing in for any fixed .by induction over we prove that ( the limit being monotone from below ) , where and for all in order to prove this claim , note that the base case of the induction is trivial and ( writing explicitly the dependence on on the other hand for a fixed the claim follows since can be taken arbitrarily small .noga alon , michael krivelevich , and benny sudakov , _ finding a large hidden clique in a random graph _ ,proceedings of the ninth annual acm - siam symposium on discrete algorithms , society for industrial and applied mathematics , 1998 , pp . 594598 .emmanuel abbe and andrea montanari , _ conditional random fields , planted constraint satisfaction and entropy concentration _ , approximation , randomization , and combinatorial optimization .algorithms and techniques , springer , 2013 , pp .332346 .dimitris achlioptas and federico ricci - tersenghi , _ on the solution - space geometry of random constraint satisfaction problems _ , proceedings of the thirty - eighth annual acm symposium on theory of computing , acm , 2006 , pp . 130139 .aurelien decelle , florent krzakala , cristopher moore , and lenka zdeborov , _ asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications _, physical review e * 84 * ( 2011 ) , no . 6 , 066106 .alexandre gaudillire , benedetto scoppola , elisabetta scoppola , and massimiliano viale , _ phase transitions for the cavity approach to the clique problem on random graphs _ , journal of statistical physics * 145 * ( 2011 ) , no . 5 , 11271155 .dandan hu , peter ronhovde , and zohar nussinov , _ phase transitions in random potts systems and the community detection problem : spin - glass type and dynamic perspectives _ , philosophical magazine * 92 * ( 2012 ) , no . 4 , 406445 .subhash khot , _ improved inapproximability results for maxclique , chromatic number and approximate graph coloring _ , foundations of computer science , 2001. proceedings .42nd ieee symposium on , ieee , 2001 , pp .600609 .florent krzakala , cristopher moore , elchanan mossel , joe neeman , allan sly , lenka zdeborov , and pan zhang , _ spectral redemption in clustering sparse networks _ , proceedings of the national academy of sciences * 110 * ( 2013 ) , no . 52 , 2093520940 .andrea montanari , daniel reichman , and ofer zeitouni , _ on the limitation of spectral methods : from the gaussian hidden clique problem to rank one perturbations of gaussian tensors _ , arxiv:1411.6149 ( 2014 ) .
we consider a random sparse graph with bounded average degree , in which a subset of vertices has higher connectivity than the background . in particular , the average degree inside this subset of vertices is larger than outside ( but still bounded ) . given a realization of such a graph , we aim at identifying the hidden subset of vertices . this can be regarded as a model for the problem of finding a tightly knitted community in a social network , or a cluster in a relational dataset . in this paper we present two sets of contributions : first , we use the cavity method from spin glass theory to derive an exact phase diagram for the reconstruction problem . in particular , as the difference in edge probability increases , the problem undergoes two phase transitions , a static phase transition and a dynamic one . second , we establish rigorous bounds on the dynamic phase transition and prove that , above a certain threshold , a local algorithm ( belief propagation ) correctly identifies most of the hidden set . below the same threshold _ no local algorithm _ can achieve this goal . however , in this regime the subset can be identified by exhaustive search . for small hidden sets and large average degree , the phase transition for local algorithms takes an intriguingly simple form . local algorithms succeed with high probability for and fail for ( with , the average degrees inside and outside the community ) . we argue that spectral algorithms are also ineffective in the latter regime . it is an open problem whether any polynomial time algorithms might succeed for .
the theory of ordinary and generalized hermite polynomials has largely benefited of the operational formalism .the two variable hermite - kamp de frit polynomials can be defined using the following identity : which involves the action of an exponential operator , containing a second order derivative , on a monomial .the explicit form of the polynomials can be obtained by means of a straightforward expansion of the exponential in eq . , which yields : } \ , \frac{x^{n - 2r}\,y^r}{(n - 2r)!\,r!}\;,\ ] ] wherethe variables independent each other as a parameter , the standard hermite form are recovered by the identities and . ] . by keeping , therefore , the derivative of both sides of eq . with respect to ,we find that the hermite polynomials can be viewed as the solution of the following heat equation for they can be written in terms of the gauss - weierstrass transform : which is a standard mean of solutions for the heat type problems . the higher order hermite polynomials , widely exploited in combinatorial quantum field theory , can be expressed as a generalization of the operational identity , and indeed they write , denoting the order of the polynomials , is omitted for hermite polynomials of order 2 . ] } \ , \frac{x^{n - mr}\,y^r}{(n - mr)!\,r!}\;.\ ] ] therefore , we can ask whether an integral transform , a sort of generalization of the gauss - weierstrass transform , also holds for the higher order case .we start discussing the case of hermite polynomials of even order with negative values of the parameter , namely : we express this family of polynomials in terms of a suitable transform following the procedure , put forward in , which considers the operator function where is a function admitting a fourier transform . with this assumptionwe find that the operator can be written as : and , therefore , we can express the action of the operator on a given function as the integral transform indicated below we can now apply the same procedure to express the exponential operator intervening in the definition of ( see eq . ) , thus obtaining the following integral transform yielding the even order hermite polynomials {|y|}\right)^n\ ] ] with which is motivated by the the request that the integrals defining is convergent .] , after a redefinition of the variable , can also be written as {|y|}}\ , \int_{- \infty}^{\infty}\ , \mathrm{d}\xi\ , \xi^n \,\tilde{e}_{2p}\ , \left(\frac{x - \xi}{\imath \sqrt[2p]{|y|}}\right)\;.\ ] ] it must remarked that the same procedure , applied to the case , does not lead to a gauss - weierstrass transform as in eq . , which holds only for . in the next sectionwe will extend the formalism developed in these introductory remarks , and prove that the airy transform and the associated polynomials can be framed within the same context .the higher order hermite polynomials satisfy the following recurrences : with the combination of the first and third recurrences yielding therefore , the higher order hermite polynomials satisfy a generalized heat equation , and this justifies the operational definition given in eq . .furthermore , by interpreting as a parameter , we can use the first two recurrences to prove tha they satisfy the -th order ode : in the previous section we have considered even order hermite polynomials only . herewe will discuss the third order case and their important relationship with the airy transform and the airy polynomials . 
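the explicit sum quoted above , H_n^{(m)}(x , y) = n! sum_r x^{n - mr} y^r / ((n - mr)! r!) , together with the generalized heat equation it satisfies , can be checked symbolically ; a short python / sympy sketch ( the function name below is ours ) :

    import sympy as sp

    x, y = sp.symbols('x y')

    def hermite_higher(n, m):
        # explicit sum quoted in the text for the m-th order hermite polynomials:
        #     H_n^(m)(x, y) = n! * sum_r x**(n - m*r) * y**r / ((n - m*r)! * r!)
        return sp.factorial(n) * sum(
            x**(n - m*r) * y**r / (sp.factorial(n - m*r) * sp.factorial(r))
            for r in range(n // m + 1))

    # check of the generalised heat equation d/dy H = d^m/dx^m H
    n, m = 7, 3
    H = hermite_higher(n, m)
    print(sp.simplify(sp.diff(H, y) - sp.diff(H, x, m)))   # prints 0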
before getting into this specific aspect of the problem, we note that the following identity holds : {\lambda } xt}\ , ai(t)\;,\ ] ] where is the airy function .let us now consider the third order pde whose formal solution writes by applying the identity given in eq . , and by limiting ourselves to the case , we find : {3}}\ , \int_{- \infty}^{\infty}\ , \mathrm{d}k\ , ai\left(\frac{k}{\sqrt[3]{3}}\right)\ , \mathrm{e}^{\sqrt[3]{y } k \partial_x}\ , f(x ) \nonumber \\\frac{1}{\sqrt[3]{3}}\ , \int_{- \infty}^{\infty}\ , \mathrm{d}k\ , ai\left(\frac{k}{\sqrt[3]{3}}\right)\ , f(x + \sqrt[3]{y } k ) \nonumber \\\!\!&=&\!\ !\frac{1}{\sqrt[3]{3 y}}\ , \int_{- \infty}^{\infty}\ , \mathrm{d}\xi\ , ai\left(- \frac{x - \xi}{\sqrt[3]{3 y}}\right)\ , f(\xi)\ ; , \qquad \qquad ( y > 0)\end{aligned}\ ] ] which is recognized as the airy transform of the function .the concept of the airy transform was introduced in ref . , and has found noticeable applications in classical and quantum physics .the airy transform of a monomial , namely {3 y}}\ , \int_{- \infty}^{\infty}\ , \mathrm{d}\xi\ , ai\left(- \frac{x - \xi}{\sqrt[3]{3 y}}\right)\ , \xi^n\ ; , \qquad \qquad ( y > 0)\ ] ] has been defined as airy polynomials , but , according to the discussion of the previous section , they are also recognized as the third order hermite polynomials .the characteristic recurrences ( specialized to the case ) , can also be directly inferred from eq . .for further convenience we will introduce the following two variable extension of the airy functions that , in terms of the ordinary airy function writes {3y}}\,ai\left(\frac{x}{\sqrt[3]{3y}}\right)\;.\ ] ] it is also easily shown that it satisfies the ode and that any other function linked to by satisfies the ode \,=\,0\ ] ] which on account of the identity can be written as if we assume , , and , we get the following generalization of the airy function satisfying the ode as an example a comparison between the ordinary airy function and its generalization of order 7 is shown in fig .[ fig : aicmp ] the point of view developed in sec .2 can be generalized to the case with .we consider the generalized hermite polynomials of odd order ( ) and note that the same procedure exploited in the previous sections leads to the following integral representation {(2p + 1 ) |y|}}\ , \int_{- \infty}^{\infty}\ , \mathrm{d}\xi\ , ai^{(2p + 1 ) } \left(\frac{\xi - x}{\sqrt[2p + 1]{(2p + 1 ) |y|}}\right)\,\xi^n\;,\ ] ] the use of a further variable or parameter in the theory of special functions and/or polynomials offers a further degree of freedom , which may be helpful to derive new properties , otherwise hidden by the loose of symmetry deriving from the fact that a specific choice of the variable has been done .this is indeed the case of the hermite polynomials , which acquire a complete new flavor by the use of the variable and the case of the airy function given in eq . which is easily shown to be the natural solution of the pde and the translation property according to eq ., we can write the solution of the equation as follows it is evident that the further generalization satisfies a higher order heat equation and can be exploited to obtain the solution of the same family of equation in terms of an appropriate integral transform . 
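the integral representations above ultimately rest on the classical generating identity int_{-inf}^{inf} Ai(t) exp(s t) dt = exp(s^3 / 3) , which is essentially the identity quoted , with stripped symbols , at the beginning of this section ; it is easy to confirm numerically , and doing so gives a quick sanity check for any numerical implementation of the airy transform :

    import numpy as np
    from scipy.special import airy

    # numerical check of int_{-inf}^{inf} Ai(t) exp(s*t) dt = exp(s**3 / 3)
    t = np.linspace(-120.0, 25.0, 1_500_001)
    ai = airy(t)[0]                      # Ai(t)
    for s in (0.3, 0.8, 1.5):
        lhs = np.trapz(ai * np.exp(s * t), t)
        print(s, lhs, np.exp(s**3 / 3.0))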
before concluding this note let us consider the following schrdinger equation describing the motion of a particle in a linear potential ( is a constant with the dimension of a force ) .this equation can be written in the more convenient form where has the dimensions of a squared length and of an inverse cube length , so that has the dimensions of a length . ] as discussed in refs . , the previous equation can be treated using different means , a very simple solution being offered by the use of the following auxiliary function : which satisfies the equation the solution of eq . can therefore be written as where the operator , once written in an integral form , is recognized as the airy transform .the solution of the equation can be obtained in an analogous way , and it is given by where the operator can be expressed in terms of integral transforms , linked to the higher order airy functions discussed in these sections .further examples of generalization of the airy function are due to watson , who introduced a complete class of functions with interesting properties for physical applications . as an example we consider the function expressed as which is compared to the ordinary airy function in fig .[ fig : wacmp ] , and satisfies the ode the use of the unitary transformation shows that it can be associated to the first ode the transformation has removed the second order derivative and has simplified the underlying group of symmetry , which has been reduced from to the dilatation group .we have mentioned this example because it may open a new interesting point of view to schrdinger type equations involving quadratic potentials . before concluding this paper we discuss whether the airy polynomials can be exploited to obtain a series expansion of a given function .the problems associated with the orthogonal properties of ordinary and higher order hermite polynomials has been thoroughly considered in a previous publications ( see refs . ) ; here we will use the point of view developed in ref .we consider indeed the following expansion and we will show that the use of the operational tools developed in this paper provides , in a fairly natural way , the coefficients , and we will display the orthogonal properties of this family of polynomials . on account of the definition of the higher orders hermite polynomials ( see eq . ) , eq . yields : the use of the identity allows to specify the action of the exponential operator on the function as : {3 |y|}}\,\int_{- \infty}^{\infty}\ , \mathrm{d}\xi\ , ai\left(- \frac{x - \xi}{\sqrt[3]{3 |y|}}\right)\,f(\xi ) \,=\ , \sum_{n = 0}^\infty\,a_n\,x^n\;,\ ] ] and the insertion of the explicit expression of the airy function ( see eq . ) yields : {3 |y|}}\,\frac{1}{2 \pi}\,\int_{- \infty}^{\infty}\ , \mathrm{d}\xi\ , \int_{- \infty}^{\infty}\ , \mathrm{d}t\,\exp{\left\{\frac{\imath}{3}\,t^3\,+\,\imath\ , \frac{\xi - x}{\sqrt[3]{3 |y|}}\,t\right\}}\,f(\xi ) \,=\ , \sum_{n = 0}^\infty\,a_n\,x^n\;.\ ] ] the expansion of the -dependent part of the exponential on the lhs of the previous equation yields : {3 |y|}}\,\int_{- \infty}^{\infty}\ , \mathrm{d}t\ , f(\xi)\,\partial_\xi^n\,ai\left(\frac{\xi}{\sqrt[3]{3 |y|}}\right)\;,\ ] ] which holds only if the integral on the rhs of this equation converges . 
in these concluding remarks we have touched on several problems , which are worth exploring thoroughly . the link between the watson and the functions , the possibility of obtaining more general airy - type transforms and their relevance to bell - type polynomials , and a more rigorous formulation of the orthogonality properties of the airy polynomials will be the topic of a forthcoming investigation . g. dattoli and e. sabia , `` the binomial transform '' , submitted to journal of integral transforms and special functions . o. valle and m. soares , _ airy functions and application to physics _ , world scientific , london ( 2004 ) .
the airy transform is an ideally suited tool to treat problems in classical and quantum optics . even though the relevant mathematical aspects have been thoroughly investigated , the possibilities it offers are wide and some aspects , such as the link with special functions and polynomials , remain unexplored . in this note we will show that the so called airy polynomials are essentially the third order hermite polynomials . we will also prove that this identification opens the possibility of developing new conjectures on the properties of this family of polynomials .
the notion of a trapped surface was introduced by penrose . in -dimensional spacetime the spacelike boundary , , of a -dimensional spatial region is called a future trapped surface if gravity is so strong there that even the future and ` outwards ' directed normal null rays starting at are dragged back so much that their expansion is non - positive everywhere on .careful analysis justified that trapped surfaces necessarily occur whenever sufficient amount of energy is concentrated in a small spacetime region .intuitively a black hole region is considered to be a part of a spacetime from which nothing can escape .therefore a black hole region is supposed to be a future set comprised by events that individually belong to some future trapped surface .the boundary of such a black hole region , referred to usually as the `` apparent horizon '' , , is then supposed to be comprised by marginal future trapped surfaces . as one of the most important recent results in black hole physicsthe existence of an `` apparent horizon '' was proved in .more specifically , it was shown there that given a strictly stable _ marginally outer trapped surface _ ( mots ) in a spacetime with reference foliation , then , there exists an open _ tube _ , , foliated by marginally outer trapped surfaces , with , through .let us merely mention , without getting into details here , that the applied strict stability assumption is to exclude the appearance of future trapped surfaces in the complementer of a black hole region . hawking s black hole topology theorem is proven by demonstrating that whenever the dominant energy condition ( _ dec _ ) holds a mots can be deformed , along the family of null geodesics transverse to the apparent horizon , yielding thereby on contrary to the fact that is a mots a future trapped surface in the complementer of the black hole region , unless the euler characteristic of is positive .whenever is a codimension two surface in a -dimensional spacetime the euler characteristic and the `` genus '' , , of can be given , in virtue of the gauss - bonnet theorem , via the integral of the scalar curvature of the metric induced on as the main difficulty in generalising hawking s argument to the higher dimensional case originates from the fact that whenever is of dimension in an -dimensional spacetime , the integral of the scalar curvature by itself is not informative , as opposed to the case of , therefore the notion of euler characteristic has to be replaced by the yamabe invariant .the latter is known to be a fundamental topological invariant and it is defined as follows .denote by ] , associated with the conformal class ] can be given as , and , moreover , and denote the covariant derivative operator and the scalar curvature associated with the metric on .the yamabe invariant is defined then as }y(\mathscr{s},[q]).\ ] ] some of the recent generalisation of hawking s black hole topology theorem , and also that of gibbons and woolgar s results , proved by galloway , schoen , omurchadha and cai , that are covered by refs. may be formulatedthen as . [ gs ] let be a spacetime of dimension satisfying the einstein equations with cosmological constant , and with matter subject to _ dec_. suppose , furthermore , that is a strictly stable mots in a regular spacelike hypersurface . * if then is of positive yamabe type , i.e. , . * if and then for the `` area '' of the inequality holds . the significance of these results get to be transparent if one recalls that in the first case , i.e. 
, when , can not carry a metric of non - positive sectional curvature which immediately restricts the topological properties of . whereas , in the second case the a lower bound on the `` entropy '' of a black hole , that is considered to be proportional to the area , is provided by ( [ gib ] ) . before proceeding we would like to stress on an important conceptual point .most of the quoted investigations of black holes , see refs. , starts by assuming the existence of a reference foliation of the spacetime by ( partial ) cauchy surfaces . in this respectit is worth recalling that by a non - optimal choice of one might completely miss a black hole region as it follows from the results of where it was demonstrated that even in the extended schwarzschild spacetime one may find a sequence of cauchy surfaces which get arbitrarily close to the singularity such that neither of these cauchy surfaces contains a future trapped surface .hence , one of the motivations for the present work besides providing a reduction of the complexity of the proof of theorem[gs ] , and also a simultaneous widening of its range of applicability was to carry out a discussion without making use of any reference foliation .as it will be seen below the simplicity of the presented argument allows the investigation of black holes essentially in arbitrary metric theory of gravity .thereby , we do not restrict our considerations to either of the specific theories .accordingly , a spacetime is assumed to be represented by a pair , where is an -dimensional ( ) , smooth , paracompact , connected , orientable manifold endowed with a smooth lorentzian metric of signature .it is assumed that is time orientable and that a time orientation has been chosen .the only restriction , concerning the geometry of the allowed spacetimes , is the following generalised version of _dec_. a spacetime is said to satisfy the _ generalised dominant energy condition _ if there exists some smooth real function on such that for all future directed timelike vector the combination ] is future directed and causal , the inequality holds on . finally ,since was assumed to be a strictly stable mots , in virtue of ( [ tk ] ) , the null normals and may be assumed , without loss of generality , to be such that , and also that somewhere on . to see that the stability condition applied here is equivalent to the one used in note that and it transforms under the rescaling ( [ boost ] ) of the vector fields and on as . by making use of the notation and , it can be verified then that \,\ ] ] holds , which is exactly the expression ` ' given in lemma3.1 of whenever is a mots and the variation vector field is chosen to be ` ' .this justifies then that the strict stability conditions applied here and in ( see , e.g. , definition 5.1 and the discussion at the end of section 5 of for more details ) are equivalent . in returning to the main stream of our argument note that whenever is a strictly stable mots and the generalised _dec _ holds then , in virtue of ( [ gdec2 ] ) , so that the inequality is strict somewhere on .since is positive definite we also have that for any smooth function on thus , multiplying ( [ gdec3 ] ) by , where is arbitrary , we get , in virtue of ( [ gdec4 ] ) , that so that the inequality is strict somewhere on . 
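for the reader s convenience we recall , in the standard convention ( which matches the exponents appearing in the next passage , although the precise normalisation used by the author could differ ) , the yamabe functional referred to earlier and the hoelder inequality that is presumably the stripped inequality invoked below :

\[
  y(\mathscr{s},[q]) \,=\, \inf_{u>0}\;
  \frac{\int_{\mathscr{s}}\bigl(\tfrac{4(s-1)}{s-2}\,|\nabla u|^{2} + r_{q}\,u^{2}\bigr)\,\eeepsilon_{q}}
       {\bigl(\int_{\mathscr{s}} u^{\frac{2s}{s-2}}\,\eeepsilon_{q}\bigr)^{\frac{s-2}{s}}}\,,
  \qquad
  \int_{\mathscr{s}} u^{2}\,\eeepsilon_{q}
  \,\le\,
  \bigl(\int_{\mathscr{s}} u^{\frac{2s}{s-2}}\,\eeepsilon_{q}\bigr)^{\frac{s-2}{s}}
  \bigl[\mathcal{a}(\mathscr{s})\bigr]^{\frac{2}{s}}\,,
\]

where s is the dimension of the surface , r_{q} its scalar curvature and \eeepsilon_{q} the induced volume form .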
to get the analogue of the first part of theorem[gs ]assume now that is such that throughout .then , by taking into account the inequality , which holds for any value of , we get from ( [ gdec5 ] ) that \,\eeepsilon_{q } } { \left(\int_{\mathscr{s}}u^{\frac{2s}{s-2}}\eeepsilon_{q}\right)^{\,\,\frac{s-2}{s } } } > 0\,,\ ] ] for any smooth , i.e. , )>0 ] .this , along with ( [ c3 ] ) , implies then )|= -y(\mathscr{s},[q ] ) \leq 2 |f^{_{\mathscr{s}}}_{min}| \left [ { \mathcal{a}}({\mathscr{s } } ) \right]^{\,\,\frac{2}{s}}\,,\ ] ] which leads to the variant of the inequality ( [ gib ] ) yielded by the replacement of by .what has been proven in the previous section can be summarised as .[ gsn ] let be a spacetime of dimension in a metric theory of gravity .assume that the generalised dominant energy condition , with smooth real function , holds and that is a strictly stable mots in .we would like to mention that the argument of the previous section does also provide an immediate reduction of the complexity of the original proof of hawking and that of gibbons and woolgar . to see this recall that , in virtue of ( [ tipp2e ] ) , ( [ y1 ] ) and ( [ y2 ] ) , whenever is of dimension , and also that , if attention is restricted to einstein s theory with matter satisfying the dominant energy condition .thereby , as an immediate consequence of theorem[gsn ] , we have that whenever on , while whenever both and are negative .clearly the above justification of theorem[gsn ] is free of the use of any particular reference foliation of the spacetime .note also that in the topological characterisation of an -dimensional strictly stable mots only the quasi - local properties of the real function are important .in particular , as the conditions of theorem[gsn ] do merely refer to the behaviour of on it need not to be bounded or to have a characteristic sign throughout .similarly , it would suffice to require the generalised dominant energy condition to be satisfied only on .finally , we would also like to emphasise that theorem[gsn ] provides a considerable widening of the range of applicability of the generalisation of hawking s black hole topology theorem , and also that of the results of gibbons and woolgar .as its conditions indicate , theorem[gsn ] applies to any metric theory of gravity and the only restriction concerning the spacetime metric is manifested by the generalised dominant energy condition and by the assumption requiring the existence of a strictly stable mots .accordingly , theorem[gsn ] may immediately be applied in string theory or in various other higher dimensional generalisations of general relativity .
a key result in four dimensional black hole physics , since the early 1970s , is hawking s topology theorem asserting that the cross - sections of an `` apparent horizon '' , separating the black hole region from the rest of the spacetime , are topologically two - spheres . later , during the 1990s , by applying a variant of hawking s argument , gibbons and woolgar could also show the existence of a genus dependent lower bound for the entropy of topological black holes with negative cosmological constant . recently hawking s black hole topology theorem , along with the results of gibbons and woolgar , has been generalised to the case of black holes in higher dimensions . our aim here is to give a simple self - contained proof of these generalisations which also makes their range of applicability transparent .
when high - energy cosmic rays bombard the earth , muons are produced as part of particle showers within the upper atmosphere .these highly - penetrating particles are observed at sea level with a flux of approximately one per square centimetre per minute and momenta of several gevc . as charged particles ,they interact with matter primarily through ionising interactions with atomic electrons and via coulomb scattering from nuclei .both of these mechanisms have been exploited in recent years in the field of muon tomography ( mt ) to probe the internal composition of shielded structures which can not be probed using convential forms of imaging radiation _ e.g._x - rays . since e.p .george measured the thickness of the ice burden above the guthega - munyang tunnel in australia in the 1950s and l.w .alvarez conducted his search for hidden chambers in the second pyramid of chephren in egypt a decade later , there has been a wealth of wide - ranging applications using cosmic - ray muons for imaging purposes , such as in the field of volcanology and nuclear contraband detection for national security . the seminal work outlined by borozdin _et al._in ref . revealed the potential to locate and characterise materials within shielded containers using the coulomb scattering of cosmic - ray muons .this approach relies on precision reconstruction of the initial and scattered muon trajectories to determine the scattering location within the container and the scattering density , denoted .this scattering density is known to exhibit an inherent dependence on the atomic number of the material _ i.e._larger scattering angles are typically observed for objects with larger values .this work presents the design and fabrication processes of a small prototype mt system for use in the identification and characterisation of legacy nuclear waste materials stored in highly - engineered industrial waste containment structures . 
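the atomic-number dependence noted above enters through the radiation length of the scattering material: denser, higher-Z materials produce larger RMS scattering angles for muons of a given momentum. as a rough orientation only (this is not the paper's own code), the sketch below evaluates the standard Highland parameterisation of the projected multiple-scattering angle for a few illustrative materials; the radiation lengths, thickness and the 3 GeV/c momentum are assumed values chosen for illustration, and the scattering-density metric of ref. [schultz et al.] is related to, but not identical to, this quantity.

```python
import math

def highland_rms_angle(p_mev, thickness_cm, rad_length_cm, beta=1.0, charge=1):
    """Approximate RMS projected multiple-scattering angle (radians) from the
    standard Highland parameterisation; illustrative only."""
    x_over_x0 = thickness_cm / rad_length_cm
    if x_over_x0 <= 0:
        return 0.0
    return (13.6 / (beta * p_mev)) * charge * math.sqrt(x_over_x0) \
           * (1.0 + 0.038 * math.log(x_over_x0))

# Assumed radiation lengths (cm) for a 3 GeV/c muon crossing 10 cm of material.
for name, x0 in [("concrete", 11.6), ("iron", 1.76), ("lead", 0.56), ("uranium", 0.32)]:
    theta0 = highland_rms_angle(p_mev=3000.0, thickness_cm=10.0, rad_length_cm=x0)
    print(f"{name:9s}  theta0 ~ {1e3 * theta0:5.1f} mrad")
```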
for this purpose, a detector with high spatial resolution was required .it was essential that the design of the system , and the materials and fabrication processes used , be scaleable to allow the future construction of an industrially deployable detector system .the requirement of industrial deployability mandated that the detection medium be radiation - hard , robust and capable of performing to a high degree of stability over prolonged periods of time for operation in the typical high radiation , high stress industrial environment found within a nuclear waste processing plant .a modular design based on plastic scintillating fibres was chosen to satisfy these criteria .the individual components and modular construction are described in section [ sec : components ] with the assembly process outlined in section [ sec : assembly ] .the readout electronics , data acquisition ( daq ) and complex fibre multiplexing requirements are described in section [ sec : mapping ] .results showing the performance of the constructed , commissioned system are presented in section [ sec : commissioning ] and first results from the image reconstruction of a test configuration of low , medium and high- materials are presented in section [ sec : images ] .each detector module comprised orthogonal detection planes of plastic scintillating fibres which were supported and positioned on a low - density rigid foam and aluminium structure .the two planes of fibres were bonded within this structure and further supported at the extremities by plastic polymer distribution blocks .fibres were coupled to photon detectors ( here , multi - anode photomultiplier tubes ) housed within lightproof boxes .the active area of each module was encased within a lightproof vinyl film .the design of the individual detector modules and all components and materials used in the fabrication process are detailed in the following section . prior to the design and fabrication processes ,dedicated geant4 simulation studies were performed to assess the fibre pitch required for this system with regards to the anticipated light output and reconstructed image resolution .a design based on 2mm - pitch plastic scintillating fibres was chosen .the fibres used in the production of this muon tracker were saint - gobain bcf-10 round fibres comprising a polystyrene - based core ( 97% by cross - sectional width ) with polymethylmethacrylate ( pmma ) optical cladding ( 3% ) .the light emission output of this formulation , which peaked at 432 nm , provided excellent overlap with the sensitivity of the chosen photon detector , the hamamatsu h8500 mapmt .feasibility studies ( outlined in section [ sec : mapping ] ) revealed the potential to reduce the required number of readout channels by 50% through the coupling of two scintillating fibres to a single pixel on the 64-pixel segmented anode of the pmt .a single detection layer comprising 128 fibres coupled to a single pmt was therefore chosen for the final prototype design .the design for the complete detector system comprised four modules , two situated above , and two below the volume under interrogation _ i.e._the assay volume . 
with this modular design and the identification of a space point in each module ,the initial and scattered muon trajectories could be reconstructed allowing the determination of the scattering position and magnitude required for image reconstruction .within each module , two orthogonal detection planes , comprising a single layer of 128 fibres , yielded an active area of 256 mm x 256 mm . to allow for minimal transmission losses and manageable strain as a result of the fibre bending required for pmt coupling , fibres of 860mm length were used . in total , for eight detection layers , 1024 fibres and 8 pmts were required . . ] for each fibre layer , accurate fibre positioning and support was provided by a low- , precision - machined structure of rohacell^^ sheeting , a closed - cell rigid polymethacrylimide foam .this provided sufficiently high tensile strength in comparatively small thicknesses to support the individual fibre layers whilst providing only a negligible coulomb scattering effect on transient muons .a layered configuration of rohacell^^ support sheets of varying thicknesses and dimensions was fabricated to support the orthogonal layers of fibres .the assembly of this structure is described in detail in section [ sub : module ] . shown in figure [ fig : rohacell ] and measuring 300 mm in length ,the central square sheet that supported the fibres was machined with shallow , parallel _ v _grooves to position each of the 128 fibres per layer .grooves were cut in the x ( y ) direction on the top ( bottom ) side of the support sheet . on this central sheet ,narrow non - grooved regions were located at the sides parallel to the fibre direction ( shown in figure [ fig : rohacell ] ) to accommodate the holes which allowed the sheet to be fixed onto locating pins on the aluminium base plate ( see section [ sub : aluminiumbaseplate ] ) .each sheet of rohacell^^ had locating holes in each corner to fix the sheet to these locating pins to ensure accurate alignment and uniformity across all four modules .a 3mm - thick flat sheet of aluminium was used to provide a robust support base for the multiple layers of rohacell^^ and scintillating fibres which formed the active area of each module .this square sheet of aluminium , shown in figure [ fig : al - plate ] , measured 460 mm with a central square region of 270 mm removed to minimise the material contributing towards coulomb scattering within the active volume .four stainless - steel locating pins were fixed at each corner of the inner hole in the base plate .these were used to secure the sheets of rohacell^^ and to provide fixed reference points which ensured precision alignment was maintained both internally ( _ i.e._with the fibres within the corresponding module ) and externally ( _ i.e._with the other three modules ) . at the sides of the aluminium plate ,two locating holes were bored .these holes ( the innermost holes on figure [ fig : al - plate ] ) allowed fibre distribution blocks to be located on each axis and screwed into place .each side of the base plate had two further holes bored at the outer corners .these ensured the aluminium sheet could be fixed to an external aluminium profile stand . 
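because each module carries one x-measuring and one y-measuring fibre layer at essentially the same height, a struck fibre pair defines a single space point per module, and the two modules on one side of the assay volume define a straight track. the minimal sketch below illustrates that reconstruction; the coordinate convention, the fibre-index convention and the module z positions (derived from the 900 mm / 250 mm / 400 mm separations quoted later in the text) are assumptions for illustration.

```python
import numpy as np

FIBRE_PITCH_MM = 2.0          # from the text
N_FIBRES = 128                # fibres per detection layer

def fibre_to_coord(fibre_index):
    """Convert a fibre index (0..127) to a transverse coordinate (mm),
    centred on the 256 mm active area.  Convention assumed, not from the paper."""
    return (fibre_index + 0.5) * FIBRE_PITCH_MM - 0.5 * N_FIBRES * FIBRE_PITCH_MM

def module_space_point(x_fibre, y_fibre, z_mm):
    """One space point per module from the struck fibres in its two orthogonal layers."""
    return np.array([fibre_to_coord(x_fibre), fibre_to_coord(y_fibre), z_mm])

def track_from_module_pair(p_upper, p_lower):
    """Straight-line track (point, unit direction) through two module space points."""
    direction = p_lower - p_upper
    return p_upper, direction / np.linalg.norm(direction)

# Example: a muon seen in the two upper modules (z positions assumed from the
# stated 250 mm pair separation and 400 mm assay gap).
p1 = module_space_point(x_fibre=60, y_fibre=70, z_mm=+450.0)
p2 = module_space_point(x_fibre=61, y_fibre=69, z_mm=+200.0)
point, direction = track_from_module_pair(p1, p2)
print("incoming track direction:", direction)
```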
to provide extra support to the fibres, custom - built distribution blocks were fabricated from polyoxymethylene ( pom ) plastic .these also maintained the uniform distribution of fibres at the extremes of the base plate and , more importantly , at the pixelated surface of the pmt .this was chosen over other potentially abrasive candidate materials _e.g._aluminium , to prevent causing damage to the fibres .the detector benefitted from the low cost , opacity and precision machining capability of this polymer .the first set of distribution blocks were rectangular pom pieces shown in figure [ fig : fibredistblock ] .these were located at the extreme of the base plate and in the same plane as the fibre layers .they contained two rows of 64 holes and measured 300 mm in length .the rows were both offset slightly , one above and one below , from the level of the fibres to provide support and stability to the fibres and to prevent unwanted strain causing them to lift off from the rohacell^^ surface after bonding .this also ensured that a constant tension was maintained across all fibres .the fibre distribution blocks were fixed in place on the aluminium base by screws through the holes in the plate . the holes which supported the fibres were large enough to allow the application of black nylon tubing to the fibres for lightproofing .this tubing covered the length of each individual fibre outwith the area covered by the base plate and mitigated the risk of external sources of light incident on the fibre from producing signals within the pmt .this tubing , and additional lightproofing measures , are outlined in section [ sub : lighttight ] . ) . ] the second pom block shown in figure [ fig : pmtholder ] measured 110mmx82mmx42 mm .it served to hold the pmt in place in an inner square cavity ( of equal dimensions to the pmt ) at the rear side , whilst at the front , it provided precision contact between the pixelated surface of the pmt and each coupled fibre pair .this latter characteristic was achieved via an array of 128 holes , positioned such that two fibres were in optical contact with a single pixel . once inserted into the holes of the pmt holder ,the fibres protruded a short distance due to a series of stainless steel collars which were bonded to the end of the fibres ( see section [ sub : fibreassembly ] ) .these prevented the fibres extending further into the pmt holder and thus ensured that the small lengths of fibre that protruded uniformly contacted the pmt surface .this process also ensured that all the fibres exerted the same pressure on the surface of the pmt .a thin silicone gel protective pad , which had negligible effect on the transmission of light , was placed between the fibres and pmt to mitigate potential damage to the surface of the pmt by the pressure of the fibres .four threaded holes were positioned on the rear surface to allow a printed circuit board ( pcb ) to be connected for signal readout purposes . 
surrounding the edge of the pmt holder , a raised border allowed the holder to sit firmly within a specially - constructed housing on the aluminium profile frame , shown in figure [ fig : frame ] .all components outlined in the previous subsections were housed in an aluminium profile frame ( with 20 mm square cross - sectional area ) shown schematically in figure [ fig : frame ] .aluminium profile was chosen because of its lightweight structure and strength .this was constructed to support the base plate ( onto which it was screwed ) , rohacell^^ and orthogonal layers of fibres , in addition to preventing unwanted strain on the fibres in the non - active regions which could result in fibre breakages and/or loss of contact with the pmt surface . to minimise these risks , aluminium sheets were attached to the frame structure to provide further support to the fibres in this region . at the end of these two arms ,three small aluminium profile struts were assembled to fix the pmt holders firmly in position .these struts were designed to accommodate the raised borders of the holders .the design of the module frame benefitted from the capability of aluminium profile to attach to an external support frame at any position and orientation along its length .this allowed complete freedom for future experimental studies on alignment and module positioning .custom - made pcbs were printed to couple to the h8500 pmt connector pins and allow the connection of 16-channel ribbon cables to facilitate the readout of signals from the pixels .the readout system will be described in detail in section [ sec : mapping ] .these pcbs were tightly - screwed to the pmt holder via the holes on either side .this helped provide uniform contact between the fibres and the face of the pmt . a final , rectangular pom box measuring 140mmx113mmx55 mm was fabricated and bolted to the aluminium profile to fully encase and lightproof the pmt and the associated voltage supply cables , the pcb and readout signal cables .this pom box is shown in figure [ fig : pmtbox ] .this was bolted to the aluminium profile of the module via four oblique holes in the corners .a rectangular slit at one side of the box allowed the four ribbon signal cables per pmt to be connected .a small groove at each end of the box provided access for the high voltage cable and ensured it was not damaged during assembly .a circular hole was drilled out at the back of the pmt cover to allow the signal cable to be inserted .this section will detail the preparatory and assembly processes associated with the fabrication of the detector modules , and the lightproofing measures taken .prior to assembly , both ends of the scintillating fibres were polished to optimise light transmission and to provide good optical contact with the pixelated surface of the pmts .a three - stage polishing process was undertaken using a series of lapping films of decreasing coarseness . a vital component of the assembly process was to maintain a uniform contact and pressure distribution between the fibres and the pmt .this was achieved using a series of metal tubes ( or collars ) fixed to one end of the fibres at precise positions . at this end of each fibre ,a 30mm - long stainless steel collar was bonded with optical epoxy ensuring that a small length of fibre protruded from the collar .once the epoxy and collar had been applied , the epoxy was left to cure . 
after curing , a further lapping stage removed this protruding length of fibre and , with it , any epoxy covering the polished end .a smaller 5mm - long stainless steel collar was bonded onto the larger collar using a uv - curing adhesive .removable caps were placed on the fibres during this process to ensure that the small collars were all attached at a pre - set position along the larger collar .this ensured that every collared fibre protruded the same distance through the pmt holder , providing a uniform pressure on the pmt surface .this process of attaching the smaller collars was performed after the fibres had been bonded to the rohacell^^ sheets and covered in black nylon tubing , which are detailed in the following sections .figure [ fig : modulelayers ] illustrates the final layered structure of scintillating fibres , rohacell^^ and aluminium which comprised each detector module .the scintillating fibres were first bonded to the central , grooved rohacell^^ sheet using the same optical epoxy used for the bonding of the large collars .the grooved sheet of rohacell^^ was first attached to the aluminium base plate via the four locating pins .this base plate was clamped to a breadboard table with flat blocks of pom abutting the end of the rohacell^^ sheet , as shown in figure [ fig : fibres ] .this ensured that , when bonded , the open end of the fibres rested level with the edge of their support sheet .the repellent properties of pom ensured that the epoxy did not bond to these blocks during this process .epoxy was then applied to the top side of the sheet before fibres ( these would become the bottom plane ) were carefully positioned into the grooves . a flat , 4mm - thick sheet of rohacell^^ of otherwise equal dimensions to the grooved central sheet , was then placed over the locating pins and bonded on top of the layer of fibres .this was weighted evenly to ensure the fibres cured in their desired positions within the grooves . with this layer of fibres set in place , the configuration was then overturned to allow the top x plane of fibres to be bonded .this was placed onto the base plate once again , over a 6mm - thick piece of rohacell^^ of equal dimensions as the aluminium plate . to minimise the material in the active volume , the central square region of this piece , measuring 270 mm ,was also removed . for the x plane of fibres ,the same gluing procedure was performed with another 4mm - thick sheet of rohacell^^ bonded and weighted on top during the curing process . 
a 6mm - thick flat piece of rohacell^^ , the same dimensions as the other central pieces , was secured on top of the support pins .this sheet was necessary to ensure a level height with the pom distribution blocks screwed to the edge of the base plate .the fibres were then fed through these blocks with care taken to avoid damage .a final sheet with the central region removed was pinned to the structure to fully enclose the fibres .this was secured tightly with washers over the four locating pins .once the layered configuration had been fabricated , a rohacell^^ skirting , shown in figure [ fig : modulelayers ] , was pinned around the open edges of the module to seal off the active area and to provide a smooth - edged support for the lightproof covering described in the following subsection .this whole assembly was then secured in the aluminium profile module frame using purpose - built bolts placed in the troughs of profile ( shown previously in figure [ fig : frame ] ) which were then bolted to the holes in the base plate .each module was then prepared for final lightproofing measures . prior to the optical connection of the fibres to the pmts , the exposed regions outwith the encased active area , and the areas covered by the porous rohacell^^ , required lightproofing to prevent light from entering the active area and producing noise signals during data collection . to achieve this ,lengths of black nylon tubing were applied over the collared ends ( prior to the bonding of the smaller collar ) forming a tight seal against the large aluminium collar , while leaving sufficient space for the small collar to be bonded . at the opposite end , the tubing was positioned tightly inside the holes of the fibre distribution block providing a firm seal .when all the fibres were covered in the tubing , each fibre was labelled sequentially to allow the fibre multiplexing schemes outlined in section [ sec : mapping ] to be implemented .the rohacell^^ structure around the active area was covered in sheets of tedlar^^ , a radiation - durable polyvinyl fluoride film with excellent light absorption properties , which ensured the fibres were not exposed to any external light .two layers of this film were wrapped around the central area of the module with four holes pierced in the film to allow the locating pins to protrude through . once in place, black electrical tape was applied around all the edges to hold the tedlar^^ in position and a black foam seal was placed over the locating pins and taped in place to restrict any light penetrating into the active area .the fully lightproof , and pmt - connected , detector module is shown in figure [ fig : tedlar ] . with the construction of the four detector modules completed , and lightproof testing successfully performed , the modules were assembled in an aluminium profile vertical support stand .the modules were precisely aligned , to make certain that all four were parallel , and positioned at predetermined , optimum positions in the vertical direction .these positions were extensively studied using detailed geant4 simulations of the detector geometry to optimise the tracked muon flux and reconstructed image resolution .a maximum separation of 900 mm between the outermost modules was imposed to negate any destabilising effects introduced by extending to longer lengths of aluminium profile for the vertical stand . 
limiting this to 900 mm ensured the stability and alignment of the system .the separations between the top and bottom module pairs were fixed at 250 mm leaving a 400 mm spacing in the assay volume to accommodate test objects for future image reconstruction .the individual modules were attached to the support stand via a series of clamps around the central region .these ensured firm and stable attachment to the vertical stand , and allowed the modules to be connected at any position in the vertical direction if required .the clamps were designed with two holes on the outside and one on the raised , middle section .this middle hole provided the ability for the clamp to be bolted onto the vertical struts of the external detector stand , while the outside holes bolted onto each module frame .the four clamps were positioned at the active area of each module with two on the outside of the frame and two on the inner side . a model of the detector , fully assembled in the vertical support structure ,is shown in figure [ fig : gmtdetector ] .the square base of the vertical support stand was firmly secured to a breadboard surface to provide additional stability .alignment studies , presented in section [ sub : alignment ] , revealed the high degree of precision resulting from the various support measures taken during the assembly of this detector system .the hamamatsu pmts offer one negative - voltage read out channel per pixel and a unified signal from the second - to - last dynode ( dynode-12 or dy12 ) in the amplification chain with positive polarity that is sensitive to a signal in any pixel .the dy12 signals were used in this system to trigger the read out and recording of the pmt signal . in order to reduce noise ,a coincidence requirement was imposed ; data were only recorded when three separate pmts recorded signals in any pixel .the system made use of a nim electronics crate housing coincidence gates , dual gate generators , discriminators and fan - in / fan - out units .data collection was performed via a custom c++ data acquisition program ( daq ) running on a commodity laptop .this was connected via usb to a caen vme crate and the pixel signals were fed isomorphically into the signal inputs of caen charge to digital converter units ( qdcs ) . the vme crate also housed a scaler unit that was used to monitor signal peaks above a pre - set threshold voltage .the positive polarity dy12 signals were fed into the nim electronics crate , inverted , amplified and shaped before being passed to a set of coincidence units that produced the final physics trigger .when the trigger condition was satisfied and the qdcs had recorded the analog signal , the daq software transferred the data from the qdc registers over the vme bus .the data were zipped and written to disk before the next event was recorded .this potential delay had no significant impact on the detection efficiency due to the relatively slow event rate .meanwhile , the scalers monitored the number of signals received from each pmt ( each dy12 signal was fed to an individual scalar channel ) , the number of read out gates generated by the nim hardware , the number of trigger gates generated by the daq software and the number of events that were read out and recorded . 
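the three-fold coincidence requirement described above can be modelled in a few lines of software, as sketched below; in the real system the coincidence was formed in NIM hardware, and the dy12 threshold value and per-PMT data layout used here are assumptions for illustration only.

```python
def passes_trigger(dy12_amplitudes_mv, threshold_mv=20.0, required_pmts=3):
    """Toy model of the coincidence condition: fire when at least `required_pmts`
    distinct PMTs show a dy12 signal above threshold (threshold value assumed)."""
    n_fired = sum(1 for amp in dy12_amplitudes_mv.values() if amp > threshold_mv)
    return n_fired >= required_pmts

# dy12 amplitude per PMT for one candidate event (illustrative values only).
event = {"pmt_top_x": 55.0, "pmt_top_y": 4.0, "pmt_mid_x": 31.0,
         "pmt_mid_y": 2.0, "pmt_bot_x": 47.0, "pmt_bot_y": 1.5}
print(passes_trigger(event))   # True: three PMTs above threshold
```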
for each event, the data and scaler signals were packaged together with a timestamp and status information on the qdc modules .the information from the scaler was used for both hardware and software debugging purposes ; any light leaks are reflected in the number of signals generated by the pmts while inefficiencies would be found if the number of physics triggers and the number of final triggers diverged . during normal data collection , events were recorded in runs of 500 . at the beginning of every data run ,pedestal runs were recorded to monitor the amount of non - signal charge in the qdcs .as explained in section 5 , these pedestal files were used to identify any fluctuations in the electronics performance and were used during data quality monitoring to correct the corresponding data signals from the system prior to pmt gain correction . in order to minimise the number of read - out channels required for the experimental setup ,the signals from all 128 scintillating fibres per detection layer were read out by a single hamamatsu h8500 mapmt . herethe anode is segmented into a 64-pixel square array , with each pixel measuring 5.8 mm in width .two fibres were coupled to each pixel via dedicated pairing ( or multiplexing ) schemes , an example of which is shown in figure [ fig : mapping ] .simulation studies performed using a geant4 simulation of the detector geometry prior to the construction of the prototype system showed that these multiplexing schemes allowed the correct identification of the eight struck fibres per event , thus maintaining the high spatial resolution provided by the 2 mm pitch .this was achieved using a likelihood - based demultiplexing algorithm which relied on a small scattering angle assumption and the use of four unique multiplexing schemes , one in each of the four detection layers in the top and bottom pairs of modules .these had the effect of exaggerating the detected incident angles of the two muon trajectories for incorrectly - identified fibres such that the two partial tracks do not combine , or if so , are significantly less likely to have a scattering angle below a given threshold . from these simulation studies ,the correct eight fibres struck per event were successfully identified in over 98% of events .further studies have shown this small misidentification of fibres which remains in the imaged data to have a negligible detrimental effect on the images reconstructed .once construction of the prototype detector system was completed and all daq systems had been tested , data were collected for commissioning and alignment purposes . for the studies performed throughout this work , excluding the pmt gain normalisation and signal transmission studies( the experimental setups for these tests are described in sections [ sub : gain ] and [ sub : fibretest ] respectively ) , the three - fold daq trigger described previously was implemented . to recap , the top and bottommost layers were included in coincidence with one of the innermost layers to ensure confidence that the trigger originated from a single muon passing through the complete detector acceptance . 
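since two fibres share each pixel, every hit pixel yields two candidate fibre positions per layer and hence several candidate space points per module. the paper resolves this ambiguity with a likelihood-based demultiplexing algorithm under a small-scattering-angle assumption; the sketch below is a deliberately simplified stand-in that enumerates the candidate combinations and keeps the one whose incoming and outgoing partial tracks have the smallest relative scattering angle. the actual multiplexing maps and likelihood are not reproduced here.

```python
import itertools
import numpy as np

def scattering_angle(d_in, d_out):
    """Angle (rad) between incoming and outgoing unit direction vectors."""
    c = np.clip(np.dot(d_in, d_out), -1.0, 1.0)
    return np.arccos(c)

def demultiplex(candidate_points_per_module):
    """candidate_points_per_module: list of 4 lists of candidate 3D points
    (top, upper-mid, lower-mid, bottom module).  Returns the combination whose
    partial tracks have the smallest relative scattering angle; a simplification
    of the likelihood approach described in the text."""
    best, best_angle = None, np.inf
    for combo in itertools.product(*candidate_points_per_module):
        p0, p1, p2, p3 = (np.asarray(p, dtype=float) for p in combo)
        d_in = (p1 - p0) / np.linalg.norm(p1 - p0)
        d_out = (p3 - p2) / np.linalg.norm(p3 - p2)
        angle = scattering_angle(d_in, d_out)
        if angle < best_angle:
            best, best_angle = combo, angle
    return best, best_angle
```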
for studies performed using the complete system ( studies in section [ sub : qdc ] onwards ) , dedicated pedestal - only anddata runs were recorded at regular intervals during data collection .the identification of the struck fibre was made by first requiring events in which the five layers outwith the trigger conditions registered a signal , in at least one channel , with a qdc signal 3 ( from a gaussian fit to the pedestal - only data ) above the pedestal mean .the pixel ( or pixel cluster ) coupled to the struck fibre was identified by selecting the highest pedestal - subtracted , gain - normalised qdc signal per pmt .this yielded two potential fibres per detection layer as a consequence of the fibre multiplexing schemes .the eight struck fibres per event , and therefore the four space points , were reconstructed via the likelihood - based demultiplexing algorithm described previously . prior to data collection on the full prototype system ,each of the eight hamamatsu h8500 mapmts to be used required detailed characterisation to assess the anticipated effects from cross - talk , clustering and local gain variations across the pixelated surface .results from independent , detailed studies on the cross - talk and clustering characteristics of the h8500 pmt in ref . by montgomery _et al._revealed these effects to be small .observations made during data collection consolidated these findings , revealing the influence of cross - talk on the surface of the pmt to be small in relation to the desired signal .typical cluster sizes of 2 to 3 pixels , and mean cluster multiplicities of around 1.5 per pmt were observed .although coupled on diagonally - opposite corners of the pixel and not the centre , the fibres ( with collars ) only occupied 23% of the active area of the pixel .this ensured that the majority of the signal was deposited within a single , distinct pixel .the effect of cross - talk resulting from light transmission to neighbouring fibres was also investigated and found to have a negligible influence .there was also no significant contribution arising from any dispersive effects introduced by the protective silicone pad applied to the face of each pmt .as a consequence of this small cross - talk effect , the pixel corresponding to the qdc channel with the highest gain - normalised , pedestal - subtracted signal per pmt was considered as being coupled to the struck fibre . despite this , in approximately 5% of events , there existed a secondary ( or in even fewer cases , a tertiary ) cluster with a larger integrated signal than the cluster containing the pixel with the highest individual signal .assignment of the corresponding fibres in such instances ( in the vast majority of cases this equated to a single layer ) manifested clear discrepancies in alignment data which were not observed when the pixel with the highest signal was selected .these clusters were subsequently attributed to noise within the system .it was concluded from these observations , that only the pixel with the highest signal per pmt should be considered for selection .this was the case for all studies presented in this work .negligence of any local gain variation effects across the pmt surface could , if their extent was large enough , result in the misidentification of the struck fibres through the selection of the pmt pixel with the highest signal . 
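the per-PMT hit selection described above (pedestal subtraction, gain normalisation, a 3-sigma threshold and selection of the highest remaining channel) can be summarised as a short sketch; whether the 3-sigma cut is applied to the raw or to the gain-normalised signal is not stated explicitly in the text, so the choice below is an assumption.

```python
import numpy as np

def select_hit_pixel(qdc, ped_mean, ped_sigma, gain, n_sigma=3.0):
    """qdc, ped_mean, ped_sigma, gain: 1D arrays over the 64 channels of one PMT.
    Returns (pixel index, corrected signal) or None if no channel exceeds the
    pedestal mean by n_sigma.  Mirrors the selection described in the text."""
    corrected = (qdc - ped_mean) / gain            # pedestal-subtract, gain-normalise
    above = qdc > ped_mean + n_sigma * ped_sigma   # 3-sigma requirement (applied to raw signal here)
    if not np.any(above):
        return None
    corrected = np.where(above, corrected, -np.inf)
    pixel = int(np.argmax(corrected))
    return pixel, float(corrected[pixel])
```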
to minimise the risk of this occurring , low and high resolution scans of each pmt were performed to determine the relative variations in gain across their surfaces .these were performed using a laser tuned to the expected light output level of the scintillating fibres via a combination of filters .scans were undertaken at the operating voltages of the pmts , with the low resolution scans stepping from the centre of each pmt pixel to the next , and the high resolution studies employing a step size of 1 mm .the efficiency and uniformity of the dynode-12 signal were also studied during these laser scans to assess their potential use within the experimental trigger condition .results from the low resolution laser scans of one of the pmts used in the prototype system are shown in figure [ fig : gain ] .these show the relative pixel gains normalised to the highest gain per pmt . in rare , extreme cases ,pixels are shown to drop to 40% of the maximum gain of the pmt , with uniformities ( defined here as the ratio of the maximum pixel gain to the mean gain ) ranging from 1.2 to 1.6 across the eight pmts . with only a single pmt per detection layer on which to identify the hit pixel , it was not necessary to extend this localised normalisation across all eight pmts .the light transmission through the fibres was studied using a single detector module prior to assembly in the vertical support stand .small scintillator paddles , covering an area of approximately 50mmx50 mm , were placed above and below the active area to provide an external muon trigger which removed any internal thresholds from biasing the efficiency measurement .the entire length of the active area was scanned with selected positions along the remaining length of fibres to assess the extent of the anticipated degradation .all measurements are presented normalised to the value ( this is set to 100 ) obtained at the face of the pmt .these studies confirmed the expected decrease in signal strength at increasing distances from the pmt due to the light transmission properties of the fibre , loss of light through the open end of the fibre , and the bending of the fibres at the pmt .figure [ fig : fibretest ] reveals the extent of this degradation in the observed signal in the region of this bending .the signal remains relatively constant across the active area of the module at a value of approximately 35% that of the signal observed at the pmt face .the corresponding detection efficiencies ( defined in section [ sub : eff ] ) also remain constant at values in the region of 75% except in the final efficiency measurement .it was subsequently discovered that the scintillator paddles did not fully cover the active area and as such , a larger uncertainty was attributed to account for this .all results were obtained using the same 3 selection criteria for the signal size above pedestal , and confirmed that the degraded signals observed were still sufficiently large enough to avoid failing this requirement .values observed from pedestal - only data for all 512 qdc channels prior to ( red filled circles ) and after ( blue open circles ) optimisation of the detector and readout setup . in particular , channels in the qdc channel ranges [ 463 , 478 ] and [ 479 , 494 ] corresponded to two 16-channel ribbon cables which were later found to be severely kinked .this induced noise on the corresponding channels . in both sets of data , the characteristic parabolic distribution in valuesis observed across the cable . 
here , the bold ( dashed ) lines separate regions belonging to the same detector layer ( readout cable ) . ] throughout the commissioning process of the detector system , minor adjustments and improvements were made to the setup in preparation for image reconstruction studies .dedicated pedestal - only runs were recorded at regular intervals to allow stability assessment .the widths of the pedestal distribution in each qdc channel , characterised by the ( alternatively , in figure [ fig : sigmas ] ) from an applied gaussian fit , were extracted . across all , the mean was observed to be less than one qdc channel .however , in channels read out by connectors at the edges of the ribbon cables , characteristically - broader pedestals were found with values in the region of two channels .this is an understood effect relating to the placement of the grounding pins .the values for each of the 512 read out channels are shown in figure [ fig : sigmas ] for the optimised setup in comparison with initial pedestal data collected using what later transpired to be damaged ribbon cables which introduced noise into the system . in each of the five fibre layers that did not form part of the trigger scheme ,the detection efficiencies were determined as the percentage of recorded events in which the layer registered at least one qdc signal more than 3 ( from a gaussian fit to the pedestal - only data ) above the pedestal mean .the detected efficiencies varied between 70% and 86% across the five layers with a stable mean operating efficiency of 34% shown in figure [ fig : stability ] .this value is defined as the percentage of events within the 3-fold trigger which contained at least one qdc signal above pedestal in each of the other detection layers and indicated a mean detection efficiency of 80% per layer .the deficiencies observed relate to fibres which were damaged during the construction or assembly processes and/or dead or noisy channels .these channels included those qdc channels connected at the extremes of the ribbon cables whose pedestals exhibited a characteristically - broader distribution . 
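a minimal sketch of the per-layer efficiency defined above, assuming a boolean "layer responded" flag (at least one channel 3 sigma above pedestal) has already been derived per event. the final line is a simple consistency check: with roughly 80% efficiency per layer, the chance that all five non-trigger layers respond in the same event is about 0.8 to the fifth power, close to the 34% operating efficiency quoted.

```python
def layer_efficiency(events, layer):
    """events: iterable of dicts mapping layer name -> True/False
    ("at least one channel > 3 sigma above pedestal" in that layer).
    Returns the efficiency of `layer` over all triggered events."""
    n_total = n_hit = 0
    for event in events:
        n_total += 1
        if event.get(layer, False):
            n_hit += 1
    return n_hit / n_total if n_total else 0.0

# Consistency check against the quoted ~34% operating efficiency:
print(0.8 ** 5)   # ~0.33
```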
here, the 3 upper threshold imposed could restrict the detection of muons which deposited low signals within the fibre .this effect was observed in figure [ fig : sigmas ] which compares the values for each channel before and after optimisation of the setup .a further source of signal loss could have been a potentially weak signal originating from the muon interacting with a small volume of scintillating material close to the edge of the fibre .this 80% efficiency could therefore translate to the detection of a muon signal only when it had interacted with the central 80% ( by diameter _ i.e._1.55 mm ) of the active circular cross - section of the fibre .however , this assumption can not be verified with current experimental data .further studies were performed by changing the innermost detection layer which was included in the trigger scheme , thus allowing the efficiency of the layer regularly used for triggering to be determined .this allowed the efficiency of this layer to be determined , and it was found to be consistent with the range identified previously .position - sensitive studies , similar to those shown in figure [ fig : fibretest ] , were performed using small external scintillator paddles on the outermost detector layers which confirmed these layers consistency with the six internal layers .t , the time between successive triggers using the three - fold condition described in the text .the black line through the data points represents a second - order polynomial fit to the data .a stable rate of 0.15hz was observed over the entire data collection period , corresponding to a mean t of approximately 6.57s . ]x and axes arise from the small degree of misidentification of the hit fibre from the demultiplexing algorithm and/or from selection of a noise signal in one or more of the pmts.,title="fig : " ] x and axes arise from the small degree of misidentification of the hit fibre from the demultiplexing algorithm and/or from selection of a noise signal in one or more of the pmts.,title="fig : " ] throughout data collection , the prototype mt system exhibited high levels of stability which was observed in various facets of the data including the detection rates and efficiencies shown in figure [ fig : stability ] for a one month period .a steady 3-fold trigger rate of 0.15hz was recorded which equated to approximately 6000 candidate tracking events ( _ i.e._with a signal in every layer ) per day .the interaction rate of cosmic muons with matter follows a poisson - distribution .thus the time of arrival between two events t exhibits an exponential decay . from data collected ,the semi - logarithmic distribution of t is shown in figure [ fig : rates ] with a mean t of 6.57s .nevertheless , the time between two events might exceed several tens of seconds .data were collected in 2012 with the assay volume empty of material to establish the extent of any misalignment between detector modules . 
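the trigger-rate and waiting-time figures quoted above can be cross-checked directly: for poisson arrivals the waiting time between triggers is exponentially distributed with mean equal to the inverse rate. the short sketch below performs this check; the fitted mean of 6.57 s is close to the 6.67 s expected from a 0.15 hz rate.

```python
rate_hz = 0.15                      # quoted three-fold trigger rate
mean_dt = 1.0 / rate_hz             # exponential waiting time => mean = 1 / rate
triggers_per_day = rate_hz * 86400

print(f"mean dt      : {mean_dt:.2f} s   (quoted fit: ~6.57 s)")
print(f"triggers/day : {triggers_per_day:.0f}")
# Candidate tracking events additionally require a signal in every layer,
# so the ~6000 per day quoted in the text is a subset of these triggers.
```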
despite the precision fabrication of the aluminium profile frame and the individual detector modules , minor misalignmentswere observed from the analysed data .these were determined via an iterative process of constructing and projecting the track from one module pair to the four other detector layers and minimising the residuals and , defined as the difference between the measured and projected positions in each plane .once the misalignments in these layers were accounted for , the process was repeated using the track formed by these modules .this process continued until the residuals in all eight layers were no greater than the fibre pitch .the largest misalignment observed was approximately 5 mm in both the x and y planes of one module .this is shown before and after alignment correction for this particular module in figure [ fig : alignment ] .the small misalignments identified revealed the internal alignment of the fibre planes within the module to be negligible in relation to the external alignment of the modules themselves .the small fraction of misidentified fibres arise from the demultiplexing algorithm and from noise hits which manifest as tails along the and axes in figure [ fig : alignment ] . with a small scattering angle assumption in place for data taken with material within the assay volume , placing a restriction on these tailsfurther reduced the influence from fibre misidentification .-direction ) surrounding a 12 mm diameter stainless steel cylindrical rod and a 30 mm long , 20 mm diameter cylinder of machined uranium metal ( right _ i.e._the positive -direction ) suspended below . the aluminium profile frame , ribbon cables , tedlar^^ and locating pin coverings described in the text are also observed . ]in preparation for image reconstruction studies , data collection commenced in late-2012 with a test configuration of objects placed within the assay volume .this setup is shown in figure [ fig : testsetup ] .this consisted of a stainless - steel cylindrical bar measuring 12 mm in diameter positioned through a 40 mm cube of lead .a machined cylinder of uranium metal , 20 mm in diameter and 30 mm in length , was suspended beneath the bar .this bar was fixed to the aluminium profile frame along the y - direction , centred on z=0 mm and x= mm in the coordinate frame used throughout this work .an image reconstruction algorithm based on the probabilistic maximum likelihood expectation maximisation ( mlem ) method introduced in ref . by schultz _et al._was further developed for this application .the assay volume was chosen as a cube of dimension 300 mm in the central region of the detector assembly . prior to the imaging analysis, this volume was divided into small volume elements ( alternatively , voxels ) .voxel dimensions are predetermined by the analyser and are at the heart of the trade - off between image resolution and measurement time ; smaller voxels provide potentially greater definition up to the resolution of the detector system but require correspondingly greater measurement times .cubic voxels , 10 mm in dimension , were used throughout this work as simulation showed that this size provided the best compromise between resolution and data collection time . 
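a minimal sketch of the voxelisation used as input to the reconstruction described above: a 300 mm cube divided into 10 mm cubic voxels, here assumed to be centred on the coordinate origin (the actual placement of the assay cube in the detector frame is an assumption).

```python
import numpy as np

VOXEL_SIZE_MM = 10.0
ASSAY_HALF_MM = 150.0                                 # 300 mm cube, centring assumed
N_PER_AXIS = int(2 * ASSAY_HALF_MM / VOXEL_SIZE_MM)   # 30 voxels per axis

def voxel_index(point_mm):
    """Map a 3D point (mm) inside the assay cube to an (ix, iy, iz) voxel index,
    or None if the point lies outside the imaged volume."""
    idx = np.floor((np.asarray(point_mm) + ASSAY_HALF_MM) / VOXEL_SIZE_MM).astype(int)
    if np.any(idx < 0) or np.any(idx >= N_PER_AXIS):
        return None
    return tuple(idx)

print(voxel_index([-35.0, 12.0, 4.0]))   # (11, 16, 15)
```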
for each muon ,the incoming and scattered vectors were first projected to their point of closest approach ( poca ) .the mlem method then requires the calculation of a normalised scattering probability in every voxel that the muon was determined to have passed through .after many muons , the most likely scattering density in each voxel was determined .this was used as the imaging metric in the final results presented in this work .first image reconstruction results from experimental data taken using this prototype detector system are presented in figure [ fig : image ] .shown are two 10 mm slices ( or tomograms ) , one through the xy - plane ( _ i.e._parallel with the detection planes ) which encompassed the stainless - steel bar , and the other in the yz - plane .these images have been reconstructed from several weeks of exposure to cosmic - ray muons .sensitivity to atomic number and discrimination between the values of the stainless - steel bar , the two high- material blocks , and the surrounding air is observed .the non - uniformity of the values of the reconstructed high- objects is attributed to a combination of possible factors ; a spread in values for high - density materials as a result of increased coulomb scattering ( here , the poca input to the mlem method reconstructed only the average scatter ) and non - uniform voxel coverage of the objects ( _ e.g._for the uranium object , the central voxel with a reconstructed in excess of 70mrad was assumed to fully occupy the uranium , whereas the surrounding voxels occupy a combination of uranium and air which acted to dilute the reconstructed value ) . in the presented tomograms , the high image - resolution of this detector system in the xy - planeis observed .smearing and stretching of the image in the z - direction is also noted .this is an inherent effect associated with the reconstruction of the scattering location in the principle axis of two near - parallel tracks , and is artificially exaggerated in this work by the small angular acceptance of the prototype system . studies which address this issue , and dedicated simulation studies using this detector geometry , are the subject of ongoing work and as such will not be described here , other than to highlight the consistency with expectation .this effect was observed with images reconstructed from dedicated simulation studies of this test configuration of objects which also provided excellent agreement with the image shown in figure [ fig : image ] .the design , fabrication and assembly processes undertaken to develop a modular tracker system for use in the cosmic - ray muon tomography of legacy nuclear waste containers has been presented . 
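the point of closest approach (poca) between the incoming and outgoing tracks is the standard closest-approach construction between two 3D lines; a self-contained sketch is given below, returning the midpoint of the closest-approach segment together with the opening (scattering) angle. this is the generic geometric calculation, not the collaboration's own implementation.

```python
import numpy as np

def poca(p_in, d_in, p_out, d_out):
    """Point of closest approach between two lines p + t*d (d need not be unit).
    Returns the midpoint of the closest-approach segment and the opening angle (rad)."""
    p_in, d_in = np.asarray(p_in, float), np.asarray(d_in, float)
    p_out, d_out = np.asarray(p_out, float), np.asarray(d_out, float)
    w0 = p_in - p_out
    a, b, c = d_in @ d_in, d_in @ d_out, d_out @ d_out
    d, e = d_in @ w0, d_out @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:          # near-parallel tracks: anchor s at 0
        s, t = 0.0, e / c
    else:
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
    closest_in, closest_out = p_in + s * d_in, p_out + t * d_out
    cos_theta = np.clip(b / np.sqrt(a * c), -1.0, 1.0)
    return 0.5 * (closest_in + closest_out), float(np.arccos(cos_theta))
```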
the system comprised four modules with orthogonal layers of 2mm - pitch round scintillating fibres .these were supported within a low- structure fabricated using thin sheets of rohacell^^ and aluminium , and covered with a lightproof tedlar^^ film and black nylon tubing .the fibre signals were read out to qdc modules and hamamatsu h8500 mapmts with two fibres coupled to a single pixel , and one pmt ( _ i.e._128 fibres ) per detection layer .the modules were supported in an adjustable vertical support stand made from aluminium profile .performance studies revealed a high level of stability , both structurally and in relation to the data collected over prolonged periods of time .high muon detection efficiencies of up to 86% per layer were recorded with low levels ( less than 5% ) of misidentification and misalignment ( less than 5 mm in the worst case prior to correction in software ) .first images reconstructed from data collected with a test configuration of materials within the assay volume verified the high- detection capabilities of this system and revealed promising low , medium and high- discrimination which will be investigated further in future work .all these studies will directly influence the design and construction of a large - scale detector system which will assay an industrial waste container in preparation for the industrial deployment of this technology .the authors gratefully acknowledge sellafield ltd . , on behalf of the uk nuclear decommissioning authority , for their continued funding of this project . 00 e. p. george , commonwealth engineer , 455 ( 1955 ) l. w. alvarez , science * 167 * , 832 ( 1970 ) h. k. m ._ , nuclear instruments & methods in physics research a * 555 * , 164 ( 2005 ) g. ambrosi_ , nuclear instruments & methods in physics research a * 628 * , 120 ( 2011 ) k. borozdin _ et al . _ ,nature * 422 * , 277 ( 2003 ) k. gnanvo _et al . _ , ieee nuclear science symposium conference record , 1278 ( 2008 )l. schultz __ , nuclear instruments & methods in physics research a * 519 * , 687 ( 2004 ) s. agostinelli _ et al . _ , nuclear instruments & methods in physics research a * 506 * , 250 ( 2003 ) a. clarkson_ et al . _ , arxiv:1309.3400 ( 2013 ) r. montgomery __ , nuclear instruments & methods in physics research a * 695 * , 326 ( 2012 ) l. schultz_ et al . _ ,ieee transactions on image processing * 16 * , 1985 ( 2007 ) d. f. mahon _ et al ._ , nuclear instruments & methods in physics research a ( 2013 ) , http://dx.doi.org/10.1016/j.nima.2013.05.119
tomographic imaging techniques using the coulomb scattering of cosmic - ray muons are increasingly being exploited for the non - destructive assay of shielded containers in a wide range of applications . one such application is the characterisation of legacy nuclear waste materials stored within industrial containers . the design , assembly and performance of a prototype muon tomography system developed for this purpose are detailed in this work . this muon tracker comprises four detection modules , each containing orthogonal layers of saint - gobain bcf-10 2mm - pitch plastic scintillating fibres . identification of the two struck fibres per module allows the reconstruction of a space point , and subsequently , the incoming and coulomb - scattered muon trajectories . these allow the container content , with respect to the atomic number of the scattering material , to be determined through reconstruction of the scattering location and magnitude . on each detection layer , the light emitted by the fibre is detected by a single hamamatsu h8500 mapmt with two fibres coupled to each pixel via dedicated pairing schemes developed to ensure the identification of the struck fibre . the pmt signals are read out to qdcs and interpreted via custom data acquisition and analysis software . the design and assembly of the detector system are detailed and presented alongside results from performance studies with data collected after construction . these results reveal high stability during extended collection periods with detection efficiencies in the region of 80% per layer . minor misalignments of millimetre order have been identified and corrected in software . a first image reconstructed from a test configuration of materials has been obtained using software based on the maximum likelihood expectation maximisation algorithm . the results highlight the high spatial resolution provided by the detector system . clear discrimination between the low , medium and high- materials assayed is also observed . muon tomography , scintillator detectors , nuclear waste 96.50.s- , 29.40.mc , 89.20.bb