this paper is concerned with the problem of estimating stationary ergodic processes with finite alphabet from a sample , an observed length realization of the process , with the -distance being considered between the process and the estimated one .the -distance was introduced by ornstein and became one of the most widely used metrics over stationary processes .two stationary processes are close in -distance if there is a joint distribution whose marginals are the distributions of the processes such that the marginal processes are close with high probability ( see section [ secappl ] for the formal definition ) .the class of ergodic processes is -closed and entropy is -continuous , which properties do not hold for the weak topology .ornstein and weiss proved that for stationary processes isomorphic to i.i.d .processes , the empirical distribution of the -length blocks is a strongly consistent estimator of the -length parts of the process in -distance if and only if , where denotes the entropy of the process .csiszr and talata estimated the -length part of a stationary ergodic process by a markov process of order .the transition probabilities of this markov estimator process are the empirical conditional probabilities , and the order does not depend on the sample .they obtained a rate of convergence of the markov estimator to the process in -distance , which consists of two terms .the first one is the bias due to the error of the approximation of the process by a markov chain .the second term is the variation due to the error of the estimation of the parameters of the markov chain from a sample . in this paper , the order of the markov estimator process is estimated from the sample .for the order estimation , penalized maximum likelihood ( pml ) with general penalty term is used .the resulted markov estimator process finds a tradeoff between the bias and the variation as it uses shorter memory for faster memory decays of the process .if the process is a markov chain , the pml order estimation recovers its order asymptotically with a wide range of penalty terms. not only an asymptotic rate of convergence result is obtained but also an explicit bound on the probability that the -distance of the above markov estimator from the process is greater than .it is assumed that the process is non - null , that is , the conditional probabilities of the symbols given the pasts are separated from zero , and that the continuity rate of the process is summable and the restricted continuity rate is uniformly convergent .these conditions are usually assumed in this area .the summability of the continuity rate implies that the process is isomorphic to an i.i.d .process .the above result on statistical estimation of stationary ergodic processes requires a non - asymptotic analysis of the markov order estimation for not necessarily finite memory processes . in this paper, this problem is also investigated in more generality : under milder conditions than it would be needed for the above bound and not only for the pml method .a popular approach to the markov order estimation is the minimum description length ( mdl ) principle .this method evaluates an information criterion for each candidate order based on the sample and the estimator takes the order for which the value is minimal . 
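As a minimal illustration of this selection rule (a sketch of our own, not code from the paper; the function and variable names are ours), the estimator can be written as a loop over candidate orders that evaluates the chosen information criterion and returns the smallest minimizer:

```python
# Generic information-criterion-based Markov order selection (illustrative
# sketch only).  `criterion` is any map (sample, order) -> score, e.g. a PML,
# NML or KT code length; ties are broken towards the smaller order.
def estimate_order(sample, criterion, max_order):
    best_k, best_score = 0, float("inf")
    for k in range(max_order + 1):
        score = criterion(sample, k)
        if score < best_score:      # strict '<' keeps the smallest minimizer
            best_k, best_score = k, score
    return best_k
```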
the normalized maximum likelihood ( nml ) and the krichevsky trofimov ( kt ) code lengths are natural information criteria because the former minimizes the worst case maximum redundancy for the model class of -order markov chains , while the latter does so , up to an additive constant , with the average redundancy . the bayesian information criterion ( bic ) can be regarded as an approximation of the nml and kt code lengths .the pml is a generalization of bic ; special settings of the penalty term yield the bic and other well - known information criteria , such as the akaike information criterion ( aic ) .there are other methods for markov order estimation , see and references there , and the problem can also be formulated in the setting of hypothesis testing .if a process is a markov chain , the nml and kt markov order estimators are strongly consistent if the candidate orders have an upper bound .without such a bound , they fail to be consistent .the bic markov order estimator is strongly consistent without any bound on the candidate orders .if a process has infinite memory , the markov order estimators are expected to tend to infinity as .the concept of context trees of arbitrary stationary ergodic processes is a model more complex than markov chains .recent results in that area imply that this expectation holds true for the bic and kt markov order estimators but they provide no information about the asymptotics of the divergence . in this paper ,the divergence of the pml , nml and kt markov order estimators for not necessarily finite memory processes is investigated .not only asymptotic rates of divergence are obtained but also explicit bounds on the probability that the estimators are greater and less , respectively , than some order . instead of the usual assumption of non - nullness , it is assumed only that the conditional probabilities of one of the symbols given the pasts are separated from zero .this property is called weakly non - nullness and is `` noticeably weaker '' than non - nullness .first , the process is assumed to be weakly non - null and -summable .the -summability is a condition weaker than the summability of the continuity rate . under these conditions ,a bound on the probability that the estimators are greater than some order is obtained , that yields an upper bound on the estimated order eventually almost surely as . then , a bound on the probability that the estimators are less than some order is obtained assuming that the process is weakly non - null and the decay of its continuity rates is in some exponential range .this bound implies that the estimators satisfying the conditions attain a divergence rate eventually almost surely as , where the coefficient depends on the range of the continuity rates . the class of processes with exponentially decaying continuity rate is considered in various problems . fast divergence rate of the estimators are expected only for a certain range of continuity rates .clearly , the estimators do not have a fast divergence rate if the memory decay of the process is too fast . on the other hand , too slow memory decayis also not favored to a fast divergence rate because then the empirical probabilities do not necessarily converge to the true probabilities . 
to provide additional insight into the asymptotics of markov order estimators ,the notion of consistent markov order estimation is generalized for infinite memory processes .a markov order estimator is compared to its oracle version , which is calculated based on the true distribution of the process instead of the empirical distribution .the oracle concept is used in various problems , see , for example , .if the decay of the continuity rate of the process is faster than exponential , the ratio of the pml markov order estimator with sufficiently large penalty term to its oracle version is shown to converge to in probability .the structure of the paper is the following . in section [ secnotation ] ,notation and definitions are introduced for stationary ergodic processes with finite alphabets . in section [ secic ] , the pml , nml and kt information criteria are introduced .section [ secmain ] contains the results on divergence of the information - criterion based markov order estimators . in section [ secappl ] ,the problem of estimating stationary ergodic process in -distance is formulated and our results are presented .the results require bounds on empirical entropies , which are stated in section [ secmain ] and are proved in section [ secent ] .section [ secproof ] contains the proof of the divergence results , and section [ secproofappl ] the proof of the process estimation results .let be a stationary ergodic stochastic process with finite alphabet .we write and for .if , is the empty string . for two strings and , denotes their concatenation .write and , if , for , .the process is called _ weakly non - null _ if letting we say that the process is _-summable _ if the _ continuity rates _ of the process are and obviously , .if , then the process is said to have _ summable continuity rate_. [ remgammaeq ] since for any and , , the above definition of continuity rate is equivalent to [ remgammaalpha ] the process is -summable if it has summable continuity rate because the -order _ entropy _ of the process is and the -order _ conditional entropy _ is logarithms are to the base .it is well known for stationary processes that the conditional entropy is a non - negative decreasing function of , therefore its limit exists as .the _ entropy rate _ of the process is note that for any .the process is a _ markov chain _ of order if for each and where is called initial distribution and is called transition probability matrix .the case corresponds to i.i.d .processes . the process is of _ infinite memory _ if it is not a markov chain for any order . for infinite memory processes , for any . 
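To fix ideas, the block-entropy quantities defined above can be computed as in the following small sketch (ours, not the paper's code), with logarithms taken to base 2 and an indexing convention that may differ from the paper's in minor details:

```python
from math import log2

def block_entropy(p_k):
    """k-order entropy H_k = -sum_{a_1^k} P(a_1^k) log P(a_1^k), in bits.
    p_k maps length-k strings (tuples over the alphabet) to probabilities."""
    return -sum(p * log2(p) for p in p_k.values() if p > 0.0)

def conditional_entropy(p_k, p_km1):
    """Conditional entropy of the next symbol given the preceding symbols,
    computed as the difference of consecutive block entropies."""
    return block_entropy(p_k) - block_entropy(p_km1)
```

For a stationary process these conditional entropies are non-negative, non-increasing in the order, and converge to the entropy rate, as noted above.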
in this paper, we consider statistical estimates based on a sample , an -length part of the process .let denote the number of occurrences of the string in the sample for , the empirical probability of the string is and the empirical conditional probability of given is for , .the -order _ empirical entropy _ is and the -order _ empirical conditional entropy _ is the likelihood of the sample with respect to a -order markov chain model of the process with some transition probability matrix , by ( [ eqmcdef ] ) , is for , the _ maximum likelihood _ is the maximum in of the second factor above , which equals note that .an information criterion assigns a score to each hypothetical model ( here , markov chain order ) based on a sample , and the estimator will be that model whose score is minimal .[ defic ] for an information criterion the markov order estimator is here , the number of candidate markov chain orders based on a sample is finite , therefore the minimum is attained . if the minimizer is not unique , the smallest one will be taken as .we consider three , the most frequently used information criteria , namely , the bayesian information criterion and its generalization , the family of penalized maximum likelihood ( pml ) , the normalized maximum likelihood ( nml ) code length , and the krichevsky trofimov ( kt ) code length . [ defpml ] given a penalty function , a non - decreasing function of the sample size , for a candidate order the pml criterion is the -order markov chain model of the process is described by the conditional probabilities , and of these are free parameters . the second term of the pml criterion , which is proportional to the number of free parameters of the -order markov chain model , is increasing in . the first term , for a given sample ,is known to be decreasing in .hence , minimizing the criterion yields a tradeoff between the goodness of fit of the sample to the model and the complexity of the model . if , the pml criterion is called _ bayesian information criterion _ ( bic ) , and if , _ akaike information criterion _ ( aic ) .the minimum description length ( mdl ) principle minimizes the length of a code of the sample tailored to the model class . strictly speaking, the information criterion would have an additive term , the length of a code of the structure parameter .this additional term , the length of a code of , is omitted since it does not affect the results .[ defnml ] for a candidate order , the nml criterion is where is the -order nml - probability of .writing the nml criterion can be regarded as a pml criterion in a broader sense .[ defkt ] for a candidate order , the kt criterion is where } { ( n_{n-1}(a_1^k ) - 1 + { |a|}/{2 } ) ( n_{n-1}(a_1^k ) - 2 + { |a|}/{2 } ) \cdots ( { |a|}/{2 } ) } \ ] ] is the -order kt - probability of .( for , . 
)the -order kt - probability of the sample is equal to a mixture of the probabilities of the sample with respect to all -order markov chains with uniform initial distribution , where the mixture distribution over the transition probability matrices is independent for the rows , , and has dirichlet distribution in the rows .hence , the kt markov order estimator can be regarded as a bayes ( maximum a posteriori ) estimator .the -order nml and kt coding distributions are nearly optimal among the -order markov chains , in the sense that the code lengths and minimize the worst case maximum and average , respectively , redundancy for this class ( up to an additive constant in the latter case ) .the bic markov order estimator is strongly consistent , that is , if the process is a markov chain of order , then eventually almost surely as .`` eventually almost surely '' means that with probability , there exists a threshold ( depending on the infinite realization ) such that the claim holds for all . increasing the penalty term , up to , where is a sufficiently small constant , does not affect the strong consistency .it is not known whether or not the strong consistency holds for smaller penalty terms but it is known that if the candidate orders are upper bounded by , where is a sufficiently small constant , that is , the estimator minimizes the pml over the orders only , then still provides the strong consistency , where is a sufficiently large constant .the nml and kt markov order estimators fail to be strongly consistent because for i.i.d .processes with uniform distribution , they converge to infinity at a rate . however , if the candidate orders are upper bounded by , the strong consistency holds true .if the process is of infinite memory , the bic and kt markov order estimators diverge to infinity . in this section ,results on the divergence rate of the pml , nml and kt markov order estimators are presented .bounds on the probability that the estimators are greater and less , respectively , than some order are obtained , with explicit constants .the first implies that under mild conditions , the estimators do not exceed the rate eventually almost surely as .the second bound implies that the rate is attained eventually almost surely as for the processes whose continuity rates decay in some exponential range . at the end of the section ,the notion of consistent markov order estimation is generalized for infinite memory processes .if the continuity rates decay faster than exponential , the pml markov order estimator is shown to be consistent with the oracle - type order estimate .the proofs use bounds on the simultaneous convergence of empirical entropies of orders in an increasing set .these bounds are obtained for finite sample sizes with explicit constants under mild conditions so they are of independent interest and are also presented here .[ thentsmp ] for any weakly non - null and -summable stationary ergodic process , for any and where are constants depending only on the distribution of the process . the proof including the explicit expression of the constants is in section [ secent ] .the convergence of and , , to the entropy rate of the process could be investigated using theorem [ thentsmp ]. however , good estimates of the entropy rate are known from the theory of universal codes . 
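The empirical quantities and two of the criteria defined above can be sketched as follows. This is our own illustration rather than the paper's code; boundary conventions (how the first k symbols are handled, whether sums run to n or n-1, the base of the logarithm and the constants in the penalty) follow common usage and may differ from the paper's definitions in small details.

```python
from collections import defaultdict
from math import log2

def context_counts(x, k):
    """Counts N(s, a) of symbol a following each length-k context s in x."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(k, len(x)):
        counts[tuple(x[i - k:i])][x[i]] += 1
    return counts

def empirical_cond_entropy(x, k):
    """k-order empirical conditional entropy, in bits per symbol."""
    counts = context_counts(x, k)
    n_eff = len(x) - k
    h = 0.0
    for succ in counts.values():
        n_ctx = sum(succ.values())
        for c in succ.values():
            h -= (c / n_eff) * log2(c / n_ctx)
    return h

def pml_criterion(x, k, alphabet_size, pen):
    """Penalized maximum likelihood score for order k:
    -log2(maximum likelihood) + (|A|-1)|A|^k * pen(n).
    pen(n) = 0.5*log2(n) corresponds to BIC and pen(n) = 1 to AIC,
    up to the paper's base and constant conventions."""
    n = len(x)
    neg_log_ml = (n - k) * empirical_cond_entropy(x, k)
    return neg_log_ml + (alphabet_size - 1) * alphabet_size ** k * pen(n)

def kt_code_length(x, k, alphabet):
    """Krichevsky-Trofimov code length -log2 P_KT,k(x), computed sequentially:
    each symbol is predicted with (count + 1/2) / (total + |A|/2) within its
    length-k context (the first k symbols are skipped in this sketch)."""
    counts = defaultdict(lambda: defaultdict(int))
    codelen = 0.0
    for i in range(k, len(x)):
        ctx, a = tuple(x[i - k:i]), x[i]
        tot = sum(counts[ctx].values())
        codelen -= log2((counts[ctx][a] + 0.5) / (tot + len(alphabet) / 2.0))
        counts[ctx][a] += 1
    return codelen
```

The KT code lengths computed this way are also the building blocks of the universal-code entropy estimates discussed next.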
in particular , mixtures of the kt distributions over all possible orders provide universal codes in the class of all stationary ergodic processes , therefore the corresponding code length is a suitable estimate of the entropy rate . an application of the borel cantelli lemma in theorem [ thentsmp ] yields the following asymptotic result . for any weakly non - null and -summable stationary ergodic process , for any simultaneously for all , eventually almost surely as . by , under much stronger conditions on the process, the convergence rate of and to is for some fixed .hence , the rate in theorem [ thentsmp ] can not be improved significantly .the first divergence result of the paper is the following .[ thpmllarge ] for any weakly non - null and -summable stationary ergodic process there exist depending only on the distribution of the process , such that for the markov order estimator for any sequence , , where ic is either the pml with arbitrary or the nml or the kt criterion . the proof including the explicit expression of the constants is in section [ secproof ] .an application of the borel cantelli lemma in theorem [ thpmllarge ] yields the following asymptotic result .[ copmllarge ] for any weakly non - null and -summable stationary ergodic process there exists a constant such that for the markov order estimator eventually almost surely as , where ic is either the pml with arbitrary or the nml or the kt criterion .the second divergence result is the following .[ thpmllogsmp ] for any weakly non - null stationary ergodic process with continuity rates and for some ( ) , if the markov order estimator satisfies that if , where ic is either the pml with or the nml or the kt criterion , and , are constants depending only on the distribution of the process and . the proof including the explicit expression of the constants is in section [ secproof ] .an application of the borel cantelli lemma in theorem [ thpmllogsmp ] yields the following asymptotic result .[ corpmllog ] for any weakly non - null stationary ergodic process with continuity rates and for some with , the markov order estimator satisfies that eventually almost surely as , where ic is either the pml with or the nml or the kt criterion , and is a constant depending only on the distribution of the process .the section concludes with the consistency result .[ defopml ] for a candidate order the oracle pml criterion is for markov chains of order , if is sufficiently large , with any .[ thoracle ] for any weakly non - null stationary ergodic process with the pml markov order estimator with , , is consistent in the sense that in probability as .the proof is in section [ secproof ] .in the results of this section , the divergence rate of markov order estimators will play a central role .the problem of statistical estimation of stationary ergodic processes by finite memory processes is considered , and the following distance is used .the per - letter hamming distance between two strings and is and the _ -distance _ between two random sequences and is defined by where the minimum is taken over all the joint distributions of and whose marginals are equal to the distributions of and .the process is estimated by a markov chain of order from the sample in the following way . 
the of a process based on the sample is the stationary markov chain , denoted by ] is estimated from the sample , using the pml criterion .the estimated order needs to be bounded to guarantee an accurate assessment of the memory decay of the process .[ deficr ] for an information criterion ic , the markov order estimator bounded by , , is the optimal order can be smaller than the upper bound if the memory decay of the process is sufficiently fast .define where and . since is a decreasing function, increases in but does not exceed .it is less than if vanishes sufficiently fast , and then the faster vanishes , the slower increases .the process estimation result of the paper is the following .[ thqminsmp ] for any non - null stationary ergodic process with summable continuity rate and uniformly convergent restricted continuity rate with parameters , , , and for any , the empirical markov estimator of the process with the order estimated by the bounded pml markov order estimator , , with satisfies ^n \bigr ) > \frac{\beta_2}{{p_{\mathrm{inf}}}^{2 } } \max\biggl\ { \bar{\gamma } \biggl ( \biggl\lfloor\frac{\eta}{\theta_2 } \log n \biggr \rfloor\biggr ) , n^{- ( 1 - 4\eta\log({|a|^4}/{{p_{\mathrm{inf } } } } ) ) /({4\theta_1 } ) } \biggr\ } + \frac{1}{n^{1/2-\mu_n } } \biggr ) \\ & & \quad\le\exp\bigl ( -c_4 4^ { \mu_n\log n - |\log{p_{\mathrm{inf}}}| ( k_n ( \eta\log n , \bar{\gamma } , { c}\operatorname { pen}(n)/{n } ) + { \log\log n}/{\log|a| } ) } \bigr ) \\ & & \qquad { } + \exp\biggl ( -\frac{c_5 \eta^3}{\log n } n^{\eta2\log|a| } \biggr ) + 2^{-s_n \operatorname{pen}(n)},\end{aligned}\ ] ] if , where is an arbitrary constant , and are constants depending only on the distribution of the process . the proof including the explicit expression of the constants is in section [ secproofappl ] . if the process is a markov chain of order , then the restricted continuity rate is uniformly convergent with parameters , arbitrary ( arbitrarily close to ) , , and if is sufficiently large , and an application of the borel cantelli lemma in theorem [ thqminsmp ] yields the following asymptotic result .[ cormain ] for any non - null stationary ergodic process with summable continuity rate and uniformly convergent restricted continuity rate with parameters , , , the empirical markov estimator of the process with the order estimated by the bounded pml markov order estimator with and satisfies ^n \bigr ) & \le&\frac{\beta_2}{{p_{\mathrm{inf}}}^{2 } } \max\biggl\ { \bar { \gamma } \biggl ( \biggl\lfloor\frac{r_n}{\theta_2 } \biggr\rfloor\biggr ) , n^{- { 1}/({4\theta_1 } ) } \biggr\}\\ & & { } + \frac { ( \log n)^{c_6 } } { \sqrt{n } } 2^ { \bar{\gamma } , { c}\operatorname{pen}(n)/{n } ) } \end{aligned}\ ] ] eventually almost surely as , where is an arbitrary constant , and are constants depending only on the distribution of the process . if the memory decay of the process is slow , the first term in the bound in corollary [ cormain ] , the bias , is essentially , and the second term , the variance , is maximal . 
if the memory decay is sufficiently fast , then the rate of the estimated order and the rate of are smaller , therefore the variance term is smaller , while the bias term is smaller as well .the result , however , shows the optimality of the pml markov order estimator in the sense that it selects an order which is small enough to allow the variance to decrease but large enough to keep the bias below a polynomial threshold .in this section , we consider the problem of simultaneous convergence of empirical entropies of orders in an increasing set , and prove the following theorem that formulates theorem [ thentsmp ] with explicit constants .[ thent ] for any weakly non - null and -summable stationary ergodic process , for any and first , we show the following bounds .[ prent ] for any weakly non - null and -summable stationary ergodic process , for any and , and fix . applying lemma [ lemsch ] in the to the distributions and , d_{{\mathrm{tv } } } ( \hat{p}_k , p_k ) , \ ] ] if . for any ,the right of ( [ eqsch ] ) can be written as \\ & & \quad\le\frac{k \log|a|}{\log { \mathrm{e } } } d_{{\mathrm{tv } } } ( \hat{p}_k , p_k ) + \frac{1}{{\mathrm{e } } } \frac{1+\nu}{\nu } d_{{\mathrm{tv}}}^{{1}/({1+\nu } ) } ( \hat{p}_k , p_k ) , \nonumber\end{aligned}\ ] ] where we used the bound , . by , for any string and , where is positive for any weakly non - null and -summable stationary ergodic process .( [ eqgl ] ) implies that \\[-8pt ] & \le & { \mathrm{e}}^{1/{\mathrm{e } } } |a|^k \exp\biggl ( \frac{-c_{\alpha}(n - k+1 ) t^2 } { k |a|^{2k } } \biggr).\nonumber\end{aligned}\ ] ] applying ( [ eqgl2 ] ) to ( [ eqsch2 ] ) , this completes the proof of the first claimed bound as the second claimed bound follows using and as now , the theorem follows from the proposition with special settings .proof of theorem [ thent ] we use proposition [ prent ] setting , , and . then , in the exponent of the first inequality of the proposition , where we used that .this gives the lower bound on the exponent and completes the proof of the first claimed bound .the second claimed bound follows similarly from the second inequality of the proposition with the same settings .in this section , we consider the divergence of the pml , nml and kt markov order estimators and prove theorems [ thpmllarge ] , [ thpmllogsmp ] and [ thoracle ] .proof of theorem [ thpmllarge ] by , any weakly non - null and -summable process is -mixing with a coefficient related to and .namely , there exists a sequence , , satisfying such that for each , , and each , , with , this implies that for any since and , for sufficiently large .then holds with and .thus , for any , for any information criterion ic , we can write \\[-8pt ] & & \quad\subseteq\bigl\ { \mathrm{ic}_{x_1^n } ( m )< \mathrm { ic}_{x_1^n } ( k_n ) \mbox { for some } m > k_n \bigr\ } \cap\bigl\ { n_n\bigl(a_1^{k_n}\bigr ) \le1 \mbox { for all } a_1^{k_n } \bigr\ } \nonumber \\ & & \qquad { } \cup\bigl\ { n_n\bigl(a_1^{k_n } \bigr ) \ge2 \mbox { for some } a_1^{k_n } \bigr\}.\nonumber\end{aligned}\ ] ] here , for all implies that for all for all , which further implies that for all ( i ) and therefore and and ( ii ) .then all the three information criteria do not depend on the sample and are non - decreasing in .hence , in ( [ eqsplit ] ) is an empty set .thus , ( [ eqsplit ] ) gives and using ( [ eqexp ] ) completes the proof . 
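The following small numerical experiment (ours, not part of the paper) illustrates the kind of simultaneous convergence that the above bounds quantify: for a simple binary first-order Markov chain, the empirical conditional entropies of several orders are compared with the true entropy rate. It reuses empirical_cond_entropy from the earlier sketch.

```python
import random
from math import log2

def entropy_rate_binary_markov(p01, p10):
    """Entropy rate of a binary chain with flip probabilities p01 = P(1|0)
    and p10 = P(0|1), via the stationary distribution."""
    pi0 = p10 / (p01 + p10)
    h = lambda p: 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)
    return pi0 * h(p01) + (1 - pi0) * h(p10)

def sample_binary_markov(n, p01, p10, seed=0):
    rng, x = random.Random(seed), [0]
    for _ in range(n - 1):
        flip = p01 if x[-1] == 0 else p10
        x.append(1 - x[-1] if rng.random() < flip else x[-1])
    return x

if __name__ == "__main__":
    p01, p10, n = 0.2, 0.3, 100_000
    x = sample_binary_markov(n, p01, p10)
    hbar = entropy_rate_binary_markov(p01, p10)
    for k in range(8):          # hat{H}_k should approach hbar for k >= 1
        print(k, round(empirical_cond_entropy(x, k), 4), "target", round(hbar, 4))
```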
to prove theorem [ thpmllogsmp ], first we show the following bounds .[ thpmlh ] for any weakly non - null and -summable stationary ergodic process with for some , if the pml markov order estimator satisfies that if , where the markov order estimator , where ic is either nml or kt , satisfies that if , where for markov chains of order , in proposition [ thpmlh ] if is sufficiently large .proof of proposition [ thpmlh ] let be arbitrary and for any information criterion ic , we can write for any \\[-8pt ] & & \quad\subseteq\biggl ( \biggl\ { \mathrm{ic}_{x_1^n } ( m ) \le \mathrm{ic}_{x_1^n } \biggl ( \biggl\lfloor { \frac { \varepsilon\log n}{4\log|a| } } \biggr \rfloor\biggr ) \mbox { for some } m < k_n \biggr\ } \cap b_n \biggl ( { \frac{\varepsilon\log n}{4\log|a| } } \biggr ) \biggr)\nonumber\\ & & \qquad { } \cup \overline{b_n \biggl ( { \frac { \varepsilon \log n}{4\log|a| } } \biggr)}.\nonumber\end{aligned}\ ] ] \(i ) if , by the definition of the pml information criterion , see definition [ defpml ] , & & \hspace*{17pt}\quad\le\bigl(|a|-1\bigr ) \bigl ( |a|^ { \lfloor { ( { \varepsilon \log n})/({4\log|a| } ) } \rfloor } - |a|^m \bigr ) \operatorname{pen}(n ) \mbox { for some } m < k_n \biggr\ } \cap b_n \biggl ( { \frac{\varepsilon\log n}{4\log|a| } } \biggr ) \nonumber \\[-2pt ] & & \quad\subseteq\biggl\ { \hat{h}_{m}\bigl(x_1^n \bigr ) - \hat{h } _ { \lfloor ( { \varepsilon\log n})/({4\log|a| } ) \rfloor}\bigl(x_1^n\bigr ) \nonumber\\[-9pt]\\[-9pt ] & & \hspace*{25.5pt}\le\bigl(|a|-1\bigr ) |a|^ { \lfloor { ( { \varepsilon\log n})/({4\log|a| } ) } \rfloor } \frac { \operatorname{pen}(n)}{n- \lfloor(\varepsilon\log n)/(4\log|a| ) \rfloor } \mbox { for some } m < k_n \biggr\ } \nonumber\\[-2pt ] & & \qquad{}\cap b_n \biggl ( { \frac{\varepsilon\log n}{4\log|a| } } \biggr ) \nonumber \\[-2pt ] & & \quad\subseteq\biggl\ { h_m - h _ { \lfloor({\varepsilon\log n})/({4\log \frac{(|a|-1 ) |a|^{({\varepsilon\log n})/({4\log|a| } ) } \operatorname{pen}(n)}{n- ( \varepsilon\log n)/(4\log & & \hspace*{23.5pt}\mbox { for some } m <k_n \biggr\}.\nonumber\end{aligned}\ ] ] since for any we have now , let and be as in the claim of the proposition . using the conditions and , thus ,if , it follows that , and for any \\[-9pt ] & \ge&(h_{k_n-1 } - \bar{h } ) - \frac{1}{\sqrt{n } } \ge\frac { 3\max(\sqrt { n},(|a|-1 ) \operatorname{pen}(n ) ) } { n^{1-\varepsilon } } , \quad \nonumber\end{aligned}\ ] ] where we used that is non - increasing . comparing ( [ eqhpmllb ] ) to ( [ eqhpmlub ] ) , the right of ( [ eqsplitpml ] ) is an empty set , and ( [ eqsplit2 ] ) yields if , according to theorem [ thent ] .\(ii ) if , by the definition of the nml information criterion , see definition [ defnml ] , \\[-8pt ] & & \qquad\hspace*{4pt}\mbox { for some } m < k_n \biggr\ } \nonumber \\ & & \qquad{}\cap b_n \biggl ( { \frac{\varepsilon\log n}{4\log|a| } } \biggr ) \nonumber \\ & & \quad\subseteq\biggl\ { h_m - h _ { \lfloor({\varepsilon\log n})/({4\log \frac { \log\sigma ( n , { \lfloor({\varepsilon\log n})/({4\log|a| } ) \rfloor } ) } { n- ( \varepsilon\log n)/(4\log|a| ) } + \frac{2}{n^ { 1/2-\varepsilon } } \nonumber\\ & & \qquad\hspace*{4pt}\mbox { for some } m < k_n \biggr \},\nonumber\end{aligned}\ ] ] where in the second relation we used that for any . 
by lemma [ lemktml ] in the , that gives the upper bound using ( [ eqsigmaub ] ) and ( [ eqatom ] ) , using , , it follows that if , which implies that thus , the expression in ( [ eqsplitnml ] ) can be bounded as \\[-8pt ] & & \quad < \frac { 3}{n^{1/2-\varepsilon } } \qquad\mbox{if } n\ge\max\bigl\ { 24\bigl(\log^4 { \mathrm{e}}\bigr ) \bigl(|a|-1\bigr)^4 , 4c_{\mathrm { kt}}^2 \bigr\}.\nonumber\end{aligned}\ ] ] now , let and be as in the claim of the proposition .then the conditions and imply ( [ eqdhpmlub ] ) , thus , if , it follows that , and for any where we used that is non - increasing . comparing ( [ eqhnmllb ] ) to ( [ eqhnmlub ] ), the right of ( [ eqsplitnml ] ) is an empty set , and ( [ eqsplit2 ] ) yields if , according to theorem [ thent ] .\(iii ) if , by the definition of the kt information criterion , see definition [ defkt ] , and using that for any , & & \quad\subseteq\biggl\ { ( n - m)\hat{h}_{m}\bigl(x_1^n \bigr ) - \biggl ( n- \biggl\lfloor { \frac{\varepsilon\log n}{4\log|a| } } \biggr\rfloor\biggr ) \hat { h } _ { \lfloor({\varepsilon\log n})/({4\log|a| } ) \rfloor}\bigl(x_1^n\bigr ) \nonumber \\[-0.8pt ] & & \hspace*{16.4pt}\quad\le\log\mathrm{ml } _ { \lfloor({\varepsilon\log n})/({4\log \bigr ) - \log p_{\mathrm{kt } , \lfloor ( { \varepsilon\log n})/({4\log|a| } ) \rfloor } \bigl(x_1^n\bigr ) \mbox { for some } m < k_n \biggr\ } \nonumber\\[-0.8pt ] & & \qquad{}\cap b_n \biggl ( { \frac{\varepsilon\log n}{4\log|a| } } \biggr ) \nonumber \\[-0.8pt ] & & \quad\subseteq\biggl\{\hat{h}_{m}\bigl(x_1^n \bigr ) - \hat{h } _ { \lfloor({\varepsilon\log n})/({4\log|a| } ) \rfloor}\bigl(x_1^n\bigr ) \nonumber\\[-0.8pt ] & & \hspace*{15.6pt}\quad\le\frac { \log\mathrm{ml } _ { \lfloor({\varepsilon\log n})/({4\log|a| } ) \rfloor}(x_1^n ) - \log p_{\mathrm{kt } , \lfloor({\varepsilon\log n})/({4\log|a| } ) \rfloor } ( x_1^n ) } { n- \lfloor(\varepsilon\log n)/(4\log|a| ) \rfloor } \\[-0.8pt ] & & \qquad\hspace*{4.5pt}\mbox { for some } m < k_n \biggr\ } \nonumber\\[-0.8pt ] & & \qquad{}\cap b_n \biggl ( { \frac{\varepsilon\log n}{4\log|a| } } \biggr ) \nonumber \\[-0.8pt ] & & \quad\subseteq\biggl\ { h_m - h _ { \lfloor({\varepsilon\log n})/({4\log & & \hspace*{15.6pt}\quad\le \frac { \log\mathrm{ml } _ { \lfloor ( { \varepsilon\log n})/({4\log|a| } )\rfloor}(x_1^n ) - \log p_{\mathrm{kt } , \lfloor({\varepsilon\log n})/({4\log|a| } ) \rfloor } ( x_1^n ) } { n- ( \varepsilon\log n)/(4\log|a| ) } + \frac{2}{n^{1/2-\varepsilon } } \nonumber \\[-0.8pt ] & & \qquad\hspace*{4.5pt } \mbox { for some } m < k_n \biggr\}.\nonumber\end{aligned}\ ] ] by lemma [ lemktml ] in the , & & \quad\le c_{\mathrm{kt } } |a|^{({\varepsilon\log n})/({4\log|a| } ) } + \frac { |a|-1}{2 } { n}{|a|^ { \lfloor({\varepsilon\log n})/({4\log|a| } ) \rfloor}},\end{aligned}\ ] ] and the proof continues in the same way as in the nml case ( ii ) .now , we are ready to prove theorem [ thpmllogsmp ] .we prove the following theorem that formulates theorem [ thpmllogsmp ] with explicit constants .[ thpmllog ] for any weakly non - null stationary ergodic process with continuity rates and for some ( ) , if the pml markov order estimator satisfies that if , where the markov order estimator , where ic is either nml or kt , satisfies that if , where [ here , is the constant in the well - known bound of , see lemma in the . 
] by remark [ remgammaalpha ] , implies the -summability .the deviation of the conditional entropies from the entropy rate will also be controlled by the continuity rates of the process , and proposition [ thpmlh ] will yield the claim of the theorem .first , for any , on the right of ( [ eqgsplit1 ] ) , the difference of entropies of the conditional distributions and appears . by remark [ remgammaeq ] ,the total variation of these conditional distributions can be upper bounded as hence , applying lemma [ lemsch ] in the it follows , similar to the bound ( [ eqsch ] ) and ( [ eqsch2 ] ) in the proof of proposition [ prent ] , that \\[-8pt ] & & \quad\le\frac{\log|a|}{\log { \mathrm{e } } } \bar{\gamma}(k ) + \frac{1}{{\mathrm{e } } } \frac{1+\nu}{\nu } \bar{\gamma}(k)^ { { 1}/({1+\nu } ) } \nonumber \\ & & \quad\le\frac{2\log|a|}{\log { \mathrm{e } } } \frac{1+\nu}{\nu } \bar{\gamma } ( k)^{{1}/({1+\nu})}\nonumber\end{aligned}\ ] ] for any , if . setting , combining ( [ eqgsplit11 ] ) with ( [ eqgsplit1 ] ) and taking yield the bound if .since , the bound ( [ eqklb ] ) is trivial if .hence , using the assumption of the theorem , and the assumption of proposition [ thpmlh ] is satisfied with thus , the constraint in proposition [ thpmlh ] becomes , and becomes next , for any , where denotes the kullback leibler divergence . using pinsker s inequality , ( [ eqgsplit2 ] )can be lower bounded by where in the last inequality we used the assumption of the theorem .hence , in case ( i ) while in case ( ii ) and the proof is completed . finally , we prove the following proposition that directly implies theorem [ thoracle ] .[ proporacle ] for any weakly non - null stationary ergodic process with continuity rate , , and for any , if is so small and is so large that and the pml markov order estimator with satisfies that if is sufficiently large , where is a constant depending only on the distribution of the process .the proof of theorem [ thpmllog ] begins with the observation that the summability of the continuity rate implies the -summability .hence , the conditions of theorem [ thent ] are satisfied now . moreover , according to ( [ eqhg ] ) , also implies that set , and as in the conditions of the proposition , and define a sequence such that for sufficiently large due to ( [ eqhg2 ] ) , such a sequence exists . since is non - negative decreasing , it is sufficient to show this when . then , writing in the form , if is sufficiently large , that implies ( i ) and ( ii ) if such exists because it follows from the condition that . moreover , the condition in ( [ eqbndef ] ) in the proof of proposition [ thpmlh ] . similar to ( [ eqsplit2 ] ) and ( [ eqsplitpml ] ) , we can write that that is empty set by ( i ) , if is large enough and .the latter is satisfied because of . 
on the other hand , & & \quad\subseteq\biggl\ { \mathrm{pml}_{x_1^n } ( m )< \mathrm { pml}_{x_1^n } ( k_n ) \mbox { for some } ( 1+\xi/2 ) k_n < m\le\frac{\varepsilon\log n}{4\log|a| } \biggr\}\nonumber\\[-0.5pt ] & & \qquad { } \cap b_n \biggl ( { \frac{\varepsilon\log n}{4\log|a| } } \biggr ) \nonumber \\[-0.5pt ] \label{eqoupperhalf } & & \quad\subseteq\biggl\ { \mathrm{pml}_{o , n } ( m ) - \frac { 2n}{n^{1/2-\varepsilon } } < \mathrm{pml}_{o , n } ( k_n ) \mbox { for some } m>(1+\xi/2 ) k_n \biggr\ } \\[-0.5pt ] & & \quad\subseteq\biggl\ { \bigl(|a|-1\bigr ) \bigl ( |a|^m - |a|^{k_n } \bigr ) \operatorname{pen}(n ) - \frac{2n}{n^{1/2-\varepsilon } } < ( n- k_n ) h_{k_n } - ( n - m ) h_{m } \nonumber \\[-0.5pt ] & & \hspace*{25pt } \mbox { for some } m>(1+\xi/2 ) k_n \biggr\ } \nonumber \\[-0.5pt ] & & \quad\subseteq\biggl\ { \bigl(|a|-1\bigr ) \bigl ( |a|^m - |a|^{k_n } \bigr ) \frac { \operatorname{pen}(n)}{n } - \frac{2}{n^{1/2-\varepsilon } } < h_{k_n } - \biggl(1- \frac{m}{n } \biggr ) h_{m } \nonumber \\[-0.5pt ] & & \hspace*{25pt } \mbox { for some } m>(1+\xi/2 ) k_n \biggr\ } \nonumber \\[-0.5pt ] & & \quad\subseteq\biggl\ { \bigl(|a|-1\bigr ) \bigl ( |a|^m - |a|^{k_n } \bigr ) \frac { \operatorname{pen}(n)}{n } - \frac{2}{n^{1/2-\varepsilon } } - \frac { m}{n}\bar{h } \nonumber\\[-0.5pt ] & & \hspace*{26.5pt } < ( h_{k_n } - \bar{h } ) - \biggl(1-\frac{m}{n } \biggr ) ( h_{k_n } - \bar{h } ) \nonumber \\[-0.5pt ] & & \hspace*{25pt } \mbox { for some } m>(1+\xi/2 ) k_n \biggr\ } \nonumber \\[-0.5pt ] \label{eqoupper1 } & & \quad\subseteq\biggl\ { h_{k_n } - \bar{h } > \frac { ( |a|-1)^2 } { 2 } n^{-1+\kappa } \biggr\}\end{aligned}\ ] ] that is empty set by ( ii ) , if is large enough. observe that if is sufficiently large .indeed , on indirect way the following sequence of implications can be written that does not hold by ( [ eqolowerhalf ] ) and ( [ eqolower1 ] ) if is large enough , and that does not hold either by ( [ eqoupperhalf ] ) and ( [ eqoupper1 ] ) if is large enough . finally , using ( [ eqknko ] ) , we get where the first two terms are zero if is large enough by ( [ eqolowerb])([eqolower1 ] ) and ( [ eqoupperb])([eqoupper1 ] ) . using proposition [ propqminupper ] with , and , because but according to the condition . 
then the claim of the proposition follows from theorem [ thent ] .in this section , we consider the estimation of stationary ergodic processes by finite memory processes .first , define and clearly , if , then .now we prove the following theorem that formulates theorem [ thqminsmp ] with explicit constants .[ thqmin ] for any non - null stationary ergodic process with summable continuity rate and uniformly convergent restricted continuity rate with parameters , , , for any , the empirical markov estimator of the process with the order estimated by the bounded pml markov order estimator , , with penalty function satisfies ^n \bigr ) > \frac{\beta_2}{{p_{\mathrm{inf}}}^{2 } } g_n + \frac{1}{n^{1/2-\mu_n } } \biggr ) \\ & & \quad\le2 { \mathrm{e}}^{1/{\mathrm{e } } } |a|^{k_n+h_n+2 } \exp\biggl\ { - \frac { { p_{\mathrm{inf}}}^2 } { 16 { \mathrm{e}}|a|^3 ( \alpha+ { p_{\mathrm{inf } } } ) ( \beta_1 + 1)^2 } \frac { ( n - k_n - h_n ) } { ( 1+k_n+h_n ) n } \\ & & \hspace*{87.4pt}\qquad{}\times 4^{-(k_n+h_n ) |\log{p_{\mathrm{inf}}}| } \biggl [ 4^{\mu_n\log n } - \frac { ( k_n+h_n)|\log{p_{\mathrm{inf}}}|(\beta_1 + 1)^2}{2 } \biggr ] \biggr\ } \\ & & \qquad{}+ 12 { \mathrm{e}}^{1/{\mathrm{e } } } \exp\biggl ( -\frac{7\alpha_0 ( \log|a|)^3 \eta^3}{4{\mathrm{e}}(\alpha+\alpha_0 ) } \frac { n^{\eta2\log|a| } } { \log n } + \bigl(\eta\log|a|\bigr)\log n \biggr ) \\ & & \qquad{}+ \exp\biggl ( -\bigl(|a|-1\bigr ) |a|^{k_n+h_n+1 } \\ & & \hspace*{32pt}\qquad{}\times\operatorname{pen}(n ) \biggl [ 1 - \frac { 1}{|a|^{1+h_n } } - \frac{1}{2\operatorname{pen}(n ) } \bigl ( \log n - ( k_n+h_n ) \log|a| \bigr ) \biggr ] \\ & & \hspace*{32pt}\qquad { } + \frac{c \operatorname{pen}(n)}{{p_{\mathrm{inf}}}/\log { \mathrm{e } } } + \eta\log n ) \biggr),\end{aligned}\ ] ] if is so large that where and is an arbitrary constant and is an arbitrary sequence . the proof is based on the following two propositions .[ propqmin ] for any non - null and -summable stationary ergodic process with uniformly convergent restricted continuity rate with parameters , , , the bounded pml markov order estimator with penalty function satisfies that if is so large that , where the bonded markov order estimator if is so large that and , where first , define similar to ( [ eqbndef ] ) in the proof of proposition [ thpmlh ] .similar to ( [ eqsplit2])([eqhpmlub ] ) , we can write for any that now , the difference in ( [ eqsplit22 ] ) is controlled as follows . for any , \\[-8pt ] & & \quad=\sum_{a_1^k \in a^k } p\bigl(a_1^k \bigr ) \sum_{a\in a } p\bigl(a| a_1^k \bigr ) \log\frac{p(a| a_1^k ) } { p(a| a_{k - m+1}^k ) } \nonumber \\ & & \quad=\sum_{a_1^k \in a^k } p\bigl(a_1^k \bigr ) d \bigl ( p\bigl ( \cdot| a_1^k \bigr ) \| p\bigl ( \cdot| a_{k - m+1}^k \bigr ) \bigr).\nonumber\end{aligned}\ ] ] using pinsker s inequality , ( [ eqgsplit22a ] ) can be lower bounded by using ( [ eqgsplit222 ] ) and the assumption if ( ) , it follows that hence , we can write let be as in the claim of the proposition and suppose that .then , since is non - increasing , for any ) to ( [ eqsplit22 ] ) , the first term on the right in ( [ eqsplit22 ] ) equals zero , therefore by theorem [ thent ] with . in cases and , the proofs deviate from the above similar to as ( ii ) and ( iii ) deviate from ( i ) in the proof of proposition [ thpmlh ] .now , instead of ( [ eqhnmlub ] ) we have [ propqminupper ] for any non - null stationary ergodic process , the bounded pml markov order estimator satisfies that \biggr)\end{aligned}\ ] ] for any . 
for any , using and , ( [ eqpfact1 ] ) can be upper bounded by now , let . by the definition of the pml information criterion , see definition [ defpml ] , for any \\[-8pt ] & & { } + \bigl(|a|-1\bigr)|a|^{m_n } \operatorname{pen}(n ) \qquad\mbox{if } x_1^n \in c_{n , k } .\nonumber\end{aligned}\ ] ] by lemma [ lemktml ] in the , combining ( [ eqpfact2 ] ) , ( [ eqsh1 ] ) and ( [ eqktml ] ) , that implies \biggr),\nonumber\end{aligned}\ ] ] where in the last inequality we used , . in the exponent of ( [ eqsh2 ] ) , it may be assumed that is multiplied by a negative number otherwise the bound is trivial .then , the claim of the lemma follows from ( [ eqsh2 ] ) as now , we are ready to prove theorem [ thqmin ] .proof of theorem [ thqmin ] letting and ^n \bigr ) > \frac{\beta_2}{{p_{\mathrm{inf}}}^{2 } } g_n + \frac{1}{n^{1/2-\mu_n } } \biggr ) \nonumber \\ & & \quad\le{\operatorname{pr}}\biggl ( \biggl\ { \bar{d } \bigl ( x_1^n , \hat{x } \bigl [ \hat{k}_{\mathrm{pml } } \bigl(x_1^n | \eta\log n\bigr ) \bigr]_1^n \bigr ) > \frac{\beta_2}{{p_{\mathrm{inf}}}^{2 } } g_n + \frac{1}{n^{1/2-\mu_n } } \biggr\ } \cap g_n\cap h_n \biggr ) \nonumber \\ & & \qquad { } + { \operatorname{pr}}(\bar{g}_n ) + { \operatorname{pr}}(\bar { h}_n ) \\ & & \quad\le{\operatorname{pr}}\biggl ( \biggl\ { \bar{d } \bigl ( x_1^n , \hat{x } \bigl [ \hat{k}_{\mathrm{pml } } \bigl(x_1^n | \eta\log n\bigr ) \bigr]_1^n \bigr ) > \frac { \beta_2}{{p_{\mathrm{inf}}}^{2 } } \bar{\gamma } \bigl ( \hat{k}_{\mathrm{pml } } \bigl(x_1^n \biggr ) \nonumber \\ & & \qquad { } + { \operatorname{pr}}(\bar{g}_n ) + { \operatorname{pr}}(\bar { h}_n ) .\nonumber\end{aligned}\ ] ] the three terms on the right of ( [ eqfinal ] ) is bounded as follows .since the process is non - null with summable continuity rate , lemma [ lemapprox ] in the with , and gives ^n \bigr ) > \frac { \beta_2}{{p_{\mathrm{inf}}}^{2 } } \bar{\gamma } \bigl ( \hat{k}_{\mathrm{pml } } \bigl(x_1^n \biggr ) \nonumber \\ & & \quad\le2 { \mathrm{e}}^{1/{\mathrm{e } } } |a|^{k_n+2 } \exp\biggl\ { - \frac { { p_{\mathrm{inf}}}^2 } { 16 { \mathrm{e}}|a|^3 ( \alpha+ { p_{\mathrm{inf } } } ) ( \beta_1 + 1)^2 } \frac { ( n - k_n ) 4^{-k_n |\log{p_{\mathrm{inf}}}| } } { ( 1+k_n ) n } \\ & & \hspace*{90pt}{}\times \biggl [ 4^{\mu_n\log n } - \frac{k_n|\log{p_{\mathrm{inf}}}|(\beta_1 + 1)^2}{2 } \biggr ] \biggr\ } .\nonumber\end{aligned}\ ] ] by remark [ remgammaalpha ] , the summability of the continuity rate implies the -summability .hence , for the non - null process with summable continuity rate and uniformly convergent restricted continuity rate with parameters , , , proposition [ propqmin ] implies that if ( [ eqnbound ] ) holds because applying proposition [ propqminupper ] with , and , it follows that \\ & & \hspace*{63pt}{}+ \frac{c \operatorname{pen}(n)}{{p_{\mathrm{inf}}}/\log { \mathrm{e } } } + |a|^{k_n+1 } c_{\mathrm { kt } } + \log(\eta\log n ) \biggr ) .\nonumber\end{aligned}\ ] ] finally , applying the bounds ( [ eqf1 ] ) , ( [ eqgbound ] ) and ( [ eqhbound ] ) to the right of ( [ eqfinal ] ) , the proof is complete .[ lemsch ] for two probability distributions and on , d_{{\mathrm{tv } } } ( p_1 , p_2 ) , \ ] ] if , where is the entropy of , , and is the total variation distance of and .see lemma 3.1 of .[ lemktml ] there exists a constant depending only on , such that for any the bound , see , for example , ( 27 ) in , where depends only on , implies the claim using see proof of theorem 6 in .[ lemapprox ] let be a non - null stationary ergodic process with 
summable continuity rate .then , for any and , , the empirical -order markov estimator of the process satisfies ^n \bigr ) > \beta_2 { p_{\mathrm{inf}}}^{-2 } \bar{\gamma}(k ) + \frac{1}{n^{1/2-\mu } } \biggr\ } \\ & & \quad\le2 { \mathrm{e}}^{1/{\mathrm{e } } } |a|^{2+\nu\log n } \\ & & \qquad\hspace*{0pt}{}\times \exp\biggl\ { - \frac { { p_{\mathrm{inf}}}^2 } { 16 { \mathrm{e}}|a|^3 ( \alpha+ { p_{\mathrm{inf } } } ) ( \beta_1 + 1)^2 } \frac { ( n-\nu\log n ) n^{-2\nu|\log{p_{\mathrm{inf}}}| } } { ( 1+\nu\log n ) n } \\ & & \qquad\hspace*{30pt}{}\times \biggl [ n^{2\mu } - \frac{\nu|\log{p_{\mathrm{inf}}}|(\beta _ 1 + 1)^2\log n}{2 } \biggr ] \biggr\ } .\end{aligned}\ ] ] see the proof of theorem 2 and lemma 3 in .the author would like to thank the referees for their comments that helped improving the presentation of the results and generalizing the consistency concept .the research of the author was supported in part by nsf grant dms-09 - 06929 . | stationary ergodic processes with finite alphabets are estimated by finite memory processes from a sample , an -length realization of the process , where the memory depth of the estimator process is also estimated from the sample using penalized maximum likelihood ( pml ) . under some assumptions on the continuity rate and the assumption of non - nullness , a rate of convergence in -distance is obtained , with explicit constants . the result requires an analysis of the divergence of pml markov order estimators for not necessarily finite memory processes . this divergence problem is investigated in more generality for three information criteria : the bayesian information criterion with generalized penalty term yielding the pml , and the normalized maximum likelihood and the krichevsky trofimov code lengths . lower and upper bounds on the estimated order are obtained . the notion of consistent markov order estimation is generalized for infinite memory processes using the concept of oracle order estimates , and generalized consistency of the pml markov order estimator is presented . |
Multistability is a common dynamical feature of many natural systems. Although it appears in diverse forms, a very frequently occurring variant is bistability. There are three main manifestations of bistability: the coexistence of (i) two stable steady states, (ii) one stable limit cycle and one stable steady state, and (iii) two stable limit cycles. The third form, i.e., the coexistence of two stable limit cycles of different amplitude and frequency, generally separated by an unstable limit cycle, is called birhythmicity, and oscillators showing this behavior are called birhythmic oscillators. Apart from two coexisting periodic limit cycles, birhythmicity may also appear in a much more complex form, e.g., as the coexistence of two chaotic attractors. Birhythmic oscillators are very common, particularly in physics (e.g., energy harvesting systems; see ref. and references therein), biology (e.g., glycolytic oscillators and enzymatic reactions) and chemistry. Most of the biochemical oscillations that govern the organization of the cell cycle, brain dynamics or chemical oscillations are birhythmic; examples include birhythmicity in the p53-Mdm2 network, which is the key protein module controlling the proliferation of abnormal cells in mammals, intracellular Ca2+ oscillations, the oscillatory generation of cyclic AMP (cAMP) during the aggregation of the slime mold _Dictyostelium discoideum_, and circadian oscillations of the PER and TIM proteins in _Drosophila_. In physical and engineering systems, birhythmicity plays a negative role by limiting the efficiency of a given application. Take the practical example of an energy harvesting system that converts wind-induced vibrational energy into electrical energy. Such energy harvesting systems show birhythmicity, but for efficient harvesting it is desirable that the system always reside on the large-amplitude limit cycle, because that produces a significant mechanical deformation, which in turn results in a larger amount of harvested electric power. Further, the presence of birhythmicity makes a system vulnerable to noise: depending upon the noise intensity, the system may end up on either of the two limit cycles, which results in unpredictable system dynamics. Therefore, monorhythmicity is of practical importance in most physical systems. On the other hand, in networks of neuronal oscillators the occurrence of birhythmicity is often desirable in order to generate and maintain different modes of oscillation that organize various biochemical processes in response to variations in the environment. Therefore, it is important to identify an efficient control technique that can tame birhythmicity to yield monorhythmic oscillation, or can retain its character intact wherever needed. Although several mechanisms have been proposed for controlling bistability consisting of an oscillation and a steady state (for an elaborate recent review on the control of multistability see and references therein), only a few exist to control birhythmicity.
reported an effective control mechanism of birhythmic behavior in a modified van der pol system by using a variant of pyragas technique of time delay control and they showed that depending upon the time - delay one can induce monorhythmic oscillation out of birhythmicity .but , due to the presence of time delay the system becomes infinite dimensional and thus a detailed bifurcation analysis for a wide parameter space is difficult and was not reported there .further , the authors of established that their technique can _ suppress _ the effective birhythmic zone but can not eliminate it completely for all possible sets of nonlinear damping parameter values . in this context , another interesting control technique has been reported recently by , where the authors demonstrated that multistable systems with coexisting either periodic or chaotic attractors can be converted into a monostable one by applying an external harmonic modulation and a positive feedback to a proper _ accessible _ system parameter . in the present paperwe propose an effective and much more general control technique , that we call the conjugate self - feedback control , which is able to eliminate birhythmicity and induce monorhythmic behavior .we consider a modified van der pol equation that has been proposed to model enzyme reactions in some biosystems and also has been studied earlier as a prototypical model that exhibits birhthmicity . with a detailed bifurcation studywe establish the effectiveness of the proposed control technique in taming birhthmicity and inducing monorhythmicity . depending upon the value of the self - feedback strength it also offers freedom to select one of the desired dynamics .we also demonstrate our results experimentally with an electronic circuit and verify that our results are robust enough in a real - world setup where the presence of parameter fluctuation and noise are inevitable .first we describe the model used in the following . consider a birhythmic van der pol oscillator given by here , , and are parameters that determine the nonlinear damping . in ref . , kadji _ et al ._ considered , , and by using the harmonic decomposition method they arrived at the following amplitude equation : equation is the generic form of the codimension - two saddle - node ( sn ) bifurcation . note that eq .is independent of the parameter .the two parameter bifurcation diagram in the parameter space is shown in fig .[ ab ] ( a ) that exhibits a cusp type of codimension - two bifurcation .the exchange of rhythmicity is through the saddle - node bifurcation of the limit cycle ( snlc ) ( shown by the solid black line in the figure ) .[ ab](b ) shows the controlled case with ( is the control parameter to be discussed later ) , where birhythmicity is completely removed and only monorhythmicity exists .space with .bifurcation diagram for ( a ) eq .( i.e. , without control ) , snlc : saddle - node bifurcation of limit cycle ; ( b ) with control ( for of eq.[vdpeqcoup ] ) : the control eliminates birhthmicity and only one limit cycle ( lc ) exists . , scaledwidth=40.0% ]next we introduce a conjugate self - feedback term in eq . 
which contains the variable of our interest , , and its canonical conjugate , ; here controls the strength of the self - feedback .further , a close inspection reveals that the self - feedback mechanism effectively controls the damping of the system through the variable and the effective frequency through the variable .however , understanding of their collective effect on the dynamics needs a detailed analysis that we will address next . to unravel the underlying dynamics of the controlled system we use the harmonic decomposition method .let us assume the approximate solution of be given by with being the amplitude and the frequency of the oscillator with feedback . substituting this in yields the following expression but according to ref . , we can ignore the higher harmonics regarding them as forcing term , which diminish with increasing harmonics .thus , eq . can be reduced to the equation suggests the following frequency and amplitude equations , respectively , and it is interesting to note that , eq . is equivalent to eq .when , i.e. , in the absence of any feedback .also , it may be noted that the amplitude of the system depends on when , contrary to eq . .the frequency in the harmonic limit corresponds to .further , eq . imposes an upper limit on the strength of the feedback , namely otherwise the frequency becomes imaginary , which is non - physical .the three roots ( actually six roots , with , . ) correspond to the amplitudes of three limit cycles ( two stable , one unstable ) . for the parameter set , , for different values of the coupling parameter .the solid dark ( red ) curve for represents single limit cycle with large amplitude and the solid gray ( green ) curve for represents that with small amplitude . in between the curve for for birhythmic oscillation .the lower curve for represents stable steady state.,scaledwidth=40.0% ] now let us discuss how to determine the presence of limit cycles and their stability out of the above analytical results .the amplitude equation eq . may be solved by graphical method .the solutions are those for which the function crosses the horizontal zero line .we consider the parameter set , and for which exhibits birhythmicity in the absence of self - feedback ; next , we vary the coupling strength to get different solutions .the number of limit cycles is determined by the number of solutions of the amplitude equation .the number provides the information of the steady state solution ( i.e. , no solution ) , existence of a single limit cycle ( monorhythmicity ) or three limit cycles ( birhythmicity , one of the lcs is unstable ) . from fig .[ ava ] we find that for there is no zero crossing of the curve , i.e. , there is no lc and the system is in a steady state . as we decrease , the curve crosses the horizontal zero line from below and gives rise to a stable lc .this is shown for with solid gray ( green ) line , here the system has only one stable lc of small amplitude .further decrease in brings it to the birhythmic regime where the curve crosses the horizontal zero line at three different values of indicating three lcs ( shown for ) .the stability of three lcs are determined by eq . , which suggests that the middle zero point of the curve in fig .[ ava ] represents the unstable lc .further increase in brings the system to a monorhythmic region with the large lc . 
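The "graphical method" described above can be mimicked numerically, at least for the uncontrolled oscillator. The sketch below is our own illustration and not the paper's code: we assume the standard form of the modified (birhythmic) van der Pol model reported in the literature, x'' - mu*(1 - x^2 + alpha*x^4 - beta*x^6)*x' + x = 0, whose averaged amplitude equation is dA/dt = (mu*A/2)*Phi(A) with Phi(A) = 1 - A^2/4 + alpha*A^4/8 - 5*beta*A^6/64. The controlled (self-feedback) amplitude function and the parameter values used in the text are not reproduced here; the values alpha = 0.144 and beta = 0.005 below are commonly quoted literature values for the birhythmic regime and serve only as an illustration.

```python
import numpy as np

def phi(A, alpha, beta):
    # Averaged amplitude function of the uncontrolled model (assumed form).
    return 1.0 - A**2 / 4.0 + alpha * A**4 / 8.0 - 5.0 * beta * A**6 / 64.0

def limit_cycle_amplitudes(alpha, beta, a_max=10.0, n=100_000):
    """Positive zero crossings of Phi; the sign change decides stability:
    a + -> - crossing of dA/dt is a stable cycle, - -> + an unstable one."""
    A = np.linspace(1e-3, a_max, n)
    f = phi(A, alpha, beta)
    roots = []
    for i in np.where(np.sign(f[:-1]) * np.sign(f[1:]) < 0)[0]:
        a = A[i] - f[i] * (A[i + 1] - A[i]) / (f[i + 1] - f[i])  # linear interp.
        stable = f[i] > 0 > f[i + 1]
        roots.append((float(a), "stable" if stable else "unstable"))
    return roots

print(limit_cycle_amplitudes(alpha=0.144, beta=0.005))
# -> three crossings: two stable cycles separated by an unstable one
```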
the case of large single lc for shown in the upper solid dark ( red ) line .the original birhythmic van der pol oscillator given by exhibits only global snlc type of bifurcation .however , due to the presence of the feedback term in the controlled case ( i.e. , eq.[vdpeqcoup ] ) , eq . is modified to eq . , and thus the system additionally exhibits local bifurcation , namely hopf bifurcation .we derive the value of for which hopf bifurcation occurs from the eigenvalues of the jacobian of eq . around the steady state .the eigenvalues are given by equation gives the condition of hopf bifurcation as where is the value of for which hopf bifurcation occurs .space for , ,( b ) bifurcation diagram with for ( the horizontal broken line in fig .[ dmm](a ) ) .sss : stable steady state .( ) is the width of birhythmic zone.,scaledwidth=45.0% ] : large amplitude single lc .( c , d ) : birhythmic oscillations , the blue trajectory in ( d ) shows unstable lc .( e , f ) : small amplitude single lc .( g , h ) : stable steady state .the solid ( red ) line is for initial conditions , ; the dotted ( black ) line with initial condition , .other parameters are : , , .,scaledwidth=41.0% ]in this section we investigate the possible bifurcation scenarios of the system using the continuation package xppaut .we explore the nature of the bifurcation with the variation of the feedback parameter for different system parameters ( e.g. , , and ) . the bifurcation structure in the space is computed and shown in fig .[ dmm](a ) .the value of and are kept in the birhythmic zone of the uncontrolled system ( cf .we find that the two - parameter space is divided by global bifurcations , namely saddle node bifurcation of limit cycle ( snlc ) and a local bifurcation , namely the supercritical hopf bifurcation ( hb ) . in between two snlc curvesbirhythmic behavior exists [ purple ( gray ) zone ] : in this zone three lcs exist , of which two are stable ( one with smaller amplitude and the other with larger amplitude ) and an unstable lc . the transition from birhythmic to monorhythmic dynamics [ indicated by green ( light gray ) zone ] is governed by these snlc curves .whereas the hb curve governs the transition between single stable limit cycle and stable steady state ( sss ) [ blue ( dark ) zone ] ; note that the occurrence of the hopf bifurcation agrees with our analytically predicted value of in .for a clearer understanding of the bifurcation scenario we take an exemplary value and vary the feedback term [ along the broken ( yellow ) horizontal line in fig .[ dmm](a ) ] .the one parameter bifurcation diagram corresponding to this variation is shown in fig .[ dmm](b ) . in the absence of the self - feedback ,i.e. , for , the system is in a birhythmic zone for any ( in the present parametric set up ) .if we increase , for , the system enters into the monorythmic zone via snlc bifurcation . herewe observe that the sole limit cycle in the system is the small amplitude lc .this small lc looses its stability through an inverse hopf bifurcation and gives birth to a stable steady state . in the negative side of , for , we again have a monorhythmic region but with a large amplitude limit cycle . therefore , with a proper choice of the self - feedback strength one can induce monorhythmic oscillation of smaller ( ) or larger ( ) amplitude .interestingly , a hysteresis appears around having a width of [ light gray ( purple ) of fig .[ dmm](b ) ] . 
in this range of system may end up showing lc of large or small amplitude depending upon initial conditions . also , the two lcs are separated by an unstable lc [ shown in dark ( blue ) line ] .it is worth noting that the width of the hysteresis zone increases with increasing .typical time series with the variation of are shown in fig .[ vd ] ( , , ) . to detect the presence or absence of birhythmicity, we consider a large number of initial conditions of .however , here we present the results for two different initial conditions only : one around the origin ( targeting the small amplitude lc ) and the other far from the origin ( targeting the large amplitude lc ) .the red line ( solid ) indicates the oscillation corresponding to the initial condition and the black line ( dotted ) indicates the oscillation for the initial condition .we start from a negative with .[ vd](a ) ( time series ) and [ vd](b ) ( phase plane plot ) show the scenario for .both initial conditions result in the large amplitude lc indicating monorhythmicity .next , we choose , i.e. , the birhythmic region .figure [ vd](c ) and [ vd](d ) show this scenario for .the blue trajectory in fig . [ vd](d ) indicates the unstable lc that separates the basin of attraction of two lcs , i.e. , the small lc resulted from and the large lc resulted from .figure [ vd](e ) and [ vd](f ) show monorhtyhmic oscillation for ( i.e. , ) . hereall the initial conditions go to the smaller amplitude lc .finally , further increase in results in the stable steady state [ fig .[ vd](g ) and [ vd](h ) for ] . therefore ,with the variation of we can effectively control the birhythmic nature of the system and can induce monorhythmic oscillation of preferred amplitude .space for , .the yellow broken line indicates where snlc and hb curves intersect . is the cusp point .sss : stable steady state , lc : bistable zone with one stable steady state and one stable limit cycle .( b ) bifurcation diagram obtained by sweeping along the yellow broken line of fig .[ daa](a ) . ,scaledwidth=43.0% ] space for , . is the cusp point .( b ) bifurcation diagram with for [ along the yellow broken line of fig .[ dbb](a)].,scaledwidth=41.0% ] next , we investigate the effectiveness of the control over the whole nonlinear damping parameter space .significantly , we find that one can indeed induce monorhythmicity for any set of ( , ) by choosing a proper value of . to systematically understand the scenario , we study the dynamics in the and space , separately . figure [ daa ] shows the two - parameter bifurcation in the space for andfigure [ dbb](a ) shows the same in the space for ( in both the cases we take ) . from thesetwo bifurcation diagrams it is seen that for the system has only a single lc for any choice of ( , ) ( is the cusp bifurcation point ) . the hb curve and the snlc curve intersect at ( say ) in fig.[daa ] ( a ) and at ( say ) in fig.[dbb](a ) .figure [ daa](b ) shows the bifurcation scenario with the variation of along the horizontal broken yellow line of fig .[ daa](a ) ( i.e. , for ) .an interesting transition occurs for ( ) : if is increased from below , the system generates a transition from birhythmicity to another type of bistability , namely the _ coexistence of stable lc and stable steady state_. 
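The initial-condition scan used above to diagnose birhythmicity can be mimicked with a few lines of code. The sketch below integrates a Kaiser-type birhythmic van der Pol oscillator, assumed here as a stand-in for the model of the text, from one initial condition near the origin and one far from it, and compares the settled amplitudes. The self-feedback term is omitted because its explicit form, eq. (vdpeqcoup), is not reproduced in this text, and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Kaiser-type birhythmic van der Pol oscillator (stand-in for the model of the
# text); the self-feedback term is omitted and parameter values are assumptions.
MU, ALPHA, BETA = 0.1, 0.144, 0.005

def rhs(t, y):
    x, v = y
    return [v, MU * (1.0 - x**2 + ALPHA * x**4 - BETA * x**6) * v - x]

def settled_amplitude(x0, v0=0.0, t_end=800.0):
    """Integrate past the transient and return the settled oscillation amplitude."""
    sol = solve_ivp(rhs, (0.0, t_end), [x0, v0], max_step=0.1,
                    rtol=1e-8, atol=1e-10)
    tail = sol.y[0][sol.t > 0.75 * t_end]   # keep only the last quarter of the run
    return 0.5 * (tail.max() - tail.min())

a_near = settled_amplitude(0.1)   # initial condition near the origin
a_far = settled_amplitude(6.0)    # initial condition far from the origin
print(f"settled amplitudes: {a_near:.2f} and {a_far:.2f}")
print("birhythmic" if abs(a_far - a_near) > 0.5 else "monorhythmic")
```

Two clearly different settled amplitudes signal two coexisting limit cycles, which is exactly the diagnostic used for the time series and phase-plane plots discussed here.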
the genesis of this transition is also quite interesting .normally , in a hysteric transition , the transition from stable steady state to stable lc occurs through a _ subcritical _hopf bifurcation and the reverse transition occurs through a snlc , but here two snlc and one supercritical hopf bifurcation govern the hysteric transition .this is shown in fig .[ dbb](b ) for by sweeping along the yellow broken line of fig .[ dbb](a ) . also note that the hopf bifurcation occurs at and independent of and as predicted in eq .[ hopf ] . finally , we summarize our results in the parameter space . for the uncontrolled system ,i.e. , , birhyhmicity occurs in a broad zone of ( ) values as shown in fig .[ ab](a ) .but , for the birhythmic zone is completely eliminated and the only possible dynamics is essentially monorhythmic [ fig . [ ab](a ) for ] .therefore , our study reveals that a proper choice of the control parameter can effectively eliminate birhythmicity to establish monorhythmic oscillation and at the same time its variation may give rise to transitions between several interesting dynamical states ; by controlling one can achieve any of these states in a deterministic way .experimental observation of birhythmicity is subtle due to the presence of inherent noise and parameter fluctuation in a real system and also owing to the fact that , in experiments one can record only one oscillation at a time .the first experimental observation of birhythmicity was made by decroly and goldbeter in a chemical system , namely the parallel - coupled bromate - chlorite - iodide system . in their experimentthe time scale was of the order of few minutes .in biological experimental setups the time scale is usually of the order of few hours , e.g. , birhythmic oscillation in the p53 system has two time scales of six and ten hours . in this context ,the experimental observation of birhythmic oscillation in electronic circuit possesses two distinct advantages : first , the time scale is much reduced , of the order of mili second and the second one is the controllability of electronic circuits . to demonstrate birhythmicity and verify the robustness of our proposed control scheme , we realize the system given by eq . in the electronic circuit . the detailed circuit diagram is shown in fig .[ ckt_init ] . herem1-m4 are analog multiplier ics ( ad633jn ) and a1-a9 are opamps ( tl074 ) .the resulting circuit equation takes the following form [ sec : expt ] the above equation becomes dimensionless for the following substitutions : , , , , , v , v , and ; with these eq . is reduced to eq . . : large amplitude single lc .( c , d ) : birhythmic oscillations .( e , f ) : small amplitude single lc .( g , h ) : stable steady state .the large amplitude lc is for initial conditions volt , volt and the small amplitude lc is for initial condition volt , volt.,scaledwidth=47.0% ] we consider the following values of the used circuit components : k , , v and mv throughout the experiment .the initial conditions are controlled through the data acquisition system ( daq ) in labview environment through a computer . to have a selected initial conditions , the capacitors ( ) in the integrators ( a5 and a7 )are charged with external voltages ( and ) .these voltages are controlled by the daq .the voltages are connected to relays ( s1 and s2 ) to be on for a particular time period .the on time of the relays are controlled by a microcontroller ( arduino uno ) , which is programmed to keep the relays on for a time interval of seconds . 
during this time the capacitors of the integrators get charged to the desired input voltages ( and ), which are taken and controlled from the computer through the daq. the relays are then switched off and the circuit operates in its normal mode. the experimental time series and phase-plane plots are shown in fig. [expt]. to observe the large-amplitude single lc shown in fig. [expt](a,b), we add an inverter at the output terminal of a9 of fig. [ckt_init] (not shown in the figure). figs. [expt](c) and (d) show the scenario of birhythmicity: the presence of oscillations of two different amplitudes and frequencies confirms the occurrence of birhythmicity in the circuit. increasing the feedback strength brings the system to a monorhythmic state, as shown in figs. [expt](e) and (f). with a further increase the oscillation is quenched and the system rests in the stable steady state, as shown in figs. [expt](g) and (h). note the qualitative resemblance between the experimental scenarios and the numerical results of fig. [vd]. in summary, we have proposed a scheme to control birhythmic behavior in nonlinear oscillators. our control scheme incorporates a self-feedback term that is governed by the variable to be controlled and its canonical conjugate. we have considered a prototypical model that shows birhythmic oscillation and has relevance in modeling biochemical processes. our study has revealed that a proper choice of the control parameter can effectively eliminate birhythmicity for any choice of the nonlinear damping parameters, and at the same time its variation may give rise to transitions between several interesting dynamical behaviors. physical implementation of our control scheme is very much feasible, since feedback through conjugate variables is quite natural in many experimental setups. we can realize the control scheme if we have access to at least one of the variables of interest; from that we can always generate its time derivative via real-time signal processing. we believe that our study may have potential applications in controlling birhythmicity in several mechanical and biochemical processes as well as in other fields. d.b. acknowledges financial support from csir, new delhi, india; t.b. acknowledges financial support from serb, department of science and technology (dst), india [project grant no.: sb/ftp/ps-005/2013]. | birhythmicity arises in several physical, biological and chemical systems. although many control schemes have been proposed for various forms of multistability, only a few exist for controlling birhythmicity. in this paper we investigate the control of birhythmic oscillation by introducing a self-feedback mechanism that incorporates the variable to be controlled and its canonical conjugate.
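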
using a detailed analytical treatment, bifurcation analysis and experimental demonstrations, we establish that the proposed technique is capable of eliminating birhythmicity and generating monorhythmic oscillation. further, the detailed parameter-space study reveals that, apart from monorhythmicity, the system shows transitions between birhythmicity and other dynamical forms of bistability. this study may have practical applications in controlling the birhythmic behavior of several systems, in particular in biochemical and mechanical processes. |
quantum control is aimed at designing external pulses in order to achieve efficient transfers between the states of the quantum system under study .this task is crucial in atomic and molecular physics , and has many applications extending from photochemistry to quantum computation .quantum control has attracted attention among the physics and chemistry communities , but also in applied mathematics for the development of new theoretical methods . in this context ,optimal control theory ( oct ) can be viewed as the most accomplished way of designing control fields .several modifications of the standard optimal control algorithms have been brought forward to account for experimental constraints , such as the non linear interaction of the system with the field , the question of spectral constraints and the robustness with respect to one or several model parameters .recently , we have shown how optimal control strategies can be extended to enforce the constraint of time - integrated zero - area on the control field .this constraint is a fundamental requirement in laser physics , as shown in different experimental and theoretical studies .basically this effect can be related to the fact that the dc component of the control field is not a solution of maxwell s equation .we refer the reader to the first part of the paper for a complete discussion .note that this point is particularly crucial in the thz regime , or with laser pulses accommodating only few optical cycles .the corresponding laser sources are by now commonly used in quantum control . up to now, the majority of the theoretical papers on quantum control does not consider this zero - area requirement , which may lead to non - physical electromagnetic control fields and is problematic in view of experimental implementations .in addition , imposing such a constraint to the control scheme may force the optimization algorithm to reach more efficient external fields achieving better transfer .these arguments show the importance of the methods and of the results presented in this work , in particular to fill the existing gap between theory and experiment .our preliminary study on the subject is a methodology - oriented paper , focusing mainly on the technical aspects of the optimization algorithms ( local and optimal approaches ) as briefly illustrated on two typical molecular processes ( orientation and dissociation ) .the present paper thoroughly expands this initial work by providing an extended numerical investigation and a detailed physical analysis of the dynamics of two specific molecular systems .the article is organized as follows .the physical origin of the time - integrated zero - area constraint is presented in sec.[sec2 ] .for completeness the principles of the optimization algorithms are also briefly outlined , full technical details being referred to ref. .section [ sec3 ] focuses on the control of molecular orientation , with the co molecule as an illustrative example .section [ sec4 ] is devoted to the dissociation of heh involving the control of a given fragmentation channel .conclusions and prospective views are given in a final section [ sec5 ] .the zero - area constraint for laser pulses although well - known , is rarely given a thorough and clear argumentation . 
for completeness and pedagogical purposes ,this section is devoted to the presentation of such a proof .the body of the proof is made of two parts : the calculation of the time integrated electromagnetic field and the physical interpretation of the result .we refer to the spatio - temporal electric field amplitude and its fourier transform , where the corresponding fourier conjugate variables are time and frequency , on the one hand , and space and momentum vectors , on the other hand .these quantities are related through : together with the usual relation between the frequency and the wave vector : using these notations , the time integrated field area is given by : which , upon the inversion of the integration order , leads to : the last summation on the right - hand - side is nothing but the dirac function : equation ( [ a.4 ] ) can then be simplified after integration over time and frequency as : it is clear from eq.([a.2 ] ) that a null frequency has as a consequence a null momentum ( ) .we finally obtain the physical interpretation of eq .( [ a.7 ] ) is as follows .the term actually represents a non - oscillating ( ) , non - propagating ( ) dc stark field . assuming a non - zero value for such a field requires , from the corresponding maxwell equation , the necessary existence of a spatial electronic charge distribution .in particular , finite charges placed at finite distances may create the dc stark field in consideration .one may also invoke a charge distribution set at infinite distance , as limiting asymptotic conditions .but , of course , within this hypothesis , a non - zero stark field would necessitate an overall distributed infinite electric charge .ultimately , in systems where electric charges are not introduced on purpose , a dc stark field can not be created , and the goal of this section is to show how zero - area control fields can be designed .the three proposed methods are based on the optimization of the parameters of a functional form associated with the control problem .we consider a quantum system interacting with an electromagnetic field whose dynamics is described by the following time - dependent schrdinger equation : where and are the field - free hamiltonian and the interaction term , respectively , and the control field .the units used throughout this paper are atomic units .let be the initial state and the total duration of the control .the goal of the control problem is to maximize the expectation value of a given observable at time .the first approach consists in introducing a closed - form expression for the control field depending on a finite number of parameters denoted : with .the ensemble is chosen such that the constraint on the time - integrated area , , is satisfied .the optimal values of the parameters are determined in a second step by using gradient or global optimization procedures such as genetic algorithms .this approach , which can be very efficient in some cases , is however highly dependent on the parametrization used to describe the control field .a more general method is based on an extension of the optimal and local optimization algorithms , which enforces the zero - area constraint through the introduction of a lagrange multiplier .such algorithms are proposed and investigated in ref .for the sake of completeness of the paper , we briefly outline below the principles of the different optimization procedures and we refer the interested reader to our preceding work for technical details on the numerical algorithms . 
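As a quick numerical check of the argument above, the time-integrated area of a pulse is, up to the discretization convention, nothing but its zero-frequency Fourier component, and suppressing the near-zero-frequency content is exactly the filtering invoked later in the text. The sketch below illustrates this on an assumed test pulse; the pulse shape, duration and cutoff are illustrative assumptions.

```python
import numpy as np

# Numerical check: the time-integrated area of a pulse equals its Fourier
# component at zero frequency, so a zero-area field is one with no DC content.
# The pulse shape below is an illustrative assumption.
T, N = 200.0, 4096                       # total duration (arb. units), samples
t = np.linspace(0.0, T, N, endpoint=False)
dt = t[1] - t[0]

envelope = np.sin(np.pi * t / T) ** 2
field = envelope * (np.cos(0.3 * t) + 0.2)   # the +0.2 bias creates a DC offset

area = field.sum() * dt
spectrum = np.fft.rfft(field) * dt           # approximates the continuous FT
print(f"time-integrated area   : {area: .4f}")
print(f"spectral amplitude at 0: {spectrum[0].real: .4f}")  # same number

# brute-force zero-area design: remove the lowest-frequency bins, transform back
freqs = np.fft.rfftfreq(N, dt)               # in cycles per unit time
filtered = np.where(freqs < 0.01, 0.0, spectrum)
field_za = np.fft.irfft(filtered / dt, n=N)
print(f"area after filtration  : {field_za.sum() * dt: .2e}")
```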
the optimal control problem is defined through a cost functional : ^ 2.\ ] ] which allows us to maximize the expectation value at time of a given observable , while penalizing the total energy of the control field and enforcing the zero - area constraint .the novelty of the computational scheme resides in the introduction of a new lagrangian multiplier to account for the zero - area constraint . in eq .( [ cost ] ) , the positive parameters and , expressed in a.u . ,weight the different parts of with respect to the expectation value of , and penalize the area of the field ( - term ) and its energy ( - term ) .larger is and closer is the time - area of the control field to zero . in eq .( [ cost ] ) , is a reference pulse and an envelope shape , which can be chosen as . as usual , note that the function is introduced in the cost functional in order to ensure the smooth switch on and off of the field at the beginning and at end of the control . starting from this new cost functional , it is straightforward to derive a standard iterative algorithm based on krotov ( as used in this work ) or gradient procedure .the basic ingredient of the optimization procedure is the definition , at each step , of a new control field . at step ,we get : }{2\lambda}-\frac{\mu}{\lambda}s(t)a_k , \label{eq : newfield}\ ] ] where is the state of the system at the iteration , the adjoint state obtained from backward propagation of the target taken as an initial state for eq .( [ eqschrodinger ] ) , and the control fields at steps and , respectively , and the corresponding time - integrated area . only the last term of the right - hand side of eq .( [ eq : newfield ] ) is different from a standard procedure .the local control theory ( lct ) can also be extended along the same lines by considering the following lyapunov function which accounts for the zero - area constraint : where is any operator such that .\label{eq : comuto}\ ] ] to ensure the monotonic increase of at any time , i.e. , the control field is defined as follows : |\psi(t)\rangle-2\mu a(t)\big),\ ] ] where is a positive parameter used to limit the amplitude of .we will also use in the following the parameter .finally , note that the two proposed optimization procedures can only reduce the time - integrated area without completely removing it .a filtering process can further be used to accurately design a zero - area field .this section is devoted to the control of the orientation dynamics of polar diatomic molecules with the constraint of zero - area fields .the two different approaches introduced in sec .[ sec2 ] will be considered and discussed .we introduce a family of pulses characterized by a closed - loop expression depending on two parameters , which can be adjusted to enhance the degree of orientation .a second option is based on the optimization algorithms of sec .[ sec2 ] which enforce the zero - area constraint through a lagrange multiplier .no analytic expression of the optimal field can be derived in this case .we consider a molecule described in a rigid - rotor approximation interacting with , a linearly polarized electromagnetic pulse along the - axis of the laboratory frame .the electric field is assumed to be in the thz regime .the co molecule in its ground vibronic state is taken as an illustrative example . 
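Before specializing to molecular orientation, it may help to see the modified local-control law at work on the simplest possible case. The sketch below applies it to a two-level toy system (not the CO rotor of the next section): the field is taken proportional to the instantaneous response coefficient of the target expectation value minus the 2-mu-A(t) area term, so that the Lyapunov function (target expectation value minus mu times the squared area) cannot decrease, up to sign and ordering conventions. The operator choices, the gain eta, the penalty mu and all numerical values are assumptions made only for illustration.

```python
import numpy as np
from scipy.linalg import expm

# Local control with the zero-area term on a two-level toy system:
# H(t) = (w0/2)*sz - E(t)*sx, target O = excited-state projector (it commutes
# with the field-free part), and E(t) = eta*(g(t) - 2*mu*A(t)) with
# g(t) = d<O>/dt per unit field, so that <O> - mu*A^2 is non-decreasing
# (up to time-discretization error).  All numbers are illustrative assumptions.
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
O = np.diag([1.0, 0.0])                       # excited-state population
w0, eta, mu = 1.0, 0.2, 0.05
dt, n_steps = 0.02, 10000

# a small excited-state admixture seeds the control (for a pure eigenstate
# the local field would be identically zero)
psi = np.array([0.05, np.sqrt(1.0 - 0.05**2)], dtype=complex)
A = 0.0                                       # running time-integrated area
for _ in range(n_steps):
    g = -2.0 * np.imag(np.vdot(psi, O @ sx @ psi))   # d<O>/dt = E(t) * g(t)
    E = eta * (g - 2.0 * mu * A)                      # zero-area-penalized law
    psi = expm(-1j * (0.5 * w0 * sz - E * sx) * dt) @ psi
    A += E * dt

print(f"final excited-state population: {np.vdot(psi, O @ psi).real:.3f}")
print(f"residual pulse area           : {A:.3e}")
```

Setting mu to zero recovers the usual local-control law, while larger mu trades some of the final yield for a smaller residual area, which is the compromise discussed throughout this paper.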
at zero temperature , the dynamicsis governed by the time - dependent schrdinger equation ( [ eqschrodinger ] ) where , in a linear approximation , and .the parameter is the rotational constant of the molecule , the angular momentum operator and the molecular permanent dipole moment .the spatial position of the diatomic molecule is given in the laboratory frame by its spherical coordinates , being the angle between the molecular axis and the polarization vector , and the corresponding azimuthal angle . the hilbert space associated with the dynamical system is spanned by the spherical harmonics , with and .we also recall that , due to the cylindrical symmetry of the problem , the projection of the total angular momentum on the field polarization axis is a good quantum number , so that does not depend on the angle . when the laser is switched on at , the initial condition is given by .the expectation value is usually taken as a quantitative measure of orientation . in the non - zero temperature case ,the time evolution of the molecular system is described by the density operator solution of the von neumann equation : .\ ] ] the initial condition is a boltzmann distribution at temperature and the degree of orientation is given by the expectation value expressed in terms of : ] and $ ] thz , respectively . the maximum orientation achieved during the field - free evolutionis indicated in fig . 1 by a circle .we observe that the global degree of orientation is generally low , except for a zone of high orientation around thz and .the maximum of obtained is of the order of for thz and .the size of the high orientation region shows the robustness of the control pulse with respect to experimental imperfections while setting the parameters and .( vertical color code ) for the co molecule as a function of the parameters and defined in eq.([family ] ) .the circle indicates the pulse for which the maximum degree of orientation is achieved . ] in a second step of the investigation , we use the best control field derived from the results of fig .[ fig1 ] as a guess field for the optimal control algorithm .to be applied , this algorithm requires the definition of a target state . here, we introduce the target operator defined as where is taken as the delay between the end of the guess pulse and the time where reaches its first maximum during the field - free evolution of the system .figure [ fig : evol_cos_i ] displays the time evolution of and gives the definition of the time .this figure also illustrates how is chosen. induced by the guess field .the time is given by the relation . ] in numerical calculations , the intensity of the guess field and the parameter are fixed to 20 tw/ and 1 , respectively , for both optimizations , with and without zero - area constraint .the delay in eq .( [ eq : tar_op ] ) is set to .the dynamics under the optimized fields is shown in fig . [fig : jt_area](a ) , comparing the effect of standard and zero - area constraint algorithms . for ( optimization without zero - area - solid thin blue line ) and for ( solid thick black line - a.u . ) .the red dashed line represents the orientation dynamics induced by the initial guess field .the panels ( b ) and ( c ) display the corresponding control fields and their fourier transform . 
]figure [ fig : jt_area](b ) shows the guess and the optimized fields with and without the zero - area constraint .note that the global shape of the two optimized fields is similar to the guess field , except for a small oscillatory behavior .the optimized pulse without the zero - area constraint leads to higher orientation ( ) than the pulse with the zero - area constraint ( ) , but the time - integrated area is divided by a factor 60 in the second case , going from -20.57 a.u .to -0.36 a.u . advocating for experimental feasibility .the very good orientation achieved demonstrates the efficiency of the optimal control algorithms , even if the area of the field is no more strictly zero .as expected , we observe in fig .[ fig : jt_area](c ) that the fourier transform of the three fields is equal or nearly equal to 0 at .since the optimized fields are very close to the guess pulse , their fourier transforms show similar features . however , the optimization slightly shifts the fourier transform towards the high frequencies .the efficiency of the zero - area constraint algorithm is also checked for the co molecule at non zero temperature .this is a much more difficult task than controlling the orientation at zero temperature .we discuss only the case of a long optimization time , .we have considered as initial field a closed - form expression with zero area , which can be written as the sum of three hermite polynomials ( see fig .[ fig : jt_area_t30k](b ) for a representation of this pulse ) .we have changed the maximum intensity from 20 tw/ to 2 tw/ .the parameters and are set to 20 a.u . and a.u .respectively . the temperature is fixed to 30 k. figure [ fig : jt_area_t30k](a ) displays the time evolution of the expectation value of induced by the guess and optimized fields .the dynamics under the optimized fields have similar features , but are very distinct from the one induced by the guess field .but for the non - zero temperature case .] figure [ fig : jt_area_t30k](b ) compares the corresponding optimized fields together with the guess field .as could be expected , the optimized fields show similar features . in this example , the area of the optimized field with the zero - area constraint is two orders of magnitude smaller than the one obtained without the zero - area constraint .this is a remarkable result since the time - integrated area is largely reduced while preserving a satisfactory orientation of the order of 0.2 .the price to pay for increasing the final degree of orientation can be seen in the fourier transform of the optimized pulses , which have a much more complicated structure with an oscillatory behavior at low frequency .these additional low frequencies found by the algorithm correspond to the slow oscillations of the optimized fields which appear after . by filtering out such oscillations , we have checked that this oscillatory behavior is essential to produce a high degree of orientation .following ref . 
, we observe that the low frequency distribution of the optimized field coincides with the rotational resonance frequencies .this suggests an interpretation for the origin of the oscillatory behavior and a possible control mechanism based on the excitation of these different frequencies .another important application of local and optimal control strategies is illustrated on molecular photodissociation .due to the short duration of the pulses ( as compared with the rotational period ) a frozen rotation approximation is valid .in addition , the molecule is assumed to be pre - aligned along the -direction of the laboratory frame .therefore , the diatomic system is described by its reduced mass and the internuclear distance .we aim at controlling the photodissociation of heh through the singlet excited states leading to he fragment in the shell .we shall consider only parallel transitions among the singlet states induced by the dipole operator assuming the internuclear axis pointing along the -direction .the adiabatic potential energy curves , the radial nonadiabatic couplings and the adiabatic transition elements of the dipole operator have been computed in ref .[ fig : heh_pot ] ( a ) displays the adiabatic potential energy curves .the partial photodissociation cross sections computed in ref . are displayed in fig .[ fig : heh_pot ] ( b ) .states of heh leading to fragments in the shell .the target states are : h + he(1s2s ) ( green dashed line ) , h + he(1s2p ) ( red long dashes dots ) .( b ) partial photodissociation cross sections together with an enlargement given as inset .the legend is the same as in panel ( a ) .] dynamics is performed in the diabatic representation obtained from the adiabatic - to - diabatic transformation matrix which has been derived by integrating from the asymptotic region where both representations coincide .the total hamiltonian introduced in eq .( [ eqschrodinger ] ) involves here and where are the potential matrix elements as obtained by diabatization of the adiabatic potential energies using the transformation matrix .the parameter is the number of diabatic electronic states under consideration and . are the diabatized dipole transition matrix elements .the initial state is the lowest vibrational state of the ground electronic adiabatic state .the goal is to enhance the yield in he in the = 2 shell through the dissociation channels leading to he , or to he .these two target asymptotic states are denoted by ( fig .[ fig : heh_pot ] , green dashed curve ) and ( fig .[ fig : heh_pot ] , red long dashes dots ) . in a first attempt we use a zero area gaussian pulse with a carrier frequency chosen from the photodissociation cross section maximum .we limit the total integrated intensity for further comparison with the oct strategy .the yields remain very weak , of the order of 3 % only , close to the value predicted by the fragmentation cross section .we then examine the efficiency of the zero - area constraint in both local and optimal control approaches .the local control can be considered as a very interesting first step before using oct . 
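Since the wave-packet propagation described above is carried out in the diabetic, i.e. diabatic, representation obtained by integrating the nonadiabatic coupling from the asymptotic region, a two-state toy sketch of that construction is given here before turning to the choice of the control operator. The model potentials and coupling function below are illustrative assumptions, not the HeH+ data of the cited reference.

```python
import numpy as np

# Two-state adiabatic-to-diabatic transformation: integrate the radial
# nonadiabatic coupling tau(R) into a mixing angle theta(R), starting from the
# asymptotic region where both representations coincide, then rotate the
# adiabatic potentials.  Model curves below are illustrative assumptions.
R = np.linspace(1.0, 30.0, 3000)
V1 = -1.0 / R                        # model adiabatic potentials (a.u.)
V2 = 0.5 * np.exp(-0.5 * R) + 0.05
tau = 0.8 / (1.0 + (R - 4.0) ** 2)   # model <1|d/dR|2> coupling, peaked at R = 4

dR = R[1] - R[0]
theta = -np.cumsum(tau[::-1])[::-1] * dR   # theta(R) -> 0 as R -> infinity

V_diab = np.empty((len(R), 2, 2))
for i, th in enumerate(theta):
    U = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    V_diab[i] = U.T @ np.diag([V1[i], V2[i]]) @ U

print("max off-diagonal diabatic coupling:", np.abs(V_diab[:, 0, 1]).max())
print("coupling at the last grid point   :", abs(V_diab[-1, 0, 1]))  # ~0 asymptotically
```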
in the presence of nonadiabatic interactions , the operator referred to in eq .( [ eq : jlc ] ) requires to be chosen carefully since it has to commute with the field - free hamiltonian ( see eq .( [ eq : comuto ] ) ) .the projectors on either adiabatic or diabatic states are thus not appropriate in lct since they do not commute with this hamiltonian due to kinetic or potential couplings , respectively .this crucial problem can be overcome by using projectors on eigenstates of , i.e. on scattering states correlating with the controlled exit channels . in this example , the operator takes the form : where represents the two channels leading to the target he fragments , the objective being .the local control field now reads -2 \tilde{\mu } a(t),\ ] ] involving two adjustable parameters and .the ingoing scattering states are estimated using a time - dependent approach based on mller - operators defined by where is the outgoing plane wave in channel with energy and is the hamiltonian operator of the fragments where all couplings have vanished asymptotically .this control strategy remains local in time but can preemptively account for nonadiabatic transitions that occur later in the dynamics .the photodissociation cross section shows that there is no spectral range where the he dissociation channels dominate ( see fig . [fig : heh_pot](b ) ) .the local control without any constraint ( ) finds a very complicated electric field which begins by a regular oscillatory pattern followed after 10 fs by a complex , positive real component shape whose area is obviously not zero ( see blue thick curve in fig . [fig : heh_field](a ) ) .this erratic positive structure found in the lct field can be interpreted as a stark field .we therefore choose this example to check the efficiency of the zero - area constraint algorithm .we first use the zero - area algorithm with a.u . and different values of (some examples are given in ref .figure [ fig : heh_field](a ) ( green thin curve ) shows the pulse for = 0.05 a.u .the algorithm efficiently reduces the stark structure without completely removing it .the average objective ( eq . ( [ newproj ] ) ) for the two cases without ( blue thick curve ) and with the area constraint ( green thin curve ) are displayed in fig .[ fig : heh_field](d ) .the objective is divided by about 2/3 . as shown in ref . , increasing to still reduce the stark component decreases the objective so that a compromise has to be found .a complementary brute force strategy consists in removing the main part of the stark component by filtration of near - zero frequencies .starting from the initial lct pulse , this already provides a large correction to the non vanishing area . the field after filtration of the low frequenciesis shown in fig .[ fig : heh_field](b ) ( red thin curve ) . to estimate the efficiency of the filtered pulse, we show the occupation of the two target adiabatic channels during the propagation in fig .[ fig : heh_field](e ) ( red thin curve ) .the final value of 3.75 is notably lower than the asymptotic value of the local objective 8.55 ( blue thick curve in fig .[ fig : heh_field](d ) ) .figure [ fig : heh_field](b ) ( black thick curve ) shows the pulse with = 0.05 a.u .after a subsequent filtering of the low - frequency components ( compare with the green thin curve in fig . 
[fig : heh_field](a ) ) .the resulting regular profile confirms the efficiency of this mixed strategy .the population in the selected adiabatic channels with this filtered pulse is the black thick curve in fig .[ fig : heh_field](e ) .the price to pay for reducing the pulse area is always a decrease of the target yield , but the best compromise is obtained when using both area constraint algorithm and residual filtering . , green thin line : with zero - area constraint a.u . , ( b ) lct with residual filtering , red thin curve : a.u . ,black thick curve : a.u .. ( c ) oct starting from the local field after filtration ( red thin line of panel ( b ) ) , red thin line : a.u .lower panels : evolution of the objectives during the control for the different strategies .( d ) the lct objective is the population in the selected scattering states , blue thick line : without constraint , green thin line : with zero - area constraint a.u . , ( e ) population in the adiabatic target states during the propagation with the filtered lct pulses , red thin line : a.u . , black thick line : a.u .( f ) population in the target states during the propagation with the oct pulse , red thin curve : diabatic representation , blue thick line : adiabatic representation .the asymptotic values give the he yields .
] in a second step , we explore the oct strategy .note that this procedure only refers to the zero - area algorithm without any subsequent filtration .the yield obtained with guess gaussian fields increases only weakly while better results are obtained when the trial field is the lct pulse .we choose the lct pulse after filtration ( red thin curve in fig .[ fig : heh_field](b ) ) as guess field .note that the oct strategy only uses the zero - area algoritm without any subsequent filtration .the lct field can be computed as long as the components in the excited states have some amplitude in the range covered by the initial ground vibrational state ( roughly speaking , the franck condon region ) .this leads to a field vanishing after about 20 fs .in the global oct strategy , the field is optimized on a time which may be longer .this opens the flexibility to exploit additional transitions towards the target states .we choose a final time fs .the spatial grid is calibrated so that the target wave packet components do not reach the absorbing potential in the asymptotic region .the oct objective is simply built here from the projector onto the diabatic states and the operator takes the form : as the objective is defined with the wave packet at the final time , this corresponds to the required optimization of the decoupled scattering states . at each iteration, the final condition of the lagrange multiplier is the asymptotic components in the target channels and no amplitude in all the other ones .the parameter is chosen automatically by constraining the integrated intensity to 0.06 a.u .( see ref .the field corresponding to the best a.u .is shown in fig .[ fig : heh_field](c ) with a yield reaching . as the simulation is performed in the diabatic basis set , the time evolution of the objective ( see red thin curve in fig . [fig : heh_field](f ) ) corresponds to the population in the sum of these two diabatic channels .the very strong oscillations reveal that the mechanism found by oct in the last step is strongly non diabatic because the selected states are coupled with the other states during the process and decouple only at the end of the control .the mechanism is more simple in the adiabatic representation as can be seen by the evolution of the total population in the two adiabatic states correlated to the target fragments ( blue thick curve in fig .[ fig : heh_field](f ) ) . fig .[ fig : heh_pop ] compares the occupation of the adiabatic electronic states during the propagation with the guess field ( panel ( a ) ) and the best zero - area criterion optimal control pulse ( panel ( b ) ) .the increase of the global target in oct mainly comes from the enhancement of the ( he* ) ( green dashed curve ) .oct also reduces the unwanted transitions towards all other channels .the lower panels in fig .[ fig : heh_pop ] show the spectrograms of the filtered lct and oct fields .the main operating frequency of the local field corresponds to that predicted by the photodissociation cross section ( at about 1.3 a.u . ) for maximizing both channels .the inset in fig .[ fig : heh_pot](b ) shows that this frequency corresponds to the maximum yield for the fragment ( he* ) . 
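The time-frequency analysis invoked here can be reproduced with a short-time Fourier transform. The sketch below builds such a spectrogram for an illustrative two-component test pulse; the 1.3 a.u. and 0.3 a.u. carriers echo the frequencies discussed in the text, but the pulse itself is an assumption, not the actual LCT or OCT field.

```python
import numpy as np
from scipy.signal import stft

# Spectrogram of a control field: which frequency acts at which time.
# The two-component test pulse below is an illustrative assumption.
dt = 0.05                                   # time step (a.u.)
t = np.arange(0.0, 400.0, dt)
pulse = (np.exp(-((t - 100.0) / 40.0) ** 2) * np.cos(1.3 * t) +
         np.exp(-((t - 280.0) / 60.0) ** 2) * np.cos(0.3 * t))

f, tau, Z = stft(pulse, fs=1.0 / dt, nperseg=2048, noverlap=1792)
omega = 2.0 * np.pi * f                     # angular frequency (a.u.)
power = np.abs(Z) ** 2

# dominant frequency in the early and late parts of the pulse
for label, mask in [("early", tau < 200.0), ("late", tau >= 200.0)]:
    cols = np.where(mask)[0]
    i_w = power[:, cols].sum(axis=1).argmax()
    print(f"{label:5s} window: dominant omega ~ {omega[i_w]:.2f} a.u.")
```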
the additional mechanism due to a non optical stark effectis thus suppressed by filtering very low frequencies .the oct field first uses a low frequency component centered at about 0.8 a.u .this frequency favors of channel ( he* ) which explains the steep increase of that population and the vanishing influence of channel ( he* ) .the channel is also more involved .after 20 fs , when the wave packet is out of the franck condon region , one observes a new mechanism proceeding via transitions between the target and the and channels .these transitions require lower frequencies ( about 0.3 a.u . ) corresponding to the gap between the states ( see fig .[ fig : heh_pot](a ) ) . .the legend is the same for panels ( a ) and ( b ) .the target states are : h + he(1s2s ) ( green dahed line ) , h + he(1s2p ) ( red dashes dots ) .lower panels : spectrograms of the fields .a color code with an arbitrary unit is given in each panel ( c ) or ( d ) to estimate the relative intensities .( a ) local control after filtration taken as guess field for oct ( red thin curve in fig .[ fig : heh_field ] ( b ) ) ( b ) optimal control with a.u .( red thin curve in fig .[ fig : heh_field ] ( c ) ) ]after having discussed the physical origin of the time - integrated zero - area constraint on the laser control of molecular dynamics , we show that this fundamental requirement can be included in the standard optimization computational schemes .a detailed description of the dynamics achieved with such zero - area control fields is given and applied to two specific examples of molecular dynamics , namely the control of molecular orientation and that of molecular fragmentation .very encouraging results have been obtained even in the case of complicated quantum systems . in particular ,we have derived for molecular orientation a closed - form expression of the control field depending only on two free parameters .the zero - area constraint is satisfied for any value of these parameters . at zero temperature, this approach reveals to be very efficient even when compared with the optimal solution .however , we have observed that the modified optimal control algorithm used in this work remains the best tool to handle more involved control problems , which can not be solved by lct or analytical fields with a sufficient efficiency .this work and the possibility of including experimental constraints in optimal control algorithms pave the way for future experimental implementations in quantum control .in other words , such results help in bridging the gap between control theory and control experiments .+ + * acknowledgments * s. v. acknowledges financial support from the fonds de la recherche scientifique ( fnrs ) of belgium .financial support from the conseil rgional de bourgogne and the quaint coordination action ( ec fet - open ) is gratefully acknowledged by d. s. and m. n .. we thank the cost xlic action .o. a. acknowledges support from the european union ( project no .itn-2010 - 264951 , corinf ) . | the constraint of time - integrated zero - area on the laser field is a fundamental , both theoretical and experimental requirement in the control of molecular dynamics . by using techniques of local and optimal control theory , we show how to enforce this constraint on two benchmark control problems , namely molecular orientation and photofragmentation . the origin and the physical implications on the dynamics of this zero - area control field are discussed . |
recent studies show that , driven by emergence of highly capable devices such as smartphones and resource demanding wireless services such as video streaming , the demand for wireless capacity will increase roughly 1000x , compared to the current 4 g networks , by the year 2020 . in order to overcome this wireless capacity crunch ,an evolutionary architecture has been introduced in the next - generation 5 g networks , in which low - cost and lower - power small cell base stations ( sbss ) are densely and randomly deployed within the coverage of macrocell base stations ( mbss ) and .this new architecture has shown its great potential to improve the capacity and coverage of wireless cellular systems .however , such extreme network densification makes it challenging to manage the cellular system at various levels that include interference control , coverage optimization , load balancing , mobility robustness optimization , and energy management .many of the existing solutions require autonomous cooperation between network devices .for example , sbss can cooperate for performing coordinated multipoint ( comp ) transmissions or for coordinating their interference via techniques such as interference alignment .similarly , user cooperation can take place in order to further exploit user diversity , e.g. , user devices can relay for each other by using device - to - device ( d2d ) communications , or form cooperative groups to support virtual multiple - input multiple - output ( mimo ) transmissions . despite in emerging wireless networks , most existing optimization techniquesare restricted to addressing the cooperation problem for centralized and homogenous wireless systems in which cooperation is a privilege rather than a necessity . in order to provide a more flexible framework for the cooperative behaviors of future wireless systems , each device in the network can be treated as an individual decision maker that act on its own principles , which naturally leads to game - theoretic approaches where multiple players form a stable and efficient network operating point in a self - organizing manner .in particular , the framework of _ cooperative game theory _ provides the necessary tools for modeling and developing self - organizing techniques for forming cooperative groups or coalitions between network devices , based on the mutual benefit and costs for cooperation .indeed , cooperative games , in general , and coalition formation games ( cf games ) , in particular , have become a popular tool for analyzing wireless networks . in , singleantenna users self - organize into multiple coalitions and share their antennas in each coalition to form a virtual mimo system , and hence benefit from spatial diversity or multiplexing . in , secondary users ( sus ) form multiple coalitions and combine their individual sensing data at the coalition head to perform cooperative spectrum sensing , and hence improve their sensing performance . in ,wireless transmitters form coalitions to coordinate their transmissions so as to improve their physical layer security . in , coalition games have been used to overcome the curse of boundary nodes , in which boundary nodes use cooperative transmissions to help the backbone nodes in the middle of the network . in ,hedonic coalition formation games are utilized to model the task allocations problem , in which a number of wireless agents are required to collect data from several arbitrarily located tasks using wireless transmissions . 
in ,a coalition formation scheme is proposed for road - side units in vehicular networks to improve the diversity of the information circulating in the network while exploiting the underlying content - sharing vehicle - to - vehicle communication network . in , a cooperative model based on coalition formation gameis proposed to enable femtocells to improve their performance by sharing spectral resources , minimizing the number of collisions , and maximizing the spatial reuse . in , coalition formation games are utilized to strike a balance between the qos provisioning and the energy efficiency in a clustered wireless sensor network .however , most of this existing body of work focuses on coalition formation models in which the players form separate coalitions and get payoffs from the single coalition they join . in future wireless systems ,communication nodes are equipped with more powerful devices that are able to utilize multiple resources in a more flexible way .this makes it possible and necessary to allow nodes to participate in multiple _ overlapping coalitions _ , and , subsequently , receive payoffs from all coalitions they participate .for example , a multiple antenna user may join multiple coalitions by devoting its antennas into different groups of users , and benefit from multiple virtual mimo transmissions , and also , an sbs with multiple neighboring sbss needs to coordinate its transmissions with multiple groups of sbss , so as to avoid inner - channel interferences from all its neighbors . in these scenarios ,the coalitions formed by communication nodes are overlapping , and each node receives payoffs from the multiple coalitions it joins . even though there are many other available modeling tools for analyzing cooperation in wireless networks , we focus on the methods that belong to cooperative game theory .the main contribution of this paper is to present an introduction to a novel mathematical framework from cooperative games , _ overlapping coalition formation games _( ocf games ) , which provides the necessary analytical tools for analyzing how players in a wireless network can cooperate by joining , simultaneously , multiple overlapping coalitions .first , in section , we introduce the basic concepts of ocf games in general , and develop two polynomial algorithms for two classes of ocf games .then , in sections and , based on and , we present two emerging applications of ocf games in small cell - based heterogeneous networks ( hetnets ) and cognitive radio networks , in order to show the advantages of forming overlapping coalitions compared with the traditional non - overlapping coalitional games . in section ,we conclude by summarizing the potential applications of ocf games in future wireless networks .in this section , we formally introduce the notion of cooperative games with overlapping coalitions , or ocf games . in section .a , we present the basic model of ocf games and illustrate the overlapping gain compared to traditional cooperative games with non - overlapping coalitions . in section .b , we focus on one key stability notion in ocf games , -core , which is a direct extension of the core from traditional cooperative games . 
in this regard , we show that the computation of stable " outcomes in the sense of the -core can generally be intractable .therefore , in section .c , we identify several constraints that lead to tractable subclasses of ocf games , and provide efficient algorithms for solving games that fall under these subclasses .game theory is a mathematical tool that analyzes systems with multiple decision makers having interdependent objectives and actions .the decision makers , which are usually referred to as _ players _ , will interact and obtain individual profits from the resulting outcome . in cooperative games, the players can form cooperative groups , or _ coalitions _, to jointly increase their profits in a game . in traditional cooperative games ,the players are typically assumed to form disjoint , non - overlapping coalitions , and they only cooperate with players within the same coalition .however , there are situations in which some players may be involved in multiple coalitions simultaneously . in such cases, these players may need to split their resources among the coalitions that they participate in .for example , a multi - mode wireless terminal may access base stations from different networks and it needs to distribute its traffic load among these networks . in such situations , some coalitions ( cells of different systems ) may involve some of the same players ( multi - mode terminals ) , and therefore may overlap with one another .now , we formally introduce the mathematical tool to model these overlapping " situations , _ cooperative games with overlapping coalitions _ , or _ocf games_. in ocf games , each player possesses a certain amount of resources such as time , power or money . in order to obtain individual profits ,the players form coalitions by contributing a portion of their resources and receive payoffs from the devoted coalitions .a _ coalition _ can be represented by the resource vector contributed by its coalition members , i.e. , , where represents player s resources that are contributed to this coalition . for each coalition ,the _ coalition value _ is decided by a function ^n \rightarrow \mathbb{r}^+ ] , such that for any resource vector .briefly , is the maximal total value that the players can generate by forming overlapping coalitions when their total resources are given by .we observe that , which is a recurrence relation for a discrete - time dynamic system .thus , we can use the dynamic programming algorithm to calculate .given the values of for all , the computation of requires times of computing .therefore , the entire computation of requires at most times of computing .when is calculated , we can trace backward the optimal path and achieve every coalition in the optimal coalition structure .therefore , the optimal coalition structure can be calculated in time .therefore , we can calculate an o - profitable deviation in time , which is polynomial in .the algorithm for -coalition games is shown in table [ k - coalition ] .p160 mm input an initial outcome . 
[ 1 ] initial coalition structure initial payoffs for all decide new payoffs final coalitional structure final payoffs output an o - stable outcome .+ in a -task ocf game , each coalition in the game corresponds to a specific task and each player can only contribute to tasks .being different from -coalition ocf games , the number of coalitions in a -task ocf game is strictly limited by the number of tasks , which are predetermined by the considered problem .for example , in a software company , the available projects are predetermined and the developers can not form coalitions to generate new projects but only divide his time among the existing ones .since the number of coalitions is fixed in a -task ocf game , a deviation will not form new coalition structures but only move resources among the existing coalitions , and thus , we refer to deviation as _ transfer _ in -task ocf games .the number of possible transfers is now given by ^s = \mathcal{o}(n^s)$ ] , which is polynomial in . since the deviators do not form an overlapping coalition structure , their payoffs can be easily calculated using the arbitration function .therefore , an o - profitable deviation of a -task ocf game can be calculated in time .the algorithm for -task games is shown in table [ k - task ] .p160 mm input an initial outcome .[ 1 ] initial coalition structure initial payoffs for all decide new payoffs final coalitional structure final payoffs output an o - stable outcome . + given the polynomial algorithms for -coalition games and -task games , we then provide two example applications to show how the concepts and algorithms of ocf games can be utilized in wireless networks .note that we restrict our model to single - resource scenarios in which the players only have one type of resources .however , this model can be extended to the multi - resource setting , by using a vector rather than a scalar to describe the contribution of a player , and all the concepts and algorithms can also be extended to such a case .in small cell - based hetnets , a large number of small cells may be randomly deployed in the same spectrum as the existing , macro - cellular network . due to the large amount of small cells and their ad hoc nature of deployment ,interference management is always one of the key challenges in hetnets .there are many interference management techniques , such as successive interference cancellation ( sic ) , parallel interference cancellation ( pic ) and multiuser detection ( mud ) .however , these techniques require global knowledge of the characteristics of the interfering channels , which generates a huge amount of backhaul traffic for small cells and makes it impractical when the number of small cells is large . in this case , distributed schemes become important , where no central controller is involved in the system and minimum information exchange is required among the small base stations . game theory , due to its self - organizing characteristic , has also been widely utilized to design distributed approaches for interference management . however , most of these approaches are based on non - cooperative games where no information exchange is allowed among small cells . in this paper, we propose a cooperative approach based on ocf games .we consider the downlink scenario in which the macro users are interfered by nearby sbss , and small cell users are interfered by mbss as well as nearby sbss . 
to avoid interference between the macro network and small cells, the underlayed mbs can inform each sbs in coverage of the available rbs in the current slot , which is determined by the radio assignment of the macro users .however , interference between small cells still exists due to the lack of coordination between sbss .this scenario is illustrated in fig .[ app1 ] . we will study how ocf games can be used to coordinate the interference between sbss and improve the entire network performance . for each sbs , the available rbs , which are decided by the mbs that covers it ,are the resources that can be used to cooperate with other sbss . for each available rb ,the sbs can decide to leave it unoccupied so as to reduce the inter - cell interference , or to utilize it for downlink transmissions so as to improve the throughput .when the sbss cooperate with each other by contributing a part of their available rbs , they form a coalition in which all contributed rbs are evenly distributed to the involved sbss . to avoid interference inside the coalition , each rbcan only be distributed to one sbs , and the sbss can only utilize the rbs distributed to them .the value of a coalition is given by the total downlink throughput generated by the resources of this coalition .note that the interference outside the coalition still exists and it should be considered in the calculation of coalition value . the value is then distributed to the involved sbss as their payoffs when they actually use them in the downlink transmission . in order maximize their individual payoffs, the sbss may form different overlapping coalitions by deviating from the current overlapping coalition structure .the dynamics can be seen as an ocf - game , which converges to an o - stable structure as we explained . since the coalitions can only be formed by neighboring sbss ,the number of coalitions that a sbs can participate is limited .therefore , the studied ocf game is a -coalition ocf game .we use the developed algorithm in table [ k - coalition ] and the performance is shown in fig .[ radio ] . area with different levels of traffic load .the interference radius of each sbs is set as m.,width=403 ] in fig .[ radio ] , we compare the developed algorithm with the situation of no overlapping in networks with different levels of traffic load . the values and represent the average rate between the number of required rbs by small cell users and the total available rbs for sbss .when the sbss are sparsely deployed , the sbss seldom interfere each other , and the performance improvement by the developed algorithm is limited . 
as more sbss are deployed in the area , the interference coordination becomes crucial and the developed algorithm improves the network performance by to .when the sbss are extremely dense , the inevitable interference between the coalitions dominates the network performance , and the advantage of the developed algorithm converges to zero .also , when the traffic load is heavier , interference coordination can bring more benefits to the network , and thus , the developed algorithm performs better .in order to provide gigabit transmission rate , future wireless networks must use a large amount of spectrum resources .however , the scarcity of the radio spectrum coupled with the existing , fixed spectrum assignment policies , has motivated the need for new , dynamic spectrum access mechanisms to efficiently exploit the spectral resources .cognitive radio ( cr ) is one highly promising technique to achieve such dynamic exploitation of the radio spectrum . in cr networks ,unlicensed , secondary users ( sus ) , can sense the environment and change their parameters to access the spectrum of licensed , primary users ( pus ) , while maintaining the interference to the pus below a tolerable threshold . such spectrum sensing is an integral part of any cr network .indeed , reaping the benefit of cr is contingent upon deploying smart and efficient spectrum sensing policies . here , we consider the cooperative sensing scheme , in which nearby sus exchange their local sensing results and then make collaborative decisions on the detection of pus .there are three categories of cooperative spectrum sensing , based on how cooperating sus share their sensing data in the network : centralized , distributed , and relay - assisted . here , we consider the distributed case , where sus communicate among themselves with no fusion center or relay , as seen in fig . [ app2 ]traditionally , the distributed approach adopts non - cooperative schemes for its simplicity . in this paper, we will show how ocf games can be used to introduce cooperation between sus , and thus , improve the overall sensing performance .consider a cognitive radio network with multiple sus equipped with energy detectors and a single pu far away from them . in this network ,the sus can individually and locally decide on the presence or absence of the pu via their own detectors .then , they can cooperate with one another by exchanging their local decisions via a reporting channel . at last, each su combines its local decision with the received decisions and decides whether or not the pu is present .note that the sus may have different local detectors with different detection threshold , their missed detection probabilities and false alarm probabilities may be different . in process of cooperativesensing , each su in the system needs to collect local decisions from other sus , and thus , each su represents a sensing task to be accomplished via the cooperation of sus .however , the bandwidth of the report channel is limited for every su , which is usually not enough to transmit to all other sus , especially for those sus with bad channel conditions .thus , a su needs to decide how to cooperate with other sus by efficiently allocating its limited bandwidth to the transmissions of different sus .therefore , this is -task ocf game , where each task is presented by a coalition composed of a head su that collects local decisions and other sus that report to it . 
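a common concrete choice for combining the exchanged local decisions is the or fusion rule ; the sketch below ( assuming independent detectors and error - free reporting , which are our simplifications ) computes the resulting missed - detection , false - alarm and overall incorrect - decision probabilities for a head su .

```python
def or_rule_performance(p_md, p_fa):
    """combined missed-detection and false-alarm probabilities of a head su
    under the or fusion rule, assuming independent detectors and error-free
    reporting (both assumptions are ours, made only for illustration).
    p_md / p_fa: lists of the cooperating sus' local probabilities, head included."""
    combined_md = 1.0
    no_alarm = 1.0
    for md, fa in zip(p_md, p_fa):
        combined_md *= md          # the pu is missed only if every su misses it
        no_alarm *= (1.0 - fa)     # no alarm only if no su raises a false alarm
    return combined_md, 1.0 - no_alarm

def incorrect_probability(p_md, p_fa, prior_pu_present=0.5):
    """overall probability of an incorrect decision (the uniform prior on the
    pu being present is an assumption of this sketch)."""
    md, fa = or_rule_performance(p_md, p_fa)
    return prior_pu_present * md + (1.0 - prior_pu_present) * fa

# toy usage: three sus with heterogeneous detectors
print(incorrect_probability([0.2, 0.3, 0.25], [0.05, 0.1, 0.08]))
```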
the value of the coalition is then given by the sensing performance of the head su , which can be calculated based on the adopted fusion rule . we utilize the developed algorithm in table [ k - task ] and show its performance in fig . [ cr ] . in fig . [ cr ] , we compare the developed algorithm with the local sensing method and the non - overlapping method . in the local sensing method , the sus use their local sensing decision without any information exchange . in the non - overlapping method , the sus form non - overlapping coalitions and each su only shares information inside the coalition it joins . clearly , the developed algorithm outperforms both methods . the incorrect probability decreases to of the local sensing method and of the non - overlapping method . also , the improvement becomes larger as the number of sus increases . ocf games are quite suitable for modeling the future wireless networks , in which the wireless nodes are dense , self - organizing , and cooperative . in this section , we briefly discuss other potential applications of ocf games and then summarize the applications in table [ app ] :

application & player & resources & coalition & coalition value & type
radio resource allocation & small cells & available rbs & radio coordination among the rbs from different small cells & total throughput of the coordinated rbs considering all the potential interference & -coalition
cooperative spectrum sensing & sus & signaling bits & a specific su and the sus that report to this su & the cooperative sensing performance of this specific secondary user & -task
multi - radio traffic offloading & multi - mode devices & user traffic & a specific base station and the traffic contributed by different devices & a function reflecting the user experience & -task
comp & base stations & radio resources & a cell - edge user and the resources contributed by different base stations & the throughput of the cell - edge user & -task
virtual mimo & mobile phones & radio resources & cooperative users forming a virtual mimo group & the mimo link rate & -coalition
smartphone sensing & smartphones & battery energy & a task and the energy devoted by different smartphones & the task utility & -task

cellular networks are constantly evolving into their next generation . however , the former systems are not entirely replaced by the new systems . in fact , it is expected that different networks will coexist for a long time , and , thus , mobile phones will be multi - mode terminals that enable communications over different radio access technologies ( rats ) . in order to fully exploit their network investments , the operators must intelligently offload their network traffic over different rats . developing such offloading schemes , which must consider the demands and access authorities of different users , the transmitting rates of different technologies , and the deployment and load of different base stations , is quite challenging for a large number of users and base stations . however , one can use the proposed -task ocf game to model this problem .
in the ocf game model, the mobile users can distribute their traffic into different base stations in different networks .a coalition here represents a base station as well as the traffic devoted from different mobile users .the coalition value can be simply defined as the total throughput of this base station with channel and technology limitations , or a sophisticated function reflecting the user experience , which considers the delay and rate experienced by the users , and the cost and energy efficiency of the network . using the developed algorithm in table [ k - task ] ,the user traffic can be intelligently distributed among different networks with high network performance in the sense of the defined value function . in order to increase the performance of cell - edge users , coordinated multipoint ( comp )transmission has been proposed , in which the signals of multiple base stations are coordinated to serve a cell - edge user .since there are multiple cell - edge users , the base stations should allocate their radio resources among these users .it is a challenging optimization problem , since the channel conditions , traffic demands and radio resources are different for different users and base stations .however , we can model this problem using a -task ocf game . in the ocf game model, the base stations can freely allocate their radio resources to different users , including bandwidth , power and antenna resources .a coalition represents a cell - edge user as well as the radio resources devoted from different base stations .the coalition value is defined as the throughput of this cell - edge user .thus , using the developed algorithm in table [ k - task ] , the radio resources of base stations can be efficiently distributed among different cell - edge users .another related application is the cooperation between user devices . in order to increase their transmission rate ,nearby users may group together to use virtual mimo transmissions .the mimo link rate is generally increasing with the number of cooperated users , while the marginal increase is decreasing due to the increasing distance between different users .thus , a user may want to allocate its radio resources among different cooperative groups , so as to maximize its individual throughput .this problem can be modeled via a -coalition ocf game , in which a coalition represents a virtual mimo group and the coalition value is the mimo link rate . using the developed algorithm in table [ k - coalition ] , the radio resources of users can be efficiently distributed among different virtual mimo groups . in recent years , smartphones are equipped with more and more sensors .these powerful sensors allow public departments or commercial companies to accomplish large - area sensing tasks via individual smartphones .these tasks often require collecting data in a large area , and thus , a huge number of smartphones may be involved .based on the task itself and the geographic locations of smartphones , different tasks may require different amount of energy and provide different payoffs for different smartphones . a smartphone user must decide to which tasks he should devote the limited energy .therefore , we can model this problem with the studied -task ocf game , in which each coalition represents a task and the energy devoted from different smartphones , and the coalition value is given by the task utility . 
using the developed algorithm in table [ k - task ], the smartphone users can efficiently allocate their energy into different sensing tasks .in this paper , we have introduced the framework of overlapping coalition formation games as a tool to model and analyze the communication scenarios in future networks .in particular , we have defined two subclasses , namely -coalition and -task ocf games , and we have developed polynomial algorithms to achieve an o - stable outcome .subsequently , we have presented , in detail , how ocf games can be used to address challenging problems in two application domains : radio resource allocation in hetnets and cooperative sensing .in addition , we have discussed some other potential applications of ocf - games , including multi - radio traffic offloading , cooperative communications , and smartphone sensing . finally , we envision that the use of the ocf game framework will play an important role in 5 g networks , particularly , as the network becomes more dense , decentralized and self - organizing .o. onireti , f. heliot , and m. a. imran , on the energy efficiency - spectral efficiency trade - off in the uplink of comp system , " _ ieee transactions on wireless communications _ , vol .11 , no . 2 , pp . 556 - 561 ,f. pantisano , m. bennis , w. saad , m. debbah , and m. latva - aho , interference alignment for cooperative femtocell networks : a game - theoretic approach , " ieee transactions on mobile computing , vol .12 , no . 11 , pp . 2233 - 2246 , nov .y. kawamoto , j. liu , h. nishiyama , and n. kato , an efficient traffic detouring method by using device - to - device communication technologies in heterogeneous network . " , _ ieee wireless communications and networking conference _ , istanbul , turkey , apr .2014 . w. feng , y. wang , n. ge , j. lu , and j. zhang , virtual mimo in multi - cell distributed antenna systems : coordinated transmissions with large - scale csit , " _ ieee journal on selected areas in communications _31 , no .10 , pp . 2067 - 2081 , oct .2013 .w. saad , z. han , m. debbah , a. hjorungnes , and t. basar , coalitional game theory for communication networks , " _ ieee signal processing magazine , special issue on game theory _ ,26 , no . 5 , pp .77 - 97 , sep . 2009 .w. saad , z. han , m. debbah , and a. hjorungnes , a distributed merge and split algorithm for fair cooperation in wireless networks , " in _ proceedings of international conferernce on communications , workshop cooperative communications and networking _ , beijing , china , may 2008 .w. saad , z. han , m. debbah , a. hjorungnes , and t. basar , coalitional games for distributed collaborative spectrum sensing in cognitive radio networks , " in _ proceedings of ieee infocom _ , rio de janeiro , brazil , apr .w. saad , z. han , t. basar , m. debbah , and a. hjorungnes , physical layer security : coalitional games for distributed cooperation , " in _ proceedings of 7th international symposium of modeling and optimization in mobile , ad hoc , and wireless networks _ ,seoul , south korea , jun .z. han , h. v. poor , coalition games with cooperative transmission : a cure for the curse of boundary nodes in selfish packet - forwarding wireless networks , " _ ieee transactions on communications _57 , no . 1 , pp . 203 - 213 , jan . 2009 .w. saad , z. han , t. basar , m. debbah , a. hjorungnes , hedonic coalition formation for distributed task allocation among wireless agents , " _ ieee transactions on mobile computing _10 , no . 9 , pp . 1327 - 1344 , dec . 2010 .w. 
saad , z. han , a. hjorungnes , d. niyato , and e. hossain , coalition formation games for distributed cooperation among roadside units in vehicular networks , " _ ieee journal on selected areas in communications _29 , no . 1 ,48 - 60 , jan . 2011 .f. pantisano , m. bennis , w. saad w , r. verdone , m. latva - aho , coalition formation games for femtocell interference management : a recursive core approach , " in _ proceedings of wireless communications and networking conference _, quintana - roo , mexico , mar .d. wu , y. cai , l. zhou , and j. wang , a cooperative communication scheme based on coalition formation game in clustered wireless sensor networks , " _ ieee transactions on wireless communications _11 , no . 3 , pp .1190 - 1200 , feb . 2012 .z. zhang , l. song , z. han , and w. saad , coalitional games with overlapping coalitions for interference management in small cell networks , " _ ieee transactions on wireless communications _ , vol . 13 , no . 5 , pp .2659 - 2669 , may 2014 .t. wang , l. song , z. han , and w. saad , overlapping coalitional games for collaborative sensing in cognitive radio networks , " in _ proceedings of wireless communications and networking conference _ ,shanghai , china , apr .2013 . b. di , t. wang , l. song , and z. han , incentive mechanism for collaborative smartphone sensing using overlapping coalition formation games , " in _ proceedings of global communications conference _ ,atlanta , ga , dec . 2013 .m. bennis , m. simsek , w. saad , s. valentin , m. debbah , and a. czylwik , when cellular meets wifi in wireless small cell networks , " _ ieee communications magazine , special issue on heterogeneous networks _ ,vol . 51 , no . 6 , jun . 2013. p. sangkyu and b. saewoong , dynamic inter - cell interference avoidance in self - organizing femtocell networks , " in _ proceedings of ieee international conference on communications _ ,tokyo , japan , jun .2011 .y. zick , g. chalkiadakis , and e. elkind , overlapping coalition formation games charting the tractability frontier , " in _ proceedings of 10th international conference on autonomous agents and multiagent systems _ , valencia ,spain , jun .x. chen , x. gong , l. yang , and j. zhang , a social group utility maximization framework with applications in database assisted spectrum access , " in _ proceedings of ieee infocom _ ,toronto , canada , apr . | modern cellular networks are witnessing an unprecedented evolution from classical , centralized and homogenous architectures into a mix of various technologies , in which the network devices are densely and randomly deployed in a decentralized and heterogenous architecture . this shift in network architecture requires network devices to become more autonomous and , potentially , cooperate with one another . such cooperation can , for example , take place between interfering small access points that seek to coordinate their radio resource allocation , nearby single - antenna users that can cooperatively perform virtual mimo communications , or even unlicensed users that wish to cooperatively sense the spectrum of the licensed users . such cooperative mechanisms involve the simultaneous sharing and distribution of resources among a number of overlapping cooperative groups or coalitions . in this paper , a novel mathematical framework from cooperative games , dubbed _ overlapping coalition formation games _ ( ocf games ) , is introduced to model and solve such cooperative scenarios . 
first , the concepts of ocf games are presented , and then , several algorithmic aspects are studied for two main classes of ocf games . subsequently , two example applications , namely , interference management and cooperative spectrum sensing , are discussed in detail to show how the proposed models and algorithms can be used in the future scenarios of wireless systems . finally , we conclude by providing an overview on future directions and applications of ocf games . |
online social networks ( e.g. , facebook , google ) have become increasingly important resources for interacting with people , processing information and diffusing social influence . understanding and modeling the mechanisms by which these networks evolve are therefore fundamental issues and active areas of research .the classical _ link prediction problem _ has attracted particular interest . in thissetting , we are given a snapshot of a social network at time and aim to predict links ( e.g. , friendships ) that will emerge in the network between and a later time . alternatively , we can imagine the setting in which some links existed at time but are missing at . in online social networks ,a change in privacy settings often leads to missing links , e.g. , a user on google decide to hide her family circle between time and .the missing link problem has important ramifications as missing links can alter estimates of network - level statistics , and the ability to infer these missing links raises serious privacy concerns for social networks .since the same algorithms can be used to predict new links and missing links , we refer to these problems jointly as link prediction .another problem of increasing interest revolves around node attributes .many real - world networks contain rich categorical node attributes , e.g. , users in google profiles with attributes including employer , school , occupation and places lived . in the _ attribute inference problem _ , we aim to populate attribute information for network nodes with missing or incomplete attribute data .this scenario often arises in practice when users in online social networks set their profiles to be publicly invisible or create an account without providing any attribute information .the growing interest in this problem is highlighted by the privacy implications associated with attribute inference as well as the importance of attribute information for applications including people search and collaborative filtering . in this work ,we simultaneously use network structure and node attribute information to improve performance of both the link prediction and the attribute inference problems , motivated by the observed interaction and homophily between network structure and node attributes .the principle of social influence , which states that users who are linked are likely to adopt similar attributes , suggests that network structure should inform attribute inference. 
other evidence of interaction shows that users with similar attributes , or in some cases antithetical attributes , are likely to link to one another , motivating the use of attribute information for link prediction . additionally , previous studies have empirically demonstrated those effects on real - world social networks , providing further support for considering both network structure and node attribute information when predicting links or inferring attributes . however , the algorithmic question of how to simultaneously incorporate these two sources of information remains largely unanswered . the relational learning , matrix factorization and alignment based approaches have been proposed to leverage attribute information for link prediction , but they suffer from scalability issues . more recently , backstrom and leskovec presented a supervised random walk ( srw ) algorithm for link prediction that combines network structure and edge attribute information , but this approach does not fully leverage node attribute information as it only incorporates node information for neighboring nodes . for instance , srw can not take advantage of the common node attribute san francisco of and in fig . [ figure : san ] since there is no edge between them . yin et al . proposed the use of _ social - attribute network _ ( san ) to gracefully integrate network structure and node attributes in a scalable way . they focused on generalizing the random walk with restart ( rwwr ) algorithm to the san model to predict links as well as infer node attributes . in this paper , we generalize several leading supervised and unsupervised link prediction algorithms to the san model to both predict links and infer missing attributes . we evaluate these algorithms on a novel , large - scale google , and demonstrate performance improvement for each of them . moreover , we make the novel observation that inferring attributes could help predict links , i.e. , link prediction accuracy is further improved by first inferring missing node attributes . in our problem setting , we use an undirected graph to represent a social network , where edges in represent interactions between the nodes , and we denote by the attribute matrix for all nodes . note that certain attributes ( e.g. female and male , age of 20 and 30 ) are mutually exclusive . let be the set of all pairs of mutually exclusive attributes . this set constrains the attribute matrix so that no column contains a for two mutually exclusive attributes . we define the link prediction problem as follows : let and be snapshots of a social network at times and . then the link prediction problem involves using to predict the social network structure . when , new links are predicted . when , missing links are predicted . in this paper , we work with three snapshots of the google crawled at three successive times , denoted , and . to predict new links , we use various algorithms to solve the link prediction problem with and and first learn any required hyperparameters by performing grid search on the link prediction problem with and . similarly , to predict missing links , we solve the link prediction problem with and and learn hyperparameters via grid search with and . for any given snapshot , several entries of will be zero , corresponding to missing attributes .
the attribute inference problem , which involves only a single snapshot of the network , is defined as follows : let be a snapshot of a social network .then the attribute inference problem is to infer whether each zero entry of corresponds to a positive or negative attribute , subject to the constraints listed in .our goal is to design scalable algorithms leveraging both network structure and rich node attributes to address these problems for real - world large - scale networks ._ social - attribute network _was first proposed by yin et al . to predict links and infer attributes. however , their original model did nt consider negative and mutually exclusive attributes . in this section ,we review this model and extend it to incorporate negative and mutex attributes . given a social network with distinct categorical attributes , an attribute matrix and mutex attributes set , we create an augmented network by adding additional nodes to , with each additional node corresponding to an attribute . for each node in with positive or negative attribute , we create an undirected link between and in the augmented network . for each mutually exclusive attribute pair , we create an undirected link between and .this augmented network is called the _ social - attribute network _ ( san ) since it includes the original social network interactions , relations between nodes and their attributes and mutex links between attributes . nodes in the san model corresponding to nodes in are called _ social nodes _ , and nodes representing attributes are called _attribute nodes_. links between social nodes are called _ social links _ , and links between social nodes and attribute nodes are called _attribute links_. attribute link is a _ positive attribute link _ if is a positive attribute of node , and it is a _ negative attribute link _ otherwise .links between mutually exclusive attribute nodes are called _mutex links_. intuitively , the san model explicitly describes the sharing of attributes across social nodes as well as the mutual exclusion between attributes , as illustrated in the sample san model of fig .[ figure : san ] .moreover , with the san model , the link prediction problem reduces to predicting social links and the attribute inference problem involves predicting attribute links .we also place weights on the various nodes and edges in the san model .these node and edge weights describe the relative importance of individual nodes or relationships across nodes and can also be used in a global fashion to balance the influence of social nodes versus attribute nodes and social links versus attribute links .we use and to denote the weight of node and the weight of link , respectively . additionally , for a given social or attribute node in the san model , we denote by and respectively the set of _ all neighbors _ and the set of _ social neighbors _ connected to via social links or positive attribute links .we define and in a similar fashion .this terminology will prove useful when we describe our generalization of leading link prediction algorithms to the san model in the next section .the fact that no social node can be linked to multiple mutex attributes is encoded in the _ mutex property _, i.e. , there is no triangle consisting of a mutex link and two positive attribute links in any social - attribute network , which enforces a set of constraints for all attribute inference algorithms . in this work ,we focus primarily on node attributes . 
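the construction above is straightforward to code ; the following python sketch ( using networkx , with an 'a:' prefix for attribute nodes as our own convention ) builds a social - attribute network with social , positive - attribute , negative - attribute and mutex links , and exposes the neighborhoods used later by the unsupervised scores .

```python
import networkx as nx

def build_san(social_edges, pos_attrs, neg_attrs, mutex_pairs):
    """construct a social-attribute network: social links between users,
    positive / negative attribute links between users and attribute nodes,
    and mutex links between mutually exclusive attributes. attribute nodes
    are prefixed with 'a:' (our convention, not part of the original model);
    all weights are set to one, as in the experiments."""
    g = nx.Graph()
    for u, v in social_edges:
        g.add_edge(u, v, kind="social", weight=1.0)
    for sign, attr_map in (("attr+", pos_attrs), ("attr-", neg_attrs)):
        for u, attrs in attr_map.items():
            for a in attrs:
                g.add_edge(u, "a:" + a, kind=sign, weight=1.0)
    for a, b in mutex_pairs:
        g.add_edge("a:" + a, "a:" + b, kind="mutex", weight=1.0)
    return g

def positive_neighbors(g, u):
    """neighbors reached via social links or positive attribute links
    (our reading of the neighborhoods used by the unsupervised scores)."""
    return {v for v in g[u] if g[u][v]["kind"] in ("social", "attr+")}

def social_degree(g, u):
    """number of social-node neighbors, used to down-weight hubs in aa-san."""
    return sum(1 for v in positive_neighbors(g, u) if not str(v).startswith("a:"))

# toy usage
san = build_san([("u1", "u2")], {"u1": ["san francisco"]}, {}, [("female", "male")])
print(positive_neighbors(san, "u1"), social_degree(san, "u1"))
```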
however , we note that the san model can be naturally extended to incorporate _edge attributes_. indeed , we can use a function ( e.g. , the logistic function ) to map a given set of attributes for each edge ( e.g. , edge age ) into the real - valued edge weights of the san model .the attributes - to - weight mapping function can be learned using an approach similar to the one proposed by backstrom and leskovec .link prediction algorithms typically compute a probabilistic score for each candidate link and subsequently rank these scores and choose the largest ones ( up to some threshold ) as putative new or missing links . in the following ,we extend both unsupervised and supervised algorithms to the san model .furthermore , we note that when predicting attribute links , the san model features a post - processing step whereby we change the lowest ranked putative positive links violating the mutex property to negative links .liben - nowell and kleinberg provide a comprehensive survey of unsupervised link prediction algorithms for social networks .these algorithms can be roughly divided into two categories : local - neighborhood - based algorithms and global - structure - based algorithms . in principle , all of the algorithms discussed in can be generalized for the san model . in this work we focus on representative algorithms from both categories and we describe below how to generalize them to the san model to predict both social links and attribute links .we add the suffix ` -san ' to each algorithm name to indicate its generalization to the san model . in our presentation of the unsupervised algorithms ,we only consider positive attribute links , though many of these algorithms can be extended to signed networks . + * common neighbor ( cn - san ) * is a local algorithm that computes a score for a candidate social or attribute link as the sum of weights of and s common neighbors , i.e. .conventional cn only considers common social neighbors .+ * adamic - adar ( aa - san ) * is also a local algorithm . for a candidate social link aa - san score is conventional aa , initially proposed in to predict friendships on the web and subsequently adapted by to predict links in social networks , only considers common social neighbors .aa - san weights the importance of a common neighbor proportional to the inverse of the log of social degree .intuitively , we want to downweight the importance of neighbors that are either i ) social nodes that are social hubs or ii ) attribute nodes corresponding to attributes that are widespread across social nodes .since in both cases this weight depends on the social degree of a neighbor , the aa - san weight is derived based on social degree , rather than total degree .in contrast , for a candidate attribute link , the attribute degree of a common neighbor does influence the importance of the neighbor . for instance , consider two social nodes with the same social degree that are both common neighbors of nodes and . 
if the first of these social nodes has only two attribute neighbors while the second has attribute neighbors , the importance of the former social node should be greater with respect to the candidate attribute link .thus , aa - san computes the score for candidate attribute link as * low - rank approximation ( lra - san ) * takes advantage of global structure , in contrast to cn - san and aa - san .denote as the weighted social adjacency matrix where the entry of is if is a social link and zero otherwise .similarly , let be the weighted attribute adjacency matrix where the entry of is if is a positive attribute link and zero otherwise .we then obtain the weighted adjacency matrix for the san model by concatenating and , i.e. , ] , approximating with and using the entry of as a score for link . ] + * aa + low - rank approximation(aa+lra - san ) * is identical to cn+lra - san but with the score matrices and generated via the aa - san algorithm .+ * random walk with restart ( rwwr - san ) * is a global algorithm . in the san model ,a random walk with restart starting from recursively walks to one of its neighbors with probability proportional to the link weight and returns to with a fixed restart probability .the probability is the stationary probability of node in a random walk with restart initiated at . in general , . for a candidate social link , we compute and and let .note that rwwr for link prediction in previous work computes these stationary probabilities based only on the social network . for a candidate attribute link , rwwr - san only computes , and is taken as the score of .+ we finally note that for predicting social links , if we set the weights of all attribute nodes and all attribute links to zero and we set the weights of all social nodes and social links to one , then all the algorithms described above reduce to their standard forms described in . is an matrix of zeros , so the truncated svd of is equivalent to that of except for zeros appended to the right singular vectors of . ] in other words , we recover the link prediction algorithms on pure social networks .link prediction can be cast as a binary classification problem , in which we first construct features for links , and then use a classifier such as svms or logistic regression .in contrast to unsupervised attribute inference , negative attribute links are needed in supervised attribute inference . +* supervised link prediction ( slp - san ) * for each link in our training set , we can extract a set of topological features ( e.g. cn , aa , etc . )computed from pure social networks and the similar features computed from the corresponding social - attribute networks .we explored 4 feature combinations : i ) slp - i uses only topological features computed from social networks ; ii ) slp - ii uses topological features as well as an aggregate feature , i.e. , the number of common attributes of the two endpoints of a link ; iii ) slp - san - iii uses topological features ; and iv ) slp - san - vi uses topological features and .slp - san - iii and slp - san - vi contain the substring ` san ' because they use features extracted from the san model .slp - i and slp - ii are widely used in previous work . 
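as an illustration of how such features can be assembled , the sketch below computes two of the unsupervised scores ( common - neighbor and adamic - adar style , in a simplified reading of the definitions above ) on plain adjacency dictionaries , once for the pure social network and once for the social - attribute network , in the spirit of slp - san - vi ; it is a minimal sketch , not the evaluation code used in the experiments .

```python
import math

def cn_san(neigh, u, v):
    """common-neighbor score: number (unit weights assumed) of the common
    neighbors of u and v, social or attribute nodes alike."""
    return len(neigh[u] & neigh[v])

def aa_san(neigh, social_degree, u, v):
    """adamic-adar style score: common neighbors down-weighted by the log of
    their social degree, following the motivation given above (a simplified
    reading for candidate social links)."""
    score = 0.0
    for n in neigh[u] & neigh[v]:
        d = social_degree(n)
        if d > 1:
            score += 1.0 / math.log(d)
    return score

def slp_features(neigh_social, neigh_san, social_degree, u, v):
    """a feature vector in the spirit of slp-san-vi: the same topological
    scores computed on the pure social network and on the social-attribute
    network (only two of the six scores are sketched here).
    neigh_*: dicts node -> set of neighbors in the respective graph."""
    return [cn_san(neigh_social, u, v),
            aa_san(neigh_social, social_degree, u, v),
            cn_san(neigh_san, u, v),
            aa_san(neigh_san, social_degree, u, v)]
```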
+* supervised attribute inference ( sai - san ) * recall that attribute inference is transformed to attribute link prediction with the san model .we can extract a set of topological features for each positive and negative attribute link .moreover , the positive attribute links are taken as positive examples while the negative attribute links are taken as negative examples .hence , we can train a binary classifier for attribute links and then apply it to infer the missing attribute links . in many real - world networks , most node attributes are missing .[ figure : node - attri ] shows the fraction of users as a function of the number of node attributes in google network .from this figure , we see that roughly 70% of users have no observed node attributes .hence , we will also investigate an iterative variant of the san model .we first infer the top attributes for users without any observed attributes .we then update the san model to include these predicted attributes and perform link prediction on the updated san model .this process can be performed for several iterations .google launched its new social network service named google in early july 2011 .we crawled three snapshots of the google social network and their users profiles on july 19 , august 6 and september 19 in 2011 .they are denoted as jul , aug and sep , respectively .we then pre - processed the data before conducting link prediction and attribute inference experiments . + * preprocessing social networks * in google , users divide their social connections into circles , such as a family circle and a friends circle .if user is in s circle , then there is a directed edge in the graph , and thus the google is a directed social graph .we converted this dataset into an undirected graph by only retaining edges if both directed edges and exist in the original graph .we chose to adopt this filtering step for two reasons : ( 1 ) bidirectional edges represent mutual friendships and hence represent a stronger type of relationship that is more likely to be useful when inferring users attributes from their friends attributes ( 2 ) we reduce the influence of spammers who add people into their circles without those people adding them back .spammers introduce fictitious directional edges into the social graph that adversely influence the performance of link prediction algorithms .+ * collecting attribute vocabulary * google include short entries about users such as occupation , employment , education , places lived , and gender , etc .we use employment and education to construct a vocabulary of attributes in this paper .we treat each distinct employer or school entity as a distinct attribute .google has predefined employer and school entities , although users can still fill in their own defined entities .due to users changing privacy settings , some profiles in jul are not found in aug and sep , so we use jul to construct our attribute vocabulary .specifically , from the profiles in jul , we list all attributes and compute frequency of appearance for each attribute . our attribute vocabulary is constructed by keeping attributes with frequency of at least 3 . 
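the two preprocessing steps above reduce to a few lines ; the sketch below keeps only bidirectional circle relationships and builds the attribute vocabulary from profile entries appearing at least 3 times ( the data layout is assumed for illustration ) .

```python
from collections import Counter

def mutual_edges(directed_edges):
    """keep only bidirectional circle relationships, as in the preprocessing
    above; node ids are assumed comparable so each pair is kept once."""
    edge_set = set(directed_edges)
    return {(u, v) for (u, v) in edge_set if (v, u) in edge_set and u < v}

def attribute_vocabulary(profiles, min_count=3):
    """employer/school entries kept if they appear in at least `min_count`
    profiles. profiles: dict user -> iterable of raw attribute strings."""
    counts = Counter(a for attrs in profiles.values() for a in attrs)
    return {a for a, c in counts.items() if c >= min_count}
```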
+ * constructing social - attribute networks * in order to demonstrate that the san model leverages node attributes well , we derived social - attribute networks in which each node has some positive attributes from the above google networks and attribute vocabulary .specifically , for an attribute - frequency threshold , we chose the largest connected social network from jul such that each node has at least distinct positive attributes .we also found the corresponding social networks consisting of these nodes in snapshots aug and sep .social - attribute networks were then constructed with the chosen social networks and the attributes of the nodes . specifically , we chose to construct 6 social - attribute networks whose statistics are shown in table [ table : net_stat ] .each social - attribute network is named by concatenating the snapshot name and the attribute - frequency threshold .for example , ` jul4 ' is the social - attribute network constructed using jul and .these names are indicated in the first column of the table . in the crawled raw networks ,some social links in jul are missing in aug and sep , where .these links are missing due to one of two events occurring between the jul and aug or sep snapshots : 1 ) users block other users , or 2 ) users set ( part of ) their circles to be publicly invisible after which point they can not be publicly crawled .these missed links provide ground truth labels for our experiments of predicting missing links . however , these missing links can alter estimates of network - level statistics , and can have unexpected influences on link prediction algorithms .moreover , it is likely in practice that companies like facebook and google keep records of these missing links , and so it is reasonable to add these links back to aug and sep for our link prediction experiments .the third column in table [ table : net_stat ] is the number of all social links after filling the missing links into aug and sep . the second column _ # soci links _ is used for experiments of predicting missing links , and column _ # all soci links _is used for the experiments of predicting new links . from these two columns ,the number of new links or missing links can be easily computed .for example , if we use aug2 as training data and sep2 as testing data for link prediction , the number of new links is , which is computed with entries in column _ # all soci links_. if we use aug2 as training data and jul2 as testing data in predicting missing links , the number of missing links is , which is computed with corresponding entries in column _ # soci links _ and _ # all soci links_. .*statistics of social - attribute networks .* [ cols="^,^,^,^,^,^",options="header " , ] [ table : net_stat_a ] [ table : net_stat ][ table : link - pre ] in our experiments , the main metric used is auc , area under the receiver operating characteristic ( roc ) curve , which is widely used in the machine learning and social network communities .auc is computed in the manner described in , in which both positive and negative examples are required . in principle, we could use new links or missing links as positive examples and all non - existing links as negative examples .however , large - scale social networks tend to be very sparse , e.g. , the average degree is in sep2 , and , as a result , the number of non - existing links can be enormous , e.g. 
, sep2 has around non - existing links .hence , computing auc using all non - existing links in large - scale networks is typically computationally infeasible .moreover , the majority of new links in typical online social networks close triangles , i.e. , are hop-2 links . for instance , we find that of the newly added links in google+ are hop-2 links .we thus evaluate our large network experiments using hop-2 link data as in , i.e. , new or missing hop-2 links are treated as positive examples and non - existing hop-2 links are treated as negative examples . in a social - attribute network , there are two categories of hop-2 links : 1 ) those with two endpoints sharing at least one common social node , and 2 ) those with two endpoints sharing only common attribute nodes .local algorithms applied to the original social network are unable to predict hop-2 links in the second category .thus , we evaluate only with respect to hop-2 links in the first category , so as not to give unfair advantage to algorithms running on the social - attribute network . to better understand whether the auc performance computed on hop-2 links can be generalized to performance on any - hop links, we additionally compute auc using any - hop links on the smaller google .in general , different nodes and links can have different weights in social - attribute networks , representing their relative importance in the network . in all of our experiments in this paper, we set all weights to be one and leave it for future work to learn weights .we compare our link prediction algorithms with supervised random walk ( srw ) , which leverages edge attributes , by transforming node attributes to edge attributes .specifically , we compute the number of common attributes of the two endpoints of each existing link . as in , we also use the number of common neighbors as an edge attribute .we adopt the wilcoxon - mann - whitney ( wmw ) loss function and logistic edge strength function in our implementations as recommended in .we compare our attribute inference algorithms with two algorithms , baseline and link , introduced by zheleva and getoor . using only node attributes, baseline first computes a marginal attribute distribution and then uses an attribute s probability as its score .link trains a classifier for each attribute by flattening nodes as the rows of the adjacency matrix of the social networks .zheleva and getoor found that link is the best algorithm when group memberships are not available .we use svm as our classifier in all supervised algorithms . for link prediction, we extract six topological features ( cn - san , aa - san , lra - san , cn+lra - san , aa+lra - san and rwwr - san ) from both pure social networks and social - attribute networks .hence , slp - i , slp - ii , slp - san - iii and slp - san - vi use 6 , 7 , 6 and 12 features , respectively . for attribute inference, we extract 9 topological features for each attribute link .we adopt two ranks ( detailed in [ sec : attri - infer ] ) for each low - rank approximation based algorithms , thus obtaining 6 features .the other three features are cn - san , aa - san and rwwr - san . 
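a minimal sketch of the hop-2 evaluation protocol described above : candidate pairs are non - adjacent nodes sharing at least one common social neighbor , and auc is estimated as the probability that a positive example outscores a negative one , with ties counted as one half ; the quadratic pairwise auc loop is only for illustration on small samples .

```python
def hop2_candidates(neigh_social):
    """non-adjacent pairs sharing at least one common social neighbor
    (the 'category 1' hop-2 links evaluated above).
    neigh_social: dict node -> set of social neighbors."""
    pairs = set()
    for w, nbrs in neigh_social.items():
        nbrs = sorted(nbrs)
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                u, v = nbrs[i], nbrs[j]
                if v not in neigh_social[u]:
                    pairs.add((u, v))
    return pairs

def auc(scores, positives, negatives):
    """probability that a random positive outscores a random negative,
    counting ties as one half (the usual pairwise auc estimate)."""
    wins = 0.0
    for p in positives:
        for n in negatives:
            if scores[p] > scores[n]:
                wins += 1.0
            elif scores[p] == scores[n]:
                wins += 0.5
    return wins / (len(positives) * len(negatives))
```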
to account for the highly imbalanced class distribution of examples for supervised link prediction andattribute inference we downsample negative examples so that we have equal number of positive and negative examples ( techniques proposed in could be used to further improve the performance ) .we use the pattern _dataset1_-_dataset2 _ to denote a train - test or train - validation pair , with _dataset1 _ a training dataset and _ dataset2 _ a testing or validation dataset .when conducting experiments to predict new links on the aug-sep train - test pair , srw , classifiers and hyperparameters of global algorithms , i.e. , ranks in lra - san , cn+lra - san , and aa+lra - san and the restart probability in rwwr - san , are learned on the jul-aug train - validation pair .similarly , when predicting missing links on train - test pair aug-jul , they are learned on train - validation pair sep-aug , where .the cn - san and aa - san algorithms are implemented in python 2.7 while the rwwr - san algorithm and supervised random walk ( srw ) are implemented in matlab , and all of them are run on a desktop with a 3.06 ghz intel core i3 and 4 gb of main memory .lra - san , cn+lra - san and aa+lra - san algorithms are implemented in matlab and run on an x86 - 64 architecture using a single 2.60 ghz core and 30 gb of main memory . in this sectionwe present evaluations of the algorithms on the google+ dataset .we first show that incorporating attributes via the san model improves the performance of both unsupervised and supervised link prediction algorithms .then we demonstrate that inferring attributes via link prediction algorithms within the san model achieves state - of - the - art performance .finally , we show that by combining attribute inference and link prediction in an iterative fashion , we achieve even greater accuracy on the link prediction task . to demonstrate the benefits of combining node attributes and network structure , we run the san - based link prediction algorithms described in section [ sec : algorithms ] both on the original social networks and on the corresponding social - attribute networks ( recall that the san - based unsupervised algorithms reduce to standard unsupervised link prediction algorithms when working solely with the original social networks ) .+ * predicting new links * table [ table : link - pre ] shows the auc results of predicting new links for each of our datasets .we are able to draw a number of conclusions from these results .first , the san model improves every unsupervised learning algorithm on every dataset , save for lra - san on aug2-sep2 .second , table [ table : link - pre - svm ] shows that attributes also improve supervised link prediction performance since slp - san - vi , slp - san - iii and slp - ii outperform slp - i .moreover , slp - san - vi , which adopts features extracted from both social networks and social - attribute networks , achieves the best performance , thus demonstrating the power of the san model .third , comparing rwwr - san in table [ table : link - pre-4-any - hop ] and srw in table [ table : link - pre - svm ] , we observe that the san model is better than srw at leveraging node attributes since rwwr - san with attributes outperforms srw .this result is not surprising given that srw is designed for edge attributes and when transforming node attributes to edge attributes , we lose some information .for instance , as illustrated in fig .[ figure : san ] , nodes and share the attribute san francisco . 
when transforming node attributes to edge attributes , this common attribute information is lost since and are not linked .[ figure : exp1-link - pre - roc-4 ] shows the roc curves of the cn+lra - san algorithm .we see that curve of cn+lra - san with attributes dominates that of cn+lra - san without attributes , demonstrating the power of the san model to effectively incorporate the additional predictive information of attributes .+ [ table : infer - link ] * predicting missing links * missing links can be divided into two categories : 1 ) links whose two endpoints have some social links in the training dataset .2 ) links with at least one endpoint that has no social links in the training dataset .category 1 corresponds to the scenarios where users block users or users set a part of their friend lists ( e.g. family circles ) to be private .category 2 corresponds to the scenario in which users hide their entire friend lists .note that all hop-2 missing links belong to category 1 .in addition to performing experiments to show that the san model improves missing link prediction , we also perform experiments to explore which category of missing links is easier to predict .table [ table : infer - link ] shows the results of predicting missing links on various datasets . as in the new - link prediction setting , the performance of every algorithm is improved by the san model , except for lra - san on aug4-jul4 and rwwr - san on aug4-jul4 for hop-2 missing links .when comparing tables [ table : infer - missing - link-4-any - hop ] and [ table : infer - missing - link-4-any - hop-1 ] or tables [ table : infer - missing - link - svm ] and [ table : infer - missing - link - svm-1 ] , we conclude that the missing links in category 2 are harder to predict than those in category 1 .rwwr - san without attributes performs poorly when predicting any - hop missing links in both categories ( as indicated by the entry with in table [ table : infer - missing - link-4-any - hop ] ) .this poor performance is due to the fact that rwwr - san without attributes assigns zero scores for all the missing links in category 2 ( positive examples ) and positive scores for most non - existing links ( negative examples ) , making many negative examples rank higher than positive examples and resulting in a very low auc . in this section ,we focus on inferring attributes using the san model . in our next set of experiments in section [ sec : iterate ] , we use the results of these attribute inference algorithms to further improve link prediction , and the results of this iterative approach further validate the performance of the san model for attribute inference . since the first step of iterative approach of section [ sec : iterate ] involves inferring the top attributes for each node , we employ an additional performance metric called pre@ in our attribute inference experiments . compared to auc, pre@ better captures the quality of the top attribute predictions for each user .specifically , for each sampled user , the top- predicted attributes are selected , and ( unnormalized ) pre@ is then defined as the number of positive attributes selected divided by the number of sampled users .we address score ties in the manner described in .since most google users have a small number of attributes , we set in our experiments . 
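for completeness , a sketch of the ( unnormalized ) pre@ metric defined above , under the simplifying assumption that score ties are broken arbitrarily rather than by the tie - handling procedure cited in the text .

```python
def precision_at_k(predicted_scores, true_attrs, k=4):
    """unnormalized pre@k as defined above: total number of held-out positive
    attributes among each sampled user's top-k predictions, divided by the
    number of sampled users.
    predicted_scores: dict user -> dict attribute -> score for missing attributes.
    true_attrs: dict user -> set of held-out positive attributes."""
    hits = 0
    for user, scores in predicted_scores.items():
        top = sorted(scores, key=scores.get, reverse=True)[:k]
        hits += len(set(top) & true_attrs.get(user, set()))
    return hits / len(predicted_scores)
```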
when evaluating algorithms for the inference of missing attributes , we require ground truth data .in general , ground truth for node attributes is difficult to obtain since it is often not possible to distinguish between negative and missing attributes .however , for most users the number of attributes is quite small , and so we assume that users with many positive attributes have no missing attributes . hence , we evaluate attribute inference on users that have at least 4 specified attributes , i.e. , we work with users in sep4 and assume that each attribute link in sep4 is either positive or negative . in our experiment , we sample 10% of the users in sep4 uniformly at random , remove their attribute links from sep4 , and evaluate the accuracy with which we can infer these users attributes .all removed positive attribute links are viewed as positive examples , while all the negative attribute links of the sampled users are treated as negative examples .we run a variety of algorithms for attribute inference , and for each algorithm we average the results over 10 random trials . as noted above, we evaluate the performance of attribute inference using both auc and pre@ . for the low - rank approximation based algorithms ,i.e. , lra - san , cn+lra - san and aa+lra - san , we report results using two different ranks , 100 and 1000 , and indicate which was used by the number following the algorithm name in fig .[ figure : infer - attri ] .we choose these two small ranks for computational reasons and also based on the fact that low - rank approximation methods assume that a small number of latent factors ( approximately ) describe the social - attribute networks . for rwwr - san, we set the restart probability to be 0.7 .[ figure : infer - attri ] shows the attribute inference results for various algorithms .several interesting observations can be made from this figure .first , under both metrics , all san - based algorithms perform better than baseline , save lra100-san and lra1000-san under pre,3,4 metric , which indicates that the san model is good at leveraging network structure to infer missing attributes .second , we find that auc and pre@ provide inconsistent conclusions about relative algorithm performance .for instance , the mean auc values suggest that sai - san beats all other algorithms .however , several unsupervised algorithms outperform sai - san with respect to pre,3,4 .the inconsistencies between the two metrics are expected since auc is a global measurement while pre@ is a local one .our sai - san algorithm dominates link under both auc and pre,3,4 metrics , thus demonstrating the power of mapping attribute inference to link prediction with the san model .section [ sec : link - prediction ] demonstrated that knowledge of a user s attributes can lead to significant improvements in link prediction .however , in real - world social networks like google+ , the vast majority of user attributes are missing ( see fig .[ figure : node - attri ] ) . to increase the realized benefits of social - attribute networks with few attributes , we propose first inferring missing attributes for each user whoseattributes are missing and then performing link prediction on the inferred social - attribute networks .recall that sai - san achieves the best auc , rwwr - san achieves the best pre@ in inferring attributes ( see fig . [figure : infer - attri ] ) and aa - san achieves comparable pre@ results while being more scalable . 
thus , in the following experiments , we use aa - san to first infer the top- missing attributes for users , and subsequently perform link prediction using various methods . in our experiments ,when we are working on the pair _ train - test _ , we sample 10% of the users of _ train _ uniformly at random and remove their attributes .we then run three variants of link prediction algorithms : i ) without attributes , ii ) with only the remaining attributes , and iii ) with the remaining attributes along with the inferred attributes .the top-4 attributes are inferred for each sampled user by aa - san .we report the results averaged over 10 trials .the hyperparameters of the global algorithms are the same as those in ( section [ sec : link - prediction ] ) , which are learned from the corresponding train - validation pair .table [ table : iter - link - pre ] shows the results of first inferring attributes and then predicting new links on the aug4-sep4 train - test pair .table [ table : iter - infer - missing - link ] shows the results of first inferring attributes and then predicting missing links on the aug4-jul4 train - test pair .we see that the inferred attributes improve the performance of all algorithms except lra - san on predicting missing links , which is unable to make use of attributes as demonstrated earlier in table [ table : infer - missing - link-4 ] .the aucs obtained with inferred attributes for all other algorithms are very close to those obtained with all positive attributes as shown in table [ table : link - pre-4 ] .this further demonstrates that aa - san is an effective algorithm for attribute inference .a wide range of link prediction methods have been developed .liben - nowell and kleinberg surveyed a set of unsupervised link prediction algorithms .li proposed maximal entropy random walk ( merw ) .lichtenwalter et al . proposed the propflow algorithm which is similar to rwwr but more localized .however , none of these approaches leverage node attribute information .link prediction methods leveraging attribute information first appear in the relational learning community .however , these approaches suffer from scalability issues .for instance , the largest network tested in has about nodes . recently ,backstrom and leskovec proposed the supervised random walk ( srw ) algorithm to leverage edge attributes .however , srw does not handle the scenario in which two nodes share common attributes ( e.g. nodes and in fig .[ figure : san ] ) , but no edge already exists between them .mapping link prediction to a classification problem is another way to incorporate attributes .we have shown that classifiers using features extracted from the san model perform very well .yang et al . proposed to jointly predict links and propagate node interests ( e.g. , music interest ) .their algorithm relies on the assumption that each node interest has a set of explicit attributes . as a result, their algorithm can not be applied to our scenario in which it s hard ( if possible ) to extract explicit attributes for our node attributes .previous works in aim at inferring node attributes ( e.g. 
, ethnicity and political orientation ) using supervised learning methods with features extracted from user names and user - generated texts .zheleva and getoor map attribute inference to a relational classification problem .they find that methods using group information achieve good results .these approaches are complementary to ours since they use additional information apart from network structure and node attributes . in this paper, we transform the attribute inference problem into a link prediction problem with the san model .therefore , any link prediction algorithm can be used to infer missing attributes .more importantly , we demonstrate that attribute inference can in turn help link prediction with the san model .we comprehensively evaluate the _ social - attribute network _ ( san ) model proposed in in terms of link prediction and attribute inference . more specifically, we adapt several representative unsupervised and supervised link prediction algorithms to the san model to both predict links and infer attributes .our evaluation with a large - scale novel google dataset demonstrates performance improvement for each of these generalized algorithm on both link prediction and attribute inference .moreover , we demonstrate a further improvement of link prediction accuracy by using the san model in an iterative fashion , first to infer missing attributes and subsequently to predict links .interesting avenues for future research include devising an iterative algorithm that alternates between attribute and link prediction , learning node and edge weights in the san model , and incorporating edge attributes , negative node attributes and mutex edges into large - scale experiments .we would like to thank di wang , satish rao , mario frank , kurt thomas , and shobha venkataraman for insightful feedback .this work is supported by the nsf under grants no .ccf-0424422 , 0311808 , 0832943 , 0448452 , 0842694 , 0627511 , 0842695 , 0808617 , 1122732 , 0831501ct - l , by the afosr under muri award no .fa9550 - 09 - 1 - 0539 , by the afrl under grant no .p010071555 , by the office of naval research under muri grant no .n000140911081 , by the muri program under afosr grant no .fa9550 - 08 - 1 - 0352 , the nsf graduate research fellowship under grant no .dge-0946797 , the dod through the ndseg program , by intel through the istc for secure computing , and by a grant from the amazon web services in education program .any opinions , findings , and conclusions or recommendations expressed in this material are those of the author(s ) and do not necessarily reflect the views of the funding agencies . | the effects of social influence and homophily suggest that both network structure and node attribute information should inform the tasks of link prediction and node attribute inference . recently , yin et al . proposed _ social - attribute network _ ( san ) , an attribute - augmented social network , to integrate network structure and node attributes to perform both link prediction and attribute inference . they focused on generalizing the random walk with restart algorithm to the san framework and showed improved performance . in this paper , we extend the san framework with several leading supervised and unsupervised link prediction algorithms and demonstrate performance improvement for each algorithm on both link prediction and attribute inference . moreover , we make the novel observation that attribute inference can help inform link prediction , i.e. 
, link prediction accuracy is further improved by first inferring missing attributes . we comprehensively evaluate these algorithms and compare them with other existing algorithms using a novel , large - scale google dataset , which we make publicly available . |
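the two - stage protocol evaluated above ( remove a sample of users' attributes , infer the top - k attributes with aa - san , then rerun link prediction on the augmented network ) can be sketched in a few lines . the snippet below is an illustrative sketch only : the adamic - adar - style scoring , the helper names and the toy graph are my own stand - ins , not the paper's implementation or data .

```python
# Sketch of the "infer attributes, then predict links" pipeline. Names
# (aa_score, infer_top_attributes) and the toy graph are illustrative.
import math
import networkx as nx

def aa_score(G, u, v):
    """Adamic-Adar score: sum of 1/log(degree) over common neighbours of u and v."""
    common = set(G.neighbors(u)) & set(G.neighbors(v))
    return sum(1.0 / math.log(G.degree(w)) for w in common if G.degree(w) > 1)

def infer_top_attributes(G, user, attribute_nodes, k=4):
    """Rank attribute nodes not yet linked to `user` by AA score and keep the top k."""
    candidates = [a for a in attribute_nodes if not G.has_edge(user, a)]
    ranked = sorted(candidates, key=lambda a: aa_score(G, user, a), reverse=True)
    return ranked[:k]

# toy social-attribute network: users u1..u4 plus attribute nodes ("attr:*")
G = nx.Graph()
G.add_edges_from([("u1", "u2"), ("u2", "u3"), ("u3", "u4"), ("u1", "u3")])
G.add_edges_from([("u1", "attr:ml"), ("u2", "attr:ml"), ("u2", "attr:bio"),
                  ("u4", "attr:ml")])
attribute_nodes = [n for n in G if n.startswith("attr:")]

# stage 1: infer missing attributes for a user whose attributes were removed,
# and add them back as user-attribute edges
for a in infer_top_attributes(G, "u3", attribute_nodes, k=4):
    G.add_edge("u3", a)

# stage 2: score candidate user-user links on the augmented graph; the inferred
# attribute edges now contribute common neighbours, which is how attribute
# inference can in turn help link prediction
print(aa_score(G, "u1", "u4"))
```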
einstein s recognition early last century that gravity can be interpreted as the curvature of space and time represented an enormous step forward in the way we think about fundamental physics . besides its obvious impact for understanding gravity over astrophysical distances complete with resolutions of earlier puzzles ( like the detailed properties of mercury s orbit ) and novel predictions for new phenomena ( like the bending of light and the slowing of clocks by gravitational fields ) its implications for other branches of physics have been equally profound .these implications include many ideas we nowadays take for granted .one such is the universal association of fundamental degrees of freedom with fields ( first identified for electromagnetism , but then cemented with its extension to gravity , together with the universal relativistic rejection of action at a distance ) .another is the recognition of the power of symmetries in the framing of physical law , and the ubiquity in particular of gauge symmetries in their description ( again reinforcing the earlier discovery in electromagnetism ) .a third is the systematization of the belief that the physical content nature s laws should be independent of the variables used in their description , and the consequent widespread penetration of geometrical methods throughout physics .but the study of general relativity ( gr ) and other interactions ( like electromagnetism , and its later - discovered relatives : the weak and strong forces ) have since drifted apart. like ex - lovers who remain friends , for most of the last century practitioners in either area have known little of the nitty gritty of each other s day - to - day struggles , even as they read approvingly of their occasional triumphs in the popular press . over the yearsthe study of both gravity and the other interactions has matured into precision science , with many impressive theoretical developments and observational tests . for gravitythis includes remarkably accurate accounts of motion within the solar system , to the point that gr through its use within the global positioning system ( gps ) is now an indispensable tool for engineers [ will 2001 ] . 
for the other interactions the successes include the development and testing of the standard model ( sm ) , a unified framework for all known non - gravitational physics , building on the earlier successes of quantum electrodynamics ( qed ) .there is nevertheless a mounting chorus of calls for modifying general relativity , both at very short and very long distances .these arise due to perceived failures of the theory when applied over distances much different from those over which it is well - tested .the failures at short distances are conceptual , to do with combining gravity with quantum effects .those at long distances are instead observational , and usually arise as ways to avoid the necessity for introducing the dark matter or dark energy that seem to be required when general relativity is applied to describe the properties of the universe as a whole .the remainder of this chapter argues that when searching for replacements for gr over short and long distances there is much to be learned from other branches of physics , where similar searches have revealed general constraints on how physics at different scales can relate to one another .the hard - won lessons learned there also have implications for gravitational physics , and this recognition is beginning to re - establish the connections between the gravitational and non - gravitational research communities . in a nutshell , the lessons distilled from other areas of physics make it likely that it is much more difficult to modify gravity over very long distances than over very tiny ones .this is because very broad principles ( like unitarity and stability ) strongly restrict what is possible .the difficulty of modifying gravity over long distances is a very useful ( but often neglected ) clue when interpreting cosmological data , because it strongly constrains the theoretical options that are available .we ignore such clues at our peril .the demand to replace general relativity at short distances arises because quantum mechanics should make it impossible to have a spacetime description of geometry for arbitrarily small scales .for example , an accurate measurement of a geometry s curvature , , requires positions to be measured with an accuracy , , smaller than the radius of curvature : but for position measurements with resolution , , the uncertainty principle requires a momentum uncertainty , , which implies an associated energy uncertainty , , or equivalently a mass .but the curvature associated with having this much energy within a distance of order is then , where defines the planck length , , and is newton s constant .requiring eq .( [ deltacriterion ] ) , then shows that there is a lower bound on the resolution with which spacetime can be measured : although this is an extremely short distance ( present experiments only reach down to about m ) , it is also only a lower bound .depending on how gravity really works over short distances , quantum gravity effects could arise at much longer scales .notice how crucial it is to this argument that the interaction strength , , has dimensions of length ( in fundamental units , for which ) .imagine performing a similar estimate for an electrostatic field .the coulomb interaction energy between two electrons separated by a distance is , where denotes the electron s electric charge . 
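a quick numerical check of the resolution bound derived above is easy to do with standard values of the constants ; the quoted experimental reach in the snippet is a rough order - of - magnitude assumption of mine , not a figure taken from the text .

```python
# Back-of-envelope check of the resolution bound derived above: the Planck length
# l_P = sqrt(hbar * G / c**3). The "collider reach" figure is an illustrative
# order-of-magnitude assumption, not a number from the text.
hbar = 1.054571817e-34   # J*s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
c    = 2.99792458e8      # m/s

l_planck = (hbar * G / c**3) ** 0.5
collider_reach_m = 1e-19   # roughly the shortest distances probed so far (assumption)

print(f"Planck length ~ {l_planck:.2e} m")            # ~1.6e-35 m
print(f"orders of magnitude below current reach: "
      f"{collider_reach_m / l_planck:.1e}")            # ~10^16
```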
butthe energy required by the uncertainty principle to localize electrons this close to one another is , so the condition that this be smaller than is where the fine - structure constant , , is dimensionless .this condition does nt depend on because the relative strength of quantum fluctuations to electrostatic interactions does not change with distance .the observation that quantum fluctuations do not get worse at shorter distances in electrodynamics , and so is much less severe than the power - law competition found above for gravity . ] but do for gravity can be more technically expressed as the statement that qed is a _renormalizable _ quantum field theory ( qft ) while gr is not . inqft small - distance quantum fluctuations appear ( within perturbation theory ) as divergences at small distances ( or high momenta ) when summing over all possible quantum intermediate states .for instance , given a hamiltonian , , the second - order shift in the energy of a state is where the approximate equality focusses on the sum over a basis of free single - particle states having energies when performing the sum over . because the combination $ ] typically falls with large , limit .( relativistic calculations organize these sums differently to preserve manifest lorentz invariance at each step , but the upshot is the same . ) renormalizability means that these divergences can all be absorbed into the unknown parameters of the theory like the electron s charge and mass , for instance whose values must in any case be inferred by comparison with experiments . as the above estimates suggest , the hallmark of a nonrenormalizable theory is the appearance of couplings ( like newton s constant ) having dimensions of length to a positive power ( in fundamental units ) .couplings like this ruin perturbative renormalizability because the more powers of them that appear in a result , the more divergent that result typically is .for instance , a contribution that arises at order in newton s constant usually depends on through the dimensionless combination , where is the uv cutoff in momentum space ( equivalently , is the small - distance cutoff in position space ) .by contrast , having more powers of dimensionless couplings , or those having dimensions of inverse powers of length , do not worsen uv divergences .ever - worsening divergences ruin the arguments that show for renormalizable theories that all calculations are finite once a basic set of couplings are appropriately redefined .removal of divergences can be accomplished , but only by introducing an infinite number of coupling parameters to be renormalized .lack of renormalizability was for a long time regarded as a fundamental obstacle to performing any quantum calculations within gravity .after all , if every calculation is associated with a new parameter that absorbs the new divergences , whose value must be inferred experimentally , then there are as many parameters as observables and no predictions are possible .if this were really true , it would mean that any classical prediction of gr would come with incalculable theoretical errors due to the uncontrolled size of the quantum corrections . 
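the contrast between the two cases can be made concrete with a short computation : the electromagnetic expansion parameter is the same at every scale , while the gravitational one grows like the inverse square of the probed length . the numbers below are illustrative evaluations with standard constants .

```python
# Why quantum fluctuations stay the same size in QED but grow at short distance in
# gravity: alpha is dimensionless, while the gravitational expansion parameter
# scales as (l_Planck / l)**2.
l_planck = 1.6e-35   # m (from the estimate above)
alpha    = 1.0 / 137.0

scales_m = {
    "atomic (1e-10 m)":    1e-10,
    "nuclear (1e-15 m)":   1e-15,
    "collider (1e-19 m)":  1e-19,
    "Planck (1.6e-35 m)":  1.6e-35,
}
for name, l in scales_m.items():
    grav = (l_planck / l) ** 2
    print(f"{name:22s}  QED: {alpha:.3e}   gravity: {grav:.3e}")
```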
andthe presence of such errors would render meaningless any detailed comparisons between classical predictions and observations , potentially ruining gr s observational successes .how can meaningful calculations be made ?as it happens , tools for making meaningful quantum calculations using non - renormalizable theories exist , having been developed for situations where quantum effects are more important than they usually are for gravity [ weinberg 1979 , gasser 1984 ] . the key to understanding how to work with non - renormalizable theories is to recognize that they can arise as approximations to more fundamental , renormalizable physics , for which explicit calculations are possible .the way non - renormalizable theories arise in this case is as a low - energy / long - distance approximation in situations for which short - distance physics is unimportant , and so is coarse - grained or integrated out [ gell - mann 1954 , wilson 1974 ] . for instance , consider the lagrangian density for the quantum electrodynamics of electrons and muons where and ( or and ) are the electron ( or muon ) mass and field . here and , as usual , and represents the dirac matrices that satisfy .this is a renormalizable theory because all parameters , , and , have non - positive dimension when regarded as a power of length in fundamental units .suppose now we choose to examine observables only involving the electromagnetic interactions of electrons at energies ( such as the energy levels of atoms , for instance ) .muons should be largely irrelevant for these kinds of observables , but not completely so .muons are not completely irrelevant because they can contribute to electron - photon processes at higher orders in perturbation theory as virtual states .it happens that any such effects due to virtual muons can be described at low energies by the following _ effective field theory _ of electrons and photons only : where the second line is obtained from the first by performing the field redefinition + \cdots \,.\ ] ] in both equations the ellipses describe terms suppressed by more than two powers of .the lagrangian densities of eqs .( [ lqed ] ) and ( [ leffqed ] ) are precisely equivalent in that they give precisely the same results for _ all _ low - energy electron / photon observables , provided one works only to leading order in . 
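the practical meaning of `` integrating out '' the muon is that its only residue at low energies is suppressed by powers of the small ratio ( energy of interest ) / ( muon mass ) . the following scaling estimate , which deliberately ignores order - one and loop factors , illustrates why the muon is irrelevant for atomic physics but becomes visible in precision observables at higher energies .

```python
# Size of the muon's residual imprint on low-energy electron/photon physics: the
# effective operator is suppressed by 1/m_mu**2, so its fractional effect on an
# observable at energy E scales like (E / m_mu)**2 (times loop factors omitted
# here; this is a scaling estimate, not a matching calculation).
m_mu_eV = 105.66e6   # muon rest energy in eV

for label, E_eV in [("atomic binding (10 eV)", 10.0),
                    ("X-ray transition (10 keV)", 1e4),
                    ("electron mass scale (0.511 MeV)", 0.511e6)]:
    print(f"{label:32s} (E/m_mu)^2 ~ {(E_eV / m_mu_eV) ** 2:.1e}")
```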
if the accuracy of the agreement is to be at the one - loop level , then equivalence requires the choice , and the effective interaction captures the leading effects of a muon loop in the vacuum polarization .if agreement is to be at the two - loop level , then captures effects coming from higher loops as well , and so on .this example ( and many many others ) shows that it must be possible to make sensible predictions using non - renormalizable theories .this must be so because the lagrangian of eq .( [ leffqed ] ) is not renormalizable its coupling has dimensions ( length) yet it agrees precisely with the ( very sensible ) predictions of qed , eq .( [ lqed ] ) .but it is important that this agreement only works up to order .if we work beyond order in this expansion , we can still find a lagrangian , , that captures all of the effects of qed to the desired order .the corresponding lagrangian requires more terms than in eq .( [ leffqed ] ) , however , also including terms like that arise at order .agreement with qed in this case requires .sensible predictions can be extracted from non - renormalizable theories , but only if one is careful to work only to a fixed order in the expansion . what is useful about this process is that an effective theory like ( [ leffqed ] ) is much easier to use than is the full theory ( [ lqed ] ) . and any observable whatsoevermay be computed once the coefficients ( and in the above examples ) of the various non - renormalizable interactions are identified .this can be done by comparing its implications with those of the full theory for a few specific observables .what about the uv divergences associated with these new effective interactions ?they must be renormalized , and the many couplings required to perform this renormalization correspond to the many couplings that arise within the effective theory at successive orders in .but predictiveness is not lost because working to fixed order in means that only a fixed number of effective couplings are required in any given application . at presentthis is the _ only _ known way to make sense of perturbatively non - renormalizable theories . in particularit means that there is a hidden approximation involved in the use of a non - renormalizable theory the low - energy , , expansion that may not have been obvious from the get - go .what would this picture mean if applied to gr ?first , it would mean that gr must be regarded as the leading term in the low - energy / long - distance approximation to some more fundamental theory .working beyond leading order would mean extending the einstein - hilbert action to include higher powers of curvatures and their derivatives , with the terms with the fewest derivatives being expected to dominate at low energies [ for a review see burgess 2004 ] .since we do not know what the underlying theory is , we can not hope to compute the couplings in this effective theory from first principles as was done above for qed .instead we treat these couplings as phenomenological , ultimately to be determined from experiment .the most general interactions involving the fewest curvatures and derivatives , that are consistent with general covariance are where is the metric s riemann tensor , is its ricci tensor , and is the ricci scalar , each of which involves precisely two derivatives of the metric . the first term in eq . 
( [ gravaction ] )is the cosmological constant , which we drop because observations imply is ( for some unknown reason , see below ) extremely small .once this is done the leading term in the derivative expansion is the einstein - hilbert action whose coefficient , gev , has dimensions of mass ( when ) , and is set by the value of newton s constant .this is followed by curvature - squared terms having dimensionless effective couplings , , and curvature - cubed terms with couplings inversely proportional to a mass , , ( not all of which are written in eq .( [ gravaction ] ) ) .although the numerical value of is known , the mass scale appearing in the curvature - cubed ( and higher ) terms is not .but since it appears in the denominator it is the lowest mass scale to have been integrated out that should be expected to dominate .what its value should be depends on the scale of the applications one has in mind . for applications to the solar system or to astrophysics reasonably be taken to be the electron mass , .but for applications to inflation , where the scales of interest are much larger than , would instead be taken to be the lightest particle that is heavier than the scales of inflationary interest .the einstein - hilbert term should dominate at low energies ( since it involves the fewest derivatives ) , and this expectation can be made more precise by systematically identifying which interactions contribute to a particular order in the semiclassical expansion .to do so we expand the metric about an asymptotically static background spacetime : , and compute ( say ) the scattering amplitudes for asymptotic graviton states that impinge onto the geometry from afar .if the energy , , of the incoming states are all comparable and similar to the curvatures scales of the background spacetime , dimensional analysis can be used to give an estimate for the energy - dependence of an -loop contribution to a scattering amplitude , .consider a contribution to this amplitude that involves external lines and vertices involving derivatives and attached graviton lines .dimensional analysis leads to the estimate : ^{v_{id } } \,.\ ] ] notice that no negative powers of appear here because general covariance requires derivatives come in pairs , so the index in the product runs over , with .this last expression displays the low - energy approximation alluded to above because it shows that the small quantities controlling the perturbative expansion are and .use of this expansion ( and in particular its leading , classical limit see below ) presupposes both of these quantities to be small .notice also that because , factors of are much larger than factors of , but because they do not arise until curvature - cubed interactions are important , the perturbative expansion always starts off with powers of .( [ grcount1a ] ) answers a question that is not asked often enough : what is the theoretical error made when treating gravitational physics in the classical approximation ?what makes it so useful in this regard is that it quantifies the size of the contribution to ( or other observables ) arising both from quantum effects ( _ i.e. _ loops , with ) , and from terms normally not included in the lagrangian ( such as higher - curvature terms ) .this allows an estimate of the size of the error that is made when such terms are not considered ( as is often the case ) . 
in particular , eq .( [ grcount1a ] ) justifies why classical calculations using gr work so well , and quantifies just how accurate their quantum corrections are expected to be . to see this ,we ask which graphs dominate in the small- limit . for any fixed process ( _ i.e. _ fixed ) eq .( [ grcount1a ] ) shows the dominant contributions are those for which that is , the dominant contribution comes from arbitrary tree graphs constructed purely from the einstein - hilbert action .this is precisely the prediction of classical general relativity .for instance , for the scattering of two gravitons about flat space , , we have , and eq .( [ grcount1a ] ) predicts the dominant energy - dependence to be .this is borne out by explicit tree - level calculations [ dewitt 1967 ] which give for an appropriate choice of graviton polarizations .here , and are the usual lorentz - invariant mandelstam variables built from the initial and final particle four - momenta , all of which are proportional to .this shows both that to leading order , and that it is the physical , invariant , centre - of - mass energy , , that is the relevant scale against which and should be compared .the next - to - leading contributions , according to eq .( [ grcount1a ] ) , arise in one of two ways : either these correspond to one - loop ( quantum ) corrections computed only using einstein gravity ; plus a tree - level contribution including precisely one vertex from one of the curvature - squared interactions ( in addition to any number of interactions from the einstein - hilbert term ) .the uv divergences arising in the first type of contribution are absorbed into the coefficients of the interactions appearing in the second type .both are suppressed compared to the leading , classical , term by a factor of .this estimate ( plus logarithmic complications due to infrared divergences ) is also borne out by explicit one - loop calculations about flat space [ weinberg 1965 , dunbar 1995 , donoghue 1999 ] .this is the reasoning that shows why it makes sense to compute quantum effects , like hawking radiation or inflationary fluctuations , within a gravitational context . for observableslocated a distance away from a gravitating mass , the leading quantum corrections are predicted to be of order . for comparison , the size of classical relativistic corrections is set by , where denotes the schwarzschild radius . at the surface of the sun this makes relativistic corrections of order , while quantum corrections are .clearly the classical approximation to gr is _ extremely _ good within the solar system . on the other hand , although relativistic effects can not be neglected near a black hole , since , the relative size of quantum corrections near the event horizon is which is negligible provided . 
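these estimates are easy to reproduce : the snippet below evaluates the relativistic correction r_s / r and the quantum correction ( l_p / r ) squared for the sun and for a solar - mass black hole , using standard constants . the specific numbers are my own back - of - envelope evaluations , meant only to illustrate the hierarchy described above .

```python
# Rough sizes of relativistic (r_s / r) and quantum ((l_P / r)**2) corrections,
# evaluated with standard constants; these are illustrative back-of-envelope
# numbers, not values quoted in the text.
G, c = 6.674e-11, 2.998e8
l_planck = 1.6e-35                       # m
M_sun, R_sun = 1.989e30, 6.96e8          # kg, m

def schwarzschild_radius(M):
    return 2 * G * M / c**2

# at the surface of the sun
r_s_sun = schwarzschild_radius(M_sun)
print("sun:  r_s/r =", f"{r_s_sun / R_sun:.1e}",
      "  (l_P/r)^2 =", f"{(l_planck / R_sun) ** 2:.1e}")

# at the horizon of a solar-mass black hole (r ~ r_s) relativistic effects are O(1),
# but quantum corrections are still tiny
print("solar-mass BH horizon:  (l_P/r_s)^2 =",
      f"{(l_planck / r_s_sun) ** 2:.1e}")
```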
since is of order tens of micrograms , this shows why quantum effects represent small perturbations for any astrophysical black holes , but would not be under control for any attempt to interpret the gravitational field of an elementary particle ( like an electron ) as giving rise to a black hole .the good news is that it says that the observational successes of gr are remarkably robust against the details of whatever small - distance physics ultimately describes gravity over very small distances .this is because _ any _ microscopic physics that predicts the same symmetries ( like lorentz invariance ) and particle content ( a massless spin-2 particle , or equivalently a long - range force coupled to stress - energy ) as gr , must be described by a generally covariant effective action like eq .( [ gravaction ] ) .because this is dominated at low energies by the einstein - hilbert action , it suffices to get the low - energy particle content and symmetries right to get gr right in all of its glorious detail [ deser 1970 ] .the bad news applies to those who think they know what the fundamental theory of quantum gravity really is at small scales , since whatever it is will be very hard to test experimentally .this is because all theories that get the bare minimum right ( like a massless graviton ) , are likely to correctly capture all of the successes of gr in one fell swoop . at low energies the only difference between the predictions of _any _ such theory is the value of the coefficients , and _ etc _ , appearing in the low - energy lagrangian ( [ gravaction ] ) , none of which are yet observable .there are two kinds of proposals that allow tests at low energies : those that change the low - energy degrees of freedom ( such as by adding new light particles in addition to the graviton more about these proposals below ) ; and those that change the symmetries predicted for the low - energy theory .prominent amongst this latter category are theories that postulate that gravity at short distances breaks lorentz or rotational invariance , perhaps because spacetime becomes discrete at these scales . at first sight ,breaking lorentz invariance at short distances seems batty , due to the high accuracy with which experimental tests verify the lorentz - invariance of the vacuum within which we live .how could the world we see appear so lorentz invariant if it is really not so deeper down ?surprisingly , experience with other areas of physics suggests this may not be so crazy an idea ; we know of other , emergent , symmetries that can appear to be very accurate at long distances even though they are badly broken at short distances .most notable among these is the symmetry responsible for conservation of baryon number , which has long been known to be an ` accidental ' symmetry of the standard model .this means that for _ any _ microscopic theory whose low - energy particle content is that of the sm , any violations of baryon number must necessarily be described by a non - renormalizable effective interaction [ weinberg 1979a , wilczek 1979 ] , and so be suppressed by a power of a large inverse mass , . 
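the reason such a large suppression scale is compatible with observation is the very steep dependence of the proton lifetime on the heavy scale : dimensional analysis gives tau ~ m^4 / m_p^5 for a dimension - six baryon - violating operator . the estimate below ignores all order - one coefficients and hadronic matrix elements , so it should be read only as an illustration of that scaling , not as a prediction .

```python
# Dimensional-analysis estimate of how a baryon-violating operator suppressed by a
# heavy scale M translates into a proton lifetime: tau ~ M**4 / m_p**5 in natural
# units. Order-one coefficients and hadronic matrix elements are ignored; the point
# is the extremely steep M**4 dependence.
GeV_inv_to_s = 6.58e-25          # 1 GeV^-1 in seconds
s_per_year   = 3.15e7
m_p = 0.938                      # proton mass, GeV

def proton_lifetime_years(M_GeV):
    tau_GeV_inv = M_GeV**4 / m_p**5
    return tau_GeV_inv * GeV_inv_to_s / s_per_year

for M in (1e14, 1e15, 1e16):
    print(f"M = {M:.0e} GeV  ->  tau ~ {proton_lifetime_years(M):.1e} yr")
```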
this suppression can be enough to agree with observations ( like the absence of proton decay ) if is as large as gev .could lorentz invariance be similarly emergent ?if so , it should be possible to find effective field theories for which lorentz violation first arises suppressed by some power of a heavy scale , , even if lorentz invariance is not imposed from the outset as a symmetry of the theory .unfortunately this seems hard to achieve , since in the absence of lorentz invariance it is difficult transformations , at least for the kinetic terms . ] in an effective theory to explain why the effective terms should have precisely the same coefficient in the low - energy theory .( see however [ groot nebbelink 2005 ] for some attempts . )the problem is that the coefficients of these terms are dimensionless in fundamental units , and so are unsuppressed by powers of .but the relative normalization of these two terms governs the maximal speed of propagation of the corresponding particle , and there are extremely good bounds ( for some particles better than a part in ) on how much this can differ from the speed of light [ see , for instance , mattingly 2005 for a recent review ] .this underlines why proponents of any particular quantum gravity proposal must work hard to provide the effective field theory ( eft ) that describes their low - energy limit [ see kostelecky 2004 , mattingly 2005 for some gravitational examples ] . since all of the observational implications are contained within the effective theory , it is impossible to know without it whether or not the proposal satisfies all of the existing experimental teststhis is particularly true for proposals that claim to predict a few specific low - energy effects that are potentially observable ( such as small violations of lorentz invariance in cosmology ) .even if the predicted effects should be observed , the theory must also be shown not to be in conflict with other relevant observations ( such as the absence of lorentz invariance elsewhere ) , and this usually requires an eft formulation .there also has been considerable activity over recent years investigating the possibility that gr might fail , but over very long distances rather than short ones .this possibility is driven most persuasively from cosmology , where the hot big bang paradigm has survived a host of detailed observational tests , but only if the universe is pervaded by no less than _ two _ kinds of new exotic forms of matter : dark matter ( at present making up of the universal energy density ) and dark energy ( comprising of the cosmic energy density ) . 
because all of the evidence for the existence of these comes from their gravitational interactions ,inferred using gr , the suspicion is that it might be more economical to interpret instead the cosmological tests as evidence that gr is failing over long distances .but since the required modifications occur over long distances , their discussion is performed most efficiently within an effective lagrangian framework .these next paragraphs summarize my personal take on what has been learnt to this point .an important consideration when trying to modify gravity over long distances is the great difficulty in doing so in a consistent way .almost all modifications so far proposed run into trouble with stability or unitarity , in that they predict unstable degrees of freedom like ` ghosts , ' particles having negative kinetic energy .the presence of ghosts in a low energy theory is generally regarded as poison because it implies there are instabilities . at the quantum levelthese instabilities usually undermine our understanding of particle physics and the very stability of the vacuum [ see cline 2004 for a calculation showing what can go wrong ] , but even at the classical level they typically ruin the agreement between the observed orbital decay of binary pulsars and gr predictions for their energy loss into gravitational waves .the origin of these difficulties seems to be the strong consistency requirements that quantum mechanics and lorentz invariance impose on theories of massless particles having spin - one or higher [ weinberg 1964 , deser 1970 , weinberg 1980 ] , with static ( non - derivative ) interactions . a variety of studies indicate that a consistent description of particles with spins always requires a local invariance , which in the cases of spins 1 , 3/2 and 2 corresponds to gauge invariance , supersymmetry or general covariance , and this local symmetry strongly limits the kinds of interactions that are possible . a remarkable equivalence between asymptotically anti - de sitter gravitational theories and non - gravitational systems in one lower dimensions may provide a loophole to some of these arguments , although its ultimate impact is not yet known . ]although it remains an area of active research [ dvali 2000 ] , at present the only systems known to satisfy these consistency constraints consist of relativistic theories of spins 0 through 1 coupled either to gravity or supergravity ( possibly in more than 4 spacetime dimensions ) . as might be expected , widespread acceptance of the existence of a hitherto - unknown form of matter requires the concordance of several independent lines of evidence , and this constrains one s options when formulating a theory for dark matter .it is useful to review this evidence when deciding whether it indicates a failure of gr or a new form of matter .the evidence for dark matter comes from measuring the amount of matter in a region as indicated by how things gravitate towards it , and comparing the result with the amount of matter that is directly visible .several types of independent comparisons consistently point to there being more than 10 times as much dark , gravitating material in space than is visible : * _ galaxies : _ the total mass in a galaxy may be inferred from the orbital motion of stars and gas measured as a function of distance from the galactic center .the results , for large galaxies like the milky way , point to several times more matter than is directly visible . 
* _ galaxy clusters : _ similar measurements using the motion of galaxies and temperature of hot gas in large galaxy clusters also indicate the presence of much more mass than is visible . * _ structure formation : _ present - day galaxies and galaxy clusters formed through the gravitational amplification of initially - small primordial density fluctuations . in this case the evidence for dark matter arises from the interplay of two facts : first, the initial density fluctuations are known to be very small , , at the time when the cmb was emitted .second , small initial fluctuations can not be amplified by gravity until the epoch where non - relativistic matter begins to dominate the total energy density . but this does not give enough time for the initially - small fluctuations to form galaxies unless there is much more matter present than can be accounted for by baryons .the amount required agrees with the amount inferred from the previous measures described above . * _ primordial nucleosynthesis : _ the total mass density of ordinary matter ( baryons ) in the universe can be inferred from the predicted relative abundance of primordial nuclei created within the hot big bang .this predicted abundance agrees well with observations , and relies on the competition between nuclear reaction rates and the rate with which the universe cools . but both of these rates themselves depend on the net abundance of baryons in the universe : the nuclear reaction rates depend on the number of baryons present ; and the cooling rate depends on how fast the universe expands , and so at least , in gr on its total energy density .the success of the predictions of big bang nucleosynthesis ( bbn ) therefore fixes the fraction of the universal energy density which can consist of baryons , and implies that there can at most be a few times more baryons than what would be inferred by counting those that are directly visible .* _ the cosmic microwave background ( cmb ) : _ cmb photons provide an independent measure of the total baryon abundance .they do so because sound waves in the baryon density that are present when these photons were radiated are observable as small temperature fluctuations .since the sound - wave properties depend on the density of baryons , a detailed understanding of the cmb temperature spectrum allows the total baryon density to be reconstructed .the result agrees with the bbn measure described above .there are two main options for explaining these observations .since dark matter is inferred gravitationally , perhaps the laws of gravity differ on extra - galactic scales than in the solar system .alternatively , there could exists a cosmic abundance of a new type of hitherto - undiscovered particle . at presentthere are several reasons that make it more likely that dark matter is explained by the presence of a new type of particle than by changing gr on long distances .first , as mentioned above , sensible modifications are difficult to make at long distances that lack ghosts and other inconsistencies .second , no phenomenological modification of gravity has yet been proposed that accounts for all the independent lines of evidence given above ( although there is a proposal that can explain the rotation of galaxies [ milgrom 1983 , sanders 2002 ] ) . on the other hand ,all that is required to obtain dark matter as a new form of matter is the existence of a new type of stable elementary particle having a mass and couplings similar to those of the boson , which is already known to exist . 
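the first line of evidence above is easy to quantify : a measured circular speed v at radius r implies an enclosed dynamical mass of roughly v^2 r / g , which can be compared with the visible ( stellar plus gas ) mass . the numbers used below are illustrative values for a milky - way - like galaxy , not data from the text .

```python
# Dynamical mass implied by a measured circular speed, M(<r) ~ v**2 * r / G,
# compared with a rough visible mass. Inputs are illustrative values for a
# Milky-Way-like galaxy (assumptions, not data from the text).
G = 6.674e-11
M_sun = 1.989e30
kpc = 3.086e19            # m

v_circ = 220e3            # m/s, roughly flat out to large radii (assumption)
r = 50 * kpc              # a radius well outside most of the visible disk
M_visible = 6e10 * M_sun  # rough stellar + gas mass (assumption)

M_dynamical = v_circ**2 * r / G
print(f"dynamical mass inside {r/kpc:.0f} kpc: {M_dynamical / M_sun:.1e} M_sun")
print(f"visible mass (assumed):               {M_visible / M_sun:.1e} M_sun")
print(f"ratio: {M_dynamical / M_visible:.1f}")
```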
bosons would be excellent dark matter candidates if only they did not decay . a particle with mass and couplings like the boson , but which is stable called a weakly interacting massive particle ( wimp ) would naturally have a relic thermal abundance in the hot big bang that lies in the range observed for dark matter [ for a review , see eidelman 2004 ] .new particles with these properties are actually predicted by many current proposals for the new physics that is likely to replace the standard model at energies to be explored by the large hadron collider ( lhc ) . at the present juncture the preponderance of evidence the simplicity of the particle option and the difficulty of making a modification to gr that works favours the interpretation of cosmological evidence as pointing to the existence of a new type of matter rather than a modification to the laws of gravity . * _ universal acceleration : _ since gravity is attractive , one expects an expanding universe containing only ordinary ( and dark ) matter and radiation to have a decelerating expansion rate .evidence for dark energy comes from measurements indicating the universal expansion is _ accelerating _ rather than decelerating , obtained by measuring the brightness of distant supernovae [ perlmutter 1997 , riess 1997 , bahcall 1999 ] .according to gr , accelerated expansion implies the universe is dominated by something with an equation of state satisfying , which is not true for ordinary matter , radiation or dark matter . * _ flatness of the universe : _ an independent measure of the dark energy comes from the observed temperature fluctuations in the cmb . because the cmb photons traverse the entire observable universe before reaching us , their properties on arrival depend on the geometry of the universe as a whole ( and so also , according to gr , on its total energy density ) .agreement with observations implies the total energy density is larger than the ordinary and dark matter abundances , which fall short by an amount consistent with the amount of dark energy required by the acceleration of the universe s expansion [ komatsu 2009 ] .again the theoretical options are the existence of a new form of energy density , or a modification of gr at long distances .although there are phenomenological proposals for modifications that can cause the universe to accelerate ( such as [ dvali 2000 ] ) , all of the previously described problems with long - distance modifications to gr also apply here . by contrast, there is a very simple energy density that does the job , consisting simply of a cosmological constant _i.e. _ a constant ev in eq .( [ gravaction ] ) , for which .this is phenomenologically just what the doctor ordered , and agrees very well with the observations .the theoretical difficulty here is that a cosmological constant is indistinguishable from the energy density of a lorentz - invariant vacuum , . whether this , together with supersymmetry , can solve the problem is under active study [ burgess 2005 ] . ] since both contribute to the stress tensor an amount . 
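the statement that acceleration requires an equation of state below -1/3 follows directly from the gr acceleration equation , in which each component contributes in proportion to its density times ( 1 + 3w ) . the density fractions used in the snippet are representative present - day values , chosen for illustration rather than taken from the text .

```python
# GR acceleration equation in units where 8*pi*G/3 = 1:
#   addot/a = -(1/2) * sum_i rho_i * (1 + 3*w_i).
# A component accelerates the expansion only if w < -1/3. The density fractions
# below are representative present-day values (an assumption).
components = {            # name: (density fraction today, equation of state w)
    "matter (incl. dark matter)": (0.3, 0.0),
    "radiation":                  (1e-4, 1.0 / 3.0),
    "cosmological constant":      (0.7, -1.0),
}

accel = -0.5 * sum(omega * (1 + 3 * w) for omega, w in components.values())
print("sign of addot/a today:", "accelerating" if accel > 0 else "decelerating",
      f"({accel:+.2f} in these units)")

# without the w = -1 component the same sum is negative, i.e. decelerating
no_lambda = -0.5 * sum(omega * (1 + 3 * w)
                       for name, (omega, w) in components.items()
                       if name != "cosmological constant")
print("without the cosmological constant:", f"{no_lambda:+.2f}")
```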
in principle, this should be a good thing because we believe we can compute the vacuum energy .the problem is that ordinary particles ( like the electron ) contribute such an enormous amount the electron gives ev that agreement with the observed value requires a cancellation [ weinberg 1989 ] to better than one part in .dark matter and dark energy are two forms of exotic matter , whose existence is inferred purely from their gravitational influence on visible objects .it is tempting to replace the need for two new things with a single modification to gravity over very large distances . yetthe preponderance of evidence again argues against this point of view .first , it is difficult to modify gr at long distances without introducing pathologies .second , it is difficult to find modifications that account for more than one of the several independent lines of evidence ( particularly for dark matter ) .by contrast , it is not difficult to make models of dark matter ( wimps ) or dark energy ( a cosmological constant ) . for dark energythis point of view runs up against the cosmological constant problem , which might indicate the presence of observably large extra dimensions , but for which no consensus yet exists . in summary ,modifications to general relativity are widely mooted over both large and small distances .this chapter argues that modifications at small distances are indeed very likely , and well worth seeking .but unless the modification takes place just beyond our present experimental reach ( m ) [ arkani - hamed 1998 , antoniadis 1998 , burgess 2005 ] , it is also likely to be very difficult to test experimentally .the basic obstruction is the decoupling from long distances of short - distance physics , a property most efficiently expressed using effective field theory methods .the good news is that this means that the many observational successes of gr are insensitive to the details of whatever the modification proves to be .modifications to gr over very long distances are also possible , and have been argued as more economical than requiring the existence of two types of unknown forms of matter ( dark matter and dark energy ) .if so , consistency constraints seem to restrict the possibilities to supplementing gr by other very light spin-0 or spin-1 bosons ( possibly in higher dimensions ) . the experimental implications of such modifications are themselves best explored using effective field theories .unfortunately , no such a modification has yet been found that accounts for all of the evidence for dark matter or energy in a way that is both consistent with other tests of gr and is more economical than the proposals for dark matter or energy themselves . to the extent that the utility of effective field theory relies on decoupling, one might ask : what evidence do we have that planck - scale physics decouples ? 
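the size of the required cancellation can be illustrated by comparing the vacuum - energy scale set by the electron mass alone with the observed dark - energy scale . the observed scale used below is a representative value of order a milli - electronvolt , an assumption of mine rather than a number quoted in the text .

```python
# One standard way to state the cosmological constant problem: vacuum-energy
# densities scale like (mass scale)**4, so the electron alone contributes of order
# m_e**4, vastly more than the observed dark-energy density ~(a few meV)**4.
# The observed scale below is a representative value (assumption).
m_e_eV         = 0.511e6     # electron rest energy
dark_energy_eV = 2.3e-3      # observed dark-energy scale, rho_obs**(1/4) (assumption)

mismatch = (m_e_eV / dark_energy_eV) ** 4
print(f"rho_electron / rho_observed ~ {mismatch:.1e}")   # of order 1e33-1e34
```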
there are two lines of argument that bear on this question .first , once specific modifications to gravity are proposed it becomes possible to test whether decoupling takes place .perhaps the best example of a consistent modification to gravity at short distances is string theory , and all the present evidence points to decoupling holding in this case .but more generally , if sub - planckian scales do _ not _ decouple , one must ask : why has science made progress at all ?after all , although nature comes to us with many scales , decoupling is what ensures we do nt need to understand them all at once .if sub - planckian physics does not decouple , what keeps it from appearing everywhere , and destroying our hard - won understanding of nature ?i thank the editors for their kind invitation to contribute to this volume , and for their patience in awaiting my contribution .my understanding of this topic was learned from steven weinberg , who pioneered effective field theory techniques , and was among the first to connect the dots explicitly about gravity s interpretation as an effective field theory .my research is funded by the natural sciences and engineering research council of canada , as well as by funds from mcmaster university and perimeter institute .aghababaie , y. , burgess , c.p . ,parameswaran , s.l . andquevedo , f. ( 2004 ) nucl .b * 680 * , 389 [ arxiv : hep - th/0304256 ] .antoniadis , i. , arkani - hamed , n. , dimopoulos , s. and dvali , g. ( 1998 ) phys .b * 436 * 257 [ arxiv : hep - ph/9804398 ] . arkani - hamed , n. , dimopoulos , s. and dvali , g. ( 1998 ) phys .b * 429 * 263 [ arxiv : hep - ph/9803315 ] . arkani - hamed , n. , dimopoulos , s. , kaloper , n. and sundrum , r. ( 2000 ) phys . lett .b * 480 * 193 , [ hep - th/0001197 ] .burgess , c.p .( 2005 ) aip conf .proc . * 743 * , 417 [ arxiv : hep - th/0411140 ] .carroll , s.m . and m. m. guica , m.m . , [ arxiv : hep - th/0302067 ] .cline , j.m ., jeon , s. and moore , g.d .( 2004 ) phys .d * 70 * 043543 [ arxiv : hep - ph/0311312 ] .deser , s. ( 1970 ) gen .* 1 * 9 [ arxiv : gr - qc/0411023 ] .dewitt , b.s .( 1967 ) _ phys .rev . _ * 162 * 1239 .gell - mann , m. and low , f.e .( 1954 ) phys .rev . * 95 * 1300 .groot nibbelink , s. and pospelov , m. ( 2005 ) phys .lett . * 94 * 081601 [ arxiv : hep - ph/0404271 ] .kachru , s. , schulz , m.b . andsilverstein , e. ( 2000 ) phys . rev .d * 62 * 045021 , [ hep - th/0001206 ] .kostelecky , v. a. ( 2004 ) phys .d * 69 * 105009 [ arxiv : hep - th/0312310 ] .maldacena , j.m .( 1998 ) adv .* 2 * ( 1998 )231 [ int . j. theor .* 38 * ( 1999 ) 1113 ] [ arxiv : hep - th/9711200 ] .mattingly , d. ( 2005 ) living rev .* 8 * 5 [ arxiv : gr - qc/0502097 ] .milgrom , m. ( 1983 ) _ ap .j. _ * 270 * 365 - 370 ; 371 - 283 ; 384 - 389 . | we live at a time of contradictory messages about how successfully we understand gravity . general relativity seems to work very well in the earth s immediate neighborhood , but arguments abound that it needs modification at very small and/or very large distances . this essay tries to put this discussion into the broader context of similar situations in other areas of physics , and summarizes some of the lessons which our good understanding of gravity in the solar system has for proponents for its modification over very long and very short distances . the main message is that effective theories , in the technical sense of ` effective ' , provide the natural language for testing proposals , and so are also effective in the colloquial sense . |
tuberculosis ( tb ) detection and treatment saved 22 million lives between 1995 and 2012 , according to the 2013 report of the world health organization ( who ) . however , in 2012 there were still 8.6 million new tb cases and 1.3 million tb deaths . tb prevention , diagnosis and treatment require adequate funding , sustained over many years , which represents a challenge on a worldwide scale . mathematical dynamic models are an important tool in analyzing the spread and control of infectious diseases . many tb mathematical models have been developed ; see , e.g. , and the references cited therein . the main differences between the models proposed in are the way they represent reinfection , since there is no consensus on whether a previous infection confers protection . the way recently infected individuals progress to active disease is also not the same in all models : they can be `` fast progressors '' or `` slow progressors '' . in some models , it is assumed that only 5 to 10% of the infected individuals are fast progressors . the remaining models consider that individuals are able to contain the infection , remaining asymptomatic and non - infectious ( latent individuals ) , with a much lower probability of developing active disease by endogenous reactivation . more recent models also assume exogenous reinfection of latent and treated individuals , based on the fact that infection and/or disease do not confer full protection . this assumption has an important impact on the efficacy of interventions . in this paper , we consider a tb mathematical model from in which exogenous reinfection is taken into account . without treatment , tb mortality rates are high . different interventions are available for tb prevention and treatment : vaccination to prevent infection ; treatment to cure active tb ; and treatment of latent tb to prevent endogenous reactivation . in this work , we study the implementation of two post - exposure interventions that are not widely used : treatment of early latent individuals with anti - tb drugs ( e.g. , treatment of recent contacts of index cases ) and prophylactic treatment / vaccination of persistent latent individuals . we propose an optimal control problem that consists of analyzing how these two control measures should be implemented , over a certain time period , in order to reduce the number of active infected individuals while controlling the implementation costs of the interventions . optimal control is a branch of mathematics developed to find optimal ways to control a dynamic system . other authors have applied optimal control theory to tb models ( see , e.g. , ) . this approach allows the study of the most cost - effective intervention design by generating an implementation schedule that minimizes an objective function . the intensity of the interventions can be relaxed over time , which is not the case in most models , where interventions are modeled by constant rates . the paper is organized as follows .
in section [ sec : model ]we present the mathematical model for tb that will be study in this paper .two control functions and are then added to the original model from .section [ sec : oc : problem ] is dedicated to the formulation of the optimal control problem .we prove the existence of an unique solution and derive the expression for the optimal controls according to the pontryagin maximum principle .section [ sec : numericalresults ] has four subsections dedicated to a numerical and cost - effectiveness analysis of the optimal control problem .we start by illustrating the problem solutions for a particular case ( section [ subsec : example ] ) .we then introduce some summary measures in section [ subsec : smeasures ] to describe how the results change when varying transmission intensity ( section [ subsec : beta ] ) and protection against reinfection ( section [ subsec : sigma ] ) . in section [ subsec :oc : strategies ] , we analyze the cost - effectiveness of three intervention strategies : applying or separately and applying the two control measures simultaneously .we end with section [ sec : discussion ] of discussion .following the model proposed in , population is divided into five categories : susceptible ( ) ; early latent ( ) , , individuals recently infected ( less than two years ) but not infectious ; infected ( ) , , individuals who have active tb and are infectious ; persistent latent ( ) , , individuals who were infected and remain latent ; and recovered ( ) , , individuals who were previously infected and treated .we assume that at birth all individuals are equally susceptible and differentiate as they experience infection and respective therapy . the rate of birth and death , , are equal ( corresponding to a mean life time of 70 years ) and no disease - related deaths are considered , keeping the total population , , constant with .parameter denotes the rate at which individuals leave compartment ; is the proportion of infected individuals progressing directly to the active disease compartment ; and are the rates of endogenous reactivation for persistent latent infections ( untreated latent infections ) and for treated individuals ( for those who have undergone a therapeutic intervention ) , respectively .parameters and are factors that reduce the risk of infection , as a result of acquired immunity to a previous infection , for persistent latent individuals and for treated patients , respectively .these factors affect the rate of exogenous reinfection . 
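a minimal numerical sketch of this compartmental structure ( without the two controls introduced next ) is given below . the parameter values are illustrative stand - ins consistent with the description above , not the values of table [ parameters ] .

```python
# Minimal sketch of the five-compartment model described above (S, L1 = early
# latent, I = active/infectious, L2 = persistent latent, R = recovered/treated),
# without the two controls. Parameter values are illustrative stand-ins.
from scipy.integrate import solve_ivp

N       = 30_000
mu      = 1 / 70        # birth/death rate (mean lifetime 70 years)
beta    = 100           # transmission coefficient per year (illustrative)
delta   = 2.0           # rate of leaving early latency per year (illustrative)
phi     = 0.05          # fraction of newly infected who progress fast
omega   = 0.0002        # endogenous reactivation rate, untreated latents
omega_r = 0.00002       # endogenous reactivation rate, treated
sigma   = 0.25          # reduced susceptibility to reinfection, latents
sigma_r = 0.25          # reduced susceptibility to reinfection, treated
tau0    = 2             # recovery rate under standard treatment per year

def tb_rhs(t, y):
    S, L1, I, L2, R = y
    lam = beta * I / N                                   # force of infection
    dS  = mu * N - lam * S - mu * S
    dL1 = lam * (S + sigma * L2 + sigma_r * R) - (delta + mu) * L1
    dI  = phi * delta * L1 + omega * L2 + omega_r * R - (tau0 + mu) * I
    dL2 = (1 - phi) * delta * L1 - sigma * lam * L2 - (omega + mu) * L2
    dR  = tau0 * I - sigma_r * lam * R - (omega_r + mu) * R
    return [dS, dL1, dI, dL2, dR]

y0 = [N - 1, 0, 1, 0, 0]
sol = solve_ivp(tb_rhs, (0, 100), y0, dense_output=True)
print("proportion infectious after 100 years:", sol.y[2, -1] / N)
```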
as in , in our simulations we consider three different cases for the protection against reinfection conferred by treatment : the same protection as natural infection ( ) ; lower protection than that conferred by infection ( ) ; and higher protection than that conferred by infection ( ) , see section [ subsec : sigma ] . parameter is the rate of recovery under standard treatment of active tb , assuming an average duration of infectiousness of six months . the values of the rates , , , , and are taken from and the references cited therein ( see table [ parameters ] for the values of the parameters ) . in addition to standard treatment of infectious individuals , we consider two post - exposure interventions targeting different sub - populations : early detection and treatment of recently infected individuals ( ) and chemotherapy or a post - exposure vaccine for persistent latent individuals ( ) . these interventions are applied at rates and . we consider , without loss of generality , that the rate of recovery of early latent individuals under post - exposure interventions is equal to the rate of recovery under treatment of active tb , , and greater than the rate of recovery of persistent latent individuals under post - exposure interventions , . since we are interested in studying these interventions along time , we add to the original model two control functions , and , which represent the intensity at which these post - exposure interventions are applied at each time step . the dynamical control system that we propose is given by . the assumption that the total population is constant allows us to reduce the control system from five to four state variables . we decided to maintain the tb model in form , using relation as a test to confirm the numerical results . table [ parameters ] : parameter values for the control system . we align the remaining alternative strategies by increasing effectiveness and recompute the icer : icer(b) = acer(b) = 5.7 and icer(a) = . hence , we conclude that strategy * b * has the least icer and is therefore more cost - effective than strategy * a*. for this illustration we have considered the same cost for both interventions ( ) . results should depend strongly on the choice of these parameters , but this discussion is out of the scope of the present work .
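the icer bookkeeping used in this comparison amounts to ranking strategies by effectiveness , computing the incremental cost per additional unit of effectiveness , excluding dominated strategies and recomputing . the snippet below sketches that procedure with placeholder costs and effects , not the values obtained in the paper .

```python
# Incremental cost-effectiveness ratio (ICER) bookkeeping for competing strategies.
# Costs and averted-infection numbers are placeholders, not the paper's values.
strategies = {                    # name: (total cost, total infections averted)
    "A (treat early latent only)":      (1500.0, 240.0),
    "B (treat persistent latent only)": (4000.0, 700.0),
    "C (both controls)":                (5200.0, 780.0),
}

ranked = sorted(strategies.items(), key=lambda kv: kv[1][1])   # by effectiveness
prev_cost, prev_eff = 0.0, 0.0
for name, (cost, eff) in ranked:
    icer = (cost - prev_cost) / (eff - prev_eff)
    print(f"{name:36s} ICER = {icer:6.2f}")
    prev_cost, prev_eff = cost, eff
# strategies whose ICER exceeds that of a more effective alternative are excluded
# (dominated) and the ICERs of the remaining strategies are recomputed.
```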
since treatment of persistent latent individuals reduces the reactivation rate ( from to ) , when reinfection is very common and it overcomes reactivation impact , the advantage of treating this population group is less pronounced .the susceptibility to reinfection after treatment is still an open question . in one hand, treatment can reduce the risk of tb by reducing the amount of bacteria present in the lungs .on the other hand , we can argue that latent infection boosts immunity by constant stimulation of the immune system , so treatment could reduce protection .we vary parameter to explore these two possible scenarios : when treatment enhances protection and when treatment impairs protection .results show that treatment of persistent latent individuals should be less intense or even absent for the case where treatment impairs protection .similar results were obtained for the case of constant treatment rates in .in fact , for the correspondent case with maximum intensity ( and ) , we can have an increase of the equilibrium proportion of infectious individuals ( ) .we can conclude that reinfection has an important role in the determination of the optimal control strategy , by diminishing the intervention intensity on persistent latent individuals : first when transmission is very high corresponding to a very high reinfection rate and secondly when this population group has a lower susceptibility to reinfection ( ) .interestingly , the reinfection threshold of the model with no controls still marks a change in the model behaviour . even though , we are comparing equilibrium results to transient short time interventions .cost - effectiveness analysis of alternative combinations of the two interventions is conducted . for , treatment of only early latent individualsis the more cost - effective strategy , despite of treatment of both early latent and persistent latent individuals having a higher effectiveness .the total cost associated with treatment of persistent latent individuals is very high , especially because this population group can be very big in comparison to the others .it is believed that about one third of world s population is latent infected with tb . here , for simplicity, we have considered the cost parameters both equal to one .however , this depends greatly on the type of intervention used and results can be changed . for example , if intervention on persistent latent individuals could be done by vaccination , then the per person unit cost could be significantly reduced . plus ,treatment of early latent individuals implies contact tracing of index cases and prophylactic treatment , which can also be very expensive .the hamiltonian associated to the problem in is given by where is the _ adjoint vector_. 
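the maximum principle stated next couples the state system to an adjoint system and a minimality condition , and the resulting two - point boundary value problem is typically solved numerically by a forward - backward sweep . the skeleton below is a generic sketch of such a scheme ( my own , not necessarily the authors' implementation ) ; state_rhs , adjoint_rhs and control_update stand for the model - specific right - hand sides and for the expression obtained from the minimality condition .

```python
# Generic forward-backward sweep skeleton for optimal control problems of this
# type. The callables state_rhs, adjoint_rhs and control_update are placeholders
# for the model-specific equations; this is a schematic sketch, not the authors'
# code. A uniform time grid and explicit Euler steps are assumed for simplicity.
import numpy as np

def forward_backward_sweep(state_rhs, adjoint_rhs, control_update,
                           y0, t_grid, n_controls, max_iter=200, relax=0.5):
    n, dt = len(t_grid), t_grid[1] - t_grid[0]
    u = np.zeros((n, n_controls))
    y = np.tile(np.asarray(y0, float), (n, 1))
    lam = np.zeros_like(y)
    for _ in range(max_iter):
        # forward pass: integrate the state system with the current control
        for k in range(n - 1):
            y[k + 1] = y[k] + dt * np.asarray(state_rhs(t_grid[k], y[k], u[k]))
        # backward pass: integrate the adjoint system from the transversality
        # condition lambda(T) = 0
        lam[-1] = 0.0
        for k in range(n - 1, 0, -1):
            lam[k - 1] = lam[k] - dt * np.asarray(
                adjoint_rhs(t_grid[k], y[k], lam[k], u[k]))
        # update the control from the minimality condition, projected onto [0, 1],
        # with relaxation to help convergence
        u_new = np.clip(control_update(y, lam, t_grid), 0.0, 1.0)
        if np.max(np.abs(u_new - u)) < 1e-6:
            return y, lam, u_new
        u = relax * u_new + (1.0 - relax) * u
    return y, lam, u
```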
according to the pontryagin maximum principle , if is optimal for problem with the initial conditions given in table [ icbeta100 ] and fixed final time , then there exists a nontrivial absolutely continuous mapping \to \mathbb{r}^5 ] .moreover , the transversality conditions hold .[ lem : thm ] for problem with fixed initial conditions , , , and and fixed final time , there exists adjoint functions , , , and such that \dot{\lambda^*_2}(t ) = \lambda^*_2(t)\left(\delta + \tau_1 + \mu\right ) - \lambda^*_3(t ) \phi \delta - \lambda^*_4(t ) ( 1 - \phi ) \delta - \lambda^*_5(t)\tau_1 u^*_1(t ) \\[0.1 cm ] \dot{\lambda^*_3}(t ) = -w_0 + \lambda^*_1(t ) \frac{\beta}{n } s^*(t ) - \lambda^*_2(t ) \frac{\beta}{n}(s^*(t ) + \sigma l_2^*(t ) + \sigma_r r^*(t))\\ \qquad \quad + \lambda^*_3(t ) \left(\tau_0 + \mu\right ) + \lambda^*_4(t)\sigma \frac{\beta}{n } l_2^*(t ) - \lambda^*_5(t)\left(\tau_0 - \sigma_r \frac{\beta}{n } r^*(t ) \right ) \\[0.1 cm ] \dot{\lambda^*_4}(t ) = - \lambda^*_2(t ) \frac{\beta}{n}i^*(t ) \sigma - \lambda^*_3(t ) \omega + \lambda^*_4(t)\left(\sigma \frac{\beta}{n } i^*(t ) + \omega + \tau_2 u^*_2(t ) + \mu\right)\\ \qquad \quad - \lambda^*_5(t)\left ( \tau_2 u^*_2(t ) \right ) \\[0.1 cm ] \dot{\lambda^*_5}(t ) = -\lambda^*_2(t ) \sigma_r \frac{\beta}{n}i^*(t ) - \lambda^*_3(t ) \omega_r + \lambda^*_5(t)\left(\sigma_r \frac{\beta}{n } i^*(t ) + \omega_r + \mu\right ) \ , , \end{cases}\ ] ] with transversality conditions furthermore , system is derived from the pontryagin maximum principle ( see , ) and the optimal controls come from the minimality condition . for small final time ,the optimal control pair given by is unique due to the boundedness of the state and adjoint functions and the lipschitz property of systems and ( see and references cited therein ) .existence of an optimal solution associated to an optimal control pair comes from the convexity of the integrand of the cost function with respect to the controls and the lipschitz property of the state system with respect to state variables ( see , , ) . for small final time , the optimal control pair is given by that is unique by the lemma above . because the state system is autonomous , uniqueness is valid for any time and not only for small time .we fix and and the remaining parameters according to table [ parameters ] and vary .results for the proportion of infectious individuals are shown in the figure [ fig : i : beta100:sigual : tf ] . with .parameters according to table [ parameters ] , and . 
]the general behaviour do not change significantly with .the proportion of infected individuals slightly increases towards the end of the intervention for .this tendency is more pronounced for higher .figure [ fig : secanal : wi ] shows the results for different combination of the weight constants on the objective functional .we fix and and the remaining parameters according to table [ parameters ] and vary , and .efficacy decreases when the costs and increase , corresponding to an earlier relaxation of the intensity of treatment in the optimal solution due to cost restrictions .+ the change in efficacy is more pronounced for the cases where the weight associated with infectious individuals change in comparison to the weights associated with the controls ( figures [ fig : secanal : w12 ] and [ fig : secanal : w0 ] ) .results are less sensitive to the variation between the weight controls and ( figures [ fig : secanal : w1 ] and [ fig : secanal : w2 ] ) .this work was partially supported by the portuguese foundation for science and technology ( fct ) through the : _ centro de matemtica e aplicaes _ , project pest - oe / mat / ui0297/2014 ( rodrigues ) ; _center for research and development in mathematics and applications _ ( cidma ) , project pest - oe / mat / ui4106/2014 ( silva and torres ) ; post - doc fellowship sfrh / bpd/72061/2010 ( silva ) ; project ptdc / eei - aut/1450/2012 , co - financed by feder under pofc - qren with compete reference fcomp-01 - 0124-feder-028894 ( torres ) . c. castillo - chavez and z. feng , _ mathematical models for the disease dynamics of tuberculosis _ , in : advances in mathematical population dynamics - molecules , cells and man ( eds .m. a. horn , g. simonett and g. f. webb ) , vanderbilt university press , 1998 , 117128 .m. g. m. gomes , p. rodrigues , f. m. hilker , n. b. mantilla - beniers , m. muehlen , a. c. paulo and g. f. medley , _ implications of partial immunity on the prospects for tuberculosis control by post - exposure interventions _ , j. theoret .( 2007 ) , 608617 .a. v. rie , v. zhemkov , j. granskaya , l. steklova , l. shpakovskaya , a. wendelboe , a. kozlov , r. ryder and m. salfinger , _ tb and hiv in st petersburg , russia : a looming catastrophe ? _ , int .j. tuberc .lung dis . 9( 2005 ) , 740745 .h. s. rodrigues , m. t. t. monteiro and d. f. m. torres , _ optimal control and numerical software : an overview _ , in : systems theory : perspectives , applications and developments ( ed .f. miranda ) , nova science publishers , new york , 2014 , 93110 .arxiv:1401.7279 p. rodrigues , c. j. silva and d. f. m. torres , _ optimal control strategies for reducing the number of active infected individuals with tuberculosis _ ,proceedings of the siam conference on control and its applications ( ct13 ) , san diego , california , usa , july 8 - 10 , 2013 , pp . 4450 .s. verver , r. m. warren , n. beyers , m. richardson , g. d. van der spuy , m. w. borgdorff , d. a. enarson , m. a. behr and p. d. van helden , _ rate of reinfection tuberculosis after successful treatment is higher than rate of new tuberculosis _ , am .j. respir .171 ( 2005 ) , 14301435 .r. m. warren , t. c. victor , e. m. streicher , m. richardson , n. beyers , n. c. g. pittius and p. d. helden , _ patients with active tuberculosis often have different strains in the same sputum specimen _ , am .j. respir .169 ( 2004 ) , 610614 . | we propose and analyse an optimal control problem where the control system is a mathematical model for tuberculosis that considers reinfection . 
the control functions represent the fraction of early latent and persistent latent individuals that are treated . our aim is to study how these control measures should be implemented , for a certain time period , in order to reduce the number of active infected individuals , while minimizing the interventions implementation costs . the optimal intervention is compared along different epidemiological scenarios , by varying the transmission coefficient . the impact of variation of the risk of reinfection , as a result of acquired immunity to a previous infection for treated individuals on the optimal controls and associated solutions , is analysed . a cost - effectiveness analysis is done , to compare the application of each one of the control measures , separately or in combination . * keywords : * tuberculosis ; optimal control ; post - exposure interventions ; efficacy function ; cost effort . * mathematics subject classification 2010 : * 92d30 ; 49m05 . |
although it is well known that the wavelike phenomena of classical physics are ruled by hyperbolic equations , there are at least two modern motivations for studying the scalar wave equation on curved lorentzian four - manifolds .they are as follows . 0.3 cm ( i ) in the course of studying the einstein vacuum equations in four dimensions , i.e. it was conjectured in ref . that they admit local cauchy developments for initial data satisfying eq .( 1.1 ) such that the metric induced by on a given spacelike hypersurface and the extrinsic - curvature tensor of are prescribed . ]sets with locally finite curvature and locally finite norm of the first covariant derivatives of .this means that the spacetime constructed by evolution from smooth data can be smoothly continued , together with a time foliation , as long as the curvature of the foliation and the first covariant derivatives of its extrinsic curvature remain -bounded on the leaves of the foliation .the proof that this is indeed the case relies on a number of technical ingredients , including the construction of a parametrix ( an approximate green function of the wave operator , that provides a progressive wave representation for solutions of the wave equation ) for solutions of the homogeneous wave equation on a fixed einstein vacuum background .one has then to obtain control of the parametrix and of its error term by using only the fact that the curvature tensor is bounded in .note that , at a deeper level , the metric can be viewed to determine the elliptic or hyperbolic nature is a connected , four - dimensional , hausdorff four - manifold of class , a linear partial differential operator is a linear map with coefficients given by functions of class .characteristic polynomial _ of the operator at a point is where is a cotangent vector at .the cone in the cotangent plane at defined by is called the characteristic cone ( or conoid ) . by construction ,such a cone is independent of the choice of coordinates , because the higher order terms ( also called leading or principal symbol ) of transform into higher - order terms by a change of coordinates .the operator is said to be hyperbolic at if there exists a domain , a convex open cone in , such that every line through cuts the characteristic cone in real distinct points . in particular , second - order differential operators with higher - order terms are hyperbolic at if and only if the cone defined by is convex , i.e. , if the quadratic form has signature .] of the operator , where can denote covariant differentiation with respect to the levi - civita connection on spacetime , or on a vector bundle over spacetime , depending on our needs . when is riemannian , i.e. positive - definite , this operator is minus the laplacian , whereas if is lorentzian , one gets the wave operator .note also that , in four - dimensional manifolds , our lorentzian world lies in between two other options , i.e. a riemannian metric with signature and elliptic operator , and a ultrahyperbolic metric with signature and ultrahyperbolic operator . 
in the so - called euclidean ( or riemannian )framework used by quantum field theorists in functional integration , where the metric is positive - definite , the most fundamental differential operator is however the dirac operator , obtained by composition of clifford multiplication with covariant differentiation .its leading symbol is therefore clifford multiplication , and it generates all elliptic symbols on compact riemannian manifolds .this reflects the better known property according to which , out of the dirac operator and its ( formal ) adjoint , one can define two operators of laplace type , as well as powers of these operators . 0.3 cm ( ii ) recent work on the self - dual road to noncommutative gravity with twist has found it useful to start from a classical , undeformed spacetime which is a self - dual solution of the vacuum einstein equation , e.g. a kasner spacetime . within that framework , it is of interest to solve first the scalar wave equation in such a kasner background .since such a task was only outlined in ref . , we find it appropriate to develop a systematic calculus in the present paper .relying in part upon ref . , we begin by considering the scalar wave equation ( 1.2 ) for a classical scalar field when the kasner parameters take the values , respectively , i.e. where admits the integral representation one can then set where has to solve , for consistency , the equation (\xi_{1},\xi_{2},\xi_{3},t)=0 , \label{(1.6)}\ ] ] and the term in the factorization ( 1.5 ) ensures that , in eq .( 1.6 ) , the first derivative of is weighed by a vanishing coefficient .this is a sort of canonical form of linear second - order ordinary differential equations with variable coefficients ( see section 10.2 of ref . ) , and eq .( 1.6 ) can be viewed as a -parameter family of such equations , the parameters being the triplet .section ii relates eq .( 1.6 ) to the bessel functions , studies a specific choice of cauchy data and eventually solves eq .( 1.3 ) through an integral representation that relies upon integration in the complex domain .section iii evaluates the bicharacteristics in kasner spacetime , relating them to elliptic integrals , while sec .iv builds the parametrix of our scalar wave equation through a pair of integral operators where the integrand consists of amplitude and phase functions . concluding remarks and open problemsare presented in sec .v , while relevant background material is described in the appendices .from now on , we therefore study until the end of next subsection the ordinary differential equation (t)=0 . \label{(2.1)}\ ] ] this is a particular case of the differential equation (t)=0 , \label{(2.2)}\ ] ] which is solved by the linear combination by comparison of eqs .( 2.1 ) and ( 2.2 ) we find and hence , in light of what we pointed out at the end of sec .i , our partial differential equation ( 1.4 ) is solved by replacing and in ( 2.3 ) by some functions and , whose form depends on the choice of cauchy data , i.e. ( see sec .iii ) the bessel function is not regular at and hence , by using this representation , we are considering an initial time .we use the linearly independent bessel functions and which describe accurately the time dependence of the integrand in eq .note that the three choices are equivalent , since the three coordinates in the scalar wave equation are on equal footing . 
only the calculational details change .more precisely , on choosing , one finds whereas , upon choosing , one finds the task of solving our wave equation ( 1.3 ) can be accomplished provided that one knows the cauchy data indeed , from our eqs .( 1.4 ) , ( 1.5 ) and ( 2.5 ) , one finds ( denoting by an overdot the partial derivative with respect to ) .\label{(2.8)}\end{aligned}\ ] ] equations ( 2.7 ) and ( 2.8 ) are a linear system of algebraic equations to be solved for and , and they can be studied for various choices of cauchy data . for example , inspired by the simpler case of scalar wave equation in two - dimensional minkowski spacetime , we may consider the cauchy data where has dimension of length .thus , by virtue of the identity , \label{(2.11)}\ ] ] we obtain from ( 2.7 ) and ( 2.9 ) , \label{(2.12)}\ ] ] while ( 2.8 ) and ( 2.10 ) yield an interesting generalization of the cauchy data ( 2.9 ) and ( 2.10 ) might be taken to be since it reduces to ( 2.9 ) and ( 2.10 ) at , which is indeed the value of initial time assumed in the minkowski spacetime example considered in ref . ( whereas in kasner spacetime we take so far to have enough equations to determine and ) .hereafter , to avoid cumbersome formulas , we keep choosing the cauchy data ( 2.9 ) and ( 2.10 ) . at this stage ,( 2.7 ) , ( 2.8 ) , ( 2.12 ) and ( 2.13 ) lead to where ( 2.12 ) should be used to express .the integrand of eq .( 1.4 ) is therefore expressed in factorized form through bessel functions , decaying exponentials and oscillating functions , but the evaluation of the integral is hard , even in this simple case .note now that the original hyperbolic equation ( 1.3 ) is a particular case of the general form \equiv \left[{\partial^{2}\over \partial t^{2}}-\left ( \sum_{j , k=1}^{n}a^{jk } { \partial^{2}\over \partial x^{j } \partial x^{k}}+b { \partial \over \partial t } + \sum_{j=1}^{n}b^{j}{\partial \over \partial x^{j}}+c \right)\right]u=0 . \label{(2.18)}\ ] ] in the general theory , is a symmetric tensor , is a vector field and is a scalar field . in our case, we have and thus , for all ( as we said before , we avoid , which is a singularity of the kasner coordinates ) , we can exploit the integral representation ( see appendix a ) of the solution of hyperbolic equations with variable coefficients , while remarking that eq .( 1.3 ) is also of a type similar to other hyperbolic equations for which the mathematical literature ( see appendix a ) has proved that the cauchy problem is well posed . on referring the reader to chapters and of ref . for the interesting details , we simply state here the main result when eqs .( 2.18 ) and ( 2.19 ) hold . 0.3 cm * theorem 2.1 * the solution of the scalar wave equation ( 1.3 ) with cauchy data ( 2.9 ) and ( 2.10 ) at admits the integral representation , \label{(2.20)}\ ] ] where is a fundamental solution ( see appendix b ) of the adjoint equation =0 , \label{(2.21)}\ ] ] being the adjoint operator acting , in our case , as while the integrand ] , the elliptic integral of the first kind , here denoted by ] , the elliptic integral of the second kind , ], the task remains of finding the phase function by writing and solving the four components of eq .( 4.19 ) . 
to sum up, we have proved the following original result .0.3 cm * theorem 4.1 * for any lorentzian spacetime manifold , the amplitude functions and phase functions in the parametrix ( 4.4 ) for the scalar wave equation can be obtained by solving , first , the linear condition ( 4.18 ) of vanishing divergence for a covariant vector .all nonlinearities of the coupled system are then mapped into solving the nonlinear equation ( 4.20 ) for the amplitude function .eventually , the phase function is found by solving the first - order linear equation ( 4.19 ) . in kasner spacetime ,( 4.18 ) takes indeed the form this suggests considering and such that so that eq .( 4.22 ) leads to the equation this is precisely the vanishing divergence condition satisfied by retarded potentials in minkowski spacetime in the coordinates .their integral representation is well known to be of the form ( recall that we work in units ) where and hence eqs .( 4.23 ) , ( 4.25 ) and ( 4.26 ) solve completely the problem of finding the auxiliary covariant vector in kasner spacetime .we should now solve eq .( 4.20 ) for .the reader might wonder what has been gained by turning the task of solving the scalar wave equation into the task of solving eq .( 4.20 ) . in this equation, we can first get rid of the part linear in in the operator by setting which leads to { \tilde \alpha}_{jk}=t^{2}\psi_{\gamma}\psi^{\gamma}. \label{(4.29)}\ ] ] next , we can get rid of powers of by setting , which yields and the same formula holds with replaced by . thus , upon choosing , we obtain eventually the amplitude functions by solving the following nonlinear equation for : + 4t^{2}\bigr[(\psi_{0})^{2}-\sum_{l=1}^{3}t^{-2p_{l}}(\psi_{l})^{2}\bigr ] .\label{(4.31)}\end{aligned}\ ] ] the form ( 4.31 ) of the equation for makes it possible to apply the powerful adomian method for the solution of nonlinear partial differential equations . for this purpose , inspired by ref . , we define the four linear operators occurring in , i.e. the remainder ( i.e. , lower order part ) of the linear operator , i.e. the nonlinear term ( hereafter we omit the subscripts for simplicity of notation ) , \label{(4.34)}\ ] ] while the part of the right - hand side which is independent of is here denoted by , i.e. . \label{(4.35)}\ ] ] hence the nonlinear equation ( 4.31 ) can be re - expressed in the form the idea is now to apply the inverse of , or , or , or to this equation , which , upon bearing in mind the identities the s being constants fixed by the initial and boundary conditions , leads to the following four equations : now we add these four equations , and upon defining , \label{(4.45)}\end{aligned}\ ] ] + g\eta , \label{(4.47)}\ ] ] we arrive at the fundamental formula at this stage , if the function has a poincar asymptotic expansion , which can be convergent or divergent and is written in the form we point out that ( 4.49 ) leads in turn to a poincar asymptotic expansion of the nonlinear term defined in eq .( 4.34 ) in the form +a_{1}[f_{0},f_{1}]+ ... + a_{k}[f_{0}, ... ,f_{k}]+ ... , \label{(4.50)}\ ] ] where , by virtue of the formula we can evaluate the poincar asymptotic expansion of squared logarithmic derivatives according to {,l } \right \}^{2 } \nonumber \\ & \sim & \left[{(f_{{0},l}+f_{{1},l})\over f_{0 } } -{f_{1}f_{{0},l}\over ( f_{0})^{2 } } -{f_{1}f_{{1},l}\over ( f_{0})^{2 } } + { ( f_{1})^{2}f_{{0},l}\over ( f_{0})^{3}}+ ... \right]^{2}. 
\label{(4.52)}\end{aligned}\ ] ] in light of ( 4.34 ) and ( 4.52 ) we find ={3 \over 4f_{0}}\left[\sum_{l=1}^{3}t^{-2p_{l}}(f_{{0},l})^{2 } -(f_{{0},t})^{2}\right ] , \label{(4.53)}\ ] ] ={3 \over 4f_{0}}\left(f_{1}\left(1-{f_{1}\over f_{0}}\right)\right)^{2 } \left \ { \sum_{l=1}^{3}t^{-2p_{l } } \left[{f_{{1},l}\over f_{1 } } -{f_{{0},l}\over f_{0}}\right]^{2 } -\left[{f_{{1},t}\over f_{1 } } -{f_{{0},t}\over f_{0}}\right]^{2}\right \ } , \label{(4.54)}\ ] ] plus a countable infinity of other formulas for ] ... .note that , unlike the case of simpler nonlinearities , the functionals involve division by .the solution algorithm is now completely specified , because eq . ( 4.48 ) yields the recursive formulas \ ; \forall n=0,1, ...,\infty , \label{(4.55)}\ ] ] and hence where , by exploiting the partial sum of the geometric series , we find . \label{(4.57)}\ ] ] since the operator is built from the inverses of differential operators , it is a pseudo - differential operator , and it remains to be seen whether , for sufficiently large values of , it only contributes to the terms in the parametrix ( 4.4 ) , so that we only need the limit .\label{(4.58)}\ ] ] the adomian method we have used is well suited to go beyond weak nonlinearity and small perturbations , but of course the nontrivial technical problem is whether the series for the unknown function is convergent , and also how fast . if it were necessary to consider hundreds of terms , the algorithm would be of little practical utility .an interesting alternative , which can not be ruled out at present , is instead the existence of an asymptotic expansion of involving only finitely many terms , whose rigorous theory is described in a monograph by dieudonn . in such a case we might write -ga_{1}[f_{0},f_{1}=kf_{0}-ga_{0}[f_{0 } ] ] , \label{(4.59)}\ ] ] which is fully computable by virtue of eqs .( 4.45)-(4.47 ) and ( 4.53)-(4.55 ) .we find it therefore encouraging that an exact solution algorithm has been obtained for the scalar parametrix in kasner spacetime .last , but not least , eqs .( 4.19 ) for the gradient of phase functions can be integrated to find bearing in mind that and eq .( 4.23 ) , while the functions may be fixed by demanding consistency with eq . ( 4.5 ) .this method leads to the following formulas for the complete evaluation of phase functions : where denotes the triplet deprived of the -th coordinate , and no summation over is performed on the right - hand side .the work in ref . succeeded in the difficult task of setting up a solution algorithm for defining and solving self - dual gravity field equations to first order in the noncommutativity matrix .however , precisely the first building block , i.e. the task of solving the scalar field equation in a classical self - dual background was only briefly described .this incompleteness has been taken care of in the present paper for the case of kasner spacetime , first with a particular choice of kasner parameters : . the physics - oriented literature had devoted efforts to evaluating quantum propagators for a massive scalar field in the kasner universe , but the relevance for the classical wave equation of the mathematical work in refs . had not been appreciated , to the best of our knowledge . 
as far as we know , our original results in secs .iii and iv are substantially new .we have indeed evaluated the bicharacteristics of kasner spacetime in terms of elliptic integrals of first , second and third kind , while the nonlinear system for obtaining amplitude and phase functions in the scalar parametrix has been first mapped into eqs .( 4.18)-(4.20 ) , a set of equations that holds in any curved spacetime .furthermore , the nonlinear equation ( 4.20 ) has been mapped into eq .( 4.31 ) , and the latter has been solved with the help of the adomian method , arriving at eqs .( 4.53)-(4.59 ) .there is however still a lot of work to do , because the proof that the asymptotic expansion of is of the poincar type or , instead , only involves finitely many terms , might require new insight from asymptotic and functional analysis .this adds evidence in favour of noncommutative gravity needing the whole apparatus of classical mathematical physics for a proper solution of its field equations ( see also the work in ref . , where noether - symmetry methods have been used to evaluate the potential term for a wave - type operator in bianchi i spacetime ) .g. e. and e. d. g. are grateful to the dipartimento di fisica of federico ii university , naples , for hospitality and support .as we know from sec . v , eq .( 1.3 ) is a particular case of the wave equation ( 3.1 ) . the operator in eq .( 3.1 ) is an example of what is called , in the mathematical literature , a fuchsian hyperbolic operator with weight with respect to . in general , the weight is , and such fuchsian hyperbolic operators read as ( hereafter \times { \bf r}^{n} ] , there exists a unique solution \times { \bf r}^{n}) ] , and such that for the operator in eq . ( 3.1 ) one finds indeed thus , bearing in mind that , when the kasner parameters are all nonvanishing , one of them is negative and the other two are positive , one obtains ( on defining for all ) and hence condition ( a5 ) of the general theory is fulfilled .this is also the case of the operator in eq .( 1.3 ) , for which which implies that in other words , the hyperbolic equation studied in our paper can always rely upon the tahara theorem on the cauchy problem .if instead we resort to the garabedian technique of integration in the complex domain , strictly speaking , we need to assume analytic coefficients , which is not fulfilled , for example , by in ( 2.19 ) if we replace by a complex and want to consider also the value . however , ref . describes the way out of this nontrivial technical difficulty . for this purpose, one considers first a more complicated , inhomogeneous equation =f \label{(a10)}\ ] ] with analytic coefficients and analytic right - hand side , from which one can write down a direct analogue of the solution ( 2.20 ) in the form + \int_{d}(pf -u m[{\cal p}]){\rm d}\tau \wedge { \rm d}y^{1 } \wedge { \rm d } y^{2 } \wedge { \rm d } y^{3 } \right ] , \label{(a11)}\ ] ] where is called a_ parametrix _( i.e. a distribution that provides an approximate inverse ) and is given by in terms of the world function of appendix b. 
the notation means that the manifold of integration is supposed to approach the real domain in such a way that it folds around the characteristic conoid without intersecting it .equation ( a11 ) defines a volterra integral equation for the solution of the cauchy problem .it follows that varies continuously with the derivatives of the coefficients of eq .similarly , the second partial derivatives of depend continuously on the derivatives of the coefficients of a high enough order .thus , when they are _ no longer analytic , we may replace these coefficients by polynomials approximating an appropriate set of their derivatives _ in order to establish the validity of ( a11 ) in the general case by passage to the limit .note also that the integral equation ( a11 ) has a meaning in the real domain even where the partial differential equation ( a10 ) is not analytic , since the construction of the parametrix and of the world function only requires differentiability of the coefficients of a sufficient order .more precisely , for coefficients possessing partial derivatives of all orders , we introduce a polynomial approximation that includes enough of these derivatives to ensure that the solution of the corresponding approximate equation ( a11 ) converges together with its second derivatives .the limit has therefore to be a solution of the cauchy problem associated with the more general coefficients , and must itself satisfy the volterra integral equation ( a11 ) .in his analysis of partial differential equations , hadamard discovered the importance of the _ world function _ , which can be defined as the square of the geodesic distance between two points with respect to the metric in the analysis of second - order linear partial differential equations =\left[\sum_{i , j=1}^{n}a^{ij}{\partial^{2}\over \partial x^{i } \partial x^{j } } + \sum_{i=1}^{n}b^{i}{\partial \over \partial x^{i}}+c \right]u=0 , \label{(b2)}\ ] ] the first - order nonlinear partial differential equation ( cf .( 3.5 ) ) for the world function reads as where the coefficients are the same as those occurring in the definition of the operator ( this is naturally the case because the wave or laplace operator can be always defined through the metric , whose signature determines the hyperbolic or elliptic nature of the operator , as we stressed in sec .the world function can be used provided that the points and are so close to each other that no caustics occur . a _ fundamental solution _ of eq .( b2 ) is a distribution , and can be defined to be a solution of that equation in its dependence on possessing , at the parameter point , a singularity characterized by the representation where are supposed to be regular functions of in a neighbourhood of , with at , and where the exponent depends on the spacetime dimension according to .the sources of nonvanishing are either a mass term in the operator or a nonvanishing spacetime curvature .the term plays an important role in the evaluation of the integral ( 2.20 ) , as is stressed in sec .6.4 of ref . . in kasner spacetime, the hadamard green function ( b4 ) has been evaluated explicitly only with the special choice of parameters in ref . . 
in that case , direct integration of the geodesic equation ( appendix c ) yields eventually an exact formula for the hadamard - ruse - synge world function in the form having defined following our remarks at the end of sec .ii , we expect that the choice of kasner parameters made in sec .ii would still lead to a formula like ( b5 ) for the world function , but with however , as far as we know , the extension of these formulas to generic values of kasner parameters is an open problem .the calculation in ref . is so enlightening and relevant for our purposes that it deserves a brief summary .to begin , the geodesic equation in a kasner spacetime with metric is the following coupled system of nonlinear differential equations : where is the affine parameter of the geodesic. equation ( c3 ) can be solved for , because it yields which implies having denoted by three integration constants .the constancy along the geodesic of the ( pseudo-)norm squared , where , yields ( being negative ( resp .positive ) for timelike ( resp .spacelike ) geodesics ) from which we obtain on the other hand , the world function is the square of the geodesic distance between the points and , say , i.e. moreover , following ref . , one defines upon considering the particular choice , and defining these formulae make it possible to re - express the integration constants in the form where .we can now square up the product from ( c12 ) , finding eventually on the other hand , the geodesic distance in ( c8 ) becomes in our case , \label{(c14)}\ ] ] and if we square it up and then exploit ( c13 ) we obtain because the terms involving products of square roots cancel each other . at this stage, we can re - express the squares of and from ( c11 ) , i.e. by virtue of ( c15 ) and ( c16 ) , we find eventually the result ( b5 ) , where the role played by in the formulas has been made explicit .s. kleinerman , geom .special volume gafa2000 , 1 ( 2000 ) .s. kleinerman , int . j. mod .d * 22 * , 1330012 ( 2013 ) .f. treves , _ introduction to pseudodifferential and fourier integral operators .volume 2 : fourier integral operators _ ( plenum press , new york , 1980 ) .j. szeftel ,arxiv:1204.1769 [ math.ap ] . g. esposito , _ dirac operators and spectral geometry _, cambridge lecture notes in physics vol .* 12 * ( cambridge university press , cambridge , 1998 ) .e. di grezia , g. esposito , and p. vitale , phys .d * 89 * , 064039 ( 2014 ) ; phys .d * 90 * , 129901 ( 2014 ) .e. t. whittaker and g. n. watson , _ modern analysis _ ( cambridge university press , cambridge , 1927 ) .j. d. jackson , _ classical electrodynamics _ ( wiley , new york , 1999 ) .p. r. garabedian , _ partial differential equations _( chelsea , new york , 1964 ) .o. a. oleinik , comm .pure appl . math .* 23 * , 569 ( 1970 ) .a. menikoff , amer .j. math . * 97 * , 548 ( 1975 ) .f. g. friedlander , _ the wave equation in curved space - time _ ( cambridge university press , cambridge , 1975 ) .l. vitagliano , int . j. geom .11 * , 1460039 ( 2014 ) . y.choquet - bruhat , in _ battelle rencontres _ , edited by c. m. dewitt and j. a. wheeler( benjamin , new york , 1968 ) .t. levi civita , _ caratteristiche dei sistemi differenziali e propagazione ondosa _( zanichelli , bologna , 1931 ) . h. nariai , nuovo cimento b * 35 * , 259 ( 1976 ) . v. p. ermakov , univ .kiev , series * iii * 9 , 1 ( 1880 ) .e. pinney , proc .* 1 * , 681 ( 1950 ) .h. r. lewis and w. b. riesenfeld , j. math .* 10 * , 1458 ( 1969 ) .g. adomian , j. math .appl . * 135 * , 501 ( 1988 ) .h. 
poincar , acta math .* 8 * , 295 ( 1886 ) .j. dieudonn , _ calcul infinitesimal _( hermann , paris , 1980 ) .h. tahara , proc .japan acad .a * 54 * , 92 ( 1978 ) .p. r. garabedian , j. math . mec . * 9 * , 241 ( 1960 ) .b. s. dewitt , phys .lett . * 4 * , 317 ( 1960 ) .a. paliathanasis , m. tsamparlis , and m. t. mustafa , int . j. geom .* 12 * , 1550033 ( 2015 ) , arxiv:1411.0398 [ math - ph ] .j. hadamard , _ lectures on cauchy s problem in linear partial differential equations _( dover , new york , 1952 ) .h. s. ruse , proc .lond . math .32 * , 87 ( 1931 ) .j. l. synge , proc .soc . * 32 * , 241 ( 1931 ) .b. s. dewitt , _ dynamical theory of groups and fields _ ( gordon & breach , new york , 1965 ). g. bimonte , e. calloni , l. di fiore , g. esposito , l. milano , and l. rosa , class .quantum grav .* 21 * , 647 ( 2004 ) . | the scalar wave equation in kasner spacetime is solved , first for a particular choice of kasner parameters , by relating the integrand in the wave packet to the bessel functions . an alternative integral representation is also displayed , which relies upon the method of integration in the complex domain for the solution of hyperbolic equations with variable coefficients . in order to study the propagation of wave fronts , we integrate the equations of bicharacteristics which are null geodesics , and we are able to express them , for the first time in the literature , with the help of elliptic integrals for another choice of kasner parameters . for generic values of the three kasner parameters , the solution of the cauchy problem is built through a pair of integral operators , where the amplitude and phase functions in the integrand solve a coupled system of partial differential equations . the first is the so - called transport equation , whereas the second is a nonlinear equation that reduces to the eikonal equation if the amplitude is a slowly varying function . remarkably , the analysis of such a coupled system is proved to be equivalent to building first an auxiliary covariant vector having vanishing divergence , while all nonlinearities are mapped into solving a covariant generalization of the ermakov - pinney equation for the amplitude function . last , from a linear set of equations for the gradient of the phase one recovers the phase itself . this is the parametrix construction that relies upon fourier - maslov integral operators , but with a novel perspective on the nonlinearities in the dispersion relation . furthermore , the adomian method for nonlinear partial differential equations is applied to generate a recursive scheme for the evaluation of the amplitude function in the parametrix . the resulting formulas can be used to build self - dual solutions to the field equations of noncommutative gravity , as has been shown in the recent literature . |
in classical mechanics , lagrangian and hamiltonian formulations are completely the same description of a dynamical system . usually more attention to the hamiltonian formulation is paid because it has properties of a canonical system . in post - newtonian( pn ) mechanics of general relativity , the two formulations are still adopted . are they completely equivalent ?ten years ago two independent groups [ 1,2 ] answered this question .they proved the complete physical equivalence of the third - order post - newtonian ( 3pn ) arnowitt - deser - misner ( adm ) coordinate hamiltonian approach to and the 3pn harmonic coordinate lagrangian approach to the dynamics of spinless compact binaries .this result was recently extended to the inclusion of the next - to - next - to - leading order ( 4pn ) spin - spin coupling [ 3 ] . however , there are two different claims on the chaotic behavior of compact binaries with one body spinning and spin effects restricted to spin - orbit ( 1.5pn ) coupling .that is , the 2pn harmonic coordinate lagrangian dynamics allow the onset of chaos [ 4 ] , but the 2pn adm hamiltonian dynamics are integrable , regular and non - chaotic [ 5,6 ] .an explanation to the opposite results was given in [ 7 ] .in fact , the 2pn hamiltonian and lagrangian formulations are not exactly equal but are only approximately related . as its detailed account , the equations of motion for the lagrangian formulation use lower - order terms as approximations to higher - order acceleration terms in the euler - lagrange equations , while these approximations do not occur in the equations of motion for the hamiltonian formulation . it is natural that the lagrangian has approximate constants of motion but the hamiltonian contains exact ones .these facts were regarded as the essential point for the two formulations having different dynamics . in this sense , the two claims that seem to be explicitly conflicting were thought to be correct .recently , the authors of [ 8 ] revisited the equivalence between the hamiltonian and lagrangian formulations at pn approximations .they found that the two formulations at the same pn order are nonequivalent in general and have differences .three simple examples of pn lagrangian formulations , including a relativistic restricted three - body problem with the 1pn contribution from the circular motion of two primary objects , a spinning compact binary system with the newtonian term and the leading - order spin - orbit coupling [ 8 ] and a binary system of the newtonian term and the leading - order spin - orbit and spin - spin couplings [ 9 ] , were used to show that the differences are not mainly due to the lagrangian having the approximate euler - lagrange equations and the approximate constants of motion but come from truncation of higher - order pn terms between the two formulations transformed . 
an important result from the logicis that an equivalent hamiltonian of a lower - order lagrangian is usually at an infinite order from a theoretical point of view or at a higher enough order from numerical computations .based on this , the integrability or non - integrability of the lagrangian can be known by that of the hamiltonian .more recently , chaos in comparable mass compact binary systems with one body spinning was completely ruled out [ 10 ] .the reason is that a completely canonical higher - order hamiltonian , which is equivalent to a lower - order conservative lagrangian and holds four integrals of the total energy and the total angular momentum in an eight - dimensional phase space , is typically integrable [ 11 ] .this result is useful to clarify the doubt on the absence of chaos in the 2pn adm hamiltonian approach [ 5,6 ] and the presence of chaos in the 2pn harmonic coordinate lagrangian formulation [ 4 ] . as a point to illustrate ,two other doubts about different chaotic indicators resulting in different dynamical behaviours of spinning compact binaries among references [ 12 - 15 ] and different descriptions of chaotic parameter spaces and chaotic regions between two articles [ 4,16 ] have been clarified in [ 17 - 19 ] .it is worth noting that the logic result on the equivalence of the pn hamiltonian and lagrangian approaches at different orders is not easy to check because the exactly equivalent hamiltonian of the lagrangian is generally expressed as an infinite series whose convergence is unknown clearly in most cases . to provide enough evidence for supporting this result , we select a part of the 1pn lagrangian formulation of relativistic circular restricted three - body problem [ 20 ] , where the euler - lagrange equations can be described by a converged taylor series and the equivalent hamiltoniancan also be written as another converged taylor series . for our purpose ,the hamiltonian is derived from the lagrangian in sect .2 . then in sect .3 numerical methods are used to evaluate whether various pn order hamiltonians and the 1pn lagrangian with various pn order euler - lagrange equations are equivalent . finally , the main results are concluded in sect .as in classical mechanics , a lagrangian formulation and its hamiltonian formulation satisfy the legendre transformation in pn mechanics .this transformation is written as here and are coordinate and velocity , respectively .canonical momentum is taking a special pn circular restricted three - body problem as an example , now we derive the hamiltonian from the lagrangian in detail .the circular restricted three - body problem means the motion of a third body ( i.e. a small particle of negligible mass ) moving around two masses and ( ) .the two masses move in circular , coplanar orbits about their common center of mass , and have a constant separation and the same angular velocity .they exert a gravitational force on the particle but the third body does not affect the motion of the two massive bodies . taking the unit of mass , we have the two masses and . the unit of length requires that the constant separation of the two bodies should be unity . the common mean motion , the newtonian angular velocity , of the two primaries is also unity . 
in these unit systems ,the two bodies are stationary at points and with and in the rotating reference frame .state variables of the third body satisfy the following lagrangian formulation .\end{aligned}\ ] ] in the above equations , the related notations are specified as follows . is of the form where the distances from body 3 to bodies 1 and 2 are stands for the newtonian circular restricted three - body problem . is a 1pn contribution due to the relativistic effect to the circular motions of the two primaries . is also a 1pn contribution from the relativistic effect to the third body , and is only a part of that in [ 20 ] for our purpose . is the 1pn effect with respect to the angular velocity of the primaries and is given by in fact , the separation is a mark of and as the 1pn effects when the velocity of light , , is taken as one geometric unit in later numerical computations . the lagrangian ( 3 ) is a function of velocities and coordinates , therefore , its equations of motion are the ordinary euler - lagrange equations : since the momenta and of the forms are linear functions of velocities and , accelerations can be solved exactly from eq .they have detailed expressions : the newtonian terms and and the 1pn terms and are , \\y_1 & = & 2\omega_1(y-\dot{x})+\frac{u_y}{u}l_2+\frac{3}{a}[u(y-2\dot{x } ) \nonumber \\ & & -(\dot{x}u_x+\dot{y}u_y)(x+\dot{y})],\end{aligned}\ ] ] where and .considering that is at the 1pn level , eqs .( 12 ) and ( 13 ) have the taylor expansions + \frac{x_1}{c^2}[\sum\limits_{j=0}^{k-1}(-1)^{j}(\frac{\delta}{c^2})^{j}],\\ \ddot{y } & \approx & y_0[\sum\limits_{i=0}^{k}(-1)^{i}(\frac{\delta}{c^2})^{i } ] + \frac{y_1}{c^2}[\sum\limits_{j=0}^{k-1}(-1)^{j}(\frac{\delta}{c^2})^{j}].\end{aligned}\ ] ] they are the euler - lagrange equations with pn approximations to an order , labeled as . as a point to illustrate ,the case of with corresponds to the newtonian euler - lagrange equations , marked as . from a theoretical viewpoint , as , is strictly equivalent to given by eqs .( 12 ) and ( 13 ) , namely , .note that for the generic case in [ 8 ] , the momenta are highly nonlinear functions of velocities , so no exact equations of motion similar to eqs .( 12 ) and ( 13 ) but approximate equations of motion can be obtained from the euler - lagrange equations ( 9 ) .this means that we do not know what the pn approximations like eqs .( 18 ) and ( 19 ) are converged as .the velocities and obtained from eqs .( 10 ) and ( 11 ) are expressed as of course , they can be expanded to the order +(1+\frac{\omega_1}{c^2}){y},\\ \dot{y } & \approx & p_{y}[\sum\limits_{i=0}^{k}(-1)^{i}(\frac{\delta}{c^2})^{i}]-(1+\frac{\omega_1}{c^2}){x}.\end{aligned}\ ] ] as mentioned above , eqs .( 22 ) and ( 23 ) are exactly identical to eqs .( 20 ) and ( 21 ) when . in light of eqs .( 1 ) , ( 20 ) and ( 21 ) , we have the following hamiltonian its taylor series at the order is of the form it is clear that with is the newtonian hamiltonian formulation , and can be expressed in terms of the jacobian constant as .additionally , is closer and closer to as gets larger . 
without doubt, the exact equivalence between and should be .of course , what is converged as is still unknown for the general case in [ 8 ] .it should be emphasized that is the order pn approximation to the euler - lagrange equations that is exactly derived from the 1pn lagrangian , and is the order pn approximation to the hamiltonian .because of the exact equivalence between and , is the order pn approximation to the hamiltonian , and is the order pn approximation to the euler - lagrange equations . additionally , and are exactly equivalent , i.e. , .however , it would be up to a certain higher enough finite order rather than up to the infinite order that the equivalence can be checked by numerical methods . see the following numerical investigations for more detailsbesides the above analytical method , a numerical method is used to estimate whether these pn approaches have constants of motion and what the accuracy of the constants is . above all, we are interested in knowing whether these pn approaches are equivalent . an eighth- and ninth - order runge kutta fehlberg algorithm of variable time - steps is used to solve each of the above euler - lagrange equations and hamiltonians .parameters and initial conditions are , , and . note that the initial positive value of is given by the jacobian constant .this orbit in the newtonian problem is a kolmogorov - arnold - moser ( kam ) torus on the poincar section with in fig .1(a ) , therefore , it is regular and non - chaotic. the integrator can give errors of the energy for the lagrangian in the magnitude of order or so .the long - term accumulation of energy errors is explicitly present in fig .1(b ) because the integration scheme itself yields an artificial excitation or damping .if this accumulation is neglected , the energy should be constant .this shows that the energy is actually an integral of the lagrangian .however , the existence of this excitation or damping does not make the numerical results unreliable during the integration time of due to such a high numerical accuracy . in this sense , not only the integrator does not necessarily use manifold correction methods [ 21 - 23 ] , but also it gives true qualitative results as a symplectic integration algorithm [ 24 - 27 ] does . when the pn terms and are included , what about the accuracy of energy integrals given by the related pn approximations ?let us answer this question .taking the separation between the primaries , , we plot fig .2(a ) in which the errors of energies of the 1pn euler - lagrange equations and hamiltonian are shown .it is worth noting that the error of energy is estimated by means of , where is regarded as the energy of at time and is the initial energy .obviously , the error for is larger in about 10 orders of magnitude than that for .this result should be very reasonable because differences between and exist explicitly but the canonical equations are exactly given by the 1pn hamiltonian , as shown in the above analytical discussions .in other words , the difference between and is at 1pn level .of course , the higher the order gets , the smaller the difference between and becomes .this is why we can see from figs .2(a ) and 2(b ) that the error of the 8pn euler - lagrange equations and hamiltonian is typically smaller than that of the 1pn euler - lagrange equations and hamiltonian . 
without doubt , and should be the same in the energy accuracy if no roundoff errors exist in fig .in addition to evaluating the accuracy of energy integrals of these pn approaches , evaluating the quality of these pn approaches to the euler - lagrange equations or the hamiltonian is also necessary from qualitative and quantitative numerical comparisons . see the following demonstrations for more information .besides the method of poincar sections , the method of lyapunov exponents is often used to detect chaos from order .it relates to the description of average exponential deviation of two nearby orbits .based on the two - particle method [ 28 ] , the largest lyapunov exponent is calculated by where and are distances between the two nearby trajectories at times 0 and , respectively . a globally stable orbit is said to be regular if but chaotic if . generally speaking, it costs a long enough time to obtain a stabilizing value of from the limit . instead , a quicker method to find chaos is a fast lyapunov indicator [ 29,30 ] , defined as the globally stable orbit is chaotic if this indicator increases exponentially with time but ordered if this indicator grows polynomially .it can be seen clearly from the poincar section of fig .3(a ) that the dynamics of or in fig .2(c ) is chaotic .this result is supported by the lyapunov exponents in figs .3(b ) and 3(c ) and the flis in fig .3(d ) and 3(e ) . what about the dynamics of these various pn approximations ?the key to this question can be found in figs .3(b)-3(e ) . hereare the related details . as shown in fig .3(b ) , lower order pn approximations to the euler - lagrange equations , such as the 1pn euler - lagrange equations and the 4pn euler - lagrange equations , are so poorer that their dynamics are regular , and are completely unlike the chaotic dynamics of . with increase of the pn order , higher order pn approximations to the euler - lagrange equations become better and better .for example , the 8pn euler - lagrange equations allows the onset of chaos , as does .seen particularly from the evolution curve on the lyapunov exponent and time , the 12pn euler - lagrange equations seems to be very closer to .these results are also suitable for the pn hamiltonian approximations to the hamiltonian in fig .when the lyapunov exponents in figs .3(b ) and 3(c ) are replaced with the flis in figs .3(d ) and 3(e ) , similar results can be given .when the separation is instead of in fig .3(a ) , an ordered kam torus occurs .that means that the dynamics is regular and non - chaotic . in figs 3(f)-3(i ) , lower order pn approximations such as ( or ) have chaotic behaviors , but higher order pn approximations such as ( or ) have regular behaviors . in short , the above numerical simulations seem to tell us that the euler - lagrange equations ( or the hamiltonian approaches ) at higher enough pn orders have the same dynamics as the euler - lagrange equations ( or the hamiltonian ) .there is a question of whether these results depend on the separation . to answer it, we fix the above - mentioned orbit but let begin at 10 and end at 250 in increments of 1 . for each given value of ,the fli is obtained after integration time . 
in this way, we have dependence of flis on the separations in several pn lagrangian and hamiltonian approaches , plotted in fig .5.5 is referred as a threshold value of fli for distinguishing between the regular and chaotic cases at this time .that is to say , an orbit is chaotic when its fli is larger than threshold but ordered when its fli is smaller than threshold . in light of this, we do not find that there are dramatic dynamical differences between the euler - lagrange equations ( or the hamiltonian ) and the various pn approximations such as the 1pn hamiltonian and the 1pn euler - lagrange equations .however , it is clearly shown in table 1 that regular and chaotic domains of smaller separations in the lowest pn approaches and are explicitly different from those in or . as claimed above, this result is of course expected .when the order gets higher and higher , and have smaller and smaller dynamical differences compared with or .two points are worth noting .first , the same order pn approaches like and ( but unlike and ) are incompletely equivalent in the dynamical behaviors for smaller values of .second , all the pn approaches , , , , , and can still have the same dynamics when is larger enough .the two points are due to the differences among these approaches from the relativistic effects depending on ; smaller values of result in larger relativistic effects but larger values of lead to smaller relativistic effects .now we are interested in quantitative studies on the various pn approximations to the hamiltonian and the various pn approximations to the euler - lagrange equations . in other words , we want to know how the deviation between the position coordinate for ( or ) and the position coordinate for ( or ) varies with time . to provide some insight into the rule on the deviation with time, we should consider the regular dynamics in various pn approximations because the chaotic case gives rise to exponentially sensitive dependence on initial conditions .for the sake of this purpose , the parameters and initial conditions unlike the aforementioned ones are , and .when is given in fig .5(a ) , the curve is used to estimate the accuracy of numerical solutions between and , which begins in about the magnitude of and is in about the magnitude of at time .the difference numerical solutions between and is rather large . with increase of , is soon closer to .for instance , is basically consistent with after time , and is almost the same as .similarly , this rule is suitable for the approximations to the euler - lagrange equations in fig .after the integration time reaches 10000 for each $ ] in figs .5(c ) and 5(d ) , appropriately larger separation and higher enough order are present such that and are identical to or . in a word ,it can be seen clearly from fig .5 that and are equivalent as is sufficiently large .in general , pn lagrangian and hamiltonian formulations at the same order are nonequivalent due to higher order terms truncated .a lower order lagrangian is possibly identical to a higher enough order hamiltonian .it is difficult to check this equivalence because the euler - lagrange equations are not exactly but approximately derived from the lagrangian . to cope with this difficulty , we take a simple relativistic circular restricted three - body problem as an example and investigate the equivalence of pn lagrangian and hamiltonian formulations . 
this dynamical problem is described by a 1pn lagrangian formulation , in which the euler - lagrange equations not only are exactly given but also can be expressed as a converged infinite pn order taylor series .the lagrangian has an exactly equivalent hamiltonian , expanded to another converged infinite pn order taylor series .numerical results support the equivalence of the 1pn lagrangian with the euler - lagrange equations at a certain specific higher order and the pn hamiltonian approach to a higher enough order . in this way, we support indirectly the general result of [ 8,10 ] that a lower order lagrangian approach with the euler - lagrange equations at some sufficiently higher order can be equivalent to a higher enough order hamiltonian approach . | it was claimed recently that a low order post - newtonian ( pn ) lagrangian formulation , which corresponds to the euler - lagrange equations up to an infinite pn order , can be identical to a pn hamiltonian formulation at the infinite order from a theoretical point of view . this result is difficult to check because in most cases one does not know what both the euler - lagrange equations and the equivalent hamiltonian are at the infinite order . however , no difficulty exists for a special 1pn lagrangian formulation of relativistic circular restricted three - body problem , where both the euler - lagrange equations and the equivalent hamiltonian not only are expanded to all pn orders but also have converged functions . consequently , the analytical evidence supports this claim . as far as numerical evidences are concerned , the hamiltonian equivalent to the euler - lagrange equations for the lower order lagrangian requires that they both be only at higher enough finite orders . |
during the last two , three decades , the field of crowd dynamics has emerged as the _ natural _sciences reaction to questions arising from _ social _ sciences , population biology and urban planning .see e.g. for an example of a problem addressed in psychology , or for an illustration of the civil engineering aspects .the roots and philosophy of crowd dynamics are very much in the spirit of statistical mechanics , molecular dynamics , interacting particle systems methods and the theory of granular matter , as such treating individual humans nearly as non - living material ( cf .e.g. the nice overview and references cited therein ) .a justification for this approach lies in the fact that the individuals personal will is more or less averaged out if one looks at the crowd as a whole . from this perspective, it can be considered as ( stochastic ) noise , superimposed on some ` clean ' ( deterministic ) dynamics .+ to illustrate the thin borderlines between several fields of study , the reader is referred to e.g. for the dynamics of non - living particles , for studies of tumbling or self - propelled living particles ( like bacteria ) , or for crowd dynamics .although these fields all focus their own specific real - world scenario , their way of thinking and posed questions are very much alike . + + however , an evident and important difference between people and molecules or grains ( apart from people s own opinions , irritations etc . )is the fact that people clearly have front and back sides .our degree of perceiving our surroundings highly depends on the direction of looking .we mainly base our walking behaviour on what we see , and clearly what happens in front of us thus has more influence than what happens behind us ( this statement is also supported e.g. by ) .a modification or extension of physics - inspired models is needed to incorporate this kind of anisotropy in the interactions between individuals .this paper investigates the effect of anisotropy on the global behaviour of a group of pedestrians .+ + our focus is on the simulation of a scenario where pedestrians move in a long corridor .we might relate this situation to evacuation of people from a building ( cf .it is sane to assume that these evacuees have an intrinsic _ drive _ to move towards the exit ( i.e. one side of the corridor ) , and moreover that there view is focused in the same direction .investigating the effect of anisotropy on the large - scale behaviour of the crowd therefore relates to assessing the escape process .+ + in section [ sec : level1 ] of this paper the model is presented and explained .section [ sec : sim setup and results ] is the main part of the paper .it describes the exact scenario of our simulations and the definitions of the quantities we use for assessing the results ( polarization index , projected density , morisita index ) .moreover , in this section the simulation results are presented and discussed .conclusions and an outlook on possible future work are given in section [ sec : conclusion ] .we represent pedestrians by point particles having masses .they are located in a long corridor of length and width . 
here, the word ` long ' refers to the fact that at the time scales we focus on , the pedestrians are not able to reach the end of the corridor .interactions between pedestrians are short - ranged .we therefore suppose that the correlation length in the system is less than or equal to a certain and we can subdivide the corridor in an array of rectangles ( width and length ) , which are all duplicates of each other .we thus have a scenario with periodic boundary conditions .our domain of interest is therefore a rectangular box \times [ -\frac{b}{2 } , \frac{b}{2 } ] , \ ] ] with periodic boundary conditions in one direction and impermeable walls in the other direction .the corridor contains pedestrians .for all and , the vector represents the position of the -th pedestrian at time .we denote its velocity by .+ + we assume that the governing equation of motion is the equation describes the motion of the -th individual , which has mass and which moves with velocity . however , he / she _ tries _ to move according to its desired velocity . here, is the characteristic relaxation time related to attaining the desired velocity .its actual velocity is moreover perturbed by two ` forces ' .the word ` force ' is used since ( [ newton_overdamped ] ) can be regarded as an overdamped limit of a newton - like equation ( cf . for this newton - like way of modelling ) .+ one could argue whether the social force is the right concept to use to drive the pedestrians , or maybe ideas like social pressure ( as in a darcy - like law ) or cognitive - based heuristics ( see e.g. ) are more appropriate . herewe avoid any polemic by deciding to choose a framework based on social forces and leave for later any further developments of other possible approaches .+ + there is a physical force that acts on the individual to describe the effect of the non - living environment ( geometry ) . in this paperwe only take into account the influence of walls on pedestrians , that is : = .furthermore , pedestrian experiences a so - called social force due to the presence of other individuals , which influences the motion of this particular pedestrian .+ + individuals are influenced by the walls as soon as they come too close , i.e. within a distance .we model these impermeable walls by means of a strong repulsive force acting on pedestrian : here , is the unit normal pointing from the corresponding wall into the corridor , is the strength of the repulsive force and is the distance to the wall for pedestrian .the word ` strong ' here implies that this force is not just a contact force , but has a longer range .typically , this makes individuals avoid walls before touching them . for small is the same as in the interactions between individuals ( cf .( [ fsoc])([u_attr_rep ] ) ) , be it with different parameters . 
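to make the time stepping concrete, a minimal python sketch of one explicit euler step of the overdamped dynamics is given below. since the precise functional forms are those of eqs. ([newton_overdamped]) and ([fsoc])-([u_attr_rep]), the exponential wall repulsion, the damping constant gamma and all parameter values used here are illustrative assumptions of ours rather than the paper's exact expressions; the social force between pedestrians is omitted for brevity.

import numpy as np

# one explicit euler step for n pedestrians in the periodic corridor
# [-l/2, l/2] x [-w/2, w/2]; walls at y = -w/2 and y = +w/2.
# the exponential wall force and the parameters a_wall, d_wall, gamma are
# assumptions in the spirit of social-force models, not the paper's formulas.
def euler_step(x, v_des, dt, corridor_l, corridor_w,
               a_wall=10.0, d_wall=0.5, gamma=1.0):
    force = np.zeros_like(x)                        # (n, 2) array of 'forces'
    d_lower = x[:, 1] + corridor_w / 2.0            # distance to the lower wall
    d_upper = corridor_w / 2.0 - x[:, 1]            # distance to the upper wall
    force[:, 1] += a_wall * np.exp(-d_lower / d_wall)   # push away from lower wall
    force[:, 1] -= a_wall * np.exp(-d_upper / d_wall)   # push away from upper wall
    # overdamped update: actual velocity = desired velocity + perturbing forces
    x_new = x + dt * (v_des + force / gamma)
    # periodic boundary conditions along the corridor axis
    x_new[:, 0] = np.mod(x_new[:, 0] + corridor_l / 2.0, corridor_l) - corridor_l / 2.0
    return x_new

iterating this step, with the anisotropic social force added to the force array, gives the basic simulation loop used in the remainder of the paper.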
] + + furthermore , very much in the spirit of , we specify the social force by where : * is the collection of the position vectors of all individuals which are within a distance to pedestrian .in other words , pedestrians interact only when they are close enough to each other ; * we assume that the interaction potential depends only on the relative position of the two pedestrians and and not on their relative velocity .( 0,0)(0,1)node[anchor = north east] ; ( 0,0)(1,0)node[anchor = north east] ; ( 0,0)(1,2)node[anchor = north east] ; ( 0,0)(3,4)node[anchor = north east ] ; ( 1,2)(3,4 ) node[midway , sloped , above] node[anchor = north west] ; ( 1,2)(3.5,2 ) node[midway , sloped , below] ; ( 2.0 , 2 ) arc ( 0:45:1.0 ) ; at ( 2.25,2.5 ) ; specifically , takes the form , here , ] ?some preliminary comments in this direction are given in [ appendix ] .+ if increases , the natural thing to do is to consider the discrete - to - continuum limit ( i.e. construct educated procedures to derive mean - field limit equations ) .does such limit exist , can we derive it , and can we compare the effect of anisotropy in the limit to the observations of the current work ? * how much does the large time behaviour of the crowd depend on the initial conditions ?the initial distribution of pedestrians in this paper is not a realistic situation .people starting to enter a corridor are in real life never distributed in a crystalline structure . however , for an escape situation ( for example in the case of fire ) it seems reasonable to assume that a group of people starts , being clustered , at one side of a corridor .therefore , as an extension of this research , we propose to use as initial distribution a more realistic configuration in which people are placed at one side of the corridor , with their positions ( slightly ) perturbed from the grid points .averaging over a large collection of such perturbed initial distributions , will lead to effective results . are these averaged results comparable to the ones presented in this paper ?in other words : is averaging the results basically the same as removing the fluctuations from the initial conditions ?+ in the paper we have included some preliminary results ( in the ends of sections [ sect : results polarization ] and [ sect : results morisita ] ) in this direction .there we took the other extreme : random initial conditions over the whole corridor . *what happens if we try to make our model more realistic : e.g. change the shape of the domain , or allow variation in the direction and magnitude of individuals desired velocity ? including more sophisticated active parts in the boundary ( doors ) or impermeable objects within the domain , automatically leads to questions about the efficiency of the flow ( such issues are also addressed e.g. in ) .which geometry leads to the fastest evacuation ?first steps in this direction have been made in .the issues addressed in this paper show that anisotropy related to perception has nontrivial effects on the global dynamics of a crowd . certainly , these effects can not be neglected .more work , both numerically and analytically , is needed to extend and formalize our results .the authors thank c. storm , f. toschi , f. van de ven , h. wyss ( all with tu eindhoven ) and r. fetecu ( simon fraser univ .canada ) for a series of fruitful discussions on the dynamics of self - propelled particles with anisotropic motion potentials .they are also grateful to j. lega ( univ . 
of arizona usa ) for sharing her thoughts .moreover , they thank the anonymous referees for their comments and suggestions for improvements .je kindly acknowledges the financial support of the netherlands organisation for scientific research ( nwo ) , graduate programme 2010 .let us start by saying that simulation of large numbers of individuals is beyond the scope of this paper .our current implementation is inadequate for simulating system sizes one is used to in molecular dynamics . in our perspective this paper aims primarily at getting insight about what features to expect .a second stage ( and follow - up paper ) is to optimize the implementation and increase the system size .+ + looking ahead , we provide here some preliminary results for in the . in figures[ p1000 ] and [ i 1000 ] we show the time - averaged polarization and the morisita index , respectively , as a function of . compared to figures [ npar ] and [ niar ] , the graphs have been continued by incorporation of the values at . note the logarithmic scale of the horizontal axes . + as a function of the number of pedestrians .results in the , for several different values of at time .this is an extension of figure [ niar ] including the value for with logarithmic scaling on the horizontal axis . ] as a function of the number of pedestrians .results in the , for several different values of at time .this is an extension of figure [ niar ] including the value for with logarithmic scaling on the horizontal axis . ]note moreover that we have extended figures [ npar ] and [ niar ] by only one data point each .the linear interpolation between the values at and is therefore probably not very meaningful .what we are interested in , is the general trend .+ + for the polarization , the ordering as a function of remains as we observed it .what requires more investigation is the increasing trend : that is , increasing for increasing . in figure [ npar ]the graphs seem to stop growing as goes towards .an issue here might be that the time interval of 100 s is simply too short for larger . + + in the morisita plots we see a downward trend .we see that the curve for remains beneath the other two , also for .we can thus regard the absence of ordering of the curves for smaller than as an exception . +a remark needs to be made about the fact that , at , the morisita index for is smaller than for .the values are for , for and for .the ordering of the curves therefore seems to be lost here also , be it that the difference is small compared to the magnitude of .99 u. ascher and l. r. petzold , computer methods for ordinary differential equations and differential - algebraic equations , philadephia , siam , 1988 . | we consider a microscopic model ( a system of self - propelled particles ) to study the behaviour of a large group of pedestrians walking in a corridor . our point of interest is the effect of anisotropic interactions on the global behaviour of the crowd . the anisotropy we have in mind reflects the fact that people do not perceive ( i.e. see , hear , feel or smell ) their environment equally well in all directions . the dynamics of the individuals in our model follow from a system of newton - like equations in the overdamped limit . the instantaneous velocity is modelled in such a way that it accounts for the angle under which an individual perceives another individual . + we investigate the effects of this perception anisotropy by means of simulations , very much in the spirit of molecular dynamics . 
we define a number of characteristic quantifiers ( including the polarization index and morisita index ) that serve as measures for e.g. organization and clustering , and we use these indices to investigate the influence of anisotropy on the global behaviour of the crowd . the goal of the paper is to investigate the potentiality of this model ; extensive statistical analysis of simulation data , or reproducing any specific real - life situation are beyond its scope . _ keywords _ : traffic and crowd dynamics , interacting agent models , self - propelled particles , pattern formation ( theory ) |
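for reference, the two quantifiers used above to assess the corridor simulations can be evaluated with a few lines of python. the definitions below are the standard ones, the polarization as the modulus of the mean unit velocity and the morisita index over a partition of the corridor into quadrats, and are assumed here to coincide with the expressions given in the paper's section on simulation setup.

import numpy as np

def polarization(v):
    # v: (n, 2) array of velocities; returns a value in [0, 1],
    # 1 meaning all pedestrians move in the same direction
    speed = np.linalg.norm(v, axis=1, keepdims=True)
    unit = v / np.where(speed > 0, speed, 1.0)
    return float(np.linalg.norm(unit.mean(axis=0)))

def morisita_index(x, corridor_l, corridor_w, nx=10, ny=5):
    # x: (n, 2) array of positions; nx * ny quadrats covering the domain
    counts, _, _ = np.histogram2d(
        x[:, 0], x[:, 1], bins=[nx, ny],
        range=[[-corridor_l / 2, corridor_l / 2], [-corridor_w / 2, corridor_w / 2]])
    n = counts.ravel()
    total = n.sum()
    q = n.size
    # values near 1 indicate a random (poisson-like) distribution,
    # larger values indicate clustering
    return float(q * np.sum(n * (n - 1)) / (total * (total - 1)))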
molecular reaction dynamics studies aim at understanding chemical reactions and inelastic collisions at the atomic scale . in other words ,this field of research draws much of the conceptual framework in which chemical reactivity , in a broad sense , can be thought . quantum state - resolved integral and differential cross sections ( icss and dcss ) , measured in supersonic molecular beam experiments , are among the most fundamental observables of molecular reaction dynamics .this paper deals with their classical mechanical description in a semi - classical spirit .most processes considered up to now involve three or four atoms , on purpose .this allows both measurements at an amazing level of detail and accurate theoretical descriptions of the observables from first principles .additionally , planetary atmospheres and interstellar clouds are mainly made of small species which dynamics should be understood . nowadays , however , much of molecular science is polarized on larger systems , like nano - objects or molecules of biological interest , and the natural trend in molecular reaction dynamics is also to move towards increasing complexity .more and more polyatomic processes are thus under scrutiny .state - of - the - art descriptions of state - resolved icss and dcss are in principle performed within the framework of exact quantum scattering approaches ( eqs ) . however , despite the impressive progress of computer performance achieved in the last decades , these approaches can hardly be applied to larger than three or four - atom systems as the basis sizes necessary for converging the calculations turn prohibitive .a popular alternative is the quasi - classical trajectory method ( qctm ) .this approach is intuitive , relatively easy to implement , much less time consuming than eqs approaches and therefore , quite appealing for studying polyatomic processes .the price to pay is obviously a loss in accuracy as compared to eqs approaches .nevertheless , significant advances have been made in the last few years through the replacement of the standard binning ( sb ) procedure by the gaussian weighting ( gw ) one . in the sb method , each trajectory has the same statistical weight . on the other hand ,the gw procedure consists in weighting each trajectory by a gaussian - like coefficient such that the closer the final actions to integer values , the larger the coefficient .this procedure proves to be especially efficient when few vibrational levels are available in the final products .though initially proposed on the basis of rather intuitive arguments , the gw procedure can be shown to find its roots in classical matrix theory , the former semi - classical approach of molecular collisions pioneered by miller and marcus in the early seventies .central quantities of chemical reaction theory are ( 1 ) the state - to - state reaction probabilities , where and are reagent and product quantum states , ( 2 ) the densities , where is the scattering angle or any given angle of the problem and ( 3 ) the capture probabilities for processes involving long - lived intermediate complexes . from these quantities , any state - resolved ics and dcs can be determined . to calculate the previous probabilities ( or density of ) , one must generate classical dynamical conditions corresponding to quantum state .such a generation is readily performed in angle - action coordinates as these are in close correspondence with quantum numbers . 
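the difference between the two binning procedures can be made explicit with a short python sketch. the gaussian width eps below is a user-chosen parameter (values of order 0.05 are common in the literature) and is an assumption on our part rather than a value prescribed by the text.

import numpy as np

def standard_binning(actions):
    # every trajectory contributes with the same unit weight to its nearest level
    levels = np.rint(actions).astype(int)
    return levels, np.ones_like(actions)

def gaussian_weighting(actions, eps=0.05):
    # trajectories whose final action is close to an integer receive a large
    # weight; trajectories far from an integer are strongly suppressed
    levels = np.rint(actions).astype(int)
    delta = actions - levels
    weights = np.exp(-(delta / eps) ** 2) / (eps * np.sqrt(np.pi))
    return levels, weights

state-resolved probabilities are then obtained by summing the weights of the trajectories assigned to a given level and normalizing by the total weight.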
on the other hand ,angle - action variables should not be used to run trajectories as contrary to cartesian coordinates , they lead to strong numerical instabilities . the transformation from angle - action variables to cartesiancoordinates is therefore a crucial step of qctm . for atom - diatom ( semi ) collisions , this transformation can be found in the book by whittaker and in a paper by miller .however , we have not been able to find in the literature the analogous transformation for a generic type of collision .the goal of the paper is to thus to derive it .in this work , a prototype system is presented , namely a five - atom molecule made of a triatomic ( abc ) and a diatomic ( de ) .the former can be used as a model for a non - linear polyatomic fragment while the latter constitutes a simpler case very commonly found in practice . the transformation provided herewill therefore be relevant for a generic pair of molecular fragments , _e.g. _ diatom + diatom , asymmetric top + diatom , asymmetric top + asymmetric top , etc after straightforward generalizations .these , along with the transformations in allow thus to treat any case of interest .we suppose the fragments are to be studied in the low energy regime where only the lowest vibrational states can be populated , thus the harmonic description of their vibrations is a reasonably accurate approximation .anharmonic corrections can be introduced when necessary . throughout this work ,the usual convention of boldfacing vector magnitudes is used .cartesian frames centered on a generic point p are represented as .a given vector in such a frame will be rewritten as if we refer it to instead .calligraphic letters are used for representing matrices and second - rank tensors .some standard transformations , _e.g. _ that of normal modes to cartesian coordinates , are included for completeness .finally , the two fragments , abc and de , are numbered 1 and 2 and so are their associated magnitudes . the system is schematically represented in fig .[ diag:5atom ] .three cartesian frames of reference are used : ( 1 ) the laboratory frame which origin is at the molecular center of mass g and is in uniform translation so that the total center - of - mass movement can be effectively removed , and ( 2 , 3 ) the two body - fixed , non - inertial reference frames with origins at each fragment s center of mass , denoted g and g .the cartesian coordinates to which transformation from angle - actions is made are defined as the complete set of nuclei positions , in the space , plus their conjugate momenta , with x \{a , b , , e}. the total number of such coordinates yields , of course , . of the 30 variables chosen , 8 are not angle - actions ,_ i.e. _ ( 1 , 2 ) the distance between the fragments centers of mass g and g and its conjugate momentum ; and ( 38 ) the position and momentum vectors for the molecular center of mass , g. the 22 angle - action variables are thus : 1.3 cm 1.2 cm 0.3 cm the vibrational phase of the normal mode of abc , . the vibrational action of the normal mode of abc , . the vibrational phase of de . the vibrational action of de .the modulus of the total angular momentum .the angle conjugate to . 
the algebraic value of the projection of on the laboratory axis .the angle conjugate to .the modulus of the orbital angular momentum .the angle conjugate to .the modulus of the rotational angular momentum of abc .the angle conjugate to .the modulus of the rotational angular momentum of de .the angle conjugate to .the modulus of the total rotational angular momentum .the angle conjugate to . the algebraic value of the projection of on one of the three axes of inertia of abc . the angle conjugate to .the six triatomic normal mode coordinates fully specify the three position vectors , and , in the plane of the body - fixed frame of abc . is arbitrarily made to coincide with one of the abc axes of inertia when it happens to be in its equilibrium geometry .these six normal mode coordinates also define the three momentum vectors , and , conjugate to the three previous position vectors , _i.e. _ 12 coordinates as a whole .note that these twelve cartesian coordinates are deduced from the six normal modes plus six constraints due to the fact that abc is neither in translation nor in rotation in the plane . the total angular momentum , its -component , their conjugate angles and as well as the orbital and total rotational angular momenta are represented in fig .[ diag : jlkgen ] .the unit vectors along the and axes are respectively denoted and .we wish to emphasize here that the three axes , and used at this point have nothing to do with the primed axes introduced in the previous paragraph .several primed frames will be defined in the following which will be different from each other . is the angle between and while is the angle between and ., the intermolecular jacobi vector and its conjugate momentum .,width=321 ] is represented in fig .[ diag : lpr ] together with the jacobi vector between g and g . is the angle between and .the momentum conjugate to is also depicted .like , lies in the plane orthogonal to . is represented in fig .[ diag : j1k1gen ] together with , defined as the projection of on the axis of the previously specified body - fixed frame of abc . is the angle between and . is represented in fig .[ diag : j2pr ] together with the jacobi vector between the d and e atoms . is the angle between and .the momentum conjugate to is also represented .both and lie in the plane orthogonal to . , the de interatomic jacobi vector and its conjugate momentum .,width=321 ] the link between , and is isomorphic to the one between , and , as easily seen from the comparison between fig .[ diag : kj1j2 ] and fig .[ diag : jlkgen ] . is thus the angle between and .calling the unit vector along the axis of the abc body - fixed frame , the algebraic value equals plus ( minus ) when and make an angle lower ( larger ) than . finally , is the angle between and the axis in fig .[ diag : j1k1gen ] .the algorithm for computing initial conditions from the title transformation will vary slightly according to the specific application ( _ e.g. _ unimolecular dissociation , bimolecular collision ) and/or the experimental conditions to be reflected .the transformations , however , are intrinsically general so we assume in what follows that all angle - action variables , as well as and , are either known or can be computed by the time they are referred to during the process .the transformation can be decomposed in 11 steps , each making the subject of one of the following sections .it is important to note that the ordering given here is somewhat arbitrary and need for reordering may arise in specific applications . 
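before going through the individual steps, the elementary operation used repeatedly below, converting a vibrational (action, phase) pair into a coordinate and its conjugate momentum within the harmonic approximation, can be sketched as follows; hbar is set to one for illustration, and the (v + 1/2) convention for the action is the one adopted in the text.

import numpy as np

HBAR = 1.0  # atomic-like units assumed for illustration

def harmonic_action_angle_to_rp(v, q, mu, omega, r_eq):
    # quasi-classical vibrational energy associated with the action v
    energy = (v + 0.5) * HBAR * omega
    # harmonic relations: displacement ~ sin(q), momentum ~ cos(q),
    # consistent with the sin(q_4) / cos(q_4) expressions used for the
    # diatomic in the following sections
    r = r_eq + np.sqrt(2.0 * energy / (mu * omega ** 2)) * np.sin(q)
    p = np.sqrt(2.0 * mu * energy) * np.cos(q)
    return r, p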
in fig .[ diag : jlk ] , the vectors , and are represented in the plane as deduced from fig .[ diag : jlkgen ] .the relation between these angular momenta can be written as squaring each side of the previous equality and rearranging leads to , equal to , is thus given by , equal to , _i.e. _ , to ( given the convention adopted , is necessarily positive ) , is therefore given by ^{1/2}. \label{eq : t4}\ ] ] at last , is zero . , orbital and rotational angular momenta.,width=226,height=245 ] is deduced from by the standard euler rotation where , for a given angle , and indeed , fig .[ diag : jlkgen ] shows that one goes from to by a rotation of around the axis followed by a rotation of around the resulting , ` new ' axis and a final rotation of around the ` new ' axis .one may easily check that these transformations are achieved by the and matrices combined as in eq .[ eq : t5 ] . is given by and , necessarily positive as $ ] , is given by ^{1/2}. \label{eq : t9}\ ] ] , its cartesian components and conjugate angles.,width=321 ] from fig .[ diag : lpr ] and following the same reasoning as above , can be shown to satisfy where represents the vector .[ diag : lblthl ] shows how the angles and relate to . is given by and , necessarily positive , by ^{1/2}. \label{eq : t12}\ ] ] is given by and by where is the modulus of the projection of on the plane , as depicted in fig .[ diag : lblthl ] . since , lies in the plane of fig .[ diag : lpr ] , has already been denoted , equals and is zero . is then obtained with we still consider fig .[ diag : jlk ] and rewrite the relation between , and as squaring each side of the previous equality and rearranging leads to , equal to , is thus given by , equal to , _i.e. _ , to ( given the convention adopted , is necessarily negative ) , is therefore given by ^{1/2 } \label{eq : t20}\ ] ] ( one may check that is the just the opposite of ) .at last , is zero . is then obtained from by the same transformation that relates to ( see eq .[ eq : t5 ] ) as already seen , the determination of and is in complete analogy with that of and ( compare fig .[ diag : kj1j2 ] and fig .[ diag : jlkgen ] ) . following the developments in sections [ sec3:l ] and [ sec3:k ], we then arrive at where , ^{1/2 } , \label{eq : t23}\ ] ] and in addition , is given by and by ^{1/2}. \label{eq : t28}\ ] ] is given by and by where is the modulus of the projection of on the plane . in the harmonic limit , the de bond length is given in terms of and by the expression ^{1/2}\sin{q_4}. \label{eq : t32}\ ] ] here, is the equilibrium bond length of the diatomic , its reduced mass and its vibrational frequency ( which is readily determined from a quadratic fitting of its interaction potential ) .although is sometimes called action , _ stricto sensus _, this is only true in units .the problem of the determination of is then analogous to that of . from fig .[ diag : j2pr ] and following section [ sec3:r ] , we find where represents the vector . is given by and by ^{1/2}. \label{eq : t35}\ ] ] is given by and by where is the modulus of the projection of on the plane .again , the problem of the determination of is analogous to that of the determination of . following section [ sec3:p ] ,we arrive at where and ^{1/2}\cos{q_4 } \label{eq : t40}\ ] ] in the harmonic approximation . 
and inertial - axis component angular momenta.,width=226,height=245 ] and represented in fig .[ diag : j1k1gen ] and fig .[ diag : j1k1 ] .the coordinates of in are given by , ^{1/2 } \label{eq : t41}\ ] ] and ( the last equation comes from the fact that the cosine of the angle between and is equal both to and , as is obvious from fig .[ diag : j1k1 ] ) . proceeding as previously, we find is given by and by ^{1/2}. \label{eq : t45}\ ] ] is given by and by where is the modulus of the projection of on the plane .we start by determining the position vectors for in the frame ( fig .[ diag:5atom ] ) . within the harmonic approximation ,this task is accomplished by the standard normal mode analysis ( a generalization of the procedure used in the diatomic case ; compare this and the following with sections [ sec3:rp ] and [ sec3:prp ] ) .first , the eigenvalues and eigenvectors of the hessian matrix are determined . for an -atom molecule ,six of the former correspond to the center - of - mass movement and overall rotation and thus are theoretically zero ( negligibly small in practice ) .the non - zero eigenvalues , associated with the molecule internal vibrational modes , relate to their angular frequencies simply by .quasi - classical normal mode energies are then computed from the corresponding vibrational actions as which allows the calculation of the normal mode displacements cartesian mass - weighted displacements are determined with , where is the eigenvector matrix and that of normal mode coordinates .the position vectors are thus where are the equilibrium position vectors , is the mass of atom x and is extracted from according to the location given to the x - atom coordinates in . from fig .[ diag : k1angles ] , we have which holds for x = a , b or c. and some angles.,width=321 ] when is positive , the frames in fig .[ diag : k1angles ] and fig .[ diag:5atom ] exactly coincide .therefore , the dependence of on is of the same kind as in the previous sections . if , on the other hand , is negative , in fig .[ diag : k1angles ] is different from its equivalent in fig .[ diag:5atom ] .in fact , in this case the and axes are oriented in the exact opposite directions as in the previous one .the term , which equals , takes this difference into account by flipping the vector before it is identified as . in eq .[ eq : t52 ] , is given by and by ^{1/2}. \label{eq : t54}\ ] ] is given by and by where is , as usual , the modulus of the projection of on the plane .the momenta , with x = a , b or c , can be decomposed into a _ purely _ translational ( vibrational ) and a rotational components . based on the very definition of the body - fixed frame ( fig .[ diag:5atom ] ) , the former is directly related to . to calculate these ,normal - mode velocities are first computed using the conservation of energy the sign being selected according to the value of the vibrational phase .cartesian mass - weighted velocities are thus from which it is important to stress that the anharmonicity of the real potential energy has been deliberately neglected within the normal - mode approximation .to correct for its possible spurious consequences , relatively sophisticated recipes can be used at this stage .the reader is thus referred to the available literature , _ e.g. 
_ , as it is not our objective to reproduce them here .the rotational component is determined in the standard fashion .the triatomic angular velocity is computed as the inertia tensor of abc , which can be calculated at this point since its configuration has been determined from which , the corresponding linear velocities are given by finally , the transformation relating and is isomorphic to eq .[ eq : t52 ] , so the desired general expression for computing the former reads at this point it is a simple task to finally express all cartesian vectors in the laboratory frame . for x= a , b or c , is given by the general expression while if x = d or e , in these equations , stands for the mass of fragment and is the system total mass .similar relations hold for the cartesian momenta . for x= a , b or c , these are computed using the general expression at last , the diatomic momenta are given by and photo - fragmentation of ketene ( ch ) has been intensively investigated for over two decades , both experimental and theoretically ( _ e.g. _ ) .following photo - excitation to the states , the molecule undergoes either intersystem crossing or fast internal conversion to the low lying triplet and singlet electronic states . from these , dissociation into methylene and carbon monoxide occurs . despite the triplet threshold lies below the singlet , the fact that it presents a small barrier to dissociation of a few cents of inverse centimeters makes the singlet channel statistically dominant from excess energies as low as .such conditions make the system an effective prototype for a _ barrierless polyatomic _unimolecular reaction on a _ single _ potential energy surface ( pes ) . in direct correspondence with the model transformation we introduced above, the molecule constitutes a five - atom system which dissociates into a triatomic and diatomic fragments .additionally , the experimental excitations are compatible with the harmonic normal mode approximation for the ch and co products . in what followswe briefly report on the application of the title transformation to the study of this process .full details and results will be given in a separate work so we simply introduce it here as a corroboratory test case ., after excitation with a 308 nm laser .comparison with the experiment.,width=321 ] in fig .[ fig : petjc4 ] we compare our calculations with the most recent experimental results for the products translational energy distributions , in correlation with the rotational state of co. a 308 nm laser is used in the experiment , corresponding to an excess energy of 2350 .the theoretical results are obtained using the so - called _ exit - channel corrected phase - space theory _ , proposed by hamilton and brumer .this method basically consist in generating microcanonical initial conditions _ at the products _ and then propagate the trajectories backwards in time , the statistics being performed with those reaching the inner transition state ( ts ) .the photo - excited ketene molecule is supposed to be long lived prior to its fragmentation , thereby justifying the use of a microcanonical distribution .we employed the high - level _ ab initio _ pes and transition state locations recently reported .the theoretical predictions are in very good agreement with the experiment , as can be seen in fig .[ fig : petjc4 ] .the curve has been artificially smoothed by using a convolution with an ` apparatus ' function , _i.e. 
_ to recover the experimental tails .the two peaks correlating with the and ch scissor - mode states are fairly well reproduced . in order to further verify the validity of the transformation provided ,we have calculated the determinant of the jacobian matrix .the original is not a square matrix , with dimensions 30 .therefore , for being able to calculate the determinant we introduced an additional transformation to a set of jacobi coordinates , from which the ( null ) center - of - mass coordinates and momenta are later removed .the calculation starts with the transformation from angle - actions to cartesian and from these to jacobi coordinates .the center - of - mass jacobi vectors are then removed and the determinant of the resulting 24 jacobian matrix , from angle - actions to ( reduced ) jacobi coordinates , is computed .we confirmed that it yields 1 within numerical accuracy .we have presented the transformation from angle - action to cartesian coordinates , for polyatomic systems . in the quasi and semi - classical approaches ,this provides an expeditious way to generate initial conditions in close correspondence with nowadays experiments and yet , solve the equations of motion using the ` ideal ' cartesian coordinates .the methodology and expressions provided here can either be directly used or straightforwardly generalized to deal with any case of interest , ranging from the study of bimolecular collisions to polyatomic unimolecular dissociations .preliminary results of the particular application to the study of the unimolecular dissociation of ketene in the singlet electronic state , have been discussed .a very good agreement is observed between the experimental values and theoretical predictions for correlated translational energy distributions .the validity of the transformation have been further verified by numerical computation of the determinant of the jacobian matrix , which yields unity within reasonable accuracy .support from an inter - university agreement on international joint doctorate supervision between the instituto superior de tecnologas y ciencias aplicadas , cuba and the universit bordeaux 1 , france , as well as the pnap/7/3 project of the cuban institution , are gratefully acknowledged . | the transformation from angle - action variables to cartesian coordinates is a crucial step of the ( semi ) classical description of bimolecular collisions and photo - fragmentations . the basic reason is that dynamical conditions corresponding to experiments are ideally generated in angle - action variables whereas the classical equations of motion are ideally solved in cartesian coordinates by standard numerical approaches . to our knowledge , the previous transformation is available in the literature only for triatomic systems . the goal of the present work is to derive it for polyatomic ones . |
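the jacobian check described above can be reproduced for any implementation of the transformation with a few lines of central differencing; the callable transform below is a placeholder for the user's own routine mapping the reduced set of angle-action variables to the reduced jacobi coordinates and momenta.

import numpy as np

def jacobian_determinant(transform, z0, h=1e-6):
    # finite-difference jacobian of transform at the point z0
    n = z0.size
    jac = np.empty((n, n))
    for k in range(n):
        dz = np.zeros(n)
        dz[k] = h
        jac[:, k] = (transform(z0 + dz) - transform(z0 - dz)) / (2.0 * h)
    # for a canonical transformation the determinant should be 1
    # (up to the accuracy of the central differences)
    return float(np.linalg.det(jac))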
frequency analysis of signals is a classical problem that has broad applications ranging from communications , radar , array processing to seismology and astronomy .grid - based sparse methods have been vastly studied in the past decade with the development of compressed sensing ( cs ) which exploit signal sparsity the number of frequency components is small but suffer from basis mismatches due to the need of gridding of the frequency interval .its research has been recently advanced owing to the mathematical theory of super - resolution introduced by cands and fernandes - granda , which refers to recovery of fine details of a sparse frequency spectrum from coarse scale time - domain samples only .they propose a gridless atomic norm ( or total variation norm ) technique , which can be cast as semidefinite programming ( sdp ) , and prove that a continuous frequency spectrum can be recovered with infinite precision given a set of regularly spaced samples . the technical method and theoretical result were then extended by tang _ to the case of partial / compressive samples , showing that only a number of random samples are sufficient for the recovery with high probability via atomic norm minimization ( anm ) .moreover , yang and xie study the multiple - measurement - vector ( mmv ) case , which arises naturally in array processing applications , with similar results proven using extended mmv atomic norm methods .however , a major problem of existing atomic norm methods is that the frequency spectrum can be recovered only when the frequencies are sufficiently separated , prohibiting commonly known high resolution the capability of resolving two closely spaced frequency components. a sufficient minimum separation of frequencies is in theory .empirical evidences in suggest that this number can be reduced to , while according to it also depends on , and the number of measurement vectors . in this paper, we attempt to propose a high resolution gridless sparse method for super - resolution to break the resolution limit of existing atomic norm methods .our method is motivated by the formulations and properties of atomic norm and the atomic norm in .in particular , the atomic norm has no resolution limit but is np hard to compute .to the contrary , as a convex relaxation the atomic norm can be efficiently computed but suffers from a resolution limit as mentioned above .we propose a novel sparse metric and theoretically show that the new metric fills the gap between the atomic norm and the atomic norm .it approaches the former under appropriate parameter setting .with the sparse metric we formulate a nonconvex optimization problem and present a locally convergent iterative algorithm for super - resolution .the algorithm iteratively carries out anm with a sound reweighting strategy , which determines preference of frequency selection based on the latest estimate and enhances sparsity and resolution , and is termed as reweighted atomic - norm minimization ( ram ) . to the best of our knowledge , ram implements the first reweighting strategy in the continuous dictionary setting while existing reweighted algorithms ( see , e.g. 
, ) are for the discrete setting .extensive numerical simulations are carried out to demonstrate the high resolution performance of ram with application to doa estimation compared to existing arts .we consider the super - resolution problem in the most general case with partial samples and mmvs .in particular , we observe the samples of the data matrix on the rows indexed by of size , denoted by .the element of is ( corrupted by noise in practice ) where denotes a discrete complex sinusoid with frequency $ ] , and is the coefficient vector of the sinusoid .that is , each column of is superimposed by discrete sinusoids .we are interested in recovering the frequencies given .meanwhile , it is also of interest to recover the complete data matrix .the resulting problem is known as continuous / off - grid cs according to , which differs from existing cs framework in the sense that every frequency can take any continuous value in rather than constrained on a finite discrete grid .the single - measurement - vector ( smv ) case where is known as line spectral estimation .the mmv case where is common in array processing .therein the sampling index set refers to sensor placement of a linear sensor array and a smaller sample size means use of less sensors . consists of measurements of the sensor array and each column vector corresponds to one data snapshot .each frequency corresponds to the direction of one source .therefore , the frequency estimation problem is known as direction of arrival ( doa ) estimation .the super - resolution or continuous cs problem is tackled from the perspective of signal recovery .the frequencies are then retrieved from the computational result . in particular , we seek a _ frequency - sparse _candidate , which is composed of a few frequency components , in a feasible domain defined by the observed samples . to do this, we first define a sparse metric of and then optimize the metric over the feasible domain .a direct sparse metric is the smallest number of frequency components composing , known as the atomic norm and denoted by . according to be characterized as the following rank minimization problem : the first constraint in ( [ formu : atom0norm ] ) imposes that lies in the range space of a ( hermitian ) toeplitz matrix whose first row is specified by the transpose of .the frequencies composing are encoded in .once an optimizer of , say , is obtained the frequencies can be retrieved from according to the vandermonde decomposition lemma ( see , e.g. , ) , which states that any positive semidefinite ( psd ) toeplitz matrix can be decomposed as , where the order and ( see a method for realization of the decomposition in ( * ? ? ?* appendix a ) ) .the atomic norm directly enhances sparsity , however , it is nonconvex and np - hard to compute and encourages computationally feasible alternatives . 
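a simple numerical realization of this frequency-retrieval step, recovering the frequencies from a low-rank psd toeplitz matrix via its vandermonde decomposition, is a subspace computation of the esprit type; the sketch below is one convenient choice and is not necessarily the procedure of the appendix cited above. the model order r is assumed known or taken as the number of eigenvalues above a small threshold.

import numpy as np

def frequencies_from_toeplitz(t_hat, r):
    # t_hat: n x n (approximately) psd toeplitz matrix; r: number of frequencies
    vals, vecs = np.linalg.eigh(t_hat)
    u_s = vecs[:, np.argsort(vals)[::-1][:r]]       # r principal eigenvectors
    # shift invariance of the signal subspace gives the frequencies as the
    # phases of the eigenvalues of phi
    phi = np.linalg.lstsq(u_s[:-1], u_s[1:], rcond=None)[0]
    freqs = np.angle(np.linalg.eigvals(phi)) / (2.0 * np.pi)
    return np.sort(np.mod(freqs, 1.0))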
in this spirit , the atomic ( ) norm , denoted by ,is introduced as a convex relaxation of and has the following semidefinite formulation : from the perspective of low rank matrix recovery ( lrmr ) , ( [ formu : an_sdp ] ) attempts to recover the low rank matrix by relaxing the pseudo - rank norm in ( [ formu : atom0norm ] ) to the nuclear norm or equivalently the trace norm for a psd matrix .the atomic norm is advantageous in computation compared to the atomic norm , however , it suffers from a resolution limit due to the relaxation which is not shared by the latter .inspired by the link between continuous cs and lrmr demonstrated above , we propose the following sparse metric of : where is a regularization parameter that avoids the first term being when is rank deficient .note that the log - det heuristic is a common smooth surrogate for the matrix rank ( see , e.g. , ) . from the perspective of lrmr ,the atomic norm minimizes the number of nonzero eigenvalues of while the atomic norm minimizes the sum of the eigenvalues .in contrast , the new metric penalizes , where denotes the eigenvalues .we plot the function with different s in fig .[ fig : sparsity ] , according to which we expect that the new metric bridges and when varies from to .formally , we have the following results and we provide their proofs in an extended journal paper . with different .the plotted curves include the and norms corresponding to and respectively , and corresponding to with and . is translated and scaled such that it equals 0 and 1 at and 1 respectively for better illustration.,width=278 ] let . then , i.e. , they are equivalent infinitesimals .[ thm : epsilontoinf ] let .then , we have the following results : 1 . if , then i.e. , they are equivalent infinities . otherwise , is a positive constant depending only on ; 2 .let be the optimizer of to the optimization problem in ( [ formu : nonconvexrelax ] ) .then , the smallest eigenvalues of are either zero or approach zero as fast as ; 3 . for any cluster point of at , denoted by , there exists an atomic decomposition of order such that .[ thm : epsilontozero ] theorem [ thm : epsilontoinf ] shows that the new metric plays the same role as in the limiting scenario when , while theorem [ thm : epsilontozero ] says that it is equivalent to as .consequently , it fills the gap between and and enhances sparsity and resolution compared to as gets small .moreover , theorem [ thm : epsilontozero ] characterizes the properties of the optimizer as including the convergent speed of the smallest eigenvalues and the limiting form of via the vandermonde decomposition .in fact , we always observe via simulations that the smallest eigenvalues of become zero once is modestly small .with the proposed sparse metric , we solve the following optimization problem for signal and frequency recovery : or equivalently , where denotes the feasible domain of .for example , in the noiseless case , it is the set .since the log - det term is a concave function of , the problem is nonconvex and no efficient algorithms can guarantee to obtain the global optimum . a majorization - maximization ( mm )algorithm is introduced as follows .let denote the iterate of the optimization variable .then , at the iteration we replace by its tangent plane at the current value . 
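before turning to the iteration itself, it is useful to have a concrete prototype of the convex building block, the atomic-norm sdp introduced above. the cvxpy sketch below treats the single-measurement-vector, noiseless completion case; the normalization of the trace term varies between papers, so the 1/(2n) factor is one common convention and should be checked against eq. ([formu: an_sdp]).

import numpy as np
import cvxpy as cp

def anm_smv(y_obs, omega, n):
    # y_obs: observed samples; omega: list of observed indices (subset of range(n))
    # (n+1) x (n+1) hermitian psd variable: its upper-left n x n block plays the
    # role of the toeplitz matrix t(u), its last column holds the full signal x,
    # and its lower-right entry is the auxiliary scalar of the sdp
    z = cp.Variable((n + 1, n + 1), hermitian=True)
    constraints = [z >> 0]
    # enforce the toeplitz structure of the upper-left block
    constraints += [z[i, j] == z[i + 1, j + 1]
                    for i in range(n - 1) for j in range(n - 1)]
    # the entries of x indexed by omega must equal the observed samples
    constraints += [z[i, n] == y_obs[k] for k, i in enumerate(omega)]
    objective = cp.real(cp.trace(z[:n, :n])) / (2 * n) + cp.real(z[n, n]) / 2
    cp.Problem(cp.Minimize(objective), constraints).solve()
    x_hat = np.array(z.value[:n, n])       # recovered full signal
    t_hat = np.array(z.value[:n, :n])      # toeplitz matrix encoding the frequencies
    return x_hat, t_hat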
as a result ,the optimization problem at the iteration becomes since is strictly concave in , at each iteration its value decreases by an amount greater than the decrease of its tangent plane .it follows that the objective function in ( [ formu : problem ] ) monotonically decreases at each iteration and converges to a local minimum . to interpret the optimization problem in ( [ formu : problem_j ] ) , let us define a _ weighted continuous dictionary _ w.r.t . the original continuous dictionary , where is a weighting function .for , we define its _ weighted atomic norm _ as its atomic norm induced by : according to the definition above , specifies preference of the atoms . to be specific ,an atom , , is more likely selected if is larger .moreover , the atomic norm is a special case of the weighted atomic norm with a constant weighting function ( i.e. , without any preference ) according to .suppose that with .then , [ thm : weightan ] let and . by theorem [ thm : weightan ] we can rewrite the optimization problem in ( [ formu : problem_j ] ) as the following _ weighted atomic norm minimization _ problem : as a result, the proposed iterative algorithm can be interpreted as _ reweighted atomic - norm minimization _ ( ram ) . if we let be a constant function or equivalently , , such that there is no preference of the atoms at the first iteration , then the first iteration coincides with the anm . from the second iteration ,the preference is defined by the weighting function specified above .note that corresponds to the power spectrum of capon s beamforming ( see , e.g. , ) if is interpreted as the covariance of the noiseless data and as the noise variance .therefore , the reweighting strategy makes the frequencies around those estimated by the current iteration preferable at the next iteration and thus enhances sparsity . at the same time, the preference leads to finer details of the frequency spectrum in that area and enhances resolution .since the `` noise variance '' can be translated as the confidence level in the current estimate , from this perspective we should gradually decrease and correspondingly increase the confidence in the solution during the algorithm .in this subsection , we study the success rate of ram in super - resolution compared to anm . in particular , we fix and with the sampling index set being generated uniformly at random .we vary the duo and at each combination we randomly generate frequencies such that they are mutually separated by at least .we randomly generate the amplitudes independently and identically from a standard complex normal distribution . after obtaining the noiseless samples , we carry out super - resolution using anm and ram , both implemented by an off - the - shelf sdp solver sdpt3 .the recovery is called successful if both the relative mse of signal recovery and the mse of frequency recovery are less than . at each combination , the success rateis measured over 20 monte carlo runs . in ram , we first scale the measurements such that and compensate the recovery afterwards .we start with and as default .we halve when beginning a new iteration until . 
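a minimal prototype of one such reweighted iteration can be written as a variation of the anm sketch above: the uniform trace objective is replaced by tr(w t) with the weight matrix w = (t_prev + eps i)^{-1} obtained from the previous iterate. the relative weighting of the auxiliary scalar term below is our own assumption and should be checked against the exact form of eq. ([formu: problem_j]).

import numpy as np
import cvxpy as cp

def ram_step(y_obs, omega, n, t_prev, eps):
    # weight matrix from the tangent-plane linearization of the log-det term
    w = np.linalg.inv(t_prev + eps * np.eye(n))
    z = cp.Variable((n + 1, n + 1), hermitian=True)
    constraints = [z >> 0]
    constraints += [z[i, j] == z[i + 1, j + 1]
                    for i in range(n - 1) for j in range(n - 1)]
    constraints += [z[i, n] == y_obs[k] for k, i in enumerate(omega)]
    objective = cp.real(cp.trace(w @ z[:n, :n])) + cp.real(z[n, n])
    cp.Problem(cp.Minimize(objective), constraints).solve()
    return np.array(z.value[:n, :n]), np.array(z.value[:n, n])

initializing t_prev proportional to the identity makes the first iteration coincide with plain anm (no preference among the atoms), while subsequent iterations reweight the frequencies around the current estimate; halving eps between iterations, as described above, gives a simple version of the schedule used in the experiments.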
we terminate ramif the relative change ( in the frobenius norm ) of the solution at two consecutive iterations is less than or the maximum number of iterations , set to 20 , is reached .we plot the success rates of anm and ram with in fig .[ fig : phasetrans_1 ] , where it is shown that successful recovery can be obtained with more ease with a smaller and a larger frequency separation , leading to a phase transition in the sparsity - separation domain .it is shown that ram significantly enlarges the success phase and hence enhances sparsity and resolution compared to anm .at we did not find a single failure in our simulation whenever and .the phase transitions of both anm and ram are not sharp since the frequencies are separated by _ at least _ and a set of _ well separated _ frequencies can be possibly generated at a small value of .it is also observed that ram tends to converge in less iterations with a smaller and a larger .( top ) and ( bottom ) , and .the grayscale images present the success rates , where white and black colors indicate complete success and complete failure , respectively.,title="fig:",width=158 ] ( top ) and ( bottom ) , and .the grayscale images present the success rates , where white and black colors indicate complete success and complete failure , respectively.,title="fig:",width=156 ] ( top ) and ( bottom ) , and .the grayscale images present the success rates , where white and black colors indicate complete success and complete failure , respectively.,title="fig:",width=158 ] ( top ) and ( bottom ) , and .the grayscale images present the success rates , where white and black colors indicate complete success and complete failure , respectively.,title="fig:",width=156 ] we apply the proposed ram method to doa estimation .in particular , we consider a 10-element sparse linear array ( sla ) with sensors positions indexed by , where the distance between the first two sensors is half the wavelength .hence , we have that and .we consider that narrowband sources impinge on the sensor array from directions corresponding to frequencies , , and , and powers , , and , respectively . it is challenging to separate the first two sources which are separated by only .complex normal noise is added to the samples with variance and is defined as , where ( mean + twice standard deviation ) upper bounds the noise energy with high probability .we consider both the cases of uncorrelated and correlated sources while the later case is usually considered to be more difficult with existing methods such as music ( see , e.g. , ) . in the latter case ,sources 1 and 3 are set to be coherent ( completely correlated ) .assume that data snapshots are collected which are corrupted by i.i.d .gaussian noise of unit variance .we propose a dimension reduction technique to reduce the order of the sdp matrix from to and accelerate the computational speed , which is detailed in .we terminate ram within maximally 10 iterations and consider music and anm for comparison .our simulation results of 100 monte carlo runs are presented in fig .[ fig : noisy_noiselevel ] ( only the first 20 runs are presented for music for better illustration ) . in the absence of source correlations, music has satisfactory performance in most scenarios .however , its power spectrum exhibits only a single peak around the first two sources ( i.e. 
, the two sources can not be separated ) in at least 3 out of the first 20 runs ( indicated by the arrows ) .moreover , music is sensitive to source correlations and can not detect source 1 when it is coherent with source 3 .anm can not separate the first two sources in the uncorrelated source case and always produces many spurious sources .in contrast , the proposed ram always correctly detects 4 sources near the true locations , demonstrating its capabilities in enhancing sparsity and high resolution .anm and ram take and on average , respectively , while these numbers can be greatly decreased with more sophisticated algorithms ( see ) .in this paper , we studied the spectral super - resolution problem with partial samples and mmvs .motivated by its connection to the topic of lrmr , we proposed reweighted atomic - norm minimization ( ram ) for achieving high resolution compared to currently prominent atomic norm minimization ( anm ) and validated its performance via numerical simulations .z. yang and l. xie , `` on gridless sparse methods for line spectral estimation from complete and incomplete data , '' revised version submitted to _ ieee transactions on signal processing _ , _ available online at http://arxiv.org/abs/1407.2490_ , 2014 .m. fazel , h. hindi , and s. p. boyd , `` log - det heuristic for matrix rank minimization with applications to hankel and euclidean distance matrices , '' in _ american control conference _ , vol .3.1em plus 0.5em minus 0.4emieee , 2003 , pp . | the super - resolution theory developed recently by cands and fernandes - granda aims to recover fine details of a sparse frequency spectrum from coarse scale information only . the theory was then extended to the cases with compressive samples and/or multiple measurement vectors . however , the existing atomic norm ( or total variation norm ) techniques succeed only if the frequencies are sufficiently separated , prohibiting commonly known high resolution . in this paper , a reweighted atomic - norm minimization ( ram ) approach is proposed which iteratively carries out atomic norm minimization ( anm ) with a sound reweighting strategy that enhances sparsity and resolution . it is demonstrated analytically and via numerical simulations that the proposed method achieves high resolution with application to doa estimation . |
one of the many suprising applications of shared entanglement is superdense coding introduced by bennett and wiesner . in the simplest example of this protocol, two people ( alice and bob ) share a pair of entangled qubits ( spin half particles or any other two state systems ) in a bell state .alice can then perform any of the four unitary operations given by the identity or the pauli matrices and on her qubit .each of these four unitary operations map the initial state of the two qubits to a different member of the bell state basis .clearly , these four orthogonal and therefore fully distinguishable states can be used to encode two bits of information . after encoding her qubit, alice sends it off to bob who can extract these two bits of information by performing a joint measurement on this qubit and his original qubit .this apparent doubling of the information conveying capacity of alice s qubit because of prior entanglement with bob s qubit is referred to as superdense coding ( which we shall henceforth refer to as just dense coding ) .dense coding has been implemented experimentally with polarization entangled photons .some generalizations of the scheme to pairs of entangled n - level systems in non - maximally entangled states ( as opposed to qubits in bell states ) and to distributed multiparticle entanglement have also been studied . in the simplest example, the power of the method stems from the accessibility of all the four possible bell states through local operations done by alice alone on her qubit .obviously this accessibility is related to the fact that the qubits shared by alice and bob are in an _ entangled _ state .so an important question to ask is : _ in what way does the capacity of conveying information through dense coding depend upon the degree of entanglement of the initial shared pair of qubits _ ?barenco and ekert and hausladen _ et al ._ have investigated this question when the initial shared entangled state is a pure state .their analysis shows that the amount of information conveyed by the dense coding procedure decreases monotonically from its maximum value ( two bits per qubit ) with the decrease of the magnitude of the shared entanglement .it becomes one bit per qubit when the entanglement becomes zero .recently , there has been a lot of work in quantifying the entanglement of systems in mixed states .these measures of entanglement have their physical interpretation in the process of entanglement purification . a natural question to askis : _ in what way does the capacity to do dense coding depend on the degree of entanglement as specified by these measures ?_ answering such a question , in effect , would mean _ linking up the apparently disconnected concepts of purification and channel capacity_. in this paper we take a step in this direction by obtaining bounds on the number of bits of information conveyed per qubit ( let us call this * c * after capacity ) during dense coding in terms of the various measures of entanglement. it should be noted that * c * is a classical capacity as it quantifies number of bits of information , but the carriers of the information are quantum ( qubits ) .we also invert our results so that the value of * c * gives us information about the range within which the initial shared entanglement lies . in other words ,bounds on entanglement measures ( and hence on purification procedures ) can be expressed in terms of the classical capacity * c*. 
this might be helpful , because , as we shall show , the classical capacity for dense quantum coding is readily calculable for certain special classes of dense coding protocols .suppose , alice has a set of mixed quantum states at her disposal to convey some classical information ( sequences of ones and zeros ) to bob .each state can be regarded as a separate letter .also suppose that alice sends the state to bob with an _ a priori _probability .the ensemble that alice uses to communicate with bob is therefore given by in the above case , the average number of bits of information that alice can convey to bob per transmission of a letter state is bounded from above by the holevo function where denotes the von neumann entropy of the state ( here , and throughout the rest of the paper , stands for ) . using a different notation ,the holevo function can be rewritten as where stands for and is known as the quantum relative entropy between the states and .it was recently shown that this bound can be achieved in the limit of an infinite ensemble by appropriate block coding ( grouping together and pruning long strings of letter states to represent messages ) at alice s end and appropriate measurements at bob s end .now consider the case of dense coding .alice and bob initially share an entangled pair of qubits in some state , which may be mixed .alice then performs local unitary operations on her qubit to put this shared pair of qubits in either of the states or . in general, alice may use a completely arbitrary set of unitary operations to generate these states : in the above equation , acts on alice s qubit and acts on bob s qubit . by sending herencoded qubit to bob , alice is essentially communicating with bob using the states and as separate letters . the number of bits she can communicate to bob using this procedure is thus bounded by the holevo function given in eq.([holv ] ) .moreover , if some block coding is done on a large enough collection of qubits in addition to the dense coding , then the number of bits of information communicated is equal to the holevo function .we will thus take assuming that any additional necessary block coding will automatically be done to supplement the dense coding .exactly the same assumption has been used in ref. to calculate the capacity for dense coding in the case of pure letter states .eqs.([cg ] ) and ( [ see ] ) define the most general version of dense coding and we shall refer to this as completely general dense coding ( cgcd ) .a simpler example of dense coding is the case when the letter states are generated from the initial shared state by in the above set of equations , the first operator of the combination acts on alice s qubit and the second operator acts on bob s qubit .we shall refer to this case ( i.e when the letter states are generated by eqs.([w0])-([w1 ] ) ) as simply general dense coding ( gdc ) .the generality present in gdc is that alice is allowed to prepare the different letter states with unequal probabilities . in other words, one has to use eqs.([avg])-([see ] ) to estimate the capacity * c*. in the more special case when alice not only generates the four letter states according to eqs.([w0])-([w1 ] ) ) but also with equal probability , the ensemble is given by and the capacity becomes we shall call this simplest case special dense coding ( sdc ) . 
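the capacity of sdc can be evaluated numerically in a few lines of python for an arbitrary two-qubit initial state; base-2 logarithms are used so that the result is expressed in bits.

import numpy as np

I2 = np.eye(2, dtype=complex)
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def von_neumann_entropy(rho):
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]
    return float(-np.sum(vals * np.log2(vals)))

def sdc_capacity(w):
    # alice encodes with i, sigma_x, sigma_y, sigma_z on her qubit, each with
    # probability 1/4; bob's qubit is left untouched
    letters = [np.kron(u, I2) @ w @ np.kron(u, I2).conj().T
               for u in (I2, SX, SY, SZ)]
    w_bar = sum(letters) / 4.0
    return von_neumann_entropy(w_bar) - sum(von_neumann_entropy(l) for l in letters) / 4.0

for a maximally entangled bell state this returns 2 bits and for any product state 1 bit, consistent with the pure-state result derived below.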
among all the possible ways of doing gdc , sdc is the optimal way to communicate when is a pure state ( as we shall show in the next section ) or a bell diagonal state .however , we do not know the optimal way to communicate when is a completely general state and cgdc is allowed . for most of our paper, we shall obtain bounds on the classical capacity * c * for sdc only .but we shall point out those results which are valid for gdc and cgdc as well .though the main aim of this paper is to establish bounds on the capacity when the letters are mixed states , we shall begin with a calculation of for pure letter states .consider the initial shared pure state to be . from eqs.([w0])-([w1 ] ) , the other letter states are given by from which we obtain . as all are pure states we have thus from eqs.([holv1 ] ) and ( [ see ] ) we have we will consider only the case of sdc .thus the ensemble used is obtained from eq.([avg1 ] ) to be thus from eq.([sw ] ) for the capacity * c * , we get now we should recall that a good measure of entanglement for a pure state of a system composed of two subsystems a and b is given by the von neumann entropy of the state of either of the subsystems . let us call this measure the von neumann entropy of entanglement and label it by .thus where stands for partial trace over states of system a. therefore , for all the states , thus , we now prove that for pure states , sdc ( using all alphabet states with equal _ a priori _ probability ) is the optimal way to communicate among all possible ways of doing gdc ( i.e. when the letter states are generated by eqs.([w0])-([w1 ] ) ) . consider the general case when the states are sent with probabilities . then from eq.([avg ] ) we have where and therefore , and form separate blocks inside the matrix and eq.([sw ] ) indicates that we need to choose the probabilities in such a way that is maximized .density matrices with the same diagonal elements have the highest von neumann entropy when the nondiagonal elements are zero .applying this fact to eqs.([ro1 ] ) and ( [ ro2 ] ) we get using eqs.([nond1 ] ) and ( [ nond2 ] ) in the expressions for and and calculating the entropy gives where the normalizations of the state amplitudes and the probabilities have been used . from analysis of von neumann entropies it is well known that the expression has a maximum value when both and are equal .thus , and hence the classical capacity * c * is maximized when thus , among all the possible ways of performing gdc , sdc is the optimal way to communicate when pure states are being used as letters . from the above result and eq.([pure1 ] ) we can conclude that in the case of gdc with pure letter states we have this result ( eq.([pure2 ] ) ) had also been obtained in ref. from logical arguments . following a procedure analogous to the above proof it can be shown that for bell diagonal letter states sdc is again the optimal way to communicate among all the possible ways of doing gdc .in order to derive a lower bound on the classical capacity * c * for sdc we need to prove a crucial lemma concerning the nature of the ensemble used in sdc : _ for states defined in accordance with eqs.([w0])-([w1 ] ) , the ensemble for sdc as defined in eq.([avg1 ] ) is a disentangled state irrespective of the nature of . to prove this , we have to start by assuming to be a most general state of two qubits .this is given by $$\frac{1}{4}\big[\, i\otimes i + \sum_m r_m \, \sigma_m \otimes i + i\otimes \sum_m s_m \, \sigma_m + \sum_{m , n} t_{mn} \, \sigma_m \otimes \sigma_n \,\big] ,$$ where the indices $m$ and $n$ take on values from 1 to 3 .
substituting from eq.([wo1 ] ) into eqs.([w0])-([w1 ] ) we get $$w_1 = \frac{1}{4}\big[\, i\otimes i + r_1 \sigma_1 \otimes i - \sum_{m \neq 1 } r_m \sigma_m \otimes i + i\otimes \sum_m s_m \sigma_m + \sum_{n } t_{1n } \, \sigma_1 \otimes \sigma_n - \sum_{m\neq 1,n } t_{mn } \, \sigma_m \otimes \sigma_n \,\big] ,$$ $$w_2 = \frac{1}{4}\big[\, i\otimes i + r_2 \sigma_2 \otimes i - \sum_{m \neq 2 } r_m \sigma_m \otimes i + i\otimes \sum_m s_m \sigma_m + \sum_{n } t_{2n } \, \sigma_2 \otimes \sigma_n - \sum_{m\neq 2,n } t_{mn } \, \sigma_m \otimes \sigma_n \,\big] ,$$ $$w_3 = \frac{1}{4}\big[\, i\otimes i + r_3 \sigma_3 \otimes i - \sum_{m \neq 3 } r_m \sigma_m \otimes i + i\otimes \sum_m s_m \sigma_m + \sum_{n } t_{3n } \, \sigma_3 \otimes \sigma_n - \sum_{m\neq 3,n } t_{mn } \, \sigma_m \otimes \sigma_n \,\big] .$$ using these expressions together with eq.([wo1 ] ) in eq.([avg1 ] ) we get $$\frac{1}{4}\big[\, i\otimes i + i\otimes \sum_m s_m \sigma_m \,\big] .$$ this is very clearly a disentangled state .a plausible physical argument to support this result can be as follows .we know that an equal mixture of the four bell states is a disentangled state .so it seems highly likely that an equal mixture of less entangled states will be disentangled as well .this result is , of course , valid only for sdc , where the equal probabilities of the letter states result in the careful cancellation of all the entanglement carrying terms .it allows us to easily derive a lower bound on * c * for sdc in terms of one of the measures of entanglement .to investigate quantitatively the relationship between entanglement and * c * for arbitrary mixed letter states , it is necessary to use some measure of entanglement for mixed states .one such measure of entanglement , the relative entropy of entanglement , has been introduced in ref. .it has been shown to have a statistical interpretation as well as a physical interpretation in forming bounds on the process of entanglement purification .we will use the symbol to represent this measure of entanglement . for an arbitrary mixed state it is given by where is the set of disentangled states . a property which has to be satisfied by any legitimate entanglement measure is its invariance under local unitary operations . is thus invariant under local unitary operations of the state .as the state of the ensemble used for sdc has been shown to be a disentangled state in the previous section , we have using the above inequality in eq.([c ] ) for * c * of sdc we get as each of the states are derived from via local unitary operations only , we have for all values of . combining eqs.([eq ] ) and ( [ zeq ] ) we get this means that the classical capacity * c * for sdc is bounded from below by the relative entropy of entanglement of the initial shared mixed state .this result is , however , not generalizable to gdc as it relies crucially on for sdc being a disentangled state .in this section , we are going to divert a little bit from the main theme linking * c * to entanglement measures and point out another interesting bound on * c * stemming from the mutual distinguishability of the letter states .it is known that the relative entropy is a kind of statistical measure of the distinguishability between the quantum states and .thus for gdc one can define an average mutual distinguishability function as we show below that this average mutual distinguishability function forms an upper bound on * c*. ( before doing so , we pause for a quick numerical check of the disentanglement lemma proved above . )
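the sketch below is our own check , with an arbitrarily chosen two - qubit density matrix ( any valid state works ) : it builds the four sdc letter states and verifies that their equal mixture equals the identity on alice s side tensored with bob s reduced state , i.e. a manifestly disentangled state , as the lemma asserts .

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.diag([1, -1]).astype(complex)

# an arbitrary two-qubit mixed state: a noisy, partially entangled pure state
v = np.array([0.9, 0.1, 0.2, 0.6], dtype=complex)
v /= np.linalg.norm(v)
rho = 0.7 * np.outer(v, v.conj()) + 0.3 * np.eye(4) / 4

# bob's reduced state (partial trace over alice's qubit)
rho_b = rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

# equal mixture of the four sdc letter states
letters = [np.kron(U, I2) @ rho @ np.kron(U, I2).conj().T
           for U in (I2, sx, sy, sz)]
avg = sum(letters) / 4

# the mixture equals (i/2) tensor rho_b, a product and hence disentangled state
print(np.allclose(avg, np.kron(I2 / 2, rho_b)))   # -> True
```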
putting from eq.([avg ] ) into eq.([see ] ) we obtain where the factor has been inserted before in the second step as its value equals unity .we now use the joint convexity property of the relative entropy to expand the right hand side of eq.([c1 ] ) and obtain note that the above result is completely general because neither does it require the special class of local unitary operations given by eqs.([w0])-([w1 ] ) , nor does it require the probabilities to be uniform . thus the classical capacity for cgdcis bounded from above by the average mutual distinguishability function .note that we are pointing this out just as an interesting bound and it is not linked to the main theme of the paper ( relating entanglement measures stemming from purification procedures to dense coding capacities ) .in this section we find an upper bound on * c * in terms of yet another measure of entanglement , namely , the entanglement of formation . consider a decomposition of an arbitrary mixed state in terms of pure states : then the entanglement of formation of this state is defined as where the minimum is taken over all decompositions of of the type given by eq.([dec ] ) .the initial shared state used in dense coding , will , in general , have several decompositions in terms of pure states .let the particular decomposition from which its entanglement of formation is calculated ( referred to as the entanglement minimizing decomposition ) be where are pure states and are probabilities . as eq.([decw ] ) gives the entanglement minimizing decomposition , we have while the normalization of the probabilities imply as each of the signal states are derived from by local unitary operations , they can be decomposed as where each pure state is connected to the pure state by exactly the same local unitary operation as that which connects to . as any legitimate measure of entanglement has to remain invariant under local unitary operations , we have and > from eqs.([im ] ) and ( [ i ] ) we have now , in the case of sdc , the capacity ( from eq.([c ] ) ) is where we have put using joint convexity ( eq.([convex ] ) ) to expand the right hand side of eq.([grhl ] ) , we get now , for each value of , the expression can be regarded as the classical capacity * c * for sdc with the states as the four letter states . as each of the states ,are pure , eq.([pure1 ] ) implies putting eq.([onee ] ) into eq.([less ] ) we get using eqs.([norm ] ) and ( [ equa ] ) to simplify the right hand side of eq.([qm1 ] ) we get the above bound is valid for gdc as well . to see this one just has to repeat the above proof starting with and replace eq.([ensm ] ) with eq.([less ] ) then gets replaced by now note the fact that is the expression for the classical capacity * c * for cgdc with being the letter states . when the states are generated from according to eqs.([w0])-([w1 ] ) ( i.e. when gdc protocol is being followed ) , then the purity of the states guarantees that * c * is less than ( as shown in section [ purer ] ) .using this fact in eq.([ii0 ] ) we again end up with eq.([upbnd ] ) .thus even in the case of gdc , the capacity * c * is bounded by .having analytically proven that is an upper bound on * c * for gdc , we now proceed to check whether the even smaller ( as proved in ref. ) quantity is also an upper bound . 
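before turning to the relative entropy of entanglement , the 1 + entanglement - of - formation bound just derived can be checked numerically . the sketch below is our own check ; it uses wootters ' closed - form concurrence expression for the entanglement of formation of two qubits , which is a standard result assumed here rather than something quoted in the text , and an arbitrarily chosen noisy bell state as the example :

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.diag([1, -1]).astype(complex)

def entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def sdc_capacity(rho):
    """holevo quantity of the four equiprobable pauli-encoded letters."""
    letters = [np.kron(U, I2) @ rho @ np.kron(U, I2).conj().T
               for U in (I2, sx, sy, sz)]
    return entropy(sum(letters) / 4) - sum(entropy(w) for w in letters) / 4

def entanglement_of_formation(rho):
    """wootters' two-qubit formula (assumed standard result; in bits)."""
    yy = np.kron(sy, sy)
    lam = np.sqrt(np.clip(np.linalg.eigvals(rho @ yy @ rho.conj() @ yy).real,
                          0.0, None))
    lam = np.sort(lam)[::-1]
    conc = max(0.0, lam[0] - lam[1] - lam[2] - lam[3])
    if conc == 0.0:
        return 0.0
    x = (1 + np.sqrt(1 - conc ** 2)) / 2
    return float(-x * np.log2(x) - (1 - x) * np.log2(1 - x))

# example: a noisy bell state  rho = 0.8 |phi+><phi+| + 0.2 * i/4
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = 0.8 * np.outer(phi, phi.conj()) + 0.2 * np.eye(4) / 4

print(sdc_capacity(rho), 1 + entanglement_of_formation(rho))   # c <= 1 + e_f
```

for this example the capacity comes out well below 1 plus the entanglement of formation , consistent with the bound ; the last two lines can be repeated for any two - qubit density matrix .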
however , we do not attempt to prove this analytically for a completely general initial shared state .instead we calculate the capacity * c * ( for sdc only ) for those specific classes of the initial shared state whose relative entropy of entanglement is already known .we then plot this capacity * c * as a function of the relative entropy of entanglement for each of these classes of states and check whether the curve lies below the plot of .at first we have a look at mixed states of the type where is one of the four bell states which are defined by ) as lambda states of the type a. for them , the relative entropy of entanglement is while the * c * for sdc is the plot of the classical capacity * c * for these states as a function of of the state has been shown in figure [ lam1 ] and it is indeed found that the equality holds true only at the two ends of the graph , namely at maximal entanglement ( when approaches a bell state ) and zero entanglement ( when approaches ) .next we look at states of the type which we call lambda states of the type b. for these where are the eigenvalues of .the * c * for sdc in this case is given by as is clear from a simple comparison of eqs.([b1 ] ) and ( [ b2 ] ) , for lambda states of the type b , thus , a curious feature of lambda states of type b is that the capacity for sdc is actually always _ equal _ to .we will discuss a bit more about this curious aspect later in the section .now we consider another special class of states called the werner states parameterized by a number f called the fidelity and given by the relative entropy of entanglement of these states is given by and the * c * for sdc is calculated to be the plot * c * versus has been drawn in fig.[wer ] and it is found that even in this case , now consider the case of a general bell diagonal state it has been proved in ref. that when all ] . from eqs.([zero ] ) and ( [ ccc ] ) we find that where has been used to proceed from the first to the second step and ( because $ ] ) has been used to proceed from the second to the third step . now consider the complementary case ( ) . from eqs.([fint ] ) and ( [ ccc ] ) we find that where in order to proceed from the third to the fourth step we have used the simple fact that is greater than either of the terms , or .thus for all bell diagonal states we have now we will point out a curious fact about the situation when any two of the eigenvalues of a bell diagonal state ( say and ) are zero .when ( which means ) , we have * c* ( using eq.([ccc ] ) ) . in all the other cases ( for which we use eqs.([fint ] ) and ( [ ccc ] ) ) we have where we have used the simple fact that .thus for all bell diagonal states with only two nonzero eigenvalues we have on the basis of the results obtained in this section let us conjecture : * conjecture * : _ the * * for sdc with completely general ( possibly mixed ) states is bounded from above by . a great deal of empirical evidence has been presented in this section in support of the conjecture . in the next sectionwe proceed to give a heuristic justification in support of our conjecture .in fact , we will try to justify an even stronger upper bound on * c*.to justify the conjecture of the previous section , we will have to examine the following interesting question : how does the capacity * c * change if alice and bob first locally purify their ensemble and distill bell states ( following the optimal purification procedure ) and follow this up by sdc ?various purification procedures have been described in refs. 
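as an aside , the bell - diagonal capacities used in this section and the next are simple to verify numerically : for a bell - diagonal state the four sdc letters are again bell diagonal with permuted eigenvalues , so each has the same entropy as the original state , and their equal mixture is the maximally mixed state ; the sdc capacity therefore reduces to 2 minus the von neumann entropy of the shared state . the sketch below ( our own check , with werner states as the example ) confirms this before we turn to purification :

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.diag([1, -1]).astype(complex)

def entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def sdc_capacity(rho):
    letters = [np.kron(U, I2) @ rho @ np.kron(U, I2).conj().T
               for U in (I2, sx, sy, sz)]
    return entropy(sum(letters) / 4) - sum(entropy(w) for w in letters) / 4

# the four bell states
bell = [np.array(v, dtype=complex) / np.sqrt(2)
        for v in ([1, 0, 0, 1], [1, 0, 0, -1], [0, 1, 1, 0], [0, 1, -1, 0])]

def werner(F):
    """werner state: fidelity F with one bell state, the rest spread evenly."""
    rho = F * np.outer(bell[0], bell[0].conj())
    for k in (1, 2, 3):
        rho = rho + (1 - F) / 3 * np.outer(bell[k], bell[k].conj())
    return rho

for F in (0.55, 0.75, 0.95):
    rho = werner(F)
    print(F, sdc_capacity(rho), 2 - entropy(rho))   # the two columns agree
```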
here we assume that alice and bob follow the optimal purification process : one which helps them to distill the maximum fraction of bell states from the initial ensemble .they will , after optimal purification , have a fraction ( where is called the _ entanglement of distillation _ ) shared pairs in bell states and a fraction pairs in a disentangled state .they now complete their purification process by converting the final subensemble of disentangled pairs to pure states by projective measurements .we refer to such a protocol as _complete purification_. after a complete purification , alice can use the fraction of bell pairs to send bob information at the rate of 2 bits / pair and the fraction of pure disentangled pairs to send information at the rate of 1 bit / pair . thus if alice and bob initially shared pairs , the classical capacity after a complete and optimal purification procedure is note that the above result is only asymptotically true ( ) .naively , one might expect this * c * to be lower than the * c * before purification .this is because , as mentioned in ref. , entanglement concentration is a more destructive process than quantum data compression .some amount of shared entanglement is destroyed in the process of purification .so it might be expected that the final ensemble after a purification will be able to convey less classical information than the original ensemble . on the contrary , as we will justify , the capacity for sdc with the purified ensemble is greater than that with the unpurified ensemble when an optimal and complete purification protocol is used .the gain comes from the fact that now there are two separate sub - ensembles instead of a single ensemble . in this section, we will try to justify , albeit heuristically , the following statement ( as a more fundamental conjecture than the one presented in the previous section ) : * conjecture * : _ the capacity for sdc is greater when it is preceded by a complete and optimal purification procedure_. we will first consider the special case of bell diagonal mixed states , for which the increase in the sdc capacity on a complete and optimal purification ( our conjecture ) can be rigorously proved . for bell diagonal states with entropy , there is a purification protocol called hashing ( with the distillable fraction being ) . from eq.([ccc ] ) we see that for these bell diagonal states , the * c * for sdc is equal to . as hashing may not necessarily be the optimal protocol , we have .this immediately implies . for the complementary case of bell diagonal states with , we have ( from eq.([ccc ] ) for sdc ) , .this is obviously less than for any finite value of the entanglement of distillation ( i.e. for all inseparable states ) .thus for all bell diagonal states the * c * for sdc can be improved by a prior optimal and complete purification of the ensemble of shared states . for the more general case of arbitrary mixed entangled states , the proof will , essentially , be heuristic.
our approach will be to split the change in the classical information capacity due to complete and optimal purification into two parts .the first part is positive ( an increase in capacity ) and due to the addition of classical side channels during the purification procedure .these side channels are used by alice ( a ) to communicate the results of her measurements to bob ( b ) , or vice versa .as this communication during purification is already conveying information from a to b ( or vice versa ) , the channel capacity of the classical side channels should be directly added to the classical capacity ( of course , for this , we have to implicitly assume the additivity of classical capacities ) .the information conveyed from a to b ( or vice versa ) during purification is actually used by the two parties to precisely identify their entangled and disentangled subensembles .there is also a negative contribution ( decrease in capacity ) during the purification as a fraction of shared pairs loose all their entanglement .of course a part of this entanglement is pumped into the entangled subensemble , but the remaining is lost .our job will be to argue that the positive contribution in channel capacity due to addition of classical side channels outweighs the negative contribution due to loss of entanglement in an optimal and complete purification procedure .consider the process of purification as a process of information gain by a and b. before the purification , both a and b have an equal amount of knowledge about their shared system ( they both know the full density matrix ) .finally , each of their shared states are pure ( because of the completeness of the purification procedure ) and both of them have equal knowledge ( i.e each know which of the pairs form the disentangled subensemble and which of the pairs form the maximally entangled subensemble ) .thus they have both gained an equal amount of information /pair about their shared pairs during the process of purification .now , acquiring a part of this information may not cause any destruction of shared entanglement ( lets call it /pair ) , while the remaining part ( lets call it /pair ) does cause a lowering of shared entanglement .we now contend , that this part , /pair has to be equal to the number of classical communication channels used in the optimal purification procedure .the logic follows from the fact that the optimal strategy would be for a and b to collaborate in such a way that both of them gain the entire information /pair with the least destruction of entanglement .they must then each acquire only a fraction of the information complementary to the fraction acquired by the other . in this way ,the total information acquired by a and b from the shared pairs through direct entanglement degrading measurements is /pair .they can then use a minimum of classical side channels per pair to communicate their fraction of the acquired information to the other party . 
on the other hand , if each wanted to acquire a fraction of the total information by direct measurements which was not entirely complementary to the part acquired by the other , they would destroy more entanglement than is really necessary .thus we would expect that the classical side channels contribute to boosting the capacity up by at least an amount /pair in the optimal purification procedure .now consider how much the capacity decreases due to the degradation of shared entanglement during the purification procedure .the classical capacity of a fraction of the pairs drops from _ at most _ 2 bits / pair to 1 bit / pair ( due to the _ completeness _ of the purification procedure , the final capacity can not be lower than 1 bit / pair ) . thus the drop in classical capacity of each of the finally disentangled pairs is not more than 1 bit .this implies that the net decrease in classical capacity of the entire ensemble due to loss of entanglement has not been more than bits / pair .when the information is greater than 1 bit / pair , the proof of our statement is straightforward . the increase in capacity due to classical side channels ( i.e. /pair ) is more than 1 bit / pair , while the decrease in capacity due to loss of entanglement is less than bits / pair .as is a fraction ( i.e. ) , the increase in capacity on purification overrides the inevitable decrease in capacity due to degradation of shared entanglement .thus , the capacity to do sdc clearly increases on prior purification when bit .now , we have to show that no matter what the value of is , it is _ possible _ to find a purification protocol for which is greater than bits .in other words , if initially there are shared pairs ( with being large ) , then the number of shared pairs which lose all their entanglement on gaining an information of /pair can be made less than by an appropriate choice of the purification protocol . from such a statement it directly follows that the increase in capacity on purification overrides the inevitable decrease in capacity due to degradation of shared entanglement .a simple way in which one may try to justify the above proposition is from the fact that one pair generally allows the extraction of up to two bits of information ( one bit from each of a s and b s qubit ) .so , it should be possible , in general , to extract amount of information from measurements on less than shared pairs ( whose entanglement is degraded ) .this intuitive understanding leaves open the question as to whether the information /pair _ relevant _ to purification can be acquired in this way . for that we resort to explicit exemplification .consider the class of mixed states ( having nonzero entropy ) and a diagonal decomposition in terms of the pure states .also suppose that this diagonal decomposition does not coincide with the entanglement minimizing decomposition of .for such states , acquiring the information about the entropy ( /pair ) is essential during purification , as the final ensemble is pure .we now contend that acquiring this /pair of information _ necessarily _ destroys some amount of shared entanglement .we will justify this by the method of contradiction .suppose it was really possible to acquire the information about the entropy without destroying any entanglement .
on acquiring an information of the amount equal to /pair , a and b will be able to divide their initial ensemble into four separate pure subensembles , each composed of one of the states .the weight of each of these subensembles will be equal to the eigenvalues .the average entanglement shared by a and b after extracting the information /pair is thus .however , for the class of states that we are considering , is necessarily greater than the initial shared entanglement ( as quantified by the entanglement of formation ) . thus by purely local actions and classical communications , a and b have been able to increase the shared entanglement for these classes of mixed states .this contradicts the definition of entanglement as a quantity that can not be increased by local actions and classical communications . _ mixed states with the diagonal decomposition not coincident with the entanglement minimizing decomposition can not be locally purified without necessarily destroying some shared entanglement_. thus the information about the entropy of these states is of the type ( i.e. acquiring the information necessarily causes degradation of entanglement ) .a corollary which immediately follows ( though not directly relevant to the main point of the paper ) , is : _ the optimal purification of mixed states with the diagonal decomposition not coincident with the entanglement minimizing decomposition necessarily has a nonzero and thus necessarily requires the use of classical side channels_. now consider such states when their entropy , and thereby , exceeds bit .thus the total information needed to be acquired is greater than bits .however , purifiability ( nonzero ) implies that the total entanglement loss can be entirely concentrated into the entanglement loss of pairs , which is less than pairs .therefore , it is _ possible _ to gain the relevant bits of information by destroying the entanglement of less than pairs .now we really need an unproven extension of the above statement : _ even when _ bit and _ irrespective of the nature of the decomposition of _ , it is _ possible _ to gain the relevant amount of information by destroying the entanglement of less than pairs .though this may seem a large extrapolation , it seems highly _ plausible _ because one shared pair allows the extraction of up to two bits of information .thus we would expect that , no matter what the value of is , the increase in capacity on purification overrides the inevitable decrease in capacity due to degradation of shared entanglement .thus the capacity to do sdc with the two pure subensembles ( maximally entangled and fully disentangled ) after a complete and optimal purification , is expected to be greater than that with the initial mixed ( and uniformly entangled ) ensemble .we thus conjecture , with the help of eq.([cppd ] ) , the following bound on the capacity of doing sdc with any mixed entangled state this is an even stronger upper bound on the classical capacity than the bound conjectured in the previous section as the relative entropy of entanglement is necessarily greater than .
from eq.([bnded ] ) , it directly follows that ( our previous conjecture ) .note that in this section , this conjecture has been rigourously proved for all bell diagonal states , heuristically proved for all states with entropy bit ( i.e bit ) and shown to be highly plausible for other types of states .in this paper we have obtained bounds on the capacity of doing dense coding in terms of the different measures of entanglement stemming from purification procedures .the rigorously proved part of the results of this paper is that for sdc one always has on the other hand if we allow for a conjecture ( well supported by examples and heuristically justified in section.[pff ] ) , we have the stronger result what is the significance of bounds on the * c * for sdc when it can be readily calculated ?the importance can be realized when we invert the above equation and write thus by calculating ( or measuring ) the channel capacity for sdc , we can draw an inference about the range in which the entanglement of the shared states ( as quantified by ) lies .also one of the bounds , namely , continues to hold even when we are allowed to vary the a priori probability of the various signal states ( gdc ) .thus , our inequality allows one to impose a readily calculable ( from the expression for in ref. ) limit on the capacity for gdc , without having to optimize gdc over all possible a priori signal probabilities .the physical interpretation of the upper bounds becomes clear from the considerations given in section.[pff ] . during optimal and complete entanglement purification procedures onegenerally enhances the classical capacity much more due to added classical side channels than the inevitable decrease in capacity brought by the loss of entanglement .the information transferred through these classical side channels during purification helps to identify the entangled and disentangled subensembles .it is through this identification that they directly play the role of boosting the capacity . as after an optimal and complete entanglement purification procedure ,the capacity is , this post - purification capacity is an upper bound on the pre - purification capacity . as both the entanglement measures and are upper bounds on , all our upper bounds follow immediately .it is interesting to note that the upper and the lower bounds are trivial in complementary regimes . when , the upper bounds are trivial , but the lower bound ( ) is non trivial . on the other hand when , the upper bounds are non trivial while the lower bound is trivial .we believe that this paper imparts physical significance to the various measures of entanglement from the viewpoint of forming bounds on certain kinds of dense coding procedures .this direction of physical interpretation of the entanglement measures is very different from the standard interpretations which stem from entanglement dilution and distillation processes . in a sense, this paper _ links up the apparently disconnected notions of entanglement purification and dense coding_. 
the considerations of the previous section implies the following lower bound on the distillable entanglement : .this is important because as yet there is no explicit formula for for a general mixed entangled state .thus the readily calculable quantity ( in the case of sdc ) may offer a convinient lower bound on the distillable entanglement of a given mixed state .the interesting question to examine is what apart from entanglement is involved in determining the dense coding capacity .this may be some measure of distinguishability between the letter states ( hinted by the fact that the average mutual distinguishability function forms an upper bound on the classical capacity * c * ) .an aim of further research should be to work towards a complete formula for the capacity of dense coding . moreover , working with a similar motivation as this paper, one can also try to relate entanglement measures stemming from purification procedures to the other uses of shared entanglement such as teleportation and secret key distribution .we would like to thank vladimir buzek , daniel jonathan and peter knight for valuable discussions .this work is supported by the inlaks foundation , elsag - bailey , hewlett - packard , the european tmr networks erb 4061pl951412 and erbfmrxct96066 , the uk engineering and physical sciences research council , the european science foundation and the leverhume trust .99 for a review see : plenio , m. b. , and vedral , v. , 1998 , _ cont ._ , * 39 * , 431 .bennett , c. h. , and wiesner , s. , 1992 , _ phys ._ , * 69 * , 2881 .mattle , k. , weinfurter , h. , kwiat , p. g. , and zeilinger , a. , 1996 , _ phys ._ , * 76 * , 4656 .barenco a. , and ekert , a. k. , 1995 , _ j. mod . opt ._ , * 42 * , 1253 .hausladen , p. , jozsa , r. , schumacher , b. , westmoreland , m. , and wootters , w. k. , 1996 , _ phys . rev .a _ , * 54 * , 1869 .bose , s. , vedral , v. , and knight , p. l. , 1998 , _ phys .a _ , * 57 * , 822 .bennett , c. h. , divincenzo , d. p. , smolin , j. a. , and wootters , w. k. , 1996 , _ phys .a _ , * 54 * , 3824 .vedral , v. , plenio , m. b. , rippin , m. a. , and knight , p. l. , 1997 , _ phys ._ , * 78 * , 574. vedral , v. , plenio , m. b. , jacobs , k. , and knight , p. l. , 1997 , _ phys .a _ * 56 * , 4452 .vedral , v. , plenio , m. b. , 1998 , _ phys ._ , a * 57 * , 1619 .bennett , c. h. , brassard , g. , popescu , s. , schumacher , b. , smolin , j. a. , and wootters , w. k. , 1996 , _ phys ._ , * 76 * , 722 .deutsch , d. , ekert , a. , jozsa , r. , macchiavello , c. , popescu , s. , and sanpera , a. , 1996 , _ phys ._ , * 77 * , 2818 .bennett , c. h. , bernstein , h. j. , popescu , s. , and schumacher , b. , 1996 , _ phys .a _ , * 53 * , 2046 .kholevo , a. s. , 1973 , _ probl .peredachi inf _ , * 9 * , 177 [ kholevo , a. s. , 1973 , _ problems of information transmission _ , * 9 * , 177 ] ; the holevo bound was first conjectured by : gordon , j. p. , 1964, quantum electronics and coherent light , _ proceedings of the international school of physics `` enrico fermi , '' course xxxi _ , edited by p. a. miles( academic , new york ) , 156 - 181 ; levitin , l. b. , 1964 , information , complexity and control in quantum physics , edited by p. a. miles , ( academic , new york ) , 111 - 115 .ohya , m. , 1989 , _ rep . math ._ , * 27 * , 19 ; hiai , f. , and petz , d. , 1991 , _ comm ._ , * 143 * , 99 ; donald , m. j. , 1986 , _ comm . math ._ , * 105 * , 13 ; donald , m. j. , 1987 , _ math ._ , * 101 * , 363 .popescu , s. , and rohrlich , d. 
, 1997 , _ phys .a _ , * 56 * , r3319 .horodecki , r. , horodecki , p. , and horodecki , m. , 1995 , _ phys .a _ , * 200 * , 340 . rains , e. m. , entanglement purification via separable superoperators , 1997 , lanl e - print quant - ph/9707002 .wootters , w. k. , 1998 , _ phys ._ , * 80 * , 2245 .werner , r. f. , 1989 , _ phys .a _ , * 40 * , 4277 .horodecki , m. , horodecki , p. , and horodecki , r. , 1997 , _ phys ._ , * 78 * , 574 .bennett , c. h. , brassard , g. , crepeau , c. , jozsa , r. , peres , a. , and wootters , w. k. , 1993 , _ phys ._ , * 70 * , 1895 .ekert , a. k. , 1991 , _ phys ._ , * 67 * , 661 .

ideal dense coding protocols allow one to use prior maximal entanglement to send two bits of classical information by the physical transfer of a single encoded qubit . we investigate the case when the prior entanglement is not maximal and the initial state of the entangled pair of qubits being used for the dense coding is a mixed state . we find upper and lower bounds on the capability to do dense coding in terms of the various measures of entanglement . our results can also be reinterpreted as giving bounds on purification procedures in terms of dense coding capacities .
precise timing of astrophysical events is one of the fundamental tools of astronomy , and is an important component of essentially every area of study .there are two basic sources of uncertainty in timing : the astrophysical data characterizing the event , and the time stamp with which the event is referenced .unfortunately , since the accuracy of the time stamp is something that is often taken for granted , the improvements in data are sometimes not accompanied ( or are not uniformly accompanied ) by the requisite improvements in accuracy of the time stamp used .this situation can lead to confusion , or even spurious inferences .timing plays a particularly important role in the study of exoplanets .indeed , many of the ways in which exoplanets are discovered involve the detection of transient or time - variable phenomena , including the radial velocity , transit , microlensing , and astrometry techniques .furthermore , in some cases much can be learned about planetary systems from the precise timing of these phenomena . as examples ,the measurement of terrestrial parallax in microlensing allows one to infer the mass of the primary lens and so the planetary companion ( e.g. , * ? ? ?* ) , and one can constrain the eccentricity of transiting planets by comparing the times of primary transits and secondary eclipses ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?possibly the most promising application of timing in exoplanets , however , comes from transit timing variations ( ttvs ) . with an exquisitely periodic phenomena like transiting planets, we will be able to measure many effects using the departures from strict periodicity , such as the gravitational perturbations from additional planets , trojans , and moons , stellar quadrupoles , tidal deformations , general relativistic precession , orbital decay ( e.g. , * ? ? ?* ) , and proper motion .because of their great potential , ttvs have become the focus of many groups .the typical data - limited transit timing precisions of most observations are around 1 minute , with the best transit time precision yet achieved of a few seconds . however , as discussed above , accuracies of transit times are limited not only by the data themselves , but also by the time stamp used . in order to make these difficult measurements useful , it is critical that a time stamp be used that is considerably more accurate than the uncertainty due to the data themselves . furthermore , since thorough characterization of ttvs will require the use of all available data spanning many years from several groups , this time stamp must be stable in the long term , and all groups must clearly convey how it was calculated .unfortunately , we have discovered that , in the exoplanet community , the julian date ( jd ) and its geocentric ( gjd ) , heliocentric ( hjd ) , and barycentric ( bjd ) counterparts are currently being quoted in several different , and often unspecified , time standards . 
in addition , the site arrival time and its time standard is not quoted .this general lack of homogeneity and specificity leaves quoted time stamps ambiguous at the 1 minute level .more alarmingly , the most commonly - used time standard , the coordinated universal time ( utc ) , is discontinuous and drifts with the addition of each leap second roughly each year .the pulsar community has solved the problem of precise timing well beyond the level that is currently necessary for exoplanet studies , and we can benefit from the techniques they have developed over the past 40 years .in particular , their current state of the art program ( tempo2 ) models pulsar arrival times to 1 ns precision . this program is highly specialized and generally can not be applied outside of pulsar timing observations , but many of the effects they consider are relevant to optical observers in the exoplanet community . in this article , we summarize the effects one must consider in order to achieve timing accuracy of 1 well beyond the accuracy that will likely be required by the exoplanet community for the foreseeable future .section [ theory ] provides the background required to understand each of the effects that could change the arrival time of a photon .they are listed in order of decreasing magnitude , so latter subsections can be ignored for low - precision measurements .section [ practice ] discusses the practical limitations to achieving high - precision timing .we begin with the effects which may cause errors that are comparable to or exceed the bjd correction .these should be read and understood by everyone .we continue with remaining effects , in order of decreasing magnitude , which can be ignored for low - precision ( ms ) measurements .we conclude 3 by listing additional effects , the errors due to which are negligible ( ) .we begin [ sec : calculating ] by detailing the procedure one must follow in order to calculate the bjd , which is designed to be a useful reference for those already familiar with the concepts of precision timing . 
in the latter part of this section , we describe our particular idl and web - based implementation of this procedure .lastly , in the appendix , we discuss some of our specific findings about the time stamps currently in use and how these are calculated throughout the exoplanet community .while we focus on the effects of timing on the optical / infrared exoplanet community , timing precision of order 1 minute is necessary for many other areas , such as the study of rapidly rotating white dwarfs .this article should be equally applicable in such cases .the biggest source of confusion comes from the fact that time standards and reference frames are independent from one another , even though there are many overlapping concepts between the two .we will use the following terminology : `` reference frame '' will refer to the geometric location from which one could measure time different reference frames differ by the light - travel time between them ; `` time standard '' will refer to the way a particular clock ticks and its arbitrary zero point , as defined by international standards ; and `` time stamp '' is the combination of the two , and determines the timing accuracy of the event .the bjd , the time stamp we advocate , can be calculated using the equation : where jd is the julian date in coordinated universal time ; is the rmer delay , discussed in [ sec : roemer ] ; is the clock correction discussed in [ sec : clock ] ; is the shapiro delay discussed in [ sec : shapiro ] ; and is the einstein delay , discussed in [ sec : einstein ] .the order of these terms is such that they are of decreasing magnitude , so one need only keep the terms up to the precision required .the timing precision required by current exoplanet studies ( 1 s ) requires only the terms up to and including . because future solar system ephemerides may enable more precise calculations of the arrival time at the barycenter , or in order to allow others to check that the original conversion was done accurately enough for their purpose , the site arrival time ( e.g. , the jd ) should always be quoted in addition to the bjd . due to the finite speed of light ,as the earth travels in its orbit , light from an astrophysical object may arrive early or be delayed by as much as 8.3 minutes from the intrinsic time of the extraterrestrial event .this is called the rmer delay , , in honor of ole rmer s demonstration that the speed of light is finite .since most observers can not observe during daylight , a bias is introduced and in practice the delay ( as distinct from the early arrival time ) is only as much as 7 minutes , for a peak - to - peak variation of 15 minutes .figure [ fig : bjdvjd ] shows an example of this effect for a maximally affected object on the ecliptic . in order to show the observational bias , our example assumes the object is at 0 right ascension and 0 declination .this curve shifts in phase with ecliptic longitude and in amplitude with ecliptic latitude .we also place our observer at the earth s equator , but note that the asymmetry will be larger at different latitudes . 
[ figure fig : bjdvjd : the difference between the bjd and the uncorrected jd ( see text for definitions ) over the course of a year . we plot the correction for a maximally - affected object on the ecliptic for an observer at latitude of zero degrees . we exclude all points where the object has an airmass greater than three and the sun is higher than -12 in order to highlight observing biases . ] the solution to this problem is to calculate the time when a photon would have arrived at an inertial reference frame .this time delay is the dot product of the unit vector from the observer to the object , $\hat{n}$ , and the vector from the origin of the new reference frame to the observer , $\vec{r}$ : $$\delta_r = \frac{ \vec{r} \cdot \hat{n} }{ c } ,$$ where $c$ is the speed of light and $\hat{n}$ can be written in terms of its right ascension ( $\alpha$ ) and declination ( $\delta$ ) as $$\hat{n} = ( \cos\alpha \cos\delta \ , \ \sin\alpha \cos\delta \ , \ \sin\delta ) .$$ this equation is general as long as $\vec{r}$ , $\hat{n}$ , and ( $\alpha$ , $\delta$ ) are in the same coordinate system ( e.g. , earth mean equator j2000 ) and the object located at ( $\alpha$ , $\delta$ ) is infinitely far away . other forms of this equation in the literature assume that we have the angular coordinates of the new origin or that the earth and the new origin are in the same plane ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) , but we explain in [ sec : calculating ] why this form is most practical for calculating the delay .the hjd , which uses the sun as the origin of the new reference frame , is only accurate to 8 s because of the acceleration of the sun due primarily to jupiter and saturn ( fig .[ fig : bjdvhjd ] ) .it was popular when people first began considering this effect because it is relatively simple to calculate from tables without a computer , and remains popular because self - contained algorithms exist to approximate it without any external tables ( e.g. , * ? ? ? * ) .however , because the hjd is not useful when accuracies of better than 8 s are needed , most of the algorithms in use today use approximations that are only precise at the 1 s level , and it becomes impossible to back out the original jd from the hjd unless we know the exact algorithm used . [ figure fig : bjdvhjd : the difference between the bjd and the hjd for a maximally - affected object on the ecliptic . the primary periodicity is due to jupiter and the secondary periodicity is due to saturn . ] because of these problems , the hjd was formally deprecated by international astronomical union ( iau ) resolution a4 in 1991 , in favor of the bjd , a time referenced to the solar system barycenter ( ssb ) . the analogous correction to the rømer delay in our solar system can also be significant in the target system .we refer to this as . for example, for transiting planets with au , can be as large as 30 s. in general , the position of the planet during primary transit has become the unspoken standard reference frame for transiting planets , while the host star s photosphere is the unspoken standard for radial velocity ( rv ) planets . in theory , the timing would be much more stable in the target s barycentric reference frame , but the accuracy with which we can convert to this frame depends on the measurements of the system ( we return to this point just below , after a brief illustration of how the solar - system term is evaluated ) .
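for reference , the plane - wave rømer delay defined above can be evaluated in a few lines . the sketch below is our own illustration using astropy ( the original work describes idl and web - based tools instead ) ; it uses the geocenter rather than a specific observatory and astropy s built - in low - precision solar - system ephemeris , so it is only good to the few - tens - of - milliseconds level discussed later :

```python
import numpy as np
from astropy import units as u
from astropy.time import Time
from astropy.coordinates import get_body_barycentric

C_LIGHT = 299792458.0   # m/s

def romer_delay(jd_utc, ra_deg, dec_deg):
    """plane-wave barycentric roemer delay in seconds for a distant target."""
    t = Time(jd_utc, format='jd', scale='utc')
    # barycentric position of the earth's center, in meters (icrs axes)
    r = get_body_barycentric('earth', t).xyz.to_value(u.m)
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    # unit vector toward the target from its right ascension and declination
    n = np.array([np.cos(ra) * np.cos(dec),
                  np.sin(ra) * np.cos(dec),
                  np.sin(dec)])
    return float(np.dot(r, n)) / C_LIGHT

# an object near the ecliptic: the correction swings by roughly +/- 500 s
for jd in (2455000.0, 2455090.0, 2455180.0):
    print(jd, romer_delay(jd, ra_deg=0.0, dec_deg=0.0), 's')
```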
since different observers may use different values as measurements improve , quoting the jd in the frame of the target s barycenter may obfuscate the long term reliability of timing .therefore , we argue it is better to quote julian date in the ssb reference frame , and correct for only when comparing observations at different phases in the planet s orbit .this correction is not necessary for ttvs of the primary transit , since the planet is always in the same phase .nevertheless , we should explicitly state the object s reference frame to avoid any potential ambiguity , particularly when comparing any combination of primary transits , secondary transits , rvs , and another primary transit of a different planet in the same system , when it may not be obvious which origin is being used . for rv measurements , which are taken at many different phases ,the effect is much smaller and can generally be ignored because the star s orbit around the barycenter is small . for a typical hot jupiter , ( i.e. , a jupiter mass planet in a 3 day orbit around a solar mass star ) ,the maximum time difference in the rv signal ( for an edge - on orbit ) is 20 ms , which would change the measured rv by m s .while planets farther out will cause a larger timing offset , the difference in the measured rv is even smaller . to be clear, the jd can be specified in many time standards , and while the iau has made no explicit statement regarding the allowed time standards of the gjd , hjd , or bjd , their meaning in any given time standard is unambiguous .unfortunately , they have been specified in many standards , usually implicitly. however , the particular time standard used affects how useful the time stamp is as an absolute reference . we must be careful not to directly compare bjds or hjds in different time standards , as each has different offsets , periodic terms , and/or rates , which can introduce systematic errors of over 1 minute .for this reason , it is critical that any stated bjd or hjd also specify the time standard used when one - minute accuracies are important , and the uncertainty of a time that is quoted without a standard should be assumed to be at least 1 minute .first , it may be useful to summarize the relevant standards of time : universal time , ut1 : : defined by the mean solar day , and so drifts forward and backward with the speeding and slowing of the earth s rotation .generally , it slows due to the tidal braking of the moon , though changes in the earth s moment of inertia and complex tidal interactions make its exact behavior unpredictable .it is rarely used directly in astronomy as a time reference , but we mention it for context .international atomic time , tai : : based on an average of atomic clocks all corrected to run at the rate at the geoid at 0 k , with 1 s equal to `` 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom , '' as defined by resolution 1 of the thirteenth meeting of the confrence gnrale des poids et mesures ( cgpm ) in 1967 .this definition is based on the duration of the ephemeris time second , which was previously defined as 1/31,556,925.9747 of the tropical year for 1900 january 0 at 12 hours ephemeris time by resolution 9 of the eleventh cgpm in 1960 .tai is the fundamental basis for many other time standards , and is the default time standard of the sloan digital sky survey .coordinated universal time , utc : : runs at the same rate as tai , except that it is not 
allowed to differ from ut1 by more than 0.9 s. every 6 months , at the end of 31 december and 30 june , the international earth rotation and reference systems service ( iers ) may elect to add ( or subtract ) a leap second to utc in order to keep it within 0.9 s of ut1 .utc is therefore discontinuous and drifts relative to tai with the addition of each leap second , which occur roughly once per year . as of january 2009 , the current number of leap seconds , , is 34 .the full table of leap seconds is available online and is typically updated several months in advance of when an additional leap second is to be added .utc is the current international standard for broadcasting time . as a result ,when a modern , network - connected computer s clock is synchronized to a network time protocol ( ntp ) server , it will be in utc .thus , this is the system of time most familiar to astronomers and non - astronomers alike ( modulo time zones and daylight savings time ) .universal time , ut : : an imprecise term , and could mean ut1 , utc , or any of several other variations .in general , such imprecise language should be avoided , as the potential ambiguity is up to 1 s. in the context of a time stamp , it is likely utc , but some people may intentionally use ut to imply 1 s accuracy . while explicitness is preferred ( i.e. , utc 1 s ) , any time stamp quoted in ut should be assumed to be uncertain at the 1 s level unless the time standard has been independently verified .terrestrial time , tt(tai ) : : a simple offset from tai of 32.184 s released in real time from atomic clocks and never altered .this offset is to maintain continuity between it and its predecessor , the ephemeris time ( et ) .terrestrial time , tt(bipm ) : : a more precise version of tt(tai ) . the international bureau of weights and measures ( bipm )reanalyzes tt(tai ) and computes a more precise scale to be used for the most demanding timing applications .the current difference between tt(tai ) and tt(bipm ) is , and must be interpolated from a table maintained by the bipm and published online with a 1 month delay .terrestrial time , tt : : sometimes called terrestrial dynamical time ( tdt ) , can refer to either tt(tai ) or tt(bipm ) . from this point on , we will not make the distinction , but when accuracies of better than 30 are required , tt(bipm ) must be used .barycentric dynamical time , tdb : : corrects tt for the einstein delay to the geocenter , , which is the delay due to time dilation and gravitational redshift from the motions of the sun and other bodies in the solar system .the conversion from tt to tdb can not be written analytically , but is usually expressed as a high - order series approximation . the difference is a predominantly a periodic correction with a peak - to - peak amplitude of 3.4 ms and a period of 1 yr .tdb was slightly modified by iau resolution b3 in 2006 , converging on the same definition as the jpl ephemeris time , t , also called coordinate time ( ct ) in the jpl ephemerides of solar system objects .barycentric coordinate time , tcb : : physically and mathematically equivalent to the tdb as defined in 2006 , and differs only by an offset and rate of about 0.5 s yr due primarily to time dilation in the sun s gravitational potential .tdb and tcb were roughly equal at 1977 jan 1.0 tai , and now differ by about 16 s. caution must be always be exercised , however , as these definitions are subject to change at any point , though usually with a few year s notice . 
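the offsets among these time standards are easy to inspect with off - the - shelf tools . the short sketch below ( using astropy purely as an illustration ; it is not part of the original work ) prints the utc - tai - tt - tdb offsets for a mid-2009 epoch and shows the 1 s jump in tai - utc across the leap second added at the end of 2008 december 31 :

```python
from astropy.time import Time

day = 86400.0   # seconds per day

t = Time('2009-06-15 08:00:00', scale='utc')
print('tai - utc:', round((t.tai.mjd - t.utc.mjd) * day, 3), 's')  # 34 leap seconds
print('tt  - tai:', round((t.tt.mjd - t.tai.mjd) * day, 3), 's')   # fixed 32.184 s
print('tdb - tt :', (t.tdb.mjd - t.tt.mjd) * day, 's')             # periodic, ~ms level

# the leap second at the end of 2008 december 31 appears as a 1 s jump
for date in ('2008-12-31 12:00:00', '2009-01-01 12:00:00'):
    t = Time(date, scale='utc')
    print(date, 'tai - utc =', round((t.tai.mjd - t.utc.mjd) * day, 3), 's')
```

any time stamp built on utc , such as the bjd discussed below , inherits exactly this jump .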
assuming the time is measured according to current definition of utc , then the clock correction , from utc to tdb , can be written as the sum of the corrections from utc to tai , tai to tt , and tt to tdb : of course , if one wishes to express the bjd in another time standard ( or start with something other than utc ) , the clock correction would change accordingly . however , not every time standard is well - suited to precise , astrophysical time stamps , and the use of any time standard other than tdb should be viewed simply as an adequate approximation to tdb .most readily available programs that calculate the time stamp assume that the user has already applied the utc - to - tt part of this clock correction , which is often not true .we feel this assumption has contributed the widespread confusion regarding time stamps .as this last point is our primary motivation for writing this article , we elaborate here on the effects of time standards on the reliability of time stamps . for the sake of simplicity, we only discuss the effects of time standards on the bjd .each of these effects also applies to the hjd , though the improvement in accuracy of the time stamp is negligible compared to the accuracy of the hjd reference frame for all but the utc time standard .the least preferred , though most commonly used , time standard for the bjd is utc ( bjd ) , and is equivalent to ignoring altogether . because utc is discontinuous and drifts with the addition of each leap second ,comparing two bjd time stamps could result in spurious differences if any leap seconds have been introduced between observations .therefore , 1 s timing accuracies can not be achieved using the bjd over a span that straddles the addition of one or more leap seconds ( roughly 1 yr ) .figure [ fig : tdbvutc ] shows the difference between the bjd time stamp with the uniform bjd time stamp ( described below ) from 1961 jan 1 , when utc was defined ( though its definition has evolved over the years ) , to 2010 december 31 , the furthest future date for which the value of utc can be accurately predicted at this writing . and the bjd .it shows the discontinuities and slow drift in bjd due to the addition of leap seconds . without correcting for these , relative timing between two reported values of bjd only be trusted over short time scales.,width=312 ] bjd in tt ( bjd ) , which is equivalent to ignoring the ( tdb - tt ) term in equation ( [ eq : utc2tdb ] ) , corrects for the discontinuity and drift introduced by leap seconds and is appropriate for timing accurate to 3.4 ms .bjd in tdb ( bjd ) is usually the best time stamp to use in practice , as it further corrects the bjd for all known effects on the motions , and therefore rates , of our atomic clocks . while bjd is not perfect , any more accurate time stamp is unique for each target .bjd in tcb ( bjd ) corrects for the gravitational potential , primarily from the sun , which causes the clock to run slower than it otherwise would .however , if one is concerned about effects of this magnitude , the analogous correction of putting it in the gravitational potential of each observed object is also required .since these two rates are small ( ) , and opposite in sign , we believe it is best to ignore bjd except perhaps as an intermediate step in calculating the target - specific frame . 
technically , the use of tcb was recommended by iau resolution b1.3 in 2000 .however , because of the greater practicality of using tdb ( see [ sec : calculating ] ) , and the drifting difference between tcb and tdb and tt , we believe its use will only lead to confusion without any foreseeable benefit to the exoplanet community .the shapiro delay , , is a general relativistic effect in which light passing near a massive object is delayed . for an object at an angle from the center of the sun ,the shapiro delay is this can be as large as 0.1 ms for observations at the limb of the sun , but for objects more than 30 from the sun , the correction is less than 20 .there is also the analogous correction , , for the target system .similar to , depends on the measurements of the target , which may be refined over time .therefore , the time should generally be quoted without , but include it when comparing times where this could be significant . as we discussed in [ sec : clock ] , relativity dictates that the motion of the observer influences the rate at which the observed clock ticks .the use of tdb corrects for an observer moving with the geocenter , but in reality we observe from the surface of the earth or from a satellite , for which there is an additional term to : here , is the location of the observer with respect to the geocenter , and is the velocity of the geocenter .again , there is an analogous correction , , for the target system , which should be ignored when quoting the time but included when comparing times if necessary .of course , the accuracy of the output time stamp is only as good as our assumptions and the inputs needed to specify the time standard and reference frame .here we discuss their effects on the accuracy of the time stamps .the first four subsections , through [ sec : planewave ] , have no reasonable upper bound . in each case , the accuracy of the inputs must be evaluated depending on the accuracy of time stamp required .the later three subsections through [ sec : double ] are organized in decreasing magnitude , all of which can be ignored for accuracies no better than 21.3 ms . finally , [ sec : negligible ] discusses effects of less than 1 that we have ignored .an error of 1 in the position of the target amounts to a timing error of as much as 0.28 s in the bjd ( fig .[ fig : bjdra ] ) .such error would be common if the coordinates of the field center are used instead of the specific object s coordinates . in particular , if doing a survey , one may wish to assign the same bjd to all objects in a given frame .however , with a 10 offset , which is possible with some wide - field transit surveys , the error can be as large as 200 s for objects at the edge of the field .an error of 0.25 will yield 1 ms timing offsets , and 0.25 mas accuracy is necessary for 1 timing .calculated for an object at r.a .= 0 and an object at r.a .= 0 0 4 observed at the same time .this difference is as large as 200 s for a 10 offset.,width=312 ] the accuracy of a typical computer clock depends on its intrinsic stability , the computer workload , its operating system , and the reliability of the network connection .older computers with a parallel port ccd interface may produce unreliable timing because the clock may slow or stop completely during ccd readout .without any special effort , a modern windows machine with a network connection is accurate to seconds , and with third - party software like dimension 4 , we have found it to be stable to 0.1 s. 
an ntp - synchronized linux machine is typically accurate to ms . of course , the stability of the clock only sets a lower limit on the absolute accuracy of the time recorded in the fits image header .ntp synchronization attempts to measure and compensate for network latency , but the accuracy of time stamps also depends on the particular software package taking the image and the hardware it uses , which is difficult to calibrate . unless independently verified , the time recorded in image headers should not be trusted to better than 0.25 s. however , various solutions exist for higher precision timing , such as gps - triggered shutters . in particular , it is worth emphasizing the 1 s error in the _ hubble space telescope _ ( hst ) clock and potential 6.5 s error in the kepler clock , described in more detail in the appendix , both of which have already achieved data - limited transit timing precisions of that order ( e.g. , * ? ? ?* ; * ? ? ?when calculating the time of exposure of an image , we typically use the time at midexposure . however , the precise time of exposure that should be used is the flux - weighted mean time of exposure .the magnitude of this error depends on the intrinsic stability of source , the stability of the atmosphere , and the exposure time . in the diabolical case of a cloud completely covering the object during one half of the exposure, the error could be as large as half the exposure time . for a typical hot jupiter that dims by 1% over the course of a 15 minute ingress ,the error introduced into the time stamp during ingress or egress by using the mid exposure time is 0.25% of the exposure time 150 ms for a 1 minute exposure .near the peak of some high - magnification microlensing events ( i.e. , * ? ? ?* ) , the flux may double in as little as 6 minutes . during such instances , using the mid exposure time will result in a time stamp error of as much as 1/4 of the exposure time 15 s for a 1 minute exposure .equation ( [ eq : roemer ] ) assumes the object is infinitely far away , and therefore the incoming wavefronts are plane waves .in reality , the wavefronts are spherical , which introduces a distance - dependent , systematic error .the maximum error introduced by the plane - wave approximation is 1000 s for the moon , 100 s for the main asteroid belt , 5 s for the kuiper belt , 1 ms at the distance of proxima centauri , and 150 ns at the distance to the galactic center . the fully precise equation , assuming spherical wavefronts , is where is the distance from the observer to the target . in such instances ,the distances must be derived from precise ephemerides for both the target and observer .although this form is generally applicable , it is not generally practical because at large distances , double precision floating point arithmetic can not reliably recover the small difference between ( the distance from the barycenter to the target ) and . to solve this problem , the pulsar community ( e.g. , * ? ? ?* ) uses the two - term taylor expansion of equation ( [ eq : exactroemer ] ) about : one may recognize the first term as plane wave approximation ( eq . [ [ eq : roemer ] ] ) . in practice , the accuracy of this two - term approximation exceeds the accuracy of the `` exact '' calculation using double precision at a distance of 10,000 au ( 0.05 pc ) . 
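the plane-wave, exact, and two-term forms of the geometric delay discussed here can be compared side by side; the sketch below uses our own notation (barycentric observer vector r, unit vector n toward the target, barycentric target distance d) and placeholder geometry rather than the paper's equations verbatim.

....
# sketch comparing the plane-wave roemer delay, the exact spherical-wave form,
# and its two-term expansion. vectors are barycentric, units are meters and
# seconds. the direct difference d - |d*n - r| suffers catastrophic
# cancellation in double precision once d >> |r|, which is what motivates the
# two-term expansion for distant targets.
import numpy as np

C = 2.998e8
AU = 1.496e11

def roemer_plane(r, n):
    return np.dot(r, n) / C

def roemer_exact(r, n, d):
    return (d - np.linalg.norm(d * n - r)) / C

def roemer_two_term(r, n, d):
    rn = np.dot(r, n)
    return (rn - (np.dot(r, r) - rn**2) / (2.0 * d)) / C

n = np.array([1.0, 0.0, 0.0])
r = AU * np.array([0.6, 0.7, 0.2])            # arbitrary observer position
for d in [30 * AU, 1e4 * AU, 2.7e5 * AU]:      # kuiper belt, 0.05 pc, ~proxima
    print(d / AU, roemer_plane(r, n), roemer_exact(r, n, d),
          roemer_two_term(r, n, d))
....

as discussed below, the expansion should not be used inside the solar system, where the exact form is both accurate and numerically safe.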
at the distance of proxima centauri ,the accuracy of this approximation is at worst 1 ns .however , this taylor expansion is divergent when .it should never be used for objects inside 1 au and may still be inadequate for other objects inside the solar system .it has a maximum error of 1 day for the moon , 20 s for the main asteroid belt , and 40 ms for the kuiper belt . in these cases , using the exact formula ( eq . [ [ eq : exactroemer ] ] ) may be easiest .therefore , we recommend that for precise calculations of any solar system body , equation ( [ eq : exactroemer ] ) should be used . for better than 1 ms timing of any object outside the solar system , equation ( [ eq : roemercorr ] ) should be used .most readily available time stamp calculators use the position of the geocenter , rather than the location of the observer on the surface of the earth . neglecting the light - travel time from the surface of the earth to the center introduces a 21.3 ms amplitude variation with a period of 1 sidereal day . in practice, most observers can only observe their targets at night , creating a systematic bias of between 8 ms and 21.3 ms ( fig .[ fig : geocenter ] ) .calculated at the geocenter and at the precise location of the observer on the surface of the earth . while geometrically , this effect will oscillate between ms with a period of 1 sidereal day , we exclude points when the sun is above -12 and object is at , which introduces a large observational bias.,width=312 ] usually , when people calculate the bjd , they neglect , and input jd for algorithms designed to take jd .this effectively uses the positions of the earth , sun , and planets offset in time by 32.184 +n s to calculate the correction . when the bjd is calculated in this manner , we denote it as bjd . the correct way to calculate thebjd would be to first calculate the bjd , then subtract the correction is a poor approximation to bjd and should not be used . ] .figure [ fig : utcvutc ] shows , for a maximally affected object on the ecliptic , an example of the difference between the bjd and the fully correct bjd .fortunately , this amounts to at most a 13.4 ms difference ( though growing with the utc - tt difference ) , which is below the precision of most clocks and the geocentric correction that is usually ignored .therefore , to an accuracy of ms , one can safely say bjd bjd , making it easy to convert currently published values of bjd to the superior bjd . and the commonly calculated bjd using the positions of the earth , planets , and sun delayed by 32.184 + ns. 
it shows that the difference can safely be ignored for ms precision , and therefore the approximate bjd or bjd can be recovered from currently published bjd simply by adding 32.184 + n s.,width=312 ] representing jd as a double precision floating - point number limits the accuracy to about 1 ms , and any operation done on the full jds will be even less accurate .many programs require the use of a reduced or modified julian date , and/or can return the jd to bjd offset in seconds , but care must be taken at every step of the way never to store the full jd as a double precision number if 1 ms precision is required .the shapiro delay occurs for other bodies as well , but observations at the limb of jupiter only delay light by 200 ns .typical modern , commercial gps units use the world geodetic system ( wgs84 ) , which is referenced to the international terrestrial reference system ( itrs ) with an error of about 15 m , which amounts to a 50 ns error in the time stamp .the index of refraction of the atmosphere is not exactly 1 and changes with its composition , temperature , and pressure , which changes the speed of light .however , the largest reasonable deviation due to this effect is only tens of ns .the pulsar community must specify a frequency - dependent dispersion measure delay . at radio wavelengths ( 21 cm ), the delay can be as much as 1 s , but the dispersion delay contributes at less than 1 shortward of m .the most practical way to precisely calculate the bjd time stamp is using jpl s de405 ephemeris .it contains the position of thousands of bodies in the solar system , including the sun , planets , spacecraft , moons , asteroids , and comets .it is oriented to the international celestial reference frame ( icrf ) , which is consistent with the fk5 system at j2000.0 within the 50 mas error of fk5 , and has its origin at the ssb with its axes fixed with respect to extragalactic objects .therefore , it is recommended to use the 3-space cartesian coordinates retrieved from the jpl de405 ephemeris directly with the j2000 object coordinates in equation ( [ eq : roemer ] ) .the following is an outline of the steps required to properly calculate the bjd . 1 .calculate the midexposure time in jd .most fits image headers give date - obs in utc at the beginning of exposure .if high precision is required ( s ) , read the caveats about clock precision in [ sec : clockacc ] carefully ; depending on the sky conditions or variability of the object , one may need to account for the flux - weighted mean time of exposure .2 . convert the midexposure time to jd by applying ( eq . [ [ eq : utc2tdb ] ] ) . for times accurate to 3.4 ms , one can use the simpler jd and calculate bjd ( using the positions of planets delayed by the tt - tdb offset ) . the difference between bjd and bjd no more than 200 ns , which is well below the precision of the bjd .if better than 30 precision is required , the tt(bipm ) - tt(tai ) offset must be applied .3 . retrieve the jpl ephemeris of the observing station for the times spanning the observing window .jpl s horizons system is designed for this . 
to return inputs for use with equation ( [ eq : roemer ] ) and j2000 targetcoordinates : select `` vector table '' , the ssb as the coordinate origin , and `` earth mean equator and equinox of reference epoch '' for the reference plane .this will return the cartesian coordinates of the observing station with respect to the ssb in the j2000 earth mean equator reference frame at steps as small as 1 minute in ct , which is the same as tdb .if the observing station is on earth and better than 20 ms timing is required , another ephemeris must be generated ( from horizons ) for the observer s position with respect to the non - rotating geocenter , and added to the geocentric positions .we note that the precise conversion from latitude , longitude , and elevation to the cartesian coordinates with respect to the non - rotating geocenter is not trivial , and requires tables of measured precession and nutation of the earth .interpolate the positions of the observing station to each midexposure time jd .input the interpolated x , y , and z positions of the observing station , and the target s j2000 earth mean equator coordinates into equation ( [ eq : roemer ] ) . depending on the distance to the target and the precision required , equation ( [ eq : exactroemer ] ) ( and the target s ephemeris ) or equation ( [ eq : roemercorr ] ) may be required. one must be careful to use sufficiently accurate target coordinates . 7 .if greater than 0.1 ms precision is required , apply the shapiro correction ( equation [ [ eq : shapiro ] ] ) .an ephemeris of the sun is required , which can be generated from horizons .if greater than 1 precision is required , apply the additional einstein correction for the observing station s position with respect to the geocenter .the geocentric velocity is required , which can also be given by horizons .our idl code implementing this procedure is available online .it requires the jd at midexposure and target coordinates ( , ) in j2000 as inputs .we outline its procedure here .more explicit details , as well as the calling procedure and dependencies , are commented inside the code .we compute using craig markwardt s tai_utc program to read the leap second table , and his tdb2tdt program to compute the tt - tdb correction , which uses a 791-term fairhead and bretagnon analytical approximation to the full numerical integration , with an error of 23 ns .our code will automatically update its leap second table the first time it runs after every january 1 or july 1 , but this requires a periodic internet connection and the use of the wget program .it will terminate on failure to update , but this protection can be bypassed for those that elect to ( or have to ) update their table by hand . by default , we ignore the tt(bipm ) - tt(tai ) correction , which would require a constant internet connection , would not apply to data acquired in the previous month , and is likely negligible for most applications .however , our code can optionally correct for it if an up - to - date file is supplied . 
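the leap-second bookkeeping just described can be illustrated with a small lookup that refuses to run when its table may be out of date; the two sample entries and the expiry policy below are assumptions for illustration and do not reproduce the actual behavior or file format of the tai_utc routine.

....
# minimal leap-second lookup with a staleness guard, in the spirit of the
# automatic table update described above. the table below is a truncated
# example (real tables are longer) and the expiry rule is an assumed policy.
import bisect

# (jd_utc at which the offset takes effect, tai - utc in seconds)
LEAP_TABLE = [(2453736.5, 33.0),   # 2006 jan 1
              (2454832.5, 34.0)]   # 2009 jan 1
TABLE_VALID_UNTIL = 2455378.5      # assumed expiry of this copy (2010 jul 1)

def tai_minus_utc(jd_utc):
    if jd_utc > TABLE_VALID_UNTIL:
        raise RuntimeError("leap-second table may be stale; update it first")
    idx = bisect.bisect_right([jd for jd, _ in LEAP_TABLE], jd_utc) - 1
    if idx < 0:
        raise ValueError("date precedes the first table entry")
    return LEAP_TABLE[idx][1]

print(tai_minus_utc(2455000.5))    # 34.0
....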
to read and interpolate the ephemeris from jpl , we use craig markwardt s routines jplephread and jplephinterp for the earth , sun , and other planets .if the observing station is space - borne , the smaller ephemeris used with those programs does not include satellites , so we use an expect script to automate a telnet session to the horizons system and automatically retrieve the ephemeris , which we quadratically interpolate to the desired times using idl s interpol .the accuracy of this interpolation depends on how quickly the object s position is changing and the step size of the ephemeris .horizons can only return data points per query , so the smallest step size ( 1 minute ) limits the calculation to a range of 60 days . for the geocenter ,a 100 minute step size is sufficient for 60 ns accuracy , but , for example , a 2 minute step size is required for 1 accuracy for the _ hst _ ( though it is still limited by its clock accuracy ) .we have found that a 10 minute step size is adequate for 1 ms timing for most objects and allows a range of nearly 2 yr .if the observer is on the earth , and the coordinates ( latitude , longitude , and elevation ) are given , we correct for the additional delay . if no observer - specific information is given , we assume the observer is at the geocenter , and the result will be biased by ms ( fig . [fig : geocenter ] ) . if the target s ephemeris can be returned by horizons and its unique nameis given , we use our expect script to generate its ephemeris too , and calculate the exact ( eq .[ [ eq : exactroemer ] ] ) .if not , and instead the distance is given , we use the two - term approximation to the spherical wave solution ( eq . [ [ eq : roemercorr ] ] ) .otherwise , we use the plane wave approximation ( eq . [ [ eq : roemer ] ] ) .lastly , we include the shapiro correction and the additional einstein correction due to the position of the observer with respect to the geocenter , either from the surface of the earth ( if given the coordinates ) , or the spacecraft . in the geocentric case , our code agrees with barycen to 200 ns ( peak to peak ) and the authors of barycen report that their code agrees with fxbary to 1 .the ephemeris we generate for a location on the surface of the earth agrees with horizons to 20 nano - lt - s , and the geocentric bjds we calculate from horizons ephemeris agree with the bjds we calculate using craig markwardt s routines within 10 ns .the near - exact agreements between these methods is not surprising , and do not necessarily indicate that they are accurate to better than 1 .our code was inspired by barycen and both rely on craig markwardt s routines ( the difference comes from the fact that we index the jpl ephemeris with jd instead of jd ) , and all methods use jpl s de405 ephemeris .the primary advantage of our code is that it includes the jd to jd correction ( but can optionally ignore it ) .the choice of starting with jd is a departure from what is typically done with such time stamp calculators , but we feel this is a far more robust starting point . 
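as a further independent cross-check of such pipelines, the same utc-to-tdb conversion and barycentric geometric delay can be reproduced with the python astropy package, which also builds on a jpl ephemeris; the site and target values below are arbitrary placeholders, and this route covers only the clock conversion and the geometric term, so the smaller relativistic refinements discussed above must be checked separately if microsecond accuracy is required.

....
# independent cross-check of bjd_tdb using the astropy package rather than the
# idl code described in the text. the observatory coordinates and the target
# position are placeholder values; light_travel_time supplies the barycentric
# (geometric) delay only.
from astropy.time import Time
from astropy.coordinates import SkyCoord, EarthLocation
import astropy.units as u

site = EarthLocation.from_geodetic(lon=-111.6 * u.deg, lat=31.96 * u.deg,
                                   height=2096 * u.m)
target = SkyCoord(ra=229.99 * u.deg, dec=36.63 * u.deg)

t = Time(2455400.123456, format='jd', scale='utc', location=site)
ltt = t.light_travel_time(target, kind='barycentric')   # roemer delay
bjd_tdb = t.tdb + ltt
# collapsing to a single double with .jd limits the printout to tens of
# microseconds; keep the two-part jd1/jd2 representation for microsecond work.
print(bjd_tdb.jd, bjd_tdb.jd1, bjd_tdb.jd2)
....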
the current confusion has shown that many assume jd as the starting point , which is likely due to a lack of explicitness in the programs and/or unfamiliarity with various time standards .our hope is that people are unlikely to make the opposite mistake ( assume the input should be jd instead of jd ) since our code is very explicit and calculating the jd is almost always a trivial calculation from the date - obs fits header keyword .additionally , our code can easily correct for the observer s position on the earth or from a spacecraft , and can include the spherical wave correction . in order to schedule observations ,even 10 minute precision is generally good enough , and one can approximate bjd jd ; for more demanding observing schedules , we provide software to iteratively calculate the reverse correction . along with the idl source code, we provide a web - based interface to our codes , though not every feature is enabled .specifically , it is limited to 1 ms precision , can only do one target at a time , only does the plane wave approximation , and is limited to 10,000 jds at a time .those with applications for which these features are too limited should download our source code and run it locally .timing of transient events is a powerful tool for characterizing many astronomical phenomena . in the field of exoplanets in particular ,the search for variations in the times of primary transits and secondary eclipses , or transit timing variations , is one of the most promising new techniques for studying planetary systems . the accuracy with which transit times , and indeed any transient phenomenon , can be measured is limited not only by the data themselves , but by the time stamp to which the transit time is referenced .as the quality of transit timing data crosses the threshold of 1 minute precision , the precise time standard and reference frame in which event times are quoted becomes important .achieving uniform and accurate time stamps with accuracies of better than 1 minute that can reliably be compared to one another requires extraordinary care in both our techniques and our terminology .we have found that the time standards adopted by various groups that measure transit times can differ by as much as a minute , and are typically left unspecified . as these ambiguities can be significant compared to the timing precisions that are quoted, they may therefore lead to spurious detections of transit timing variations or biased eccentricity measurements .here we have summarized the effects one must consider in order to achieve timing precision of 1 .we argue that the bjd is nearly the ideal time stamp , being as reliable as any time stamp can be without being unique to each target system . on the other hand , bjd and the hjd in any form should be avoided whenever possible .most importantly , we emphasize that the time standard should always be explicitly stated .any time stamp that is quoted without a time standard should be assumed to be uncertain to at least 1 minute . unless the time standards used in programs or algorithms have been independently confirmed, one should avoid using ones that do not precisely specify the input and output time standard .in addition , the arrival time at the observing site along with its time standard ( e.g. 
, jd ) should also be specified .this will remove any ambiguity in the time stamp , allow others to apply improved corrections should more precise ephemerides become available in the future , and allow others to check that original conversion was done accurately enough for their purpose .finally , we have written an idl program for general use that facilitates the use of bjd to an accuracy of 1 , provided that the inputs are sufficiently precise , and we provide a web - based interface to its most useful features .we would like to thank craig markwardt for his fundamental routines that make ours possible , the help desks at the various space telescopes and at iers for answering our questions , the anonymous referee , steve allen , richard pogge , joseph harrington , roberto assef , andrew becker , mercedes lopez - morales , christopher campo , drake deming , ryan hardy , heather knutson , eric agol , and joshua winn for useful discussions , and wayne landsman and jrn wilms for managing the idl astronomy libraries .we looked in detail at several readily available tools for the bjd / hjd calculation , and have been in contact with many people in the exoplanet community and the help desks for several major space telescopes .we summarize our findings here to demonstrate how easily errors of up to 1 minute can be introduced and to stress the importance of specifying the time precisely .we caution the reader not to trust our general findings for specific cases , but always to confirm what has been done in each case where 1 minute timing accuracy is required but the time standard has not been specified explicitly .ground - based observers have typically used one of the following methods to calculate the bjd. however , most fits image headers give the date - obs and time - obs keywords in utc .we have found that most people , when starting with jd end up quoting hjd or bjd .jpl s horizons ephemeris calculator , which is used by many to calculate the bjds from space telescopes , and can be used to calculate ground - based bjds , returns the time in jd= t= jd the ephemeris type is `` vector table '' .any conversion that uses a horizons ephemeris in jd but indexes it with jd , as had been done by several people we spoke with , will calculate bjd , which can be offset from the true bjd by up to 13.4 ms ( as shown in fig .[ fig : utcvutc ] ) , and offset from the uniform bjd by more than 1 minute ( as shown in fig .[ fig : tdbvutc ] ) .iraf s setjd calculates the hjd , but calls for ut , which is likely to be interpreted as jd . in this case, it would calculate the drifting quantity hjd .if tt were used instead , it would calculate the hjd , accurate to s. the idl routines helio_jd ( for hjd ) , from the idl astronomy library curated by wayne landsman , and barycen ( for bjd ) , from the institut fr astronomie und astrophysik idl library , maintained by jrn wilms , both call for the gjd , which , we remind the reader , can be specified in any time standard .often , this is interpreted as jd , in which case they would calculate hjd or bjd , respectively . 
if tt were used , they would calculate the hjd , accurate to s , or bjd , accurate to the geocentric correction ( 21.3 ms ) .nasa s high energy astrophysics science archive research center ( heasarc ) created the tools fxbary and later the improved version faxbary , both of which call axbary to calculate the bjd .their documentation is precise and correct , but quite long and may be difficult for the uninitiated to follow .therefore , it would not be surprising for users of axbary to input either utc or tt , in which case they could generate either the bjd to the accuracy of the leap seconds or the bjd to the accuracy of the geocentric correction ( 21.3 ms ) . currently , common google results turn up various applets , spreadsheets , programs , or algorithms to calculate the hjd that explicitly call for jd or jd as an input . unless explicitly mentioned otherwise , it is usually safe to assume the time standard used as input will be the time standard used throughout their calculation .thus , these algorithms and applets will very likely calculate hjd .however , they are perfectly capable of calculating hjd if given jd as an input .epoxi has the mid - exposure time bjd in the header for the intended pointing under the fits header keyword kpkssbjt .this can be used directly , as long as one is careful about the intended target , so it would be very surprising if a bjd from epoxi was not bjd .we recalculated the bjd using the horizons ephemeris as described in [ sec : calculating ] and an example fits header given to us by the epoxi help desk .in their example fits header , they pointed at the moon , which is not infinitely far away , so we must use equation [ eq : exactroemer ] for . with this method , we agree with the kpkssbjt header value to ms , the limit of the precision of the keyword .we also redid the bjd calculation of hat - p-4b as described in the _ report on the calibration of epoxi spacecraft timing and reduction to barycentric julian date _ of august 2009 by hewagama et al .. we find agreement with the quoted kpkssbjt fits header keyword to 47 ms . while this is much better than the 0.41 s difference calculated by hewagama et al .( a difference they attribute to `` cumulative rounding limits '' in their method ) , we believe our method to be far more precise .however , we could not obtain access to the original headers and were unable to determine the source of the discrepancy .given the very good agreement with the calculation of the moon above , our best guess is that the target coordinates used by the epoxi pipeline differed from the published values for hat - p-4b .the 47 ms difference could be explained by a discrepancy in r.a ., a discrepancy in declination , or some combination thereof .chandra stores their date - obs keyword in tt .their more precise tstart and tstop keywords are expressed in seconds after 1998 jan 1 , 00:00:00 tt .this departs from what is typically done , which may lead to confusion , but it makes the conversion to a uniform time stamp much more straightforward and less likely to drift by the leap seconds .they provide extensive directions online to calculate the bjd using axbary , so it is likely that anyone using chandra who quotes a bjd is using bjd .the fits headers of _ hst _ state that their date - obs and time - obs keywords are ut . 
we contacted the _ hst _help desk for clarification , since ut is ambiguous .help desk response stated that their clock reports utc accurate to ms , but `` due to variabilities and quantization in the particular science instruments operations , the actual time light begins falling on the detector is not known to better than about second , 50% ( rough estimate ) . ''it is possible for the hst engineering team to calibrate these variations , but they have limited resources and have no plans to do so .it is thought that this error is some combination of random and systematic errors , but the precise breakdown is unknown .this potential s systematic error may have important implications for the reliability of the transit times quoted with hst observations , most importantly , the 3 s error of the transit time of hd189733b .hst does not calculate the hjd or bjd at any point , leaving the calculation up to each individual observer .our experience with ground based observers suggests that most people will end up quoting an hjd or bjd .the kepler data release notes 2 describe how to calculate the bjd from utc , but do not include the correction to tdb .they mention the horizons ephemeris , but neglect to mention that its output time is in ct , not utc ; thus it appears they calculate bjd , though we were unable to confirm this .in addition , the time stamp uncertainty may be much larger than typical , so it is worth quoting from the kepler data release notes 5 ( released 2010 june 4 ) : the advice of the dawg [ data analysis working group ] is not to consider as scientifically significant relative timing variations less than the read time ( 0.5 s ) or absolute timing accuracy better than one frame time ( 6.5 s ) until such time as the stability and accuracy of time stamps can be documented to near the theoretical limit .the _ spitzer _ pipeline calculates the hjd for the intended pointing ( presumably the target ) at the end of the exposure , subtracts the full exposure time , and records the result in the header as hjd .depending on the exposure time , this will produce roughly a 10 ms effect similar to that shown in figure [ fig : utcvutc ] , and depending on how close the intended pointing was to the object of interest , may produce a s effect similar to figure [ fig : bjdra ] . however , this effect is negligible compared to both the s accuracy of the hjd ( fig .[ fig : bjdvhjd ] ) and the number of leap seconds that may have elapsed between observations ( fig .[ fig : tdbvutc ] ) .one typically quotes the hjd at the midexposure time . since _spitzer _ quotes the hjd at the beginning of the exposure , using the unmodified _ spitzer _ hjds would produce a systematic offset of half the exposure time , though experienced observers correct for this . also contributing to this confusion, the fits header keyword utcs_obs is incorrectly documented . while the documentation states that it is seconds after j2000 et , it is actually seconds after january 1st , 2000 12:00 utc + - 32 .therefore , trusting the documentation as is will unwittingly lead to a difference of s. | as the quality and quantity of astrophysical data continue to improve , the precision with which certain astrophysical events can be timed becomes limited not by the data themselves , but by the manner , standard , and uniformity with which time itself is referenced . 
while some areas of astronomy (most notably pulsar studies) have required absolute time stamps with precisions of considerably better than 1 minute for many decades, recently new areas have crossed into this regime. in particular, in the exoplanet community, we have found that the (typically unspecified) time standards adopted by various groups can differ by as much as a minute. left uncorrected, this ambiguity may be mistaken for transit timing variations and bias eccentricity measurements. we argue that, since the commonly-used julian date, as well as its heliocentric and barycentric counterparts, can be specified in several time standards, it is imperative that their time standards always be reported when accuracies of 1 minute are required. we summarize the rationale behind our recommendation to quote the site arrival time, in addition to using bjd, the barycentric julian date in the barycentric dynamical time standard, for any astrophysical event. the bjd is the most practical absolute time stamp for extra-terrestrial phenomena, and is ultimately limited by the properties of the target system. we compile a general summary of factors that must be considered in order to achieve timing precisions ranging from 15 minutes to 1 μs. finally, we provide software tools that, in principle, allow one to calculate bjd to a precision of 1 μs for any target from anywhere on earth or from any spacecraft.
modern _ real - time systems _ have incurred tremendous challenges to verification engineers .the reason is that a model process running in a modern real - time system can be built with support from many server processes in the environment .moreover , the model may also have to respond to requests from several user processes .the fulfillment of a computation relies not only on the functional correctness of the model , but also on the reactions from the servers and the clients .for example , a company may submit a task of dna sequencing to a server .the server then develops a computing budget and decomposes the task into several subtasks ( e.g. , snp finding , alignments ) .then the server may relegate the subtasks to several other servers .the decompositions of subtasks may then go on and on .if the task is to be completed , not only the server for the root task needs to function correctly , but also all the servers for the subtasks have to fulfill their assignments .thus , to verify the function of the root server , it is only reasonable and practical to assume that all the other supporting servers work correctly . in many industrial projects, the specification can be given in the concept of state - transition diagrams ( or tables ) .in such a context , _ simulation - checking _ is an appropriate framework for verifying that a model conforms to the behavior of a specification .intuitively , the specification simulates the model if every timed step of the model can be matched by the specification at the same time . [ exmp.intro ] in figure [ fig.ms_ne ] , we have the state - transition diagrams of two _ timed automatas _ ( _ ta _ ) .( 0,0 ) # 1#2#3#4#5 ( 5195,3861)(664,-3289 ) ( 976,-361)(0,0)[lb ] ( 1501 , 89)(0,0)[lb ] ( 1501,239)(0,0)[lb ] ( 1501,389)(0,0)[lb ] ( 2926,-811)(0,0)[lb ] ( 5176,-361)(0,0)[lb ] ( 4726,164)(0,0)[lb ] ( 4726,314)(0,0)[lb ] ( 2926,-961)(0,0)[lb ] ( 901,-1261)(0,0)[lb ] ( 2926,-2761)(0,0)[lb ] ( 1501,-1861)(0,0)[lb ] ( 1501,-1711)(0,0)[lb ] ( 2926,-2911)(0,0)[lb ] ( 901,-3211)(0,0)[lb ] ( 976,-2311)(0,0)[lb ] ( 3451,-436)(0,0)[lb ] ( 3376,-2161)(0,0)[lb ] ( 3451,-2386)(0,0)[lb ] ( 3376,-211)(0,0)[lb ] the one in figure [ fig.ms_ne](a ) is for a model while the one in figure [ fig.ms_ne](b ) is for a specification .we use ovals for the _ control locations _ of the tas while arcs for the transition rules . in each oval , we label the invariance condition that must be satisfied in the location . for example , in location , can stay for at most 20 time units . by each transition rule , we stack its synchronization event , triggering condition ( guard ) , and actions . for convenience ,tautology triggering conditions and nil actions are omitted .an event starting with a ` ? ' represents a _ receiving event _ while one with a ` ! ' represents a _ sending event_. for example , for the transition from location to , must send out an event request , be in a state satisfying , and reset clock to zero .the specification in figure [ fig.ms_ne](b ) does not simulate the model in figure [ fig.ms_ne](a ) since event !end of can not be matched by any event of .moreover can neither receive a ?serve event 15 time units after issuing a ! request event while can. 
however , the concept of simulation described in the last paragraph can be too restrictive in practice .developers of a project usually can not make too much assumption on the environment .the deadline constraints and can be too restrictive and hurt the extensibility of the model in the future .another approach in this regard is using _ fairness assumptions _for example , for the model and specification processes in figure [ fig.ms_ne ] , we may want to check whether simulates under the fairness assumption that the environment functions reasonably .such an assumption can be captured with the fairness assumption that _ there will always be infinitely many occurrences of event serve_. under this assumption , the in figure [ fig.ms_ne](b ) actually simulates the in figure [ fig.ms_ne](a ) . in this work, we propose the _ simulation _ with fairness assumptions for the processes in a dense - time setting .in such a setting , the model and the specification are both_ generalized bch timed automatas _ ( _ gbta _ ) with communication channels and dense - time behaviors .we want to check whether the specification gbta can simulate the model gbta with multiple fairness assumptions . following the approach of , we allow for the requirement and analysis of both strong and weak fairness assumptions .a _ strong fairness _assumption intuitively means something will happen infinite many times . a _ weak fairness _assumption means something will hold true eventually forever . for convenience ,we use two consecutive sets of formulas for fairness assumptions , the former for the strong fairness assumptions while the latter for the weak fairness assumptions .[exmp.intro.fstate ] for the system in figure [ fig.ms_ne ] , we may have the following fairness assumptions . the fairness assumptions in the above say that a valid computation of the system must satisfy the following two conditions ._ for the strong fairness assumption of _ : for every , there exists a with such that in the computation at time , the model process is in location .this in fact says that the model must enter location infinitely many times along any valid computation ._ for the weak fairness assumption of _ : there exists a such that for every with , the model process is in either locations or .this in fact says that the model will stabilize in locations and .the two types of fairness assumption complement with each other and could be handy in making reasonable assumptions .furthermore , we also allow for both state formulas and event formulas in the description of fairness assumptions .state formulas are boolean combinations of atomic statements of location names and state variables . for convenience ,we use index for the model and index for the specification .event formulas are then constructed with a precondition , a event name with a process index , and a post - condition in sequence .[exmp.intro.fevent ] for the system in figure [ fig.ms_ne ] , we may write the following strong event fairness assumption . the event specification of means there is an event serve received by process 1 .the precondition for the event is while the post - condition is .the strong fairness assumption says that there should be infinite many events serve received by process 1 in location . in general , an event specification can be either a receiving or a sending event .such event formulas can be useful in making succinct specifications . 
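to make the two kinds of assumption concrete, the sketch below checks them on a finite prefix of a run, representing a trace as a list of (state, event) pairs and predicates as python functions; approximating "infinitely often" and "eventually forever" on a truncated trace in this way only illustrates the definitions, it is not the symbolic algorithm developed later, and the helper names and the toy trace are ours.

....
# rough finite-trace approximation of strong and weak fairness. a trace is a
# list of (state, event) pairs, where event is the label of the transition
# taken out of that state (none for the last entry). illustrative only.
def holds_strong_state(trace, pred, suffix=0):
    # "infinitely often": pred keeps recurring in the inspected suffix
    return any(pred(s) for s, _ in trace[suffix:])

def holds_weak_state(trace, pred, suffix=0):
    # "eventually forever": pred holds on every state of the inspected suffix
    return all(pred(s) for s, _ in trace[suffix:])

def holds_strong_event(trace, pre, event, post, suffix=0):
    # precondition / event / postcondition triples, as in the event predicates
    return any(pre(trace[k][0]) and trace[k][1] == event and post(trace[k + 1][0])
               for k in range(suffix, len(trace) - 1))

# toy trace loosely modeled on the serve example in the text
trace = [({"loc": "idle_1"}, "!request"), ({"loc": "wait_1"}, "?serve"),
         ({"loc": "idle_1"}, None)]
print(holds_strong_event(trace, lambda s: s["loc"] == "wait_1", "?serve",
                         lambda s: s["loc"] == "idle_1"))
print(holds_weak_state(trace, lambda s: s["loc"] in ("idle_1", "wait_1")))
....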
without such event formulas, we may have to use auxiliary state variables to distinguish those states immediately before ( or after ) an event from others .such auxiliary variables usually unnecessarily exacerbate the state space explosion problem .one goal of our work is to develop a simulation - checking algorithm based on symbolic model - checking technology for dense - time systems . to achieve this , we focus on a special class of simulations with the restriction of at most one fairness assumption for the specification . for convenience, we call this class the _ usf _ ( _ unit - specification - fairness _ )_ simulations_. then we propose a symbolic algorithm for this special class of simulations . to our knowledge , this is the first such algorithm for gbtas . also unlike the fair simulation checking algorithm based on ranking function in the literature ,our algorithm is based on symbolic logic formulas manipulation , which has been proven useful in symbolic model checking .thus , our algorithm style can be interesting in itself .we also present a technique for the efficient simulation checking of concurrent systems by taking advantage of the common environment of a model and a specification . to apply the simulation checking algorithms mentioned in the above and in the literature , we need first construct a product automata of the environment and the model , in symbols .then we construct a product of and the specification , in symbols .then we check if simulates . as a result ,such algorithms incur duplicate recording of the state information of while manipulating representations for the simulation of by .moreover , different transitions in with the same observable events can also be matched in the simulation - checking .such matching is not only counter - intuitive in simulation against the same environment , but also incur explosion in the enumeration of matched transitions between and .our technique is embodied with the definition of a new simulation relation against a common environment .we have implemented this technique and experimented with benchmarks with and without fairness assumptions .we have the following presentation plan .section [ sec.relwork ] is for related work .section [ sec.prel ] reviews our system models .sections [ sec.simf ] presents our simulation for dense - time systems with fairness assumptions .section [ sec.usf.neg.char ] presents a characterization of the simulation when the specification is a bchi ta .section [ sec.simf.alg ] presents our simulation checking algorithm based on the characterization derived in section [ sec.usf.neg.char ] .section [ sec.sim.env ] presents the simulation against a common environment and techniques for performance verification in this context .sections [ sec.imp ] and [ sec.exp ] respectively report our implementation and experiment .section [ sec.conc ] is the conclusion .cerans showed that the bisimulation - checking problem of timed processes is decidable .tairan et al showed that the simulation - checking problem of dense - time automatas ( tas ) is in exptime .weise and lenzkes reported an algorithm based on zones for timed bisimulation checking .cassez et al presented an algorithm for the reachability games of tas with controllable and uncontrollable actions .henzinger et al presented an algorithm that computes the time - abstract simulation that does not preserve timed properties .nakata also discussed how to do symbolic bisimulation checking with integer - time labeled transition systems .beyer has implemented a 
refinement - checking algorithm for tas with integer - time semantics .lin and wang presented a sound proof system for the bisimulation equivalence of tas with dense - time semantics .aceto et al discussed how to construct such a modal logic formula that completely characterizes a ta .larsen presented a similar theoretical framework for bisimulation in an environment for untimed systems .however no implementation that takes advantage of the common environment information for verification performance has been reported .proposals for extending simulation with fair states have been discussed in .our simulation game of gbtas stems from henzinger et al s framework of fair simulation .techniques for simulation checking of gbas were also discussed in .we have the following notations . is the set of real numbers . is the set of non - negative reals . is the set of nonnegative integers .also ` iff ' is if and only if. " given a set of atomic propositions and a set of clocks , we use as the set of all boolean combinations of logic atoms of the forms and , where , , ` ' , and .an element in is called a _ state - predicate_. a ta is structured as a directed graph whose nodes are _ modes ( control locations ) _ and whose arcs are _transitions_. please see figure [ fig.ms_ne ] for examples .a ta must always satisfy its _invariance condition_. each transition is labeled with events , a _ triggering condition _ , and a set of clocks to be reset during the transitions .at any moment , a ta can stay in only one _mode_. if a ta executes a transition , then the triggering condition must be satisfied . in between transitions , all clocks in ata increase their readings at a uniform rate .a ta is a tuple . is a finite set of modes ( locations ) . is a finite set of propositions . is a finite set of clocks . is the initial condition . is the invariance condition for each mode . is the set of process transitions . is a finite set of events . is a mapping that defines the events at each transition . and respectively define the triggering condition and the clock set to reset of each transition . without loss of generality , we assume that for all with , is a contradiction .we also assume that there is a null transition that does nothing at any location .that is , the null transition transits from a location to the location itself .moreover , , , and . given a ta , for convenience , we let , , , , , , , , , and . also , for convenience , we let be the _ invariance predicate _ of . [ exmp.tas ] we have already seen examples of tas in figure [ fig.ms_ne ] . for the ta in figure [ fig.ms_ne](a ) ,the attributes are listed in table [ tab.ms_ne.a.attr ] . 
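the tuple definition above can also be written down directly as a small data structure; the sketch below encodes the model automaton of figure [fig.ms_ne](a) with its modes, single clock, and three labeled transitions as they can be read off the example, while the initial and invariance conditions are left out here.

....
# the model automaton of figure (a) as a plain data structure: modes, one
# clock x1, and three transitions with event, guard, and reset set. the
# initial and invariance conditions are omitted; everything else follows the
# attribute listing of the example.
TA_MODEL = {
    "modes": ["idle_1", "wait_1", "stop_1"],
    "clocks": ["x1"],
    "transitions": [
        {"from": "idle_1", "to": "wait_1", "event": "!request",
         "guard": "x1 > 5",  "reset": ["x1"]},
        {"from": "wait_1", "to": "idle_1", "event": "?serve",
         "guard": "true",    "reset": ["x1"]},
        {"from": "wait_1", "to": "stop_1", "event": "!end",
         "guard": "x1 > 10", "reset": []},
    ],
}
....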
\\ e_{{\cal m } } & = & \{(\mbox{\tt idle}_1,\mbox{\tt wait}_1 ) , ( \mbox{\tt wait}_1,\mbox{\tt idle}_1 ) , ( \mbox{\tt wait}_1,\mbox{\tt stop}_1)\ } \\ \sigma_{{\cal m } } & = & \{\mbox{\tt request},\mbox{\tt serve},\mbox{\tt end}\ } \\\epsilon_{{\cal m } } & = & [ ( \mbox{\tt idle}_1,\mbox{\tt wait}_1)\mapsto\{!\mbox{\tt request}\ } , ( \mbox{\tt wait}_1,\mbox{\tt idle}_1)\mapsto\{?\mbox{\tt serve}\ } , ( \mbox{\tt wait}_1,\mbox{\tt stop}_1)\mapsto\{!\mbox{\tt end}\ } ] \\\tau_{{\cal m } } & = & [ ( \mbox{\tt idle}_1,\mbox{\tt wait}_1)\mapsto x_1>5 , ( \mbox{\tt wait}_1,\mbox{\tt idle}_1)\mapsto { \mbox{\em true } } , ( \mbox{\tt wait}_1,\mbox{\tt stop}_1)\mapsto x_1>10 ] \\ \pi_{{\cal m } } & = & [ ( \mbox{\tt idle}_1,\mbox{\tt wait}_1)\mapsto \{x_1\ } , ( \mbox{\tt wait}_1,\mbox{\tt idle}_1)\mapsto \{x_1\ } , ( \mbox{\tt wait}_1,\mbox{\tt stop}_1)\mapsto \emptyset ] \\ \end{array} ] denotes a ( partial or total ) function with . a _ valuation _ of a set is a mapping from the set to another set .given an and a valuation of , we say _ satisfies _ , in symbols , iff is evaluated when the variables in are interpreted according to .suppose we are given a ta .a _ state _ of is a valuation of with the following constraints . for each , .there exists a such that and for all , .given a , if , we denote as .for each , .in addition , we require that .we let denote the set of states of .note that we define a state as a mapping instead of as a pair of control locations and a real mapping as in .this is for the convenience of presentation when latter we want to discuss the state - pairs in simulation relations . for any state and real number , is a state identical to except that for every clock , .also given a process transition , we use to denote the destination state from through the execution of .formally , if , then is a new state that is identical to except that the following constraints are true . and . for every clock , . for every clock , . given a and a transition , we write iff , , , and for each ] to denote such a with . a _ run _ of a ta is an infinite sequence of state - transition - time triples with the following restrictions .* non - zeno requirement : * is a non - decreasing and divergent real - number sequence .that is , and . for all , either or {10mm}{0.5pt}\!\!\!\longrightarrow } } } \nu_{k+1} ] , . for every event - predicate in , there are infinitely many s such that , , and . for every state - predicate ,there is a such that for every and ] such that .we let the maximum of such s .then it is clear that for every and ] , . thus it is not true that there are infinitely many s with a ] , .similarly , a play prefix of is called an _ -pprefix _ if for every and ] be the following formula . .standard procedures for constructing state - predicates of existentially quantified formulas can be found in .given a transition - pair with and , we let be the formula of state - pairs that may go to state - pairs in through the simultaneous execution of and respectively . specifically , is defined as follows .\left ( \begin{array}{lll } \eta & \wedge & \lambda_{{\cal m}}(q'_1)\\ & \wedge & \lambda_{{\cal s}}(q'_2 ) \end{array}\right ) \end{array}\right) ] to represent the set of states ( or state - pairs ) that satisfies . given an , a set of event weak fairness assumption , and a , we use to denote the set of state - pairs with the following restrictions .there is a -rprefix ,e , t+t_0) ] is in and satisfies . 
for every -pprefix with , , , and , for every event weak fairness assumption , if and , then \models\eta_4 ] , formulas of state - pairs , and a set of event weak fairness assumptions , \!]}{\bigcirc}^{e\psi}_t { [ \![}\eta_2{]\!]} ] as follows .there is no -pprefix with , , , , , and for every and , , and , it is not true that \models \neg\eta_4 ] with {8mm}{0.5pt}\!\!\!\longrightarrow}}]\models \exists{{\cal s}}(\eta_2)\wedge\exists{\box}\phi_{{\cal m}}\psi_{{\cal m}} ] . means the following .\!]}\\ \wedge & \exists{\box}\phi_{{\cal m}}\psi_{{\cal m}}\end{array}\right)\right ) ] iff .+ we have the following deduction .\!]}{\bigcirc}^\psi{[\![}\eta_2{]\ ! ] } } \\ \equiv & \mu\nu\in\bigcup_{e\in e_{{\cal m}},t\in{{{\mathbb r}^{\geq 0 } } } } \langle { { \cal m}}\rangle d_1{\bigcirc}^{e\psi}_t d_2 \\ \equiv & \bigvee_{e\in e_{{\cal m}},t\in{{{\mathbb r}^{\geq 0 } } } } \mu\nu\in\langle { { \cal m}}\rangle d_1{\bigcirc}^{e\psi}_t d_2 \end{array}\cal u ] . for convenience ,given two formulas for sets of state - pairs , we let here is the least fixpoint operator . specifies a smallest solution to equation .the procedure to construct formulas for such least fixpoints can be found in .[ lemma.until-star.form ] for every state - pairs and formulas for state - pairs , \!]}{\mbox{}}^\psi { [ \![}\eta_2{]\!]}\cal u ] is true. we can prove this by induction on the maximum number of timed transition steps of to reach state - pairs in \!]} ] .in the base case , and \!]} ] through state - pairs in \!]}\cal u ] .this means that in one timed transition step of and time units by , we end up in a state - pair such that within timed transition of steps through state - pairs in \!]} ] . according to the inductive hypothesis , we know that satisfies .together , this implies the following deduction . according to the definition of least fixpoint , the last step implies . by definition, this implies that .thus this direction of the lemma is proven by induction . we assume that there exist such that , , and for every , .we prove by induction on \cal u ] .the base case is that and .this implies that \!]}\cal u ] .thus the base case is proven .now we assume that the lemma in this direction is true for all ] to state - pairs in \!]} ] + ] , . . .then a run of can also be defined as a sequence with {10mm}{0.5pt}\!\!\!\longrightarrow}}(\mu_{k+1},\nu_{k+1}) ] these two statements rely on the following three statements . the validity of the above three then follows from statements , and in the above .thus we know that is indeed a play of .furthermore , the validity of statements and implies that indeed embeds .now we want to prove claim cl2 . for all assumptions in and , they are automatically satisfied since also embeds and satisfies . for a strong fairness assumption , we have the following two cases to analyze . _ is a state - predicate ._ we claim that along , for every , there exists an and a ] , .this is true since along , which implies that which in turn implies the claim . _ is an event - predicate ._ we claim that along , there exists a such that for all , if and , then .this is true since along , if and , then .this further implies that if and , then . in the end , this implies the claim .with the proof of claims cl1 and cl2 , thus we conclude that the lemma is proven . 
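the least-fixpoint constructions used above follow a standard iteration scheme; the sketch below shows that scheme for an until-style operator over an explicitly enumerated finite state space, whereas the algorithm in this paper manipulates symbolic state-pair predicates (clock zones) instead of enumerated sets.

....
# generic least-fixpoint sketch for an until-style operator
#   lfp Z . eta2 \/ (eta1 /\ pre(Z))
# over an explicit finite state space. the real algorithm works symbolically
# on state-pair predicates; this only shows the iteration scheme.
def until_lfp(eta1, eta2, pre):
    """eta1, eta2: state sets; pre: pre-image map on state sets."""
    z = set(eta2)
    while True:
        new = z | (set(eta1) & pre(z))
        if new == z:
            return z            # fixpoint reached
        z = new

# toy example on a 4-state chain 0 -> 1 -> 2 -> 3
succ = {0: {1}, 1: {2}, 2: {3}, 3: set()}
pre = lambda target: {s for s, ts in succ.items() if ts & target}
print(sorted(until_lfp(eta1={0, 1, 2}, eta2={3}, pre=pre)))   # [0, 1, 2, 3]
....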
according to lemma [ lemma.simfe.eq ], we can check the classic simulation in definition [ def.simf ] by checking the one in definition [ def.simfe ] .this can be helpful in enhancing the verification performance when the common environment between the model and the specification is non - trivial .lemma [ lemma.simfe.eq ] implies that we can use the following techniques to enhance the simulation algorithm against an environment .based on condition se1 of definition [ def.simfe ] , we significantly reduce the sizes of the spaces of state - pairs by disregarding state - pairs of the form with .since the number of different zones representing s can be exponential to the input size , the reduction can result in exponential speed - up . by mapping variables in in state - pairs , to those in , we actually only have to record one copy of values for each variables in .since the size of bdd - like diagrams is exponential to the number of variables , this technique can also significantly reduce the memory usage in representations with bdd - like diagrams . in evaluating the precondition of state - pairs, we need to enumerate all the transition pairs of the form with , , and .if we use the classic simulation , the enumeration is of size .but with the simulation against a common environment in definition [ def.simfe ] , the enumeration is of size .thus significant reduction in time and space complexity can also be achieved with definition [ def.simfe ] .we have implemented the techniques proposed in this manuscript in * red * 8 , a model / simulation - checker for ctas and parametric safety analysis for lhas based on crd ( clock - restriction diagram ) and hrd ( hybrid - restriction diagram ) technology .the state - pair spaces are explored in a symbolic on - the - fly style . to our knowledge , there is no other tool that supports fully automatic simulation checking with gbtas .we used parameterized networks of processes as our benchmarks . for a network of processes , we use integer through to index the processes .users supply two index lists , the first for the indices of the model processes and the second for indices of the specification processes .the process indices not in the two lists are treated as indices of the environment processes .for example , we may have a system of 10 processes .the following describes a simulation - checking task of process 1 ( the model ) by process 1 ( the specification ) . .... 1;2 ; .... here processes 3 through 10 are the environment processes . to support convenience in presenting fairness assumptions , we allow parameterized expressions .for example , in table [ tab.sim.reqs](a ) , we have a simulation requirement with parameterized strong fairness assumptions .\(a ) one simulation requirement + ' '' '' .... # ps-1 assume { strong event { execute@(#ps-1 ) } ; } ; # ps assume { strong true event { execute@(#ps ) } true ; } ; assume { |k:2 .. #ps-2 , strong true event { execute@(k ) } ; } .... ' '' '' \(b )another simulation requirement + ' '' '' .... # ps-1 assume { strong event { execute@(#ps-1 ) } ; } ; # ps assume { weak idle@(#ps ) ; } ; assume { |k:2 ..#ps-2 , strong true event { execute@(k ) } ; } .... 
' '' '' here ` # ps ` is a parameter for the number of processes .thus for a system of 10 processes , process 9 is the model , process 10 is the specification , while the others are the environment .the last assume statement is for the fairness assumption of the environment .the specification of event - predicates is in the following form .type [ ] a [ ] here type is either ` strong ' or ` weak . '[ ] and [ ] are respectively the optional precondition and the optional post - condition .we may also use quantified expressions to present several fairness assumptions together .for example , in the above , .... assume { |k:2 .. #ps-2 , strong true event { execute@(k ) } ; } .... presents the following strong fairness assumptions . ....strong true event { execute@(2 ) } strong true event { execute@(3 ) } ... ... strong true event { execute@(8 ) } ....to our knowledge , there is no other tool that supports fully automatic simulation checking with fairness assumptions for tas as ours .so we only experimented with our algorithms .we report two experiments .the first is for timed branching simulation against a common environment without fairness assumptions in subsection [ subsec.exp.tsim ] . especially , we report the performance enhancement of the simulation in definition [ def.simfe ] ( without fairness assumption ) over the simulation in definition [ def.simf ] .the second experiment is for simulation against a common environment with fairness assumptions in subsection [ subsec.exp.fsim ] . especially , we use liveness properties in the experiment .we used the following three parameterized benchmarks from the literature ._ fischer s timed mutual exclusion algorithm _ : the algorithm relies on a global lock and a local clock per process to control access to a critical section .two timing constants used are 10 and 19 ._ csma / cd_ : this is the ethernet bus arbitration protocol with collision - and - retry . the timing constants used are 26 , 52 , and 808 ._ timed consumer / producer_ : there is a buffer , some producers , and some consumers .the producers periodically write data to the buffer if it is empty .the consumers periodically wipe out data , if any , in the buffer .the timing constants used are 5 , 10 , 15 , and 20 . for each benchmark , we use one model process and one specification process .all the other processes are environment .also for each benchmark , two versions are used , one with a simulation and one without . for the versions with a simulation , and are identical . for the version without , and differ in only one process transition or invariance condition .for example , for the fischer s benchmark , the difference is that the triggering condition of a transition to the critical section of is mistaken .the performance data is reported in table [ tab.perf.tsim ] .[ cols="<,<,^ , > , > , > , > " , ] + for each benchmarks , there are a model process , a specification process , and environment processes . `n / a ' means not avaiable . "+ data collected on a pentium 4 1.7ghz with 380 mb memory running linux ; + s : seconds ; k : kilobytes of memory in data - structure ; m : megabytes of total memory + as can be seen from the performance data , our techniques show promise for the verification of fulfillment of liveness properties in concurrent computing .in this work , we investigate the simulation problem of tas with multiple strong and weak fairness assumptions . 
for the succinct presentation of fairness assumptions, we also allow for event fairness properties. we then present an algorithm for the usf-simulation of gbtas. the algorithm is based on symbolic model-checking and simulation-checking techniques and can be of interest by itself. we then propose a new simulation against a common environment between the model and the specification, and present efficiency techniques for this new simulation. our implementation and experiments show promise that our algorithm could be useful in practice in the future.

the work is partially supported by nsc, taiwan, roc under grant nsc 97-2221-e-002-129-my3. part of the work appears in the proceedings of formats 2007, lncs 4763, springer-verlag, and the proceedings of hscc 2009, lncs 5469, springer-verlag.

f. wang. symbolic parametric safety analysis of linear hybrid systems with bdd-like data-structures. 31(1):38-51, 2005. a preliminary version is in proceedings of 16th cav, 2004, lncs 3114, springer-verlag. f. wang. symbolic simulation checking of dense-time automata. in 5th formats (international conference on formal modelling and analysis of timed systems), lncs 4763, springer-verlag, october 2007. f. wang. time-progress evaluation for dense-time automata with concave path conditions. in atva (international symposium on automated technology for verification and analysis), lncs 5311, springer-verlag, 2008. f. wang, g.-d. huang, and f. yu. tctl inevitability analysis of dense-time systems: from theory to engineering. 32(7), 2006. a preliminary version of the work appears in the proceedings of 8th ciaa (conference on implementation and application of automata), july 2003, santa barbara, ca, usa; lncs 2759, springer-verlag.

we investigate the simulation problem of dense-time systems. a specification simulates a model if the specification can match every transition that the model can make at a time point. we also adapt the approach of emerson and lei and allow for multiple strong and weak fairness assumptions in checking the simulation relation. furthermore, we allow for fairness assumptions specified as either state-predicates or event-predicates. we focus on a subclass of the problem with at most one fairness assumption for the specification. we then present a simulation-checking algorithm for this subclass. we propose simulation of a model by a specification against a common environment. we present efficient techniques for such simulations to take the common environment into consideration. our experiment shows that such a consideration can dramatically improve the efficiency of checking simulation. we also report the performance of our algorithm in checking liveness properties with fairness assumptions. *keywords:* branching simulation, fairness, verification, büchi automata, concurrent computing, timed automata, algorithms, experiment
many complex systems display a very heterogeneous degree distribution characterized by a power law decay of the form .this form implies the absence of a characteristic scale hence the name of `` scale - free network '' ( sfn ) . among these networks ,a certain number are of a great interest to epidemiology and it is thus very important to understand the effect of their topology on the spreading dynamics of a disease .one of the most relevant results is that disease spreading does not show an endemic threshold in sfn when the population size is infinite and .this result means that a disease propagates very easily on a large sfn whatever the value of its transmission probability .in addition , recent studies showed that the presence of hubs in sfn not only facilitates the spread of a disease but also accelerates dramatically its outbreak .the long - tailed degree distribution of sfn is the signature of the presence of a non - negligible number of highly connected nodes .these hubs were already identified in the epidemiological literature as superspreaders .consequently , from a public health point of view , studying the spreading of epidemics on sfn is all the more appropriate .superspreading events affect the basic reproductive number widely used epidemiological parameter making its estimate from real - world data difficult . as a matter of fact, it seems that superspreading events appeared in the onset of the recent sars outbreak and could be crucial for the new emergent diseases and bioterrorist threats .their potential threat justifies detailed studies of the incidence of the degree distribution at the initial stage of epidemics .the variability plays an important role in the accuracy and the forecasting capabilities of numerical models and has thus to be quantified in order to assess the meaningfulness of simulations with respect to real outbreaks . using a numerical approach, we analyze the evolution of epidemics generated by different sets of initial parameters , both for sfn and homogeneous random networks ( rn ) .we use the barabsi - albert model ( ba ) for generating a sfn and the erds - renyi network ( er ) as a prototype for rn . concerning the epidemic modeling ,a simple and classical approach is to consider that individuals are only in two distinct states , infected ( i ) or susceptible ( s ) .there is initially a number of infected individuals and any infected node can pass the disease to his neighbors . the probability per unit time to transmit the disease the spreading rate is denoted by and once a susceptible node is infected it remains in this state . in more elaborated models ,an infected individual can change its state to another category , for example , coming back to susceptible ( sis ) , or going to immunized or dead ( sir ) .this s i approach ( si ) , in spite of its simplicity , is a good approximation at short times to more refined models such as the sis or sir models .the si model on both sfn and rn is thus well adapted to the characterization of the variability of the initial stages of epidemic outbreaks spreading in complex networks , which is the focus of this article .the outline of the paper is the following . in sectionii , we study the fluctuations of the prevalence and we identify different parameters controlling them .in particular , we highlight the effects due to different realizations of the network as well as different initial conditions .we also investigate the influence of the nodes degree on the prevalence variability . 
in section iii , we present results on the infection time and its variation with the degree and with the distance from the origin of infections .we also discuss the effect of the number of paths between two nodes on the infection time .finally , we discuss our results and conclude in section iv.=-1we analyze in this section the effect of the underlying network topology on the variability of outbreaks .it is indeed important to understand whether the local fluctuations of the structure of the network can have a large impact on the development of epidemics . in order to analyze this effect, we measure the variability of outbreaks as the relative variation of the prevalence ( density of infected individuals ) given by = \frac{\sqrt{\langle i(t)^2\rangle -\langle i(t ) \rangle^2}}{\langlei(t ) \rangle}.\ ] ] in order to evaluate this quantity we run simulations for different `` model sets '' : first , for a given number of outbreaks on a single network , second for a single outbreak on different networks , and finally several outbreaks on different networks .we show in fig .[ fig : cv_1vs1000 ] the curves ] ] on complex networks . since the initial prevalence is fixed and is the same for all instances , is initially equal to zero and can only increase . at very large times , almost all nodes are infected implying that .this argument implies the existence of a peak which as shown in fig .[ fig : cv_1vs1000]is located for ba networks at the beginning of the outbreak , with a maximum value larger than the one obtained for er networks . in order to characterize the relation between the variability peak and the network heterogeneity , we define as the time at which the maximum of ] [ ] ] for outbreaks starting from initial infected nodes with a given degree ( from up to ) .this figure shows that the variability peak decreases when is increased . in other words , when an outbreak begins from a highly connected node , the early stages of the spreading tend to be less variable .one might think that the number of paths available on a highly connected node leads to a higher overall variability , it is however not the case . as shown in the inset of fig .[ fig : cv_graine ] , the prevalence increases with the seed degree , which may explain the variability for different .indeed , when the seed is a hub , the number of infected becomes rapidly very large and thus leads to smaller relative variations of the prevalence .this result leads us to investigate more thoroughly the degree of infected nodes and analyze the differences between ba and er networks . in this section, we study in detail the degree properties of the infected nodes during the outbreak of the disease . [cols="<,^ " , ] the reason for this behavior lies in the difference of the numbers of shortest paths in these networks . indeed ,if we enumerate these paths , we observe that their numbers relatively differ between both ba and er topologies .we have computed the size and the number of shortest paths between a randomly selected node , i.e. a potential seed of infection and the rest of the network and we present in table [ tab : nb_sp ] the average number of shortest path at distance .results are computed over random selection of the potential seed in order to get an accurate picture of the network .the table exhibits a difference in the number of path for ( difference for , for , for ) which confirms the fact that on ba networks , nodes have more paths to go from one to another in a small number of hops . for two different configurations . 
in the first case infectionoccurs in one step and in the second case another path is added .the dotted curve represents the average time of infection for the first case , and the plain curve represents for the second case and is given by eq .( [ eq : mintdti ] ) .the result of a numerical simulation are shown by symbols ) . ]table [ tab : nb_sp ] describes the statistics of shortest paths but longer paths also contributes to the spreading of the disease .their role can be highlighted by studying the following simple cases . in the first casean infected node is in contact with a susceptible node . in the second case, there is an additional path from to going through a susceptible node ( see fig . [fig : avg_t ] ) . in the first `` direct '' case , the average time of infection of given by the addition of a longer path in the second case ( fig .[ fig : avg_t ] ) changes the behavior of and eq .( [ eq : td ] ) no longer holds for this case .in fact , the time of infection of the susceptible node is given by , \label{eq : mintdti}\ ] ] where is the time of a direct infection and of an indirect 2-steps infection process : .the statistics of can be easily computed and its first moment reads eq .( [ eq : avg_t ] ) predicts values always smaller than ( see fig .[ fig : avg_t ] ) .this result could appear as paradoxical since adding a _ longer _path actually _ reduces _ the average infection time .in fact , the probability that the disease is not transmitted on both paths is very small and the existence of another path cuts off large direct infection time and thus reduces the average infection time of . since ba networks have a clustering coefficient larger than er networks this result explains the small difference of infection times for seen in fig .[ fig : density_seq_dtopo_sfn_rn ] . from the seed .top panel : spreading on a ba network ; bottom panel : er network .both panels show , for every nodes of a single network , nodes , , and computed over outbreaks , , originating from exactly the same seed of degree . ] concerning the relationships between the relative dispersion of infection time and , their behavior on both topologies are reported on fig .[ fig : ell_cv ] .this figure shows that the nodes in both networks exhibit higher values of when they are closer to the seed , i.e. for . for larger distances, is practically constant in both cases .we have analyzed in detail the variability of a simple epidemic process on sfn .first , we have shown that different realizations of ba networks do not display significant statistical differences in outbreak variability .consequently , it is statistically reliable to consider a single realization of the network , provided it is large enough .we have also shown that the prevalence fluctuations are maximal during the time regime for which the diversity of the degrees of the infected node is the largest . in order to analyze in detail this variability , we examined the temporal degree pattern of infected nodes .in particular , we demonstrated the high variability of superspreaders prevalence .we found that for the hubs the infection time is usually small but with fluctuations which can be large .even if the hubs are good candidates for being chosen as surveillance stations given their short average infection time , they present non - negligible fluctuations which limit their reliability . 
in this respect ,the ideal detection stations should be nodes with the best trade - off between a short average infection time and a high reliability as given by small infection time fluctuations .the topological distance to the seed is also an important parameter in epidemic spreading pattern .nodes at a short distance from the seed are infected at small time in the high variability regime and thus have a large infection time variability .maybe more surprising is the importance of the number of paths not only the shortest one going from the seed to another node .the larger this number and the smaller the average infection time .this is an important conclusion for containment strategies since the reduction of epidemic channels will increase the delay of the infection arrival and will thus allow for a better preparation against the disease ( for example vaccination ). these results could be helpful in designing early detection and containment strategies in more involved models which go beyond topology and which include additional features such as passenger traffic in airlines or city populations .the authors thank a .- j .valleron for his support during this work .p.c . acknowledges financial support from a.c.i .systmes complexes en sciences humaines et sociales , and f.a . from frm( fondation pour la recherche mdicale ) .we also thank m. loecher and j. kadtke for sharing with us their manuscript prior to publication . | we study numerically the variability of the outbreak of diseases on complex networks . we use a si model to simulate the disease spreading at short times in homogeneous and in scale - free networks . in both cases , we study the effect of initial conditions on the epidemic dynamics and its variability . the results display a time regime during which the prevalence exhibits a large sensitivity to noise . we also investigate the dependence of the infection time of a node on its degree and its distance to the seed . in particular , we show that the infection time of hubs have non - negligible fluctuations which limit their reliability as early - detection stations . finally , we discuss the effect of the multiplicity of paths between two nodes on the infection time . in particular , we demonstrate that the existence of even long paths reduces the average infection time . these different results could be of use for the design of time - dependent containment strategies . |
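as a concrete companion to the si dynamics and the prevalence-variability measure studied above, here is a minimal monte carlo sketch: it simulates discrete-time si outbreaks on a barabasi-albert network and computes the relative variation of the prevalence across runs. the spreading probability, network size, number of runs, and the use of networkx are illustrative assumptions, not the parameters of the paper.

....
# Minimal SI spreading on a Barabasi-Albert network and the relative
# variation (coefficient of variation) of the prevalence over many
# outbreaks, i.e. std[i(t)] / mean[i(t)].
# All parameter values below are illustrative assumptions.
import random
import networkx as nx
import numpy as np

def si_outbreak(G, lam, t_max, seed_node):
    """Return the prevalence i(t) of one outbreak (discrete time steps)."""
    infected = {seed_node}
    prevalence = []
    n = G.number_of_nodes()
    for _ in range(t_max):
        new = set()
        for u in infected:
            for v in G.neighbors(u):
                if v not in infected and random.random() < lam:
                    new.add(v)
        infected |= new
        prevalence.append(len(infected) / n)
    return prevalence

def prevalence_cv(G, lam=0.05, t_max=60, runs=200):
    """Coefficient of variation of the prevalence across outbreaks."""
    seeds = [random.choice(list(G.nodes())) for _ in range(runs)]
    traj = np.array([si_outbreak(G, lam, t_max, s) for s in seeds])
    mean = traj.mean(axis=0)
    std = traj.std(axis=0)
    return std / np.maximum(mean, 1e-12)   # mean > 0 since the seed is infected

if __name__ == "__main__":
    G = nx.barabasi_albert_graph(2000, 3)   # scale-free test network
    cv = prevalence_cv(G)
    print("peak relative variation:", float(cv.max()), "at step", int(cv.argmax()))
....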
numerical integration of the einstein equations is the only way to investigate highly dynamical and nonlinear gravitational space - time .the detection of gravitational wave requires templates of waveform , among them mergers of compact objects are the most plausible astrophysical sources .numerical relativity has been developed with this purpose over decades . for neutron star ( ns ) binaries ,a number of scientific numerical simulations have been done so far , and we are now at the level of discussing the actual physics of the phenomena , including the effects of the equations of state , hydrodynamics , and general relativity by evolving various initial data .mergers of black holes ( bhs ) are also available after the breakthrough by pretorius in 2004 .pretorius s implementation had many novel features in his code ; among them he discretizes the four - dimensional einstein equations directly , which is not a conventional approach so far . however , after the announcements of successful binary bh mergers by campanelli et al . and baker et al . based on the standard 3 + 1 decomposition of the einstein equations , many groups began producing interesting results .the merger of ns - bh binary simulations has also been reported recently , e.g. .almost all the groups which apply the above conventional approach use the so - called bssn variables together with `` ''-type slicing conditions for the lapse function and `` -driver '' type slicing conditions for the shift function .bssn stands for baumgarte - shapiro and shibata - nakamura , the modified arnowitt - deser - misner formulation initially proposed by nakamura .( the details are described in [ subsec : bssn ] . )there have already been several efforts to explain why the combination of this recipe works from the point of view of the well - posedness of the partial differential equations ( e.g. ) .however , the question remains whether there exists an alternative evolution system that enables more long - term stable and accurate simulations .the search for a better set of equations for numerical integrations is called the formulation problem for numerical relativity , of which earlier stages are reviewed by one of the authors . in this article , we report our numerical tests of modified versions of the bssn system , the _ adjusted bssn systems _ , proposed by yoneda and shinkai .the idea of their modifications is to add constraints to the evolution equations like lagrange multipliers and to construct a robust evolution system which evolves to the constraint surface as the attractor .their proposals are based on the eigenvalue analysis of the constraint propagation equations ( the evolution equations of the constraints ) on the perturbed metric . for the adm formulation ,they explain why the standard adm does not work for long - term simulations by showing the existence of the constraint violating mode in perturbed schwarzschild space - time . for the bssn formulation, they analyzed the eigenvalues of the constraint propagation equations only on flat space - time , but one of their proposed adjustments was immediately tested by yo et al . for the numerical evolution of kerr - schild space - time and confirmed to work as expected .( the details are described in [ subsec : adjbssn ] . )our numerical examples are taken from the proposed problems for testing the formulations of the mexico numerical relativity workshop 2001 participants , which are sometimes called the apples - with - apples test . 
to concentrate the comparisons on the formulation problem ,the templated problems are settled so as not to require technical complications ; e.g. , periodic boundary conditions are used and the slicing conditions do not require solving elliptical equations .several groups already reported their code tests using these apples tests ( e.g. ) , and we are also able to compare our results with theirs .this article is organized as follows .we describe the bssn equations and the _ adjusted _ bssn equations in sec .[ subsec : bssn ] and [ subsec : adjbssn ] .we give our three numerical test problems in sec .[ sec : setup ] .comments on our coding stuff are in sec .[ sec : code ] .[ sec : num ] is devoted to showing numerical results for each testbeds , and we summarize the results in sec . [ sec : summary ] .we start by presenting the standard bssn formulation , where we follow the notations of , which are widely used among numerical relativists .the idea of the bssn formulation is to introduce auxiliary variables to those of the arnowitt - deser - misner ( adm ) formulation for obtaining longer stable numerical simulations .the basic variables of the bssn formulation are , which are defined by ,\label{eq : ext - trldess}\\ \tilde{\gamma}^i & = & \tilde{\gamma}^{jk}{\tilde{\gamma}^i}_{jk } , \label{eq : conf - con}\end{aligned}\ ] ] where are the intrinsic and extrinsic adm 3-metric .the conformal factor is introduced so as to set ] , * grid : with , where with * time step : * boundary conditions : periodic boundary condition in direction and planar symmetry in and directions * gauge conditions : the 1d simulation is carried out for a crossing - time or until the code crashes , where one crossing - time is defined by the length of the simulation domain . the second test is to check the ability of handling a travelling gravitational wave .the initial 3-metric and extrinsic curvature are given by a diagonal perturbation with component where for a linearized plane wave traveling in the -direction .here is the linear size of the propagation domain and is the amplitude of the wave. the non - trivial components of extrinsic curvature are then following , we chose the following parameters : * linear wave parameters : and * simulation domain : ] , * grid : with , where with * time step : * boundary conditions : periodic boundary condition in -direction and plane symmetry in - and -directions * gauge conditions : the harmonic slicing ( [ eq : gwave - gauge ] ) and the 1d simulation is carried out for a crossing - time or until the code crashes .we have developed a new numerical code based on the adjusted bssn systems .the variables are , and the evolution equations are ( [ eq : ev - conf])-([eq : ev - conf - con ] ) with / without adjustment ( [ b1-adj ] ) , ( [ b2a - adj ] ) , and/or ( [ b2b - adj ] ) .the time - integration is under the free - evolution scheme , and we monitor five constraints , ( [ eq : cal h])-([eq : cons - cals ] ) , to check the accuracy and stability of the evolutions . our time - integration scheme is the three - step iterative crank - nicholson method with centered finite difference in space .this scheme should have second - order convergence both in space and time , and we checked its convergence in all the testbeds .as we have already mentioned in the end of ii a , we do not apply the trace - out technique of , ( [ redef_traceout ] ) in our code .we also remark on our treatment of the conformal connection variable . 
as was pointed out in , it is better not to use in all the evolution equations .we surmise this is because the amplification of the error due to the discrepancy of the definition ( [ eq : conf - con ] ) , i.e. , the accumulations of the violations of -constraint ( [ eq : cons - calg ] ) .therefore , we used the evolved only for the terms in ( [ eq : ev - conf - con ] ) and ( [ eq : conm - ricci ] ) , and not for other terms , so as not to implicitly apply the -constraint in time evolutions .it is crucial that our code can produce accurate results , because the adjustment methods are based on the assumption that the code represents the bssn system ( [ eq : ev - conf])-([eq : ev - conf - con ] ) accurately .we verified our code by comparing our numerical data with analytic solutions from the gauge - wave and gowdy - wave testbeds in sec .[ sec : setup ] .the actual procedures are as follows : 1 . evolve only one component , e.g. , numerically , and express all the other components with those of the analytic solution .in this situation , the origin of the error is from the finite differencing of the analytic solution in the spatial direction and from that of the numerically evolved component ( ) both in spatial and time directions .we checked the code by monitoring the difference between the numerically evolved component ( ) and its analytic expression .this procedure was applied to all the components one by one .2 . evolve only several components , e.g. , and , numerically , and express the other components by the analytic solution .the error can be checked by a procedure similar to the one above .3 . evolve all the components numerically , and check the error with the analytic solution .we repeated these procedure three times by switching the propagation directions ( , , and -directions ) of gauge - wave and gowdy - wave solutions .we also applied these procedures in a 2d test , and checked the off - diagonal component .it should be emphasized that the adjustment effect has two meanings , improvement of stability and of accuracy . even if a simulation is stable, it does not imply that the result is accurate .we judge the stability of the evolution by monitoring the l2 norm of each constraint , where is the total number of grid points , while we judge the accuracy by the difference of the metric components from the exact solution , adjusted systems , ( [ b1-adj])-([b2b - adj ] ) , require to specify the parameter . from the analytical prediction in we know the signature of , but not for its magnitude . by definition of the adjustment terms in eq .( [ b1-adj])-([b2b - adj ] ) , applying small should produce the close results with those of the plain system . on the contrary, the large system will violate the courant - friedrich - lewy condition .hence , there exists a suitable region in the adjustment parameters . at this moment, we have to chose experimentally , by observing the life - time of simulations .the value of , used in our demonstrations , is one of the choices of which the adjustment works effectively in all the resolutions .as the first test , we show the plain bssn evolution ( that is , no adjustments ) in fig .[ 1d - gauge - plain ] for the gauge - wave test . 
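as an aside, the stability and accuracy diagnostics defined above amount to only a few lines of code; a minimal sketch, assuming the constraint and metric fields are stored as numpy arrays on a 1-d grid and using a mean-square normalization for the l2 norm (that normalization convention is our assumption).

....
# Discrete L2 norm of a constraint field over the N grid points (used to
# judge stability) and the error of a metric component against the exact
# solution (used to judge accuracy). The 1-D arrays and the mean-square
# normalization are illustrative assumptions.
import numpy as np

def l2_norm(constraint):
    """L2 norm of a constraint field sampled on the grid."""
    c = np.asarray(constraint, dtype=float)
    return np.sqrt(np.mean(c ** 2))

def metric_error(g_num, g_exact, normalize=None):
    """Max-norm error of a metric component, optionally normalized
    (e.g. by the wave amplitude, as in the Gowdy-wave criterion)."""
    err = np.max(np.abs(np.asarray(g_num) - np.asarray(g_exact)))
    return err / normalize if normalize else err
....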
in fig .[ 1d - gauge - plain ] , the l2 norms of the hamiltonian and momentum constraints ( [ eq : errorc ] ) are plotted as a function of the crossing - time .the second - order convergent nature is lost at an early time , the 20 crossing - time , and the simulation crashes at about the 100 crossing - time .the poor performance of the plain bssn system for the gauge wave test has been reported in ( see their fig .this drawback , on the other hand , can be overcome if one uses the fourth - order finite differencing scheme , an example of which can be seen in ( see their fig . 2 ) .[ cols="^,^ " , ] in order to check the _ accuracy _ of the simulations , we prepare fig . [ 1d - gowdy - gzz ] to show the error of the component of the metric . unlike the gauge - wave or the linear wave test , in this gowdy - wave test the amplitude of the metric functionsdamps with time .therefore we use the criterion that the error normalized by be under for an _ accurate evolution_. this criterion is the same as the one used in zlochower et al . .figure [ 1d - gowdy - gzz ] shows the normalized error in versus time for the plain bssn , adjusted bssn with -equation , and adjusted bssn with -equation systems .we find that these three systems produce accurate results up to , , and , respectively .this proves that the adjustments work effectively , i.e , they make possible a stable and accurate simulation , especially the -adjusted bssn system .in this article , we presented our numerical comparisons of the bssn formulation and its adjusted versions using constraints .we performed three testbeds : gauge - wave , linear wave , and collapsing polarized gowdy - wave tests with their evolutions by three kinds of adjustments , which were previously proposed by yoneda and shinkai based on their constraint propagation analysis .the idea of the adjusted systems is to construct a system robust against constraint violations by modifying the evolution equations using the constraint equations .we can summarize our tests as follows : * when the plain ( original ) bssn evolutions already show satisfactory good evolutions ( e.g. , the linear wave test ) , the constraint violations ( i.e. , adjusted terms ) are also small or ignorable . therefore the adjusted bssn equations become quite similar to the plain bssn equations , and their results coincide with the plain bssn results . * among the adjustments we tried , we observed that the adjusted bssn system with the -eq .( [ b1-adj ] ) is the most robust for all the testbeds examined in this study .it gives us an accurate and stable evolution compared to the plain bssn system .quantitatively , the life - time of the simulation becomes 10 times longer for the gauge - wave testbed and 5 times longer for the gowdy - wave testbed than the life - time of the plain bssn system .however , it should be noted that for the gauge - wave testbed , the convergence feature is lost at a comparatively early time , the 200 crossing - time in the hamiltonian constraint and the 50 crossing - time in the momentum constraint .recently , it has been claimed that the set up of the gauge wave problem in apples - with - apples has a problematic point , which arises from the harmonic gauge condition . in , it is argued that this gauge has a residual freedom in the form , where is an arbitrary and is a function in eq .( [ eq : gwave - metric ] ) . 
of course, our set up corresponds to the case , but numerical error easily excites modes that result in either exponentially increasing or decaying metric amplitude .actually , we find the amplitude of the error decays with time in this testbed .so , we conclude that due to the adjustment , the growing rate of the gauge mode is suppressed and the life - time of the simulation is extended as a result .* the other type of adjustments ( [ b2a - adj ] and [ b2b - adj ] ) show their apparent effects while depending on a problem .the -adjustment for the gauge - wave testbed makes the life - time longer slightly .the -adjustment for the gowdy - wave testbed makes possible a simulation twice as long as the plain bssn system .we can understand the effect of the adjustments in terms of adding dissipative terms . by virtue of the definition of the constraints, we can recognize that the adjusted equation corresponds to the diffusion equation ( see , for example , eq .( [ b1-adj ] ) ) and the signature of determines whether the diffusion is positive or negative . in the adjusted -eq .system , ( [ b1-adj ] ) , the adjustment term corresponds to the positive diffusive term , due to the definition of and the positiveness of ( see eq .( [ eq : cal mi ] ) and ( [ b1-adj ] ) ) .this fact might explain why the adjusted -eq . system works effectively for all the testbeds .in contrast , why are not all the adjustments effective in all testbeds ? as we mentioned in sec .iib , the eigenvalue analysis was made on the linearly perturbed violation of constraints on the minkowski space - time .since the constraint violation grows non - linearly as seen in the appendix of , the candidates may not be the best in their later evolution phase .we remark upon two more interesting aspects arising from our study .the first is the mechanism of the constraint violations . as was shown in the appendix of , each constraint propagation ( behavior of their growth or decrease ) depends on the other constraint terms together with itself .that is , we can guess and constraints ( [ eq : cons - cala ] and [ eq : cons - cals ] ) in this article , propagate independently of the other constraints , while the violation of the -constraint , ( [ eq : cons - calg ] ) is triggered by the violation of the momentum constraint , and both the hamiltonian and the momentum constraints are affected by all the other constraints .such an order of the constraint violation can be guessed in fig .[ fig : c - vio - gauge ] ( earlier time ) , where we plot the rate of constraint violation normalized with its initial value , , as a function of time , for the gauge - wave testbeds with the plain bssn evolution .( note that the constraints at the initial time , , are not zero due to the numerical truncation error . )the parameters are the same as those shown in sec .[ sec : gaugewave ] , and the lowest resolution run is used . from this investigation, we might conclude that to monitor the momentum constraint violation is the key to checking the stability of the evolution .the second remark is on the lagrange multipliers , , used in the adjusted systems . 
as discussed in sec .[ subsec : adjbssn ] , the signatures of the are determined _ a priori _ , and we confirmed that all the predicted signatures of in are right to produce positive effects for controlling constraint violations .however , we have to search for a suitable magnitude of for each problem .therefore we are now trying to develop a more sophisticated version , such as an auto - controlling system , which will be reported upon in the future elsewhere .although the testbeds used in this work are simple , it might be rather surprising to observe the expected effects of adjustments with such a slight change in the evolution equations .we therefore think that our demonstrations imply a potential to construct a robust system against constraint violations even in highly dynamical situations , such as black hole formation via gravitational collapse , or binary merger problems .we are now preparing our strong - field tests of the adjusted bssn systems using large amplitude gravitational waves , black hole space - time , or non - vacuum space - time , which will be reported on in the near future .k.k . thanks k. i. maeda and s. yamada for continuing encouragement . k.k .also thanks y. sekiguchi and m. shibata for their useful comments on making numerical code .this work was supported in part by the japan society for promotion of science ( jsps ) research fellowships and by a grant - in - aid for scientific programs .was partially supported by the special research fund ( project no .4244 ) of the osaka institute of technology .a part of the numerical calculations was carried out on the altix3700 bx2 at yitp at kyoto university .99 m. shibata , k. taniguchi and k. uryu , phys .d * 68 * , 084020 ( 2003 ) , phys .d * 71 * , 084021 ( 2005 ) m. shibata and k. taniguchi , phys .d * 73 * , 064027 ( 2006 ) p. marronetti and s. l. shapiro , phys .d * 68 * , 104024 ( 2003 ) p. marronetti , m. d. duez , s. l. shapiro and t. w. baumgarte , phys .lett . * 92 * , 141101 ( 2004 ) j. a. faber , t. w. baumgarte , s. l. shapiro and k. taniguchi , astrophys . j. * 641 * , l93 ( 2006 ) f. pretorius , phys . rev . lett . * 95 * , 121101 ( 2005 ) m. campanelli , c. o. lousto , p. marronetti and y. zlochower , phys . rev .lett . * 96 * , 111101 ( 2006 ) j. g. baker , j. centrella , d - i .choi , m. koppitz , j. van meter , phys .* 96 * , 111102 ( 2006 ) .p. diener , f. herrmann , d. pollney , e. schnetter , e. seidel , r. takahashi , j. thornburg , j. ventrella , phys .* 96 * , 121101 ( 2006 ) f. herrmann , i. hinder , d. shoemaker , p. laguna , gr - qc/0601026v2 z. b. etienne , j. a. faber , y. t. liu , s. l. shapiro and t. w. baumgarte , arxiv:0707.2083 [ gr - qc ] .w. tichy and p. marronetti , phys . rev .d * 76 * , 061502 ( 2007 ) m. campanelli , c. o. lousto , y. zlochower and d. merritt , phys .* 98 * , 231102 ( 2007 ) j. a. gonzalez , m. d. hannam , u. sperhake , b. brugmann and s. husa , phys .lett . * 98 * , 231101 ( 2007 ) m. campanelli , c. o. lousto , y. zlochower and d. merritt , astrophys .j. * 659 * , l5 ( 2007 ) j. thornburg , p. diener , d. pollney , l. rezzolla , e. schnetter , e. seidel and r. takahashi , class . quant .* 24 * , 3911 ( 2007 ) j. a. gonzalez , u. sperhake , b. bruegmann , m. hannam and s. husa , phys .lett . * 98 * , 091101 ( 2007 ) j. g. baker , j. centrella , d. i. choi , m. koppitz and j. van meter , phys .d * 73 * , 104002 ( 2006 ) m. shibata and k. uryu , phys .d * 74 * , 121503 ( 2006 ) , class . 
quant .* 24 * , s125 ( 2007 ) t.w .baumgarte and s.l .shapiro , phys .d * 59 * , 024007 ( 1999 ) .t. nakamura , k. oohara and y. kojima , prog .. suppl . * 90 * , 1 ( 1987 ) .t. nakamura and k. oohara , in _ frontiers in numerical relativity _ edited by c.r .evans , l.s .finn , and d.w .hobill ( cambridge univ .press , cambridge , england , 1989 ) .n. jansen , b. bruegmann and w. tichy , phys .d * 74 * , 084022 ( 2006 ) y. zlochower , j. g. baker , m. campanelli and c. o. lousto , phys .d * 72 * , 024021 ( 2005 ) b. bruegmann , j. a. gonzalez , m. hannam , s. husa , u. sperhake and w. tichy , arxiv : gr - qc/0610128 . m. alcubierre , g. allen b. brgmann , e. seidel and w .- m .suen phys .d * 62 * , 124011 ( 2000 ) m. c. babiuc _ et al . _ [ apples with apples collaboration ] , arxiv:0709.3559 [ gr - qc ] .m. alcubierre , b. brugmann , p. diener , m. koppitz , d. pollney , e. seidel and r. takahashi , phys .d * 67 * , 084023 ( 2003 ) s. a. teukolsky , phys .d * 61 * , 087501 ( 2000 ) g. yoneda and h. shinkai , class .* 18 * , 441 ( 2001 ) .m. babiuc , b. szilagyi and j. winicour , in analytical and numerical approaches to mathematical relativity eds . by j. frauendiener , d. giulini , and v. perlick , ( springer , heidelberg , 2006 ) .[ arxiv : gr - qc/0404092 ] . w. press , b. p. flannery , s. teukolosky , and w. t. vetterling , numerical recipes in c ( cambridge university press , cambridge , england , 1986 ) . | we present our numerical comparisons between the bssn formulation widely used in numerical relativity today and its adjusted versions using constraints . we performed three testbeds : gauge - wave , linear wave , and gowdy - wave tests , proposed by the mexico workshop on the formulation problem of the einstein equations . we tried three kinds of adjustments , which were previously proposed from the analysis of the constraint propagation equations , and investigated how they improve the accuracy and stability of evolutions . we observed that the signature of the proposed lagrange multipliers are always right and the adjustments improve the convergence and stability of the simulations . when the original bssn system already shows satisfactory good evolutions ( e.g. , linear wave test ) , the adjusted versions also coincide with those evolutions ; while in some cases ( e.g. , gauge - wave or gowdy - wave tests ) the simulations using the adjusted systems last 10 times as long as those using the original bssn equations . our demonstrations imply a potential to construct a robust evolution system against constraint violations even in highly dynamical situations . |
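as a stand-alone illustration of the time integrator named in the numerics section above, the sketch below applies a three-step iterative crank-nicholson update with centered, second-order spatial differences to a simple 1-d advection equation rather than to the bssn system itself; the test equation, grid, time step, and periodic boundary are illustrative assumptions.

....
# Three-step iterative Crank-Nicholson (ICN) update for du/dt = rhs(u),
# demonstrated on 1-D advection u_t = -v u_x with centered, second-order
# spatial differences and periodic boundaries. The test equation and
# parameters are illustrative assumptions; the paper applies the same
# integrator to the (adjusted) BSSN variables.
import numpy as np

def rhs(u, v, dx):
    """Centered second-order derivative with periodic boundaries."""
    return -v * (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)

def icn_step(u, dt, v, dx, iterations=3):
    """One iterated Crank-Nicholson step (three iterations by default)."""
    u_new = u + dt * rhs(u, v, dx)              # first (Euler) guess
    for _ in range(iterations - 1):
        u_half = 0.5 * (u + u_new)              # midpoint average
        u_new = u + dt * rhs(u_half, v, dx)     # corrector
    return u_new

if __name__ == "__main__":
    n, v = 200, 1.0
    dx = 1.0 / n
    dt = 0.25 * dx                               # conservative CFL factor
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    u = np.exp(-100.0 * (x - 0.5) ** 2)          # smooth initial pulse
    for _ in range(400):
        u = icn_step(u, dt, v, dx)
    print("max |u| after 400 steps:", float(np.abs(u).max()))
....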
network and security researchers rely on topological maps of the logical internet to address problems ranging from critical infrastructure protection to policy . production active measurement systems that continually gather and curate internet topology , e.g. , are thus important to many longitudinal analyses and to shedding light on network events of interest .obtaining ip , router , and provider - level network topologies has been a continual research focus for more than two decades .while significant progress has been made , topology mapping at internet scale remains a challenge . both the accuracy of the inferred network topologies , and the speed at which they can be recovered , present obstacles to current mapping efforts . in this work ,we focus on the _ speed _ and _ scale _ of internet - wide active topology mapping . given its scale , experience and popular beliefdictates that obtaining even partial internet topologies via active network probing is a time - intensive process .for instance , caida s archipelago ( ark ) system uses dozens of vantage points and at least a day to traceroute to a single address in each routed /24 ipv4 prefix .a recent topology cycle gathered by ark from april , 2016 sent approximately 11 m traceroutes from 37 monitors over the course of 31 hours in order to discover m distinct router interfaces , and m links .we re - examine some of the assumed fundamental limits of active topology mapping to consider whether such probing could be performed in minutes rather than hours .taking inspiration from recent stateless and randomized high - speed network scanners such as zmap and masscan , we create yarrp(yelling at random routers progressively ) . to facilitate high - probing rates , yarrpis stateless , reconstituting all necessary information from replies as they arrive asynchronously . to avoid overloading routers or links , yarrprandomly permutes its input space when probing .yarrpis thus well - suited for internet - scale studies .our contributions include : 1 .development of yarrp , a publicly available tool that permits rapid active network topology discovery .we run yarrpat to discover more than 400,000 router interfaces in under 30 minutes . 2 .a comparison of yarrpand caida s existing production topology collection platform , showing recall and speed differences .3 . as an application of rapid topology discovery , we conduct successive topology snapshots separated by a small time delta and characterize the distribution and causes of observed path differences .traditional traceroute obtains the sequence of router interface ip addresses along the forward path to a destination by sending probe packets with varying time to live ( ttl ) values and examining the icmp responses . by maintaining the transmission timestamp of each probe , traceroute can report the round trip time ( rtt ) from the source to each responsive hop .modern traceroute implementations send batches of concurrent probes to lower tracing time , e.g. linux defaults to 16 simultaneous probes . in order to match probes tothe icmp ttl exceeded responses they generate , the probe must include unique identifiers that are returned as part of the icmp quotation . because the quote is only required to copy the first 28 bytes of the packet that induced the expiry message , traceroute typically relies on the first 8 bytes of the transport - layer header to match responses to probes . 
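to make the mechanism just described concrete, here is a minimal hop-by-hop probing sketch in the classic, sequential style that later paragraphs contrast with yarrp; it assumes the scapy library and raw-socket privileges, uses icmp rather than transport-layer probes, and the target address and limits are placeholders.

....
# Classic sequential traceroute: send one ICMP probe per TTL, match the
# reply to the probe (state is kept implicitly by sr1's blocking call),
# and record the responding hop and round-trip time.
# Requires scapy and raw-socket privileges; target and limits are
# illustrative assumptions.
import time
from scapy.all import IP, ICMP, sr1

def traceroute(dst, max_ttl=32, timeout=2.0):
    hops = []
    for ttl in range(1, max_ttl + 1):
        t0 = time.time()
        reply = sr1(IP(dst=dst, ttl=ttl) / ICMP(), timeout=timeout, verbose=0)
        rtt_ms = (time.time() - t0) * 1000.0
        if reply is None:
            hops.append((ttl, None, None))        # unresponsive hop
            continue
        hops.append((ttl, reply.src, rtt_ms))
        if reply.src == dst:                      # reached the destination
            break
    return hops

if __name__ == "__main__":
    for ttl, hop, rtt in traceroute("192.0.2.1"):
        print(ttl, hop, rtt)
....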
while various improvements have been proposed and implemented , the core behavior of traceroute and large scale active topology scanning remains largely unchanged . to prevent false inferences due to load - balanced paths , augustin et al .created paris traceroute . to reduce unnecessary probing , donnet et al .developed doubletree , a modified traceroute that begins probing from a likely path midpoint outward until it reaches previously discovered hops .similarly , proposed several topology primitives empirically shown to reduce the volume of probing while maintaining or increasing topological discovery .luckie et al .developed scamper , a production - quality packet prober .scamper implements both doubletree and paris traceroute , has an open api , can maintain a configurable probing rate , and can be controlled remotely .caida s production ark infrastructure uses scamper to perform continual internet - wide probing .traceroute was originally designed as a tool for network administrators to diagnose a small number of paths , not as a means to gather snapshots of the entire internet topology .fundamentally , traceroute and its variants all have two properties that limit their scalability and speed .they : * maintain state for each outstanding probe sent , including some identifier and origination time .* are sequential , probing all hops along the path to a destination .while some tools ( e.g. scamper ) can traceroute to multiple targets , this parallelism is limited to a finite window of destinations .in contrast , yarrpis designed to be stateless and random probing different portions of many different paths simultaneously .this allows yarrpto send probes at a high per - packet rate , while spreading the load among many destination networks to avoid concentrating load on particular paths , links or routers , thereby avoiding anomaly alarms or icmp rate limiting .the high - level idea of yarrpis : i ) randomization of the probing order of the domain of network range(s ) and ttls ; and ii ) stateless operation , whereby all necessary state is encoded into the probes such that it can be recovered from the icmp replies .yarrpis written in c++ , is portable to a variety of unix - like platforms , and is publicly available .existing traceroute techniques probe all hops along a path to a destination in sequence . instead, we employ a keyed block cipher to provide a bijection over the input domain of target ips and ttls ( ) .this means that yarrpwill e.g. send a probe to ip address with , then toward with , then with , and so on until the entire space of for each target is covered . the symmetric rc5 block cipher with a 32-bit block size is fast and a natural fit for our application . with key , yarrpencrypts the sequence where bits of each ciphertext determine the target ip address and ttl to probe . in this way , yarrprandomizes the order of probed .yarrpcan permute arbitrarily large or small ipv4 address and ttl domains , or can permute the order of specific targets read from a file .depending on the size of the domain , we switch between either a prefix - cipher or cycle - walking cipher , as described in .to facilitate comparison with caida s ipv4 topology dataset , yarrphas a mode that probes a random address in each ipv4 24-bit subnet this mimics the targets selected in a full cycle of caida s probing . here ,yarrpencrypts each with key . for , yarrpprobes the ipv4 address * 2 ^ 8 + ( c_i[0]+c_i[1]+c_i[2 ] ) \% 256 ] . 
in this fashion , we permute through the space of possible /24s , and construct the least - significant octet as a function of the subnet such that the same random address in each /24 is used as the destination for each ttl .an advantage of yarrp s randomization method is that the probing work can easily be distributed among multiple vantage points with negligible coordination or communication overhead .we discuss distributed yarrpas a future enhancement in [ sec : conclusions ] .existing traceroute techniques require state to match icmp replies to probes .in contrast , yarrpdoes not require state .we overload various fields in the probe packets with specific values such that we can reconstruct the corresponding probe s destination , transmission time , and originating ttl from within the quote of the icmp ttl exceeded messages .figure [ fig : fields ] depicts the tcp / ip header fields we utilize .we encode the ttl with which the packet was sent in the ipid and the elapsed time in the tcp sequence number .we use elapsed time rather than e.g. unix time in order to maintain millisecond resolution with only a 32-bit field .yarrpcan also encode microsecond resolution , so long as the expected duration of a probing run is less than seconds .the destination tcp port is fixed to port 80 to facilitate firewall traversal , while we populate the source tcp port with the checksum of the target ip address . in this fashion , we can detect instances where the destination ip address is modified enroute , a phenomenon malone and luckie observe in 2% of their results . in order to properly accommodate load - balanced paths , which are common in the internet ,we ensure that , for a given destination , certain fields remain fixed for all ttl probe values .for instance , although the tcp source port changes , it is a function of the destination ip address and therefore will contain the same value for all probes sent toward the destination .this design allows us to maintain the benefits of paris traceroute .when icmp ttl exceeded messages arrive , we examine the included quotation to recover the destination probed , the originating ttl ( hop ) , responding interface at that hop , and compute the rtt by taking the difference of the packet arrival time and the probe origination time as encoded in the quoted tcp sequence number .these values can be computed from the minimum 28 bytes of required quotation .finally , yarrpcan source either tcp syn or ack probes .while syn probes can permit middlebox traversal , we use the ack - only mode to avoid alarms triggered by large volumes of syn traffic .the benefits of yarrp s design come with several concomitant challenges , namely : i ) reconstructing the unordered responses into paths , ii ) knowing when to stop probing , and iii ) avoiding unnecessary probing .in following with yarrp s stateless nature , icmp responses are decoded as they arrive and written sequentially to a structured output file .each entry in the output file corresponds to an icmp response .an entry includes the target ip address , originating ttl , responding router interface ip address , rtt , and meta - data such as timestamps , ipid , response ttl , packet sizes , and dscp markings .because of the inherently random probing , the entries for each hop along a path to a given destination will be unordered and intermixed with other responses in the yarrpoutput file .we must therefore reconstruct complete paths by parsing the entire output file and maintaining state for each destination . 
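a minimal sketch of the stateless encoding just described: the originating ttl goes into the ipid field, the elapsed transmit time in milliseconds into the tcp sequence number, and a checksum of the target address into the tcp source port, so that everything needed can be recovered from the quoted header of the icmp reply. the concrete checksum function and helper names below are our own simplifications, not yarrp's exact implementation.

....
# Stateless probe encoding/decoding in the spirit of yarrp: all per-probe
# state (target, originating TTL, send time) is packed into header fields
# that come back inside the ICMP time-exceeded quotation.
# The checksum below is a simplification/assumption, not yarrp's own.
import time

def ip_checksum16(addr):
    """16-bit checksum-style digest of an IPv4 address (simplified)."""
    a, b, c, d = (int(x) for x in addr.split("."))
    return ((a << 8 | b) + (c << 8 | d)) & 0xFFFF

def encode_probe(dst, ttl, start_time):
    """Header field values for a probe toward dst with the given TTL."""
    elapsed_ms = int((time.time() - start_time) * 1000) & 0xFFFFFFFF
    return {
        "ip_dst": dst,
        "ip_ttl": ttl,
        "ip_id": ttl,                      # originating TTL, recoverable later
        "tcp_sport": ip_checksum16(dst),   # detects en-route dst rewriting
        "tcp_dport": 80,                   # fixed to ease firewall traversal
        "tcp_seq": elapsed_ms,             # transmit time, for RTT
    }

def decode_reply(quoted, start_time):
    """Recover (dst, originating TTL, rtt_ms) from the quoted header."""
    if ip_checksum16(quoted["ip_dst"]) != quoted["tcp_sport"]:
        return None                        # destination was modified en route
    now_ms = int((time.time() - start_time) * 1000) & 0xFFFFFFFF
    rtt_ms = (now_ms - quoted["tcp_seq"]) & 0xFFFFFFFF
    return quoted["ip_dst"], quoted["ip_id"], rtt_ms

if __name__ == "__main__":
    t0 = time.time()
    probe = encode_probe("203.0.113.7", 12, t0)
    time.sleep(0.02)                       # stand-in for network delay
    print(decode_reply(probe, t0))         # ('203.0.113.7', 12, ~20)
....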
while this is a memory and time - intensive task ,the key point is that it can be performed _ off - line_. in this fashion , we decouple probing from path reconstruction to permit the probing to be as fast as possible . included in the yarrpdistribution is a ` yrp2warts ` python script that performs this off - line conversion into the standard warts binary trace format .a practical consequence of yarrp s randomization and lack of state is that its probing behavior does not depend on the received responses .thus , yarrpcannot stop probing once it reaches the destination or when the path contains a sequence of unresponsive hops ( the so - called `` gap limit '' ) . to better understand the optimal range of ttls to probe ( from the possible space 1 - 255 ) , we examine the results from a complete cycle of ark probing from january , 2016 .we seek to determine , across each of the ark vantage points , the number of unique router interfaces discovered at each ttl .figure [ fig : ttlhisto ] shows the inter - quartile range of the number of distinct interfaces found as a function of ttl for each of the vantage points ; the red line in the boxplot displays the median number of interfaces per ttl among each vantage point . because of the internet s tree - like structure , the first few hops reveal only a small number of interfaces regardless of the destination probed .the bulk of the interfaces are found between ttls 10 to 16 , with an inflection point around a ttl of 14 .the amount of discoverable topology beyond a ttl of 32 is negligible ( note the log scale y - axis ) . as a result , yarrpdefaults to probing ttls 1 to 32 to minimize unnecessary probing while exploring the majority of the space . for many destinations yarrpwill perform more probing than traditional traceroute methods .this is both an advantage and a disadvantage : we show in [ sec : results : gaplimit ] that discoverable topology exists beyond multiple unresponsive hops ( where existing methods terminate early ) . in environments sensitive to probing volume , several optimizations can substantially decrease unnecessary probing at the expense of maintaining some state .this subsection discusses optimizations to the base yarrpdesign to enable different tradeoffs . first ,yarrpcan read a bgp routing table of network prefixes and build a longest - match patricia trie . when iterating through the entire permuted ipv4 space , yarrpcan skip destinations that are not routed .based on current global bgp routing tables , this optimization avoids probing approximately 1.5b ip addresses ( 35% of the 32-bit space ) that are unlikely to return useful results .note that the memory required to maintain the bgp table is constant during a probing run ( amounting to approximately 300 mb during runtime ) . in our experiments, these lookups in the patricia trie did not prevent yarrpfrom running at over .second , the tree - like structure of the network implies that the set of interfaces near to the vantage point is small relative to the universe of router interfaces . in figure[ fig : ttlhisto ] for instance , all of the traces have a single first hop in common and orders of magnitude fewer interfaces at hops 1 - 3 as compared to hops 13 - 15 . to avoid rediscovering the same nearby router interfaces repeatedly , yarrpcan maintain state over the set of responding local `` neighborhood '' interface ip addresses at hops 1 through a run - time configurable . 
for each ttl in the neighborhood , yarrpmaintains two timestamps : the last time a probe was sent with that ttl , and the last time a new interface at that depth replied . if no new interfaces have been discovered within the past seconds of probing , yarrpskips future probes at that ttl .the ` yrp2warts ` script can then stitch together these missing hops .while the amount of state in neighborhood mode can grown unbounded , in practice it is small for small , while avoiding substantial over - probing .this section examines results from running yarrpon the internet . we compare the topological recall against an existing production system andthen analyze the discovery yield ( i.e. the amount of new topology discovered over time ) . finally , as an application of yarrp s probing speed , we gather two successive snapshots of internet topology separated in time by a small delta to reveal instances of short - lived network dynamics .we empirically verify yarrp s correctness by evaluating it against caida s use of scamper in the ark infrastructure .caida makes their topology traces publicly available .we find all 67,045 destinations probed from the san diego vantage point on may 1 , 2016 . from this same network attachment point , we instruct yarrpto probe the same destinations . in this fashion , because the vantage point networks and set of destinations are the same , we expect the paths to be largely congruent , allowing an unbiased comparison of topological recall between the two probing methods . from the yarrpand ark probing , we construct graphs of interface nodes connected by edges when the interfaces appear in consecutive hops of a path .we ignore anonymous interfaces such that the graph may be disconnected .figure [ fig : wartscmp ] displays the resulting graph degree distributions on a log - log scale .we observe that the distributions match closely , empirically supporting yarrp s ability to discover the responsive topology .however , yarrpdiscovers 16% fewer interfaces ( 80,134 versus 66,939 ) and 13% fewer edges ( 96,763 versus 84,113 ) than ark in this experiment .we attribute much of this difference to yarrp s use of tcp ack probes as compared to caida s use of icmp probes . in a survey by key et al ., icmp probes elicit almost 18% more responses as compared to tcp probes .icmp and udp - based probing are future yarrpenhancements ( see [ sec : conclusions ] ) .[ sec : results : gaplimit ] yarrp s stateless nature implies that it probes all ttls from 1 to 32 , whereas ark s use of scamper ceases probing after encountering five unresponsive hops in a row . in the same caidasan diego probing run , 39,613 traces stopped due to this gap - limit .for each of these gap - limit traces , we compute the difference of the highest responding ttl hop from yarrpprobing and the highest responding ttl from the ark probing .figure [ fig : gapdiff ] shows the cumulative distribution of this difference among the gapped traces ; a positive difference means that yarrpdiscovered topology beyond the point where ark stopped probing . for % of the traces ,there is no difference . in 8% of the traces, ark discovers one more hop than yarrp .however , yarrpdiscovers one additional hop in % of the targets , and more than 5 additional hops in 4% of the cases . a goal of yarrpis rapid topological discovery . in this subsection , we look specifically at the ability to discover unique router interface addresses rapidly . 
on may 10 , 2016, we run yarrpfrom a northeast united states university vantage point at and instruct it to perform the ark - mode randomized probing of the globally routed ipv4 /24 prefixes .we limit yarrpto this rate , and limit the duration of our experiment , per prior agreement with the local network administrator .the physical machine is a multi - core intel l5640 processor running at 2.27ghz , with yarrprunning on an ubuntu virtual machine allocated a single core . at this rate ,the cpu utilization is % .we enable the `` neighborhood '' optimization , as described in [ sec : design : optimize ] , as we are interested in finding as many distinct router interfaces as possible given the probing rate .figure [ fig : discovery ] displays the cumulative number of distinct router interfaces discovered as a function of time . as a basis of comparison , we also plot the number of unique interfaces found over time for a single vantage point ( again , using data from the san diego node of caida s continual /24 probing on may 1 , 2016 ) .caida s san diego monitor discovers 12,568 unique interfaces in 1,500 seconds ( per second ) .by contrast , yarrpdiscovers 421,162 unique ipv4 interfaces in the same period , or approximately 280 distinct router interfaces a second .the interfaces found by yarrpin less than 30 minutes equates to 42% of _ all _ unique interfaces discovered from all ark monitors over the course of probing for more than a day . as an application of rapid topology discovery, we collect topology snapshots in rapid succession and analyze their properties and differences in this subsection .we gather 67,045 target destinations from caida s may 1 , 2016 topology probing from their san diego monitor . again using the east coast university vantage point, we run yarrpto probe ttls 1 - 32 for these same 67,045 targets .we run yarrpat and invoke yarrpthree times in succession with a minute pause in - between . in this way, each snapshot takes approximately 8 minutes to gather , and each is separated by a minute .we term the snapshots and in chronological order .the interface - level graph resulting from contains 39,968 interfaces and 46,721 edges , while has 40,038 interfaces and edges . contains interfaces and edges . to better understand the differences between snapshots , we perform a per - target path comparisonfor each target in , we compare the discovered path in to the discovered path to that same target in .we use the levenshtein edit distance to measure the per - target path differences between snapshots .the edit distance is the minimum number of edits ( insertions , substitutions , or deletions of router interfaces ) .note that inter - snapshot differences are not attributable to per - flow load balancing as yarrpkeeps the packet header fields which are used for load balancing constant for the same destination between snapshots ( [ sec : design : stateless ] ) . 
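a small sketch of the per-target path comparison described above: the levenshtein edit distance between two hop sequences, counting interface insertions, deletions, and substitutions (an unresponsive hop can be represented as None, so that a missing hop appears as a substitution). the dynamic-programming implementation is standard; the hop values are illustrative.

....
# Levenshtein edit distance between two traced paths, where each path is
# a list of router interface addresses (None for an unresponsive hop).
def path_edit_distance(path_a, path_b):
    m, n = len(path_a), len(path_b)
    # dp[i][j] = edits to turn the first i hops of A into the first j of B
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if path_a[i - 1] == path_b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # delete a hop
                           dp[i][j - 1] + 1,         # insert a hop
                           dp[i - 1][j - 1] + cost)  # substitute / match
    return dp[m][n]

if __name__ == "__main__":
    s1 = ["10.0.0.1", "198.51.100.9", "203.0.113.5", "203.0.113.77"]
    s2 = ["10.0.0.1", None, "203.0.113.5", "203.0.113.77"]  # one missing hop
    print(path_edit_distance(s1, s2))    # 1
....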
additionally , to better understand the types of path changes , we count the frequency of each edit operation and missing hop substitutions .these missing hop operations are instances where the path contains a responsive router for a particular ttl for one snapshot , but no response at that ttl when probing the same destination in a subsequent snapshot .such missing hops may be attributable to routers performing icmp rate limiting , or may be due to packet loss .a deeper analysis of the most frequent missing hops between and reveals that the large majority ( 92.2% ) come from the first four hops within the local network of the vantage point . specifically , 73% of the missing hops are due to the router at ttl 3 , 18% are due to the router at ttl 1 , and 1% are due to the router at ttl 4 .in contrast , the router at ttl 2 always responds , suggest that some of the local routers implement icmp rate limiting while one does not .figure [ fig : ed ] displays the results of the edit distance comparison between and , ignoring differences attributable to the local network ( ttl , as described above ) . the paths to approximately 91% of the destinations are identical between and , while approximately 6% have a single hop difference . less than 1% of the destinations show a difference of hop edits. separated by the edit operation , we see that 4% of the 1 hop differences are due to missing hops , 1% are hop deletions , and fewer than 1% are substitutions . to understand the potential of rapidly collected topology snapshots , we manually investigate and highlight a path exhibiting a significant change between snapshots .figure [ fig : samplepath ] shows , for each of the three snapshots , the final four responsive hops toward the destination 188.32.230.138 ( in as 42610 ) . in the intermediate snapshot , ,we see two hop substitutions where the next hop after as 1273 ( 213.185.219.106 , cable and wireless ) changes to a different sequence of hops within as 12389 ( rostelecom ) . by the time of the third snapshot , , the path changes back to that seen in . while the exact cause of this short - lived routing change is unknown , the key point is that it would not have been discovered by the existing topology mapping systems .we leave a comprehensive analysis of the extent and duration of these short - lived dynamics exposed by yarrpto future work .yarrpdemonstrates a new technique for internet - scale active topology probing that permits rapid collection of topology snapshots . 
as with our initial investigation of short - lived topology dynamics ,our hope is that yarrpfacilitates analyses not previously possible .yarrpis stable and the code is publicly available .that said , there are several enhancements that would be valuable additions .first , yarrpcurrently only supports ipv4 probing .given the vastly larger ipv6 address space , and relative topological sparsity , adding ipv6 support to yarrpcould enable more complete maps of the ipv6 topology to be gathered .second , yarrpis currently capable of sending only tcp probes .it is well - known that using different transport protocols yields different responses , due to widespread security and policy filtering .we plan to add icmp and udp probing to yarrp , which requires utilizing different transport header fields to encode probe information .doing so is non - trivial as we must maintain both the paris - traceroute property of keeping certain fields constant to keep packets on a single load - balanced path , while also retaining yarrp s stateless behavior .third , yarrp s stateless and asynchronous nature implies that a malicious actor could attempt to send bogus responses , while middleboxes are known to mangle packet headers . in the future, we wish to use a keyed cryptographic integrity function over multiple probe values . instead ofa simple checksum on the target ip address , we will populate the source port with the value of this keyed integrity check .yarrpcan then ensure that it both sent the original probe , and that the probe was not modified in - flight such that the response is not useful . finally , an attractive feature of yarrp s design is the ability to easily randomize and distribute the probing to multiple vantage points with negligible coordination and communication overhead .similar to the rapid scanning worm envisioned by staniford et al . , the permuted domain can be distributed . for vantage points , and a given domain , each vantage point encrypts of the range .thus , the vantage point encrypts the range of values to to obtain its sequence of + values to probe from the overall permutation .the potential speed improvement is linearly proportional then to the number of vantage points . only the values and need be sent to each vantage point to distribute the permuted space and achieve complete randomized coverage .given our empirical ( and conservative ) yarrprate in this work , we estimate that it is possible to implement a distributed yarrpamong vantage points to traceroute to every routed ipv4 address ( targets ) in approximately one hour .yarrpmay thus facilitate rapid collection of _ complete _ internet snapshots in the future . | obtaining a `` snapshot '' of the internet topology remains an elusive task . existing active topology discovery techniques and production systems require significant probing time time during which the underlying network may change or experience short - lived dynamics . in this work , we consider how active probing can gather the internet topology in _ minutes _ rather than days . conventional approaches to active topology mapping face two primary speed and scale impediments : i ) per - trace state maintenance ; and ii ) a low - degree of parallelism . based on this observation , we develop yarrp(yelling at random routers progressively ) , a new traceroute technique designed for high - rate , internet - scale probing . yarrpis stateless , reconstituting all necessary information from icmp replies as they arrive asynchronously . 
to avoid overloading routers or links with probe traffic , yarrp randomly permutes an input space . we run yarrp at , a rate at which the paths to all /24s on the ipv4 internet can be mapped in approximately one hour from a single vantage point . we compare yarrp s topological recall and discovery rate against existing systems , and present some of the first results of topological dynamics exposed via the high sampling rates yarrp enables . |
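the range - splitting scheme described in the article above ( each vantage point probing a slice of one shared keyed permutation of the probe domain ) can be sketched with a small feistel - style keyed bijection . this is a sketch under stated assumptions , not the yarrp implementation ; the key , the 24-bit index domain , and the number of vantage points are illustrative .

```python
import hashlib
from itertools import islice

HALF_BITS = 12                     # 2*12 = 24-bit indices, i.e. one index per ipv4 /24 prefix
MASK = (1 << HALF_BITS) - 1

def keyed_permutation(index, key, rounds=4):
    """Balanced Feistel network: a keyed bijection on 24-bit integers, so
    enumerating indices 0..2^24-1 visits every target index exactly once."""
    left, right = index >> HALF_BITS, index & MASK
    for r in range(rounds):
        digest = hashlib.sha256(f"{key}:{r}:{right}".encode()).digest()
        left, right = right, left ^ (int.from_bytes(digest[:4], "big") & MASK)
    return (left << HALF_BITS) | right

def vantage_point_targets(vp, num_vps, key):
    """Each vantage point probes its contiguous slice of the shared permutation;
    only the key and the slice bounds need to be communicated."""
    size = 1 << (2 * HALF_BITS)
    lo, hi = vp * size // num_vps, (vp + 1) * size // num_vps
    return (keyed_permutation(i, key) for i in range(lo, hi))

# vantage point 2 of 8, sharing the key "snapshot-key" with the others
for target_index in islice(vantage_point_targets(2, 8, "snapshot-key"), 5):
    print(target_index)
```

because every feistel round is invertible , the mapping is a bijection , so the slices jointly cover the whole domain exactly once while each vantage point keeps no per - probe state beyond the key and its slice bounds .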
representing data as graphs is becoming increasingly popular , as technological progress facilitates measuring `` connectedness '' in a variety of domains , including social networks , trade - alliance networks , and brain networks . while the theory of pattern recognition is deep , previous theoretical efforts regarding pattern recognitionalmost invariably assumed data are collections of vectors . here , we assume data are collections of graphs ( where each graph is a set of vertices and a set of edges connecting the vertices ) . for some data sets , the vertices of the graphs are _ labeled _ , that is , one can identify the vertex of one graph with a vertex of the others ( note that this is a special case of assuming vertices are labeled , where each vertex has a unique label ) .for others , the labels are unobserved and/or assumed to not exist .we investigate the theoretical and practical implications of the absence of vertex labels .these implications are especially important in the emerging field of `` connectomics '' , the study of connections of the brain . in connectomics ,one represents the brain as a graph ( a brain - graph ) , where vertices correspond to ( groups of ) neurons and edges correspond to connections between them . in the lower tiers of the evolutionary hierarchy ( e.g. , worms and flies ), many neurons have been assigned labels . however , for even the simplest vertebrates , vertex labels are mostly unavailable when vertices correspond to neurons .classification of brain - graphs is therefore poised to become increasingly popular .although previous work has demonstrated some possible strategies of graph classification in both the labeled and unlabeled scenarios , relatively little work has compared the theoretical limitations of the two .we therefore develop a random graph model amenable to such theoretical investigations .the theoretical results lead to universally consistent graph classification algorithms , and practical approximations thereof .we demonstrate that the approximate algorithm has desirable finite sample properties via a real brain - graph classification problem of significant scientific interest : sex classification .a labeled graph consists of a vertex set , where is the number of vertices , and an edge set , where .let be a _ labeled _ graph - valued random variable taking values , where is the set of labeled graphs on vertices .the cardinality of is super - exponential in .for example , when all labeled graphs are assumed to be simple ( that is , undirected binary edges without loops ) , then .let be a categorical random variable , , where .assume the existence of a joint distribution , which can be decomposed into the product of a class - conditional distribution ( likelihood ) and a class prior .because is finite , the class - conditional distributions can be considered discrete distributions , where is an element of the -dimensional unit simplex ( satisfying and ) . in the above , it was implicitly assumed that the vertex labels were observed .however , in certain situations ( such as the motivating connectomics example presented in section [ sec:1 ] ) , this assumption is unwarranted . 
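a quick numerical illustration of the super - exponential cardinality noted above for simple graphs ( one bit per unordered vertex pair ) ; the graph sizes are arbitrary , and the point previews the later argument that the class - conditional parameters can not even be stored for graphs of modest size .

```python
# number of labeled simple graphs on n vertices: |G_n| = 2^(n(n-1)/2)
for n in [5, 10, 20, 50]:
    exponent = n * (n - 1) // 2
    print(f"n = {n:2d}: |G_n| = 2**{exponent} (~10^{exponent * 0.30103:.0f})")
```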
to proceed , we define two graphs to be isomorphic if and only if there exists a vertex permutation ( shuffle ) function such that .let be a permutation - valued random variable , , where is the space of vertex permutation functions on vertices so that .[ def : shuffled ] let be a _ shuffled _ graph - valued random variable , that is , a labeled graph valued random variable that has been passed through a random shuffle channel . extending the above graph - classification model to include this vertex shuffling distribution yields assume throughout this work ( with loss of generality ) that the shuffling distribution is both _class independent _ and _ graph independent _ ; therefore , this joint model can be decomposed as as in the labeled case , the shuffled graph class - conditional distributions can be represented by discrete distributions . because can be any of different graphs, it must be that . when is uniform on , all shuffled graphs within the same isomorphism set are equally likely ; that is for some .note that one can think of a labeled graph as a shuffled graph for which is a point mass at , where is the identity matrix .the above shuffling view is natural whenever the vertices of the collection of graphs share a set of labels , but the labeling function is unknown .however , when the vertices of the collection of graphs have different labels , perhaps a different view is more natural . an _ unlabeled graph _ is the collection of graphs isomorphic to one another , that is , .let be an element of the collection of graph isomorphism sets .the number of unlabeled graphs on vertices is ( see and references therein ) .an _ unlabeling function _ is a function that takes as input a graph and outputs the corresponding unlabeled graph .let be an _unlabeled _ graph - valued random variable , that is , a labeled graph - valued random variable that has been passed through an unlabeled channel .in other words , , and takes values . the joint distribution over unlabeled graphs and classes is therefore , which decomposes as .the class - conditional distributions over isomorphism sets ( unlabeled graphs ) can also be thought of as discrete distributions where are vectors in the -dimensional unit simplex .comparing shuffling and unlabeling for the independent and uniform shuffle distribution , we have for all .we consider graph classification in the three scenarios described above : labeled , shuffled , and unlabeled . to proceed , in each scenario we define three mathematical objects : ( i ) a graph classifier , ( ii ) risk , ( iii ) , the bayes optimal classifier , and ( iv ) the bayes risk . a _ labeled graph classifier _ is any function that maps from labeled graph space to class space .the risk of a labeled graph classifier under loss is the expected misclassification rate ] , where the expectation is taken against .the _ shuffled graph bayes optimal classifier _ is given by where is again the set of possible labeled ( or shuffled ) graph classifiers .the _ shuffled graph bayes risk _ is given by , where implicitly depends on .an _ unlabeled _ graph classifier is any function that maps from unlabeled graph space to class space .the risk under loss is given by ] , because when graphs are labeled . for _ shuffled _ graph classification is assumed to be uniform over the permutation matrices , so that all label information is both unavailable and irrecoverable .the training data are therefore } ] , and then plugs those estimates into the labeled bayes classifier , eq . 
, resulting in where the dependency on the training data is implicit in the notation .a _ shuffled _ graph bayes plugin classifier , , estimates the parameters using the training data }$ ] , and then plugs those estimates into the shuffled bayes classifier , eq . , resulting in an _ unlabeled _ graph bayes plugin classifier , , first determines in which unlabeled set each shuffled graph resides , using as defined in section [ sec : gi ] .then , it estimates the parameters and using the training data .finally , it plugs those estimates into the unlabeled bayes classifier , eq . , resulting in for brevity , we will sometimes refer to the above three induced classifiers as simply `` classifiers '' .moreover , the sequence of classifiers ( for example , ) we will also refer to as a `` classifier '' .the three parametric classifiers , eqs . , admit classifier estimators that exist , are unique , and moreover , are universally consistent , although the relative convergence rates and values that they converge to differ .let be the risk of the induced _ labeled _ graph bayes plugin classifier using the training data to obtain maximum likelihood estimators for .note that is a random variable , as it is a function of the random training data .this yields [ thm:4 ] as .because and are both finite , the maximum likelihood estimates for the categorical parameters are guaranteed to exist and be unique .hence , the labeled graph bayes plugin classifier is universally consistent to ( that is , it converges to regardless of the true joint distribution , ) .similarly , let be the risk of the induced _ shuffled _ graph bayes plugin classifier using the training data to obtain maximum likelihood estimators for .this yields [ cor : sh_plug ] as .the previous proof rests on the finitude of , which remains finite after shuffling ( uniform or otherwise ) , and therefore , the previous proof holds , replacing with .thus while one could merely plug the shuffled graphs into , such a procedure is inadvisable .specifically , the above procedure does not use the fact that all whenever for some .instead , consider the risk of the induced _ unlabeled _ graph bayes plugin classifier upon using the function to map each shuffled graph to its corresponding unlabeled graph , and then obtaining maximum likelihood estimates of the unlabeled graph parameters , .[ cor : un_plug ] as .because ( by a factor of approximately ) , it follows that classifying by first projecting the graphs into a lower dimensional space should yield improved performance .specifically , we have the following result : [ thm : tdomp ] dominates for _ shuffled _ graph data .consider the scalar decomposed into the vector , where each .note that each .yet , the estimators , and are not equal , because the former can borrow strength from all shuffled graphs within the same unlabeled graph , but the latter does not . assuming without loss of generality that the class priors are equal and known , the above domination claim is equivalent to stating that for each , \leq { \mathbb{p}}[{\operatornamewithlimits{argmax}}_{y \in { \mathcal{y } } } { \hat{\theta}}'_{g|y } \neq { \operatornamewithlimits{argmax}}_{y \in { \mathcal{y } } } \theta'_{g|y } | { \mathcal{t}}_s'].\end{aligned}\ ] ] because , the only difference between the two sides of the above inequality is the estimators .we know that the estimators have the following distributions : where is the number of observations of any in the training data , and is the number of observations of in the training data . 
from this, we see that for each , will have a tighter concentration around the truth due to is borrowing strength , because , so our result holds .corollary [ cor : un_plug ] demonstrates that one can induce a universally consistent classifier using eq . .lemma [ thm : tdomp ] further shows that the performance of dominates . yet, using is practically useless for two reasons .first , it requires solving graph isomorphism problems .unfortunately , there are no algorithms for solving graph isomorphism problems with worst - case performance known to be in only polynomial time .second , the number of parameters to estimate is super - exponential in ( ) , and acceptable performance will typically require .we can therefore not even store the parameter estimates for small graphs ( e.g. , ) , much less estimate them .this motivates consideration of an alternative strategy .a nearest - neighbor ( ) classifier using euclidean norm distance is universally consistent to for vector - valued data as long as with as .this non - parametric approach circumvents the need to estimate many parameters in high - dimensional settings such as graph - classification .the universal consistency proof for was extended to graph - valued data in reference , which we include here for completeness .specifically , to compare labeled graphs , reference considered a frobenius norm distance where is the adjacency matrix representation of the labeled graph , .let denote the frobenius norm classifier on _ labeled _ graphs using , and let indicate the misclassification rate for this classifier .reference showed : [ thm:5 ] as . because both and have finite cardinality , the law of large numbers ensures that eventually as , the plurality of nearest neighbors to a test graph will be identical to the test graph .let denote the frobenius norm classifier on _ shuffled _ graphs using , and let indicate the misclassification rate for this classifier .from the above lemma and corollary [ cor : sh_plug ] , the below follows immediately : [ cor : knn1 ] as .given shuffled graph data , however , other distance metrics appear more `` natural '' to us .for example , consider the `` graph - matched frobenius norm '' distance : where and are shuffled adjacency matrices .let indicate the misclassification rate of the classifier using the above graph - matched norm _ shuffled _ graphs , and let indicate the misclassification rate for this classifier . given an exact graph matching function a function that actually solves eq .we have the following result : [ cor : sh_knn ] as . 
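a minimal sketch of the frobenius - norm classifier on ( possibly shuffled ) adjacency matrices discussed above , with the shuffle channel realized as conjugation by a random vertex permutation . the data , graph sizes and edge densities are toy choices , not the experiments of this paper , and the graph - matched variant ( which would additionally minimize the distance over permutations ) is omitted here because of its cost .

```python
import numpy as np

def shuffle_graph(A, rng):
    """Apply a uniformly random vertex permutation to adjacency matrix A (Q A Q^T)."""
    perm = rng.permutation(A.shape[0])
    return A[np.ix_(perm, perm)]

def knn_predict(A_test, train_graphs, train_labels, k=3):
    """k-nearest-neighbour vote using the Frobenius norm between adjacency matrices."""
    dists = [np.linalg.norm(A_test - A, ord="fro") for A in train_graphs]
    nearest = np.argsort(dists)[:k]
    values, counts = np.unique(train_labels[nearest], return_counts=True)
    return values[np.argmax(counts)]

# toy usage: two classes of random graphs with different edge densities
rng = np.random.default_rng(0)
n, m = 20, 100
labels = rng.integers(0, 2, size=m)
graphs = []
for y in labels:
    p = 0.2 if y == 0 else 0.4
    upper = rng.random((n, n)) < p
    A = np.triu(upper, 1).astype(int)
    A = A + A.T                          # simple, undirected, hollow
    graphs.append(shuffle_graph(A, rng))  # vertex labels destroyed by shuffling
print(knn_predict(graphs[0], graphs[1:], labels[1:], k=5))
```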
thus , given shuffled data , one could consider either or .interestingly , when the data are labeled graphs , , one can outperform by _ shuffling _ , that is , by apparently destroying the label information .consider an example in which , such that no information is in the labels .in such scenarios , shuffling can effectively borrow strength from different labeled graphs that are within the same unlabeled graph set .let indicate the misclassification rate of the classifier using _ labeled _ graphs , and let indicate the misclassification rate for this classifier .we therefore state without proof : [ thm : nodom ] neither nor dominates when data are _ labeled _ graphs .thus , when the training data consists of shuffled graphs , the best universally consistent classifier ( of those considered herein ) is a that uses as the distance metric .other universally consistent classifiers that we considered either require estimating more parameters than there are molecules in the universe , or are inadmissible under loss . when vertex labels are available , no classifier dominates .the above theoretical results consider bayes plug - in and classifiers . herewe consider other classifiers .specifically , let be the misclassification rate for some classifier that operates on , that is , only has access to shuffled graphs .consider the set of seven graph invariants studied in : size , max degree , max eigenvalue , scan statistic , number of triangles , and average path length . via montecarlo , was unable to find a uniformly most powerful graph invariant ( test statistic ) for a particular hypothesis testing scenario with unlabeled graphs .the above results , however , indicate that there exists optimal classifiers ( or test statistics ) for any unlabeled or shuffled graph setting . to proceed , let be the _ chance _ classifier , that is and let be the misclassification rate for this classifier .moreover , let be the risk of the invariant classifier that is equivalent to the unlabeled bayes plug - in classifier ( see lemma [ thm:3 ] ) . from the above results , it follows that : [ thm : order ] in expectation , + as .while asymptotic results can be informative and insightful , understanding the computational properties of the different classifiers can be as ( or even more ) informative for real applications .table [ tab : comp ] compares the space and time complexity of the various classifiers considered above .only the classifiers have the property that they do not require more space than there are atoms in the universe ( for any bigger than ) . of those ,the labeled classifier does not require time exponential in the number of vertices .therefore , we only found one type of classifier with performance guarantees that has both polynomial space and time .unfortunately , the finite sample performance of this classifier is abysmal .this motivates constructing approximate classifiers ..order of computational properties for training the various shuffled graph classifiers . 
[ cols=">,^,^,^",options="header " , ] [ tab : comp ]we buttress the above theoretical results via numerical experiments .the asymptotic results combined with the computational complexities of the above described algorithm suggest that none of the proposed algorithms have all the properties we effectively require for real world applications , in particular , polynomial space and time complexity , as well as reasonable convergence rates .we therefore propose a different algorithm , which lacks universal consistency , but can be run on real data with good hope for reasonable performance . in particular , we modify , the _ unshuffled _ classifier . instead of requiring this classifier to actually solve the graph matching problem , eq . , we use a recently proposed state - of - the - art approximate cubic time algorithm . denote this classifier .a `` connectome '' is a brain - graph in which vertices correspond to ( groups of ) neurons , and edges correspond to connections between them .diffusion magnetic resonance ( mr ) imaging and related technologies are making the acquisition of mr connectomes routine .49 subjects from the baltimore longitudinal study on aging comprise this data , with acquisition and connectome inference details as reported in .each connectome yields a vertex simple graph ( binary , symmetric , and hollow adjacency matrix ) . associated with each graph is class label based on the sex of the individual ( 24 males , 25 females ) .because the vertices are labeled , we can compare the results of having the labels and not having the labels .consider the following five classifiers : * - : a -nearest neighbor ( ) with frobenius norm distance on the _ labeled _ adjacency matrices .* - : a with frobenius norm distance on the _ shuffled _ adjacency matrices .* - : a with an _ approximate _ graph - matched frobenius norm distance on the shuffled adjacency matrices , as described above .because graph - matching is -hard , we instead use an inexact graph matching approach based on the quadratic assignment formulation described in , which only requires time . *- : a with euclidean distance using the seven graph invariants described above .prior to computing the euclidean distance , for each invariant , we rescale all the values to lie between zero and one . * : use the chance classifier defined above .performance is assessed by leave - one - out misclassification rate .figure [ fig:1 ] reifies the above theoretical results in a particular finite sample regime .we apply the five algorithms discussed above to sub - samples of the connectome data , which shows approximate convergence rates for this data . fortunately, this real data example supports the main lemmas of this work .specifically , the classifier using on the _ labeled _ graphs ( dashed gray line ) achieves the lowest misclassification rate for all , which one would expect if labels contain appropriate class signal .moreover , the classifier using the inexact graph - matching frobenius norm on the shuffled adjacency matrices , , performs best of all classifiers using only shuffled graphs ( compare dashed black line with solid black and gray lines ) . on the other hand , while the classifier using the frobenius norm on shuffled graphs , , must eventually converge to , its convergence rate is quite slow , so the classifier using standard invariants outperforms the simple based ., such that errorbars were neglibly small .five classifiers were compared , as described in main text . 
note that when is larger than , as predicted by theory , we have .moreover , . ]in this work , we address both the theoretical and practical limitations of classifying shuffled graphs , relative to labeled and unlabeled graphs . specifically , first we construct the notion of shuffled graphs and shuffled graph classifiers in a parallel fashion with labeled and unlabeled graphs / classifiers , as we were unable to find such notions in the literature .then , we show that shuffling the vertex labels results in an irretrievable situation , with a possible degradation of classification performance ( lemma [ thm:1 ] ) .even if the vertex labels contained class - conditional signal , bayes performance may remain unchanged ( lemma [ thm:2 ] ) .moreover , although one can not recover the vertex labels , one can obtain a bayes optimal classifier by solving a large number of graph isomorphism problems ( lemma [ thm:3 ] ) .this resolves a theoretical conundrum : is there a set of graph invariants that can yield a universally consistent graph classifier ? when the generative distribution is unavailable , one can induce a consistent and efficient `` unshuffling '' classifier by using a graph - matching strategy ( corollary [ cor : un_plug ] ) . while this unshuffling approach dominates the more nave approach ( lemma [ thm : tdomp ] ) , it is intractable in practice due to the difficulty of graph matching and the large number of isomorphism sets . instead, a frobenius norm classifier applied to the adjacency matrices may be used , which is also universally consistent ( corollary [ cor : sh_knn ] ) .surprisingly , none of the considered classifiers dominate the other for labeled data ( lemma [ thm : nodom ] ) , yet asymptotically , we can order shuffled graph classifiers ( lemma [ thm : order ] ) .because graph - matching is -hard , we instead use an approximate graph - matching algorithm in practice ( see for details ) . applying these classifiers to a problem of considerable scientific interest classifying human mr connectomes we find that even with a relatively small sample size ( ) , the approximately graph - matched algorithm performs nearly as well as the algorithm _ using _ vertex labels , and slightly better than a algorithm applied to a set of graph invariants proposed previously .this suggests that the asymptotics might apply even for very small sample sizes .thus , this theoretical insight has led us to improved practical classification performance .extensions to weighted or ( certain ) attributed graphs are straightforward .this work was partially supported by the research program in applied neuroscience .l. devroye , l. gyrfi , g. lugosi , and l. gyorfi , _ a probabilistic theory of pattern recognition_.1em plus 0.5em minus 0.4emnew york : springer , 1996 .[ online ] .available : http://www.amazon.ca/exec/obidos/redirect?tag=citeulike09-20&path=asin/0387946187 j. white , e. southgate , j. n. thomson , and s. brenner , `` the structure of the nervous system of the nematode caenorhabditis elegans . ''_ philosophical transactions of royal society london .series b , biological sciences _ , vol .1165 , pp . 1340 , 1986 .r. p. w. duin , e. pkalska , and e. pkalskab , `` the dissimilarity space : bridging structural and statistical pattern recognition , '' _ pattern recognition letters _ , vol . 
in press ,april , may 2011 .[ online ] .available : http://linkinghub.elsevier.com/retrieve/pii/s0167865511001322 http://www.sciencedirect.com/science/article/pii/s0167865511001322 http://dx.doi.org/10.1016/j.patrec.2011.04.019[http://linkinghub.elsevier.com/retrieve/pii/s0167865511001322 http://www.sciencedirect.com/science/article/pii/s0167865511001322 http://dx.doi.org/10.1016/j.patrec.2011.04.019 ] j. t. vogelstein , r. j. vogelstein , and c. e. priebe , `` are mental properties supervenient on brain properties ? '' _ nature scientific reports _ , vol . in press , p. 11[ online ] .available : http://arxiv.org/abs/0912.1672 h. pao , g. a. coppersmith , c. e. priebe , h. p. ao , g. a. c. oppersmith , and c. e. p. riebe , `` statistical inference on random graphs : comparative power analyses via monte carlo , '' _ journal of computational and graphical statistics _ , pp . 122 , 2010 .[ online ] .available : http://pubs.amstat.org/doi/abs/10.1198/jcgs.2010.09004 p. hagmann , l. cammoun , x. gigandet , s. gerhard , p. ellen grant , v. wedeen , r. meuli , j .-thiran , c. j. honey , and o. sporns , `` mr connectomics : principles and challenges , '' _ j neurosci methods _ , vol .194 , no . 1 ,pp . 3445 , 2010 .[ online ] .available : http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=retrieve&db=pubmed&dopt=citation&list_uids=20096730 w. r. gray , j. a. bogovic , j. t. vogelstein , b. a. landman , j. l. prince , and r. j. vogelstein , `` magnetic resonance connectome automated pipeline : an overview . ''_ ieee pulse_ , vol . 3 , no . 2 ,pp . 428 , mar[ online ] .available : http://ieeexplore.ieee.org/xpl/articledetails.jsp?arnumber=6173097 m. r. garey and d. s. johnson , _ computers and intractability : a guide to the theory of np - completeness ( series of books in the mathematical sciences)_.1em plus 0.5em minus 0.4emw .h. freeman , 1979 .[ online ] .available : http://www.amazon.com/computers-intractability-np-completeness-mathematical-sciences/dp/0716710455 j. t. vogelstein , j. c. m. conroy , l. j. podrazik , s. g. kratzer , d. e. fishkind , r. j. vogelstein , and c. e. priebe , `` ( brain ) graph matching via fast approximate quadratic programming , '' _ arxiv preprint _ | we develop a formalism to address statistical pattern recognition of graph valued data . of particular interest is the case of all graphs having the same number of uniquely labeled vertices . when the vertex labels are latent , such graphs are called _ shuffled graphs_. our formalism provides insight to trivially answer a number of open statistical questions including : ( i ) under what conditions does shuffling the vertices degrade classification performance and ( ii ) do universally consistent graph classifiers exist ? the answers to these questions lead to practical heuristic algorithms with state - of - the - art finite sample performance , in agreement with our theoretical asymptotics . |
given samples from two distributions , one fundamental and classical question to ask is : how close are the two distributions ?first , one must specify what it means for two distributions to be close , for which many different measures quantifying the degree of these distributions have been studied in the past .they are frequently called distance measures , although some of them are not strictly metrics .the divergence measures play an important role in statistical theory , especially in large theories of estimation and testing .they have been applied to different areas , such as medical image registration ( ) , classification and retrieval . in machine learning ,it is often convenient to view training data as a set of distributions and use divergence measuires to estimate dissimilarity between examples .this idea has been used in neuroscience , where the neural response pattern of an individual is modeled as a distribution , and divergence meaures is used to compare responses across subjects ( see , e.g ) .later many papers have appeared in the literature , where divergence or entropy type measures of information have been used in testing statistical hypotheses . for more examples and other possible applications of divergence measures ,see the extended technical report ( ) . for these applications and others , it is crucial to accurately estimate divergences .+ the class of divergence measures is large ; it includes the rnyi- ( ) , tsallis- ( ) , kullback - leibler ( kl ) , hellinger , bhattacharyya , euclidean divergences , etc .these divergence measures can be related to the csiszr- divergence ( ) .the kullback - leibler , hellinger and bhattacharyya are special cases of rnyi- and tsallis- divergences .but the kullback leibler one is the most popiular of these divergence measures .1ex in the nonparametric setting , a number of authors have proposed various estimators which are provably consistent .krishnamurthy and kandasamy used an initial plug - in estimator by estimates of the higher order terms in the von mises expansion of the divergence functional . in their frameworks , they proposed tree estimators for rnyi- , tsallis- , and euclidean divergences between two continuous distributions and establised the rates of convergence of these estimators .1ex the main purpose of this paper is to analyze estimators for divergence measures between two continuous distributions .our approach is similar on those of krishnamurthy and kandasamy and is based on plug - in estimation scheme : first , apply a consistent density estimator for the underlying densities , and then plug them into the desired formulas . unlike of their frameworks , we study the uniform bandwidth consistent estimators of these divergences .we introduce a method to establish consistency of kernel - type estimators divergences between two continuous distributions when the bandwidthh is allowed to range in a small interval which may decrease in length with the sample size .our results will be immediately applicable to proving uniform bandwidth consistency for nomparametric estimation of divergenge measures .1ex the rest of this paper is organized as follows : in section 2 , we introduce divergence measures and we construct their nonparametric estimators . 
in section 3 , we study the unfiform bandwidth consistency of the proposal estimators .section 4 is devoted on the proofs .let us begin by standardizing notation and presenting some basic definitions .we will be concerned with two densities , : ] , where and are as in corollaries ( [ coro3 ] ) and ( [ coro4 ] ) .chose an estimator of in the corollaries ( [ coro3 ] ) and ( [ coro4 ] ) as the form thus , we have consequently , by defining the quantities we get from corollaries ( [ coro3 ] ) and ( [ coro4 ] ) , and thus , we obtain asymptotic certainty interval for and in the following sense . + for each , we have , as , \right)\longrightarrow1.\ ] ] and \right)\longrightarrow1.\ ] ] finally , we will say that the intervals ,\ ] ] and ,\ ] ] provide asymptotic certainty intervals for the divergences and .we have addressed the problem of nonparametric estimation of a class of divergence measures .we are focusing on the rnyi- and the tsallis- divergence measures . under our study , one can easily deduced kullback - leibler , hellinger and bhattacharyya nonparmetric estimators .the results presented in this work are general , since the required conditions are fulfilled by a large class of densities .we mention that the estimator in ( [ divestim ] ) can be calculated by using a monte - carlo method under the density .and a pratical choice of is where and .+ it will be interesting to enrich our results presented here by an additional uniformity in term of in the supremum appearing in all our theorems , which requires non trivial mathematics , this would go well beyond the scope of the present paper .another direction of research is to obtain results , in the case where the continuous distributions and are both unknown .* proof of lemma [ lem1 ] . * to prove the strong consistency of , we use the following expression where and is a sequence of positive constant . define we have since is a 1-lipschitz function , for then . + therefore for , we have where denotes , as usual , the supremum norm , i.e. , .hence , finaly , 1ex we now impose some slightly more general assumptions on the kernel than that of theorem [ theo1 ] . consider the class of functions for , set , where the supremum is taken over all probability measures on , where represents the -field of borel sets of . here , denotes the -metric and is the minimal number of balls of -raduis needed to cover .+ we assume that satisfies the following uniform entropy condition .1ex ( * k.6 * ) for some and , 1ex ( * k.7 * ) is a pointwise measurable class , that is there exists a countable sub - class of such that we can find for any function a sequence of functions in for which this condition is discussed in .it is satisfied whenever is right continuous .1ex _ remark that condition ( * k.6 * ) is satisfied whenever ( * k.1 * ) holds , i.e. , is of bounded variation on ( in the sense of hardy and kauser , see , e.g. and . condition ( * k.7 * ) is satisfied whenever ( * k.2 * ) holds , _i.e. 
_ , is right continuous ( refer to the references therein ) ._ 1ex from theorem 1 in , whenever is measurable and satisfies ( * k.3 - 4 - 6 - 7 * ) , and when is bounded , we have for each pair of sequence , such that , together with and as , with probability 1 since , in view of ( [ terme1 ] ) and ( [ mason2005 ] ) , we obtain with probability 1 it concludes the proof of the lemma .* proof of lemma [ lem2 ] .* + let be the complement of in ( _ i.e _ , ) .we have with and .repeat the arguments above in the terms with the formal change of by .we show that , for any , latexmath:[\[\label{result2 } which implies latexmath:[\[\label{term2 } on the other hand , we know ( see , e.g, ) , that since the density is uniformly lipschitz and continuous , we have for each sequences , with , as , thus , .it is obsious to see that thus , hence , thus , in view of ( [ mason20051 ] ) , we get finaly , in view of ( [ form1 ] ) and ( [ form2 ] ) , we get it concludes the proof of the lemma . * proof of theorem [ theo1 ] . *we have combinating the lemmas ( [ lem1 ] ) and ( [ lem2 ] ) , we obtain it concludes the proof of the theorem .* proof of corollary [ coro1 ] . *remark that using the theorem ( [ theo1 ] ) , we have and the corollary [ coro1 ] holds * proof of corollary [ coro2 ] . *a first order taylor expansion of arround and gives remark that from theorem [ theo1 ] , which turn , implies that thus , for all consequently and the corollary [ coro2 ] holds .* proof of theorem [ theo2 ] .* under conditions , and using taylor expansion of order we get , for , where and thus a straightforward application of lebesgue dominated convergence theorem gives , for large enough , let be a nonempty compact subset of the interior of ( say ) . + first , note that we have from corollary 3.1.2 .p. 62 of viallon ( 2006 )( see also , , statement ( 4.16 ) ) . set , for all , ,1 [ , \\ \label{valiron2d } & \leq & \displaystyle \sup_{x\in \mathrm{j } } \vert \widehat{f}_{n}(x)- f(x ) \vert^{\alpha } \int_{\mathrm{j } } g^{1-\alpha}(x)dx,\\ \label{valiron2 } \displaystyle & \leq & \sup_{x\in \mathrm{j } } \vert \widehat{f}_{n}(x)- f(x ) \vert^{\alpha } \int_{\mathrm{r}^d } g^{1-\alpha}(x)dx.\end{aligned}\ ] ] one fined , by combining ( [ valliron ] ) and ( [ valiron2 ] ) let be a sequence of nondecreasing nonempty compact subsets of such that now , from ( [ dem ] ) , it is straightforward to observe that the proof of theorem [ theo2 ] is completed .* proof of corollary [ coro3 ] . *a direct application of the theorem [ theo2 ] leeds to the corollary [ coro3 ] .* proof of corollary [ coro4 ] . * hereagain , set , for all , a first order taylor expansion of leeds to using condition , is compactly supported ) , is bounded away from zero on its support , thus , we have for enough large , there exists , such that , for all in the support of . from ( [ valiron2d ] ), we have hence , one fined , by combining the last equation with ( [ valliron ] ) the proof of corollary is completed .9 clarkson , j. a. and adams , c. r. ( 1933 ) . on definitions of bounded variation for functions of two variables . _ trans ._ , 35 ( 4 ) , 824 - 854 .csiszr , i. ( 1967 ) .information - type measures of differences of probability distributions and indirect observations ._ studia sci .hungarica , _ 2 : 299 - 318 .deheuvels , p. ( 2000 ) .uniform limit laws for kernel density estimators on possibly unbounded intervals ._ in recent advances in reliability theory _( bordeaux , 2000 ) , stat .technol . , pages 477 - 492 .birkha boston , deheuvels , p. 
and mason , d. m. ( 2004 ) .general asymptotic confidence bands based on kernel - type function estimators .inference stoch . process ._ , 7(3 ) , 225 - 277 .deroye , l. and gyorfi , l. ( 1985 ) .nonparametric density estimation ._ wiley series in probability and mathematical statistics _ : tracts on probability and statistics .john wiley sons inc ., new york .the l1 view .rnyi , a. ( 1961 ) . on measures of entropy and information ._ in fourth berkeley symposium on mathematical statistics and probability ._ rnyi , a. ( 1970 ) .probability theory ._ publishing company , amsterdam . _ villmann , t. and haase , s. ( 2010 ) .mathematical aspects of divergence based vector quantization using frechet - derivatives ._ university of applied sciencesmittweida . _vituskin , a. g. ( 1955 ) .o mnogomernyh variaciyah .tehn.teor . lit . , | we propose nonparametric estimation of divergence measures between continuous distributions . our approach is based on a plug - in kernel - type estimators of density functions . we give the uniform in bandwidth consistency for the proposal estimators . as a consequence , their asymptotic 100% confidence intervals are also provided . , , + |
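one plausible reading of the plug - in / monte - carlo scheme mentioned in the concluding remarks above : estimate f with a gaussian kernel density estimator and approximate the integral defining the renyi divergence of order alpha by sampling under the known density g. the bandwidth rule ( scipy 's default ) , the sample sizes and the gaussian toy check are illustrative assumptions , not the authors ' implementation .

```python
import numpy as np
from scipy import stats

def renyi_divergence_plugin(x_sample, g_pdf, g_sampler, alpha=0.5, n_mc=50_000, seed=None):
    """Plug-in estimate of the Renyi divergence of order alpha,
       D_alpha(f || g) = (1/(alpha-1)) * log E_g[(f_hat/g)^alpha],
    with f estimated by a Gaussian KDE and the expectation under g by Monte Carlo."""
    rng = np.random.default_rng(seed)
    f_hat = stats.gaussian_kde(x_sample)     # kernel plug-in estimate of f
    z = g_sampler(n_mc, rng)                 # Monte Carlo sample from the known density g
    ratio = f_hat(z) / g_pdf(z)
    return np.log(np.mean(ratio ** alpha)) / (alpha - 1.0)

# toy check: f = N(1, 1) estimated from data, g = N(0, 1) known
rng = np.random.default_rng(0)
x = rng.normal(1.0, 1.0, size=5_000)
g_pdf = lambda z: stats.norm.pdf(z, 0.0, 1.0)
g_sampler = lambda n, r: r.normal(0.0, 1.0, size=n)
print(renyi_divergence_plugin(x, g_pdf, g_sampler, alpha=0.5))
# closed form for two unit-variance Gaussians: alpha * (mu_f - mu_g)^2 / 2 = 0.25
```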
a novel influenza strain termed influenza a(h1n1)v , first identified in mexico in march 2009 , has rapidly spread to different countries and is currently the predominant influenza virus in circulation worldwide . as of april 11 , 2010, it has caused at least 17798 deaths in 214 countries .the first confirmed case in india , a passenger arriving from the usa , was detected on may 16 , 2009 in hyderabad .the initial cases were passengers arriving by international flights. however , towards the end of july , the infections appeared to have spread into the resident population with an increasing number of cases being reported for people who had not been abroad .as of 11 april 2010 , there have been 30352 laboratory confirmed cases in india ( out of 132796 tested ) and 1472 deaths have been reported , i.e. , of the cases which tested positive for influenza a(h1n1)v . to devise effective strategies for combating the spread of pandemic influenza a(h1n1 ) , it is essential to estimate the transmissibility of this disease in a reliable manner. this is generally characterized by the reproductive number , defined as the average number of secondary infections resulting from a single ( primary ) infection .a special case is the basic reproductive number , which is the value of measured when the overall population is susceptible to the infection as is the case at the initial stage of an epidemic .estimate of the basic reproduction number for influenza a(h1n1)v in reports published from data obtained for different countries vary widely .for example , has been variously estimated to be between 2.2 to 3.0 for mexico , 1.72 for mexico city , between 1.4 and 1.6 for la gloria in mexico , between 1.3 to 1.7 for the united states and 2.4 for victoria state in australia .the divergence in the estimates for the basic reproductive number may be a result of under - reporting in the early stages of the epidemic or due to climatic variations .they may also possibly reflect the effect of different control strategies used in different regions , ranging from social distancing such as school closures and confinement to antiviral treatments . in this paper , we estimate the basic reproductive number for the infections using the time - series of infections in india extracted from reported data . by assuming an exponential rise in the number of infected cases during the initial stage of the epidemic when most of the population is susceptible, we can express the basic reproductive number as ( see , e.g. , ref . , p. 19 ) , where is the rate of exponential growth in the number of infections , and is the mean generation interval , which is approximately equal to 3 days . using the time - series data we obtain the slope of the exponential growth using several different statistical techniques .our results show that this quantity has a value of around 0.15 , corresponding to .we used data from the daily situation updates available from the website of the ministry of health and family welfare , government of india . in our analysis , data up to september 30 , 2009 was used , corresponding to a total of 10078 positive cases . 
note that , after september 30 , 2009 , patients exhibiting mild flu like symptoms ( classified as categories a and b ) were no longer tested for the presence of the influenza a(h1n1 ) virus .as the data exhibit very large fluctuations , with some days not showing a single case while the following days show extremely large number of cases , it is necessary to smooth the data using a moving window average .we have used an -day moving average ( ) , which removes large fluctuations while remaining faithful to the overall trend .the incidence data for the 2009 pandemic influenza data in india immediately reveals that the disease has been largely confined to the urban areas of the country . indeed , 6 of the 7 largest metropolitan areas of india ( which together accommodate about 5 % of the indian population )account for 7139 infected cases up to september 30 , 2009 , i.e. , of the data - set we have used .figure [ figure1 ] shows the daily number of confirmed infected cases , as well as , the 5-day moving average from june 1 to september 30 , 2009 , for the country as a whole and the six major metropolitan areas which showed the highest incidence of the disease : hyderabad , delhi , bangalore , mumbai , chennai and pune .the adjoining map shows the geographic locations of these six cities . in the period up to july 2009 , infections were largely reported in people arriving from abroad .there is a marked increase in the number of infections towards the end of july and the beginning of august 2009 in all of these cities ( note that the ordinate is in logarithmic scale ) .this is manifest as a sudden rise in the number of infected cases for the country as a whole , implying that the infection started spreading in the resident population in the approximate period of 28 july to august 12 .estimated from the time - series data of number of infected cases , , averaged over a 5-day period to smoothen the fluctuations ( d , solid curve ) .the slope is calculated by considering the number of infected cases over a moving window having different sizes ( ) , ranging between 7 days and 36 days . by moving the starting point of the window across the period 1st june-20th august ( in steps of 1 day ) and calculating the best fit linear slope of the data on a semi - logarithmic scale ( i.e. , time in normal axis , number of infections in logarithmic axis ) we obtain an estimate of .the arrow indicates the region between july 28-august 12 ( region within the broken lines ) , which shows the largest increase in number of infections within the period under study , corresponding to the period when the epidemic broke out in the resident population . over this time - interval, the average of is calculated for the set of starting dates and window sizes over which ( b ) the correlation coefficient between log( ) and , is greater than ( we consider in our analysis ) and ( c ) , the measure of significance for the correlation . ]figure [ figure2 ] ( a ) shows the exponential slope estimated in the following way . the time - series of the number of infections is first smoothed by taking a 5-day moving average .the resulting smoothed time - series is then used to estimate by a regression procedure applied to the logarithm of the number of infected cases [ log( ) ] across a moving window of length days .the origin of the window is varied across the period 1st june to 20th august ( in steps of 1 day ) .we then repeat the procedure by varying the length of the window over the range of 7 days to 36 days . 
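a minimal sketch of the smoothing and moving - window log - linear regression just described ; the daily counts , window lengths and acceptance thresholds below are illustrative , not the actual surveillance data , and the filtering by the correlation coefficient and its significance is the one described in the following paragraphs .

```python
import numpy as np
from scipy import stats

def moving_average(counts, window=5):
    """Simple window-day moving average of daily case counts."""
    kernel = np.ones(window) / window
    return np.convolve(counts, kernel, mode="valid")

def sliding_window_slopes(smoothed, min_len=7, max_len=36):
    """Fit log(cases) ~ slope * t on every window start and every window length
    in [min_len, max_len]; return (start, length, slope, r, p) tuples."""
    results = []
    log_cases = np.log(smoothed)
    for start in range(len(smoothed)):
        for length in range(min_len, max_len + 1):
            end = start + length
            if end > len(smoothed):
                break
            t = np.arange(start, end)
            slope, _, r, p, _ = stats.linregress(t, log_cases[start:end])
            results.append((start, length, slope, r, p))
    return results

# illustrative daily counts (in the article: confirmed cases, 1 june - 30 september 2009)
daily_counts = np.array([1, 2, 2, 3, 5, 4, 7, 9, 12, 15, 14, 20, 26, 30, 41, 50, 66, 80])
fits = sliding_window_slopes(moving_average(daily_counts), min_len=7, max_len=10)
good = [s for (_, _, s, r, p) in fits if r > 0.95 and p < 0.05]   # illustrative cutoffs
print(np.mean(good))   # average exponential growth rate over accepted windows
# the basic reproductive number then follows from this growth rate and the
# mean generation interval, via the relation used in the article
```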
to quantify the quality of regression we calculate the correlation coefficient [ fig .[ figure2 ] ( b ) ] between log ( ) and time ( in days ) , and its measure of significance [ fig .[ figure2 ] ( c ) ] .the correlation coefficient is bounded between and 1 , with a value closer to 1 indicating a good fit of the data to an exponential increase in the number of infections .the measure of significance of the fitting is expressed by the corresponding -value , which expresses the probability of obtaining the same correlation by random chance from uncorrelated data .the average of the estimated exponential slope is obtained by taking the mean of all values of obtained for windows originating between july 28-aug 12 and of various sizes , for which the correlation coefficient ( we consider in our analysis ) and the measure of significance . for comparison, we show again in figure [ figure2 ] ( d ) the number of infected cases of h1n1 in india ( dotted ) together with its 5-day moving average ( solid line ) .the horizontal broken lines running across the figure indicate the period between july 28 and august 12 which exhibited the highest increase in number of infections within the period under study ( from 1st june to 30th september ) . of the variation in log( ) with time , as a function of the threshold of correlation coefficient , , used to filter the data .the averaging is performed for infections occurring within the period july 28-august 12 ( for details see caption to fig .[ figure2 ] ) .different symbols indicate the actual daily time - series data ( squares ) and the data smoothed over a moving -day period , with = 2 ( right - pointed triangle ) , 3 ( diamond ) , 4 ( inverted triangle ) , 5 ( circle ) and 10 ( triangle ) . the significance of the correlation between log( ) with time , for all data points used in performing the average .note that for the data show very similar profiles for variation of with , indicating the robustness of the estimate with respect to different values of used .the sudden increase in the value of the average slope around implies that beyond this region the slope depends sensitively on the cutoff value .considering the region where the variation is more gradual gives us an approximate value of the slope , corresponding to a basic reproduction number . ]figure [ figure3 ] shows the average exponential slope as a function of , calculated for the original data and for different periods over which the moving average is taken ( and 10 ) . for 3 - 5, the data show a similar profile indicating the robustness of the estimate of the average exponential slope with respect to different values of .the sudden increase in around implies that beyond this region the slope depends sensitively on the cutoff value .considering the region where the variation is smoother gives an approximate value , corresponding to a basic reproductive number for the epidemic , assuming the mean generation interval , days .we compute the confidence bounds for the estimate of from the 5-day moving average time - series by using the _ confint _ function of the scientific software matlab .this function generates the goodness of fit statistics using the solution of the least squares fitting of log( ) to a linear function .it results in a mean value , with the corresponding confidence intervals calculated as [ 0.116 , 0.206 ] , consistent with our previous estimate of . 
,calculated for different periods ( with the abscissa indicating the starting date and the symbol indicating the duration ) from the 5-day moving average time - series data of infected cases in india .the curves corresponding to the periods of different durations ( 14 - 16 days ) intersect around july 31 , 2010 , indicating that the value of the average exponential slope is relatively robust with respect to the choice of the period about this date .( b ) the distribution of bootstrap estimates of the exponential slope for the period july 31 to august 15 , 2009 .the average slope obtained from 1000 bootstrap samples is 0.166 with a standard deviation of 0.024 , which agrees with the approximate value of ( corresponding to ) calculated in fig .[ figure3 ] . ]we have also used bootstrap methods to estimate the exponential slope , .this involves selecting random samples with replacement from the data such that the sample size equals the size of the actual data - set .the same analysis that was performed on the empirical data is then repeated on each of these samples .the range of the estimated values calculated from the random samples allows determination of the uncertainty in estimation of .[ figure4 ] ( a ) shows the average , , calculated for different periods ( with abscissa indicating the starting date and the symbol indicating the duration of the period ) from the 5-day moving average time - series data of infected cases . the curves corresponding to the periods of different durations ( 14 - 16 days ) intersect around july 31 , 2010 , indicating that the value of the average exponential slope is relatively robust with respect to the choice of the period about this date .the average value of the bootstrap estimates at the intersection of the three curves is 0.15 , in agreement with our earlier calculations of .[ figure4 ] ( b ) shows the distribution of the bootstrap estimates of the exponential slope for a particular period , july 31 to august 15 , 2009 .the average slope obtained from 1000 bootstrap samples for this period is 0.166 with a standard deviation of 0.024 , which indicates that the spread of values around the average estimate of = 0.15 is not large .this confirms the reliability of the estimated value of the exponential slope , and hence of our calculation of the basic reproductive number .it may appear surprising that there was a very high number of infections in pune ( 1238 positive cases up to september 30 ) , despite it being less well - connected to the other major metropolitan cities of india , in comparison to urban centres that did not show a high incidence of the disease .for example , the kolkata metropolitan area , which has a population around three times the population of the pune metropolitan area , had only 113 positive cases up to september 30 .this could possibly reflect the role of local climatic conditions : pune , located at a relatively higher altitude , has a generally cooler climate than most indian cities .in addition , the close proximity of pune to mumbai and the high volume of road traffic between these two cities could have helped in the transmission of the disease .another feature pointing to the role of local climate is the fact that in chennai , most infected cases were visitors from outside the city , while in pune , the majority of the cases were from the local population , even though the total number of infected cases listed for the two cities in our data - set are comparable ( 928 in chennai and 1213 in pune ) .this suggests the possibility 
that the incidence of the disease in pune could have been aided by its cool climate , in contrast to the hotter climate of the coastal city of chennai .the calculation of for india assumes well - mixing of the population ( i.e. , homogeneity of the contact structure ) among the major cities in india .given the rapidity of travel between the different metropolitan areas via air and rail , this may not be an unreasonable assumption .however , some local variation in the development of the epidemic in different regions can indeed be seen ( fig .[ figure1 ] ) . around the end of july ,almost all the cities under investigation showed a marked increase in the number of infected cases - indicating spread of the epidemic in the local population .this justifies our assumption of well - mixing in the urban population over the entire country for calculating the basic reproductive number . to conclude, we stress the implications of our finding that the basic reproductive number for pandemic influenza a(h1n1)v in india lies towards the lower end of the values reported for other affected countries .this suggests that season - to - season and country - to - country variations need to be taken into account in order to formulate strategies for countering the spread of the disease .evaluation of the reproductive number , once control measures have been initiated , is vital in determining the future pattern of spread of the disease .ministry of health and family welfare , government of india , situation update on h1n1 , 11 april 2010 .available from : ` http://mohfw-h1n1.nic.in/documents/pdf/situational ` ` updatesarchives / april2010/situational%20updates%20 ` ` on%2011.04.2010.pdf ` bolle , p. y. , bernillon , p. and desenclos , j. c. , a preliminary estimation of the reproduction ratio for new influenza a(h1n1 ) from the outbreak in mexico , march - april 2009 ._ euro surveill . _ , 2009 , * 14*(19 ) , pii : 19205 .cruz - pacheco , g. , duran , l. , esteva , l. , minzoni , a. a. , lopez - cervantes , m. , panayotaros , p. , ahued ortega , a. and villasenor ruiz , i. , modelling of the influenza a(h1n1)v outbreak in mexico city , april - may 2009 , with control sanitary measures ._ euro surveill . _ , 2009 , * 14*(26 ) , pii:19254 .fraser , c. , donnelly , c. a. , cauchemez , s. , hanage , w. p. , van kerkhove , m. d. , hollingsworth , t. d. , griffin , j. , baggaley , r. f. , jenkins , h. e. , lyons , e. j. , jombart , t. , hinsley , w. r. , grassly , n. c. , balloux , f. , ghani , a. c. , ferguson , n. m. , rambaut , a. , pybus , o. g. , lopez - gatell , h. , alpuche - aranda , c. m. , chapela , i. b. , zavala , e. p. , guevara , d. m. , checchi , f. , garcia , e. , hugonnet , s. , roth , c. ; who rapid pandemic assessment collaboration , pandemic potential of a strain of influenza a(h1n1 ) : early findings ._ science _, 2009 , * 324 * pp .1557 - 1561 .yang , y. , sugimoto , j. d. , halloran , m. e. , basta , n. e. , chao , d. l. , matrajt , l. , potter , g. , kenah , e. and longini , i. m. , the transmissibility and control of pandemic influenza a(h1n1 ) virus ._ science _ , 2009 , * 326 * , pp .729 - 733 .mcbryde , e. , bergeri , i. , van gemert , c. , rotty , j. , headley , e. , simpson , k. , lester , r. , hellard , m. and fielding , j. , early transmission characteristics of influenza a(h1n1)v in australia : victorian state , 16 may - 3 june 2009 ._ euro surveill ._ , 2009 , * 14*(42 ) , pii : 19363 . 
we analyze the time-series data for the onset of the a(h1n1)v influenza pandemic in india during the period june 1 - september 30, 2009. using a variety of statistical fitting procedures, we obtain a robust estimate of the exponential growth rate . this corresponds to a basic reproductive number for influenza a(h1n1)v in india , a value which lies towards the lower end of the range of values reported for different countries affected by the pandemic.
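to make the bootstrap procedure described above concrete, the following sketch resamples (day, count) pairs with replacement — keeping the sample size equal to the original data set — and refits the exponential slope on each resample. the fitting step is assumed here to be an ordinary least-squares fit of log(cases) against time; the window length, the synthetic data and all function names are illustrative only and are not taken from the study.

```python
import numpy as np

def exponential_slope(days, cases):
    """Least-squares slope of log(cases) versus time (per day)."""
    slope, _ = np.polyfit(days, np.log(cases), 1)
    return slope

def bootstrap_slope(days, cases, n_boot=1000, seed=0):
    """Bootstrap distribution of the exponential slope: resample with
    replacement, same sample size as the data, refit each time."""
    rng = np.random.default_rng(seed)
    days = np.asarray(days)
    cases = np.asarray(cases, dtype=float)
    n = len(days)
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)       # sample with replacement
        estimates[b] = exponential_slope(days[idx], cases[idx])
    return estimates.mean(), estimates.std()

# Synthetic 16-day window (placeholder for the smoothed case counts).
days = np.arange(16)
noise = 0.05 * np.random.default_rng(1).normal(size=days.size)
cases = 100.0 * np.exp(0.166 * days + noise)
mean_slope, std_slope = bootstrap_slope(days, cases)
print(mean_slope, std_slope)
```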
dynamical systems subject to random abrupt changes , such as manufacturing systems , networked control systems , economics and finance _ etc_. , can be modelled adequately by random jump linear systems ( rjlss ) .rjlss are a particular kind of stochastic switching systems , which consists of a set of linear systems , also called multiple modes , and the switching among them is governed by a random jump process .a notable class of rjlss is markov jump linear system ( mjls ) in which the underlying random jump process is a finite state markov chain ( or a finite state markov process ) .many important results related to stability , control , and applications of such systems have been investigated in the literature , for instance in , , , , , etc .almost all the works related to mjlss assume that the underlying random jump process is time - homogeneous / time - inhomogeneous markov , which is a restrictive assumption . in this article, we deal with a class of rjlss in which the evolutions of the random jump process depends on the state variable , and are referred to as _ state - dependent jump linear systems _( sdjls ) . in the following ,we list some motivations for sdjls modelling of dynamical systems . in the analysis of random breakdown of components , the age , wear , and accumulated stress of a component affect its failure rate , for instance .thus , it can be assumed that the failure rate of a component is dependent on state of the component at age , where the state variable may be an amount of wear , stress etc . as an another instance , in , a state - dependent markov process was utilized to describe the random break down of cylinder lines in a heavy - duty marine diesel engine .also , we can examine a stock market with situations : up and down , and the transitions between the situations can be dependent on state of the market , where the state variable may be general mood of investors and current economy etc . also , a state - dependent regime switching model was considered in to model financial time series .one can find other instances or examples of sdjls modelling in the literature .the studies of stability and control of sdjlss have been scanty in the literature .a study of hybrid switching diffusion processes , a kind of continuous - time state - dependent jump non - linear systems with diffusion , was considered in by treating existence , uniqueness , stability of the solutions etc . for rjlss , a state and control dependent random jump process was considered in , where the authors used stochastic maximum principle to obtain optimal control for a given objective function .sdjls modelling of flexible manufacturing system was proposed in , and dynamic programming is used to obtain an optimal input which minimizes the mentioned cost .a state - dependent jump diffusion modelling of a production plant was considered in to obtain an optimal control . in the sequel, we bring back the attention to the main ingredients of the problem that we address in this article . 
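to illustrate the state-dependent switching just described, the sketch below simulates a small sdjls in which the mode transition matrix is selected according to the region in which the current state lies; the system matrices, the two transition matrices and the threshold are made-up placeholders, not parameters from any of the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two linear modes x_{k+1} = A[mode] x_k + w_k with additive Gaussian noise.
A = [np.array([[0.9, 0.2], [0.0, 0.8]]),
     np.array([[1.1, 0.0], [0.3, 0.7]])]
# Transition matrices chosen depending on where the state lies (state-dependent jumps).
P_small = np.array([[0.9, 0.1], [0.5, 0.5]])   # used when the state norm is small
P_large = np.array([[0.6, 0.4], [0.2, 0.8]])   # used when the state norm is large

def step(x, mode):
    P = P_small if np.linalg.norm(x) < 1.0 else P_large
    mode = rng.choice(2, p=P[mode])             # jump governed by the current state
    x = A[mode] @ x + 0.05 * rng.standard_normal(2)
    return x, mode

x, mode = np.array([0.5, -0.2]), 0
trajectory = []
for k in range(50):
    x, mode = step(x, mode)
    trajectory.append((x.copy(), mode))
```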
in this article, we consider that the sdjls is affected by possibly unbounded stochastic disturbances , and the perfect state information is assumed .we also deal with constraints on different variables of the system that is inherent in all practical systems .model predictive control ( mpc ) , also called receding horizon control ( rhc ) , is an effective control algorithm that has a great potential to handle input and/or state constraints , for problems across different disciplines .mpc is a form of control in which the current input is obtained at each sampling time by solving on - line a finite horizon optimal control problem in the presence of constraints , using the current information and the predicted information over the finite horizon .normally more than one input is obtained at the current sampling time , however , only the first controller input will be implemented to the plant . at the next sampling time , these actions will be repeated , that is why the mpc is also called the rhc .one can refer to , , , etc ., for classic contributions in the mpc literature . in the context of rjlss , of late , the rhc scheme has been extended to discrete - time mjlss . for discrete - time mjlss ,the rhc with hard symmetric constraints and bounded uncertainties in system parameters was dealt by , , where the constraints and the objective function were posed as sufficient conditions in terms of linear matrix inequalities ( lmis ) to be solved at each sampling time ; for the similar case without constraints , the rhc was addressed by following the similar approach .also , for unconstrained state and control input , optimality of the rhc was addressed via dynamic programming , variational methods , solving riccati equations , etc .one major issue in the presence of unbounded disturbances is that the rhc can not necessarily guarantee the satisfaction of constraints .for instance , an additive unbounded disturbance eventually drives the state variable outside any bounded limits , no matter how arbitrarily large they may be .a possible alternative is to consider the satisfaction of constraints in stochastic manner , which allow occasional constraint violations . in this direction , recent approaches , , , and the references therein treat the rhc of discrete - time linear systems with stochastic constraints . for discrete - time mjlss , in case of perfect state availability , a linear quadratic regulator problem with second moment constraints was considered in , where the entire problem was converted to a set of lmis .however , to the best of the authors knowledge , the rhc of discrete - time sdjlss with probabilistic constraints has not been examined . in this article , we address a one - step rhc of discrete - time sdjlss with additive gaussian process noise ( its distribution has an unbounded support ) , and probabilistic state constraints under perfect state availability .we would like to highlight several challenges in our problem set - up .first , in the presence of additive process noise with unbounded support , it impossible to guarantee hard bounds on the state , and also on the linear state - feedback control .second , one needs to pre - stabilize the system before addressing the rhc problem .third , one needs to obtain a tractable representation of the rhc problem in the presence of probabilistic constraints. our approach along with main contributions in this article can be listed as follows . 
in our problem set - up , we consider the control to have a linear state - feedback and an offset term , , where linear state - feedback gains are computed off - line for pre - stabilization and admissible offset terms are computed on - line to solve the rhc problem . in the presence of unbounded process noise , it is not possible to ensure hard bounds on the state and the control variables that follow state - feedback law , thus we consider the state variable to be probabilistically constrained and the control to be unconstrained , except for the offset term. using inverse cumulative distribution function , we convert the probabilistic state constraints to deterministic constraints and the overall rhc problem is replaced by a tractable deterministic rhc problem .to summarize , for sdjls subject to possibly unbounded random disturbances and probabilistic state constraints , our contributions in this article are : * pre - stabilizing the system state by a state - feedback controller in means square sense , * implementing a one - step rhc scheme on - line with probabilistic constraints on the state variable , which are converted to deterministic constraints . for illustration , we apply our approach to a macroeconomic situation .the article is organized as follows .section [ sec : mathematical - model ] presents the problem setup .we present the pre - stabilization of the system by a state feed - back controller in section [ sec : pre - stabilization ] .we convert the probabilistic constraints to suitable deterministic constraints in section [ sec : prob_constraints ] .we give a one - step rhc scheme with probabilistic constraints in section [ sec : one - step - rhc - with ] .section [ sec : examples ] presents an illustrative example followed by conclusions in section [ sec : conclusions ] . finally , we give majority of the proofs in the appendix to improve readability . _notation : _ let denotes the -dimensional real euclidean space and the set of non - negative integers . for a matrix , denotes the transpose , the minimum ( maximum ) eigenvalue and the trace of . the standard vector norm in is denoted by the corresponding induced norm of a matrix by . given a matrix , ( or ) denotes that the matrix is positive definite ( or negative definite ) . given two matrices and , ( or )denotes the element wise inequalities .symmetric terms in block matrices are denoted by a matrix product denotes the identity matrix of dimension is denoted by .the diagonal matrix formed from its vector arguments is denoted by diag .the underlying probability space is denoted by where is the space of elementary events , is a -algebra , and is the probability measure . the mathematical expectation of a random variable is denoted by ] . in the sequel , we try to obtain deterministic constraints that imply the probabilistic constraint ( [ constraint : multivariate ] ) .we say it as converting probabilistic constraint to deterministic ones .a sufficient condition to satisfy ( [ constraint : multivariate ] ) is given by , now , we present a sufficient deterministic condition to satisfy ( [ constraint : multivariate_final ] ) via approximation of inscribed ellipsoidal constraints , . [theorem : multivariate]let , where denotes chi - square inverse cumulative distribution function with a given probability and degrees of freedom .if then , where and are rows of and respectively . the proof is given on the similar lines of , and outlined in appendix b. 
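since the inequality in the theorem above is not fully reproduced in the text, the sketch below shows one standard form of the inscribed-ellipsoid sufficient condition it refers to: for a gaussian vector y ~ N(mu, Sigma), the ball of chi-square radius r (with r^2 the xi-quantile of the chi-square distribution with n degrees of freedom) is inscribed in the whitened polyhedron, which tightens every face by r times the standard deviation along its normal. this should be read as an assumed reconstruction, not the paper's exact statement.

```python
import numpy as np
from scipy.stats import chi2
from scipy.linalg import sqrtm

def joint_constraint_holds(G, h, mu, Sigma, xi):
    """Conservative (sufficient) check for Pr{G y <= h} >= xi with y ~ N(mu, Sigma):
    require g_j' mu + r * ||Sigma^{1/2} g_j|| <= h_j for every row j,
    where r = sqrt(chi2_inv(xi, n))."""
    n = len(mu)
    r = np.sqrt(chi2.ppf(xi, df=n))
    S_half = np.real(sqrtm(Sigma))
    lhs = G @ mu + r * np.linalg.norm(G @ S_half, axis=1)
    return bool(np.all(lhs <= h))

# Illustrative numbers only.
G = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0]])
h = np.array([5.0, 2.0, 3.0])
print(joint_constraint_holds(G, h, mu=np.array([1.0, 0.0]),
                             Sigma=0.25 * np.eye(2), xi=0.95))
```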
note that ( [ constraint : chi_square ] ) is an over conservative condition , because the deterministic condition ( [ constraint : chi_square ] ) implies ( [ constraint : multivariate_final ] ) , which finally implies ( [ constraint : multivariate ] ) .even for this special case , we could not obtain an equivalent representation of ( [ constraint : multivariate_final ] ) , which is non - convex in general , where it is hard to find its feasible region .so , alternatively , we propose the individual probabilistic constraints of type where and represent the row of and respectively . with given ,the constraints ( [ constraint : univariate ] ) offer satisfaction of each individual constraint of the polyhedron probabilistically , but with more constraint violations than ( [ constraint : multivariate ] ) that will also be observed in section [ sec : examples ] .however , we consider the individual probabilistic constraints ( [ constraint : univariate ] ) , because they are simpler to handle and in general convex , .similar to the above treatment , a sufficient condition to satisfy ( [ constraint : univariate ] ) is given by by ( [ sys : stabilized_input ] ) , one obtains , where and are the cumulative distribution and inverse cumulative distribution of the random variable respectively .the probabilistic constraints ( [ constraint : univariate ] ) result in handling with uni - variate gaussian random variables when converting to deterministic constraints ( [ constraint : det_univariate ] ) , which is straightforward . in this case , given , can easily be obtained .observe that the conditions ( [ constraint : univariate_final ] ) and ( [ constraint : det_univariate ] ) are equivalent , which imply ( [ constraint : univariate ] ) .the probabilistic constraints ( [ constraint : multivariate ] ) and ( [ constraint : univariate ] ) are two different ways of treating the constraints in a probabilistic fashion .we consider the probabilistic constraints ( [ constraint : univariate_final ] ) because of the simplicity and low conservatism involved .in this section , we provide a one - step rhc problem with the objective function ( [ obj : func ] ) subject to the probabilistic constraints ( [ constraint : univariate_final ] ) and the state - feedback control input ( [ eq : stabilization_input ] ) . at each , we consider the following one - step rhc problem \nonumber \\ & \text{s.t.}\ ; ( \ref{sys : djls_noise } ) , ( \ref{eq : p_tran}),\nonumber\\ & \quad\;\ : \text{pr}\big\{g_{j}x_{k+1}\le h_{j}|\mathcal{f}_k\big\}\ge\xi,\ , 1\le j\le r,\,\xi\in[0,1],\label{eq : prob_constraint}\\ & \quad\;\ : u_{k}=k_{\theta_{k}}x_{k}+\nu_{k},\\ & \quad\;\:\nu_{k}\in\mathbb{u}.\label{eq : input_con}\end{aligned}\ ] ] in , we choose the prediction horizon to be one because of the probabilistic constraints ( [ eq : prob_constraint ] ) and the system ( [ sys : stabilized_input ] ) . in section[ sec : prob_constraints ] , we obtained a deterministic equivalence of the constraints ( [ eq : prob_constraint ] ) in terms of the state variable and the mode at time that are known .in general , the larger prediction horizon result in better performance and more computational burden depending on the system .the choice of the prediction horizon depends on the performance requirements and computational capability .suppose , if we consider a multi - step prediction , the probabilistic constraints in look like . 
by proceeding with the similar approach of section [ sec : prob_constraints ] , we can obtain an equality similar to ( [ constraint : remark_univar ] ) that contain additional unknown random variables , where it is not possible to obtain its deterministic equivalence .thus , we choose a one - step prediction horizon to obtain a deterministic equivalence of the probabilistic constraints ( [ eq : prob_constraint ] ) for tractability of . at this point , we give a brief review of some works in the literature of rjlss with multi - step prediction . in case of perfect state availability , without process noise ,a multi - step prediction horizon was considered for discrete - time mjlss with hard symmetric input and state constraints , , where an upper bound of the objective function was minimized and the overall problem with the constraints was converted to sufficient lmis .the approach was based on , where the state variable was restriced to an invariant ellipsoid in the constrained space and the techniques of lmis and lyapunov method were utilized .however , it brings a computational burden because of the additional lmis , and more importantly it reduces the feasibility region drastically depending on the invariant ellipsoid . to avoid the sufficiency, the authors in directly considered ellipsoidal constraints ( in terms of second moment constraints ) for discrete - time mjlss with multi - step prediction , where the constraints were replaced by a set of lmis .it can be possible that the become infeasible when the constraints ( [ eq : prob_constraint ] ) are tight with a given admissible input . to resolve this issue ,the constraints can be relaxed by an additional slack variable as , where denotes a column vector of appropriate size , which contain all ones .thus from ( [ constraint : det_univariate ] ) , the addition of a variable can be compensated by adding a variable to the objective function in . in particular , should be chosen as a very high value , which act as a penalty to discourage the use of slack variable . thus will be converted to +\alpha\rho_{k}\nonumber \\ & \text{s.t.}\ ; ( \ref{sys : djls_noise } ) , ( \ref{eq : p_tran}),\nonumber\\ & \quad\;\ : g_{j}\big(\tilde{a}_{\theta_{k}}x_{k}+b_{\theta_{k}}\nu_{k}\big)\le h_{j}-f_{g_{j}w_{k}}^{-1}(\xi)+\mathbf{1}\rho_{k},\ , 1\le j\le r,\,\xi\in[0,1],\label{constraint : slack_rhc2}\\ & \quad\;\ : u_{k}=k_{\theta_{k}}x_{k}+\nu_{k},\\ & \quad\;\:\nu_{k}\in\mathbb{u } , \\ & \quad\;\:\rho_{k}\ge0.\label{eq : epsilon}\end{aligned}\ ] ] notice that due to ( [ constraint : slack_rhc2 ] ) , ( [ eq : epsilon ] ) , is always feasible .the main task is to minimize the objective function in by a proper choice of to accomplish this task , we present in a tractable fashion in the following . 
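a minimal sketch of the relaxed one-step problem above, using cvxpy and the univariate tightening of the previous section (each bound h_j reduced by the xi-quantile of the scalar gaussian g_j' w_k). the quadratic stage cost is a generic stand-in, since the expectation terms of the objective are not reproduced here, and the noise covariance, the box set for the offset nu and the penalty alpha are assumed placeholders.

```python
import numpy as np
import cvxpy as cp
from scipy.stats import norm

def one_step_rhc(x, A_tilde, B, G, h, Sigma_w, xi=0.9, alpha=1e4, nu_bound=5.0):
    """Solve the slack-relaxed one-step RHC for the current state x and mode
    (A_tilde is the closed-loop matrix A + B K of the active mode)."""
    # Univariate tightening: subtract the xi-quantile of g_j' w_k from each h_j.
    std = np.sqrt(np.einsum('ij,jk,ik->i', G, Sigma_w, G))
    h_tight = h - norm.ppf(xi) * std

    m = B.shape[1]
    nu = cp.Variable(m)                       # admissible offset term
    rho = cp.Variable(nonneg=True)            # slack keeping the problem feasible
    x_pred = A_tilde @ x + B @ nu             # predicted mean of x_{k+1}
    cost = cp.quad_form(x_pred, np.eye(len(x))) + cp.sum_squares(nu) + alpha * rho
    constraints = [G @ x_pred <= h_tight + rho * np.ones(len(h)),
                   cp.norm(nu, 'inf') <= nu_bound]
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return nu.value, rho.value
```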
in order to write the in terms of tractable terms , consider the term ] with ^{t} ] .the state - dependent transitions of the mode are given by ( [ eq : p_tran ] ) , where ,\mu=\left[\begin{array}{ccc } 0.67 & 0.17 & 0.16\\ 0.30 & 0.47 & 0.23\\ 0.26 & 0.10 & 0.64 \end{array}\right ] , \ ] ] ,\ , h\equiv h_{k}=\left[\begin{array}{c } -2 - 0.5k\\ 5 + 0.5k \end{array}\right].\ ] ] observe that when the economy ( the state ) satisfies , the transitions among the modes follow , where the transition probabilities to boom are higher , and also the transition probabilities to slump are lower compared to .thus , we observe the transitions among the economic situations depend on the state of the economy .consider the state - feedback control input ( [ eq : stabilization_input ] ) with .for pre - stabilization , we consider proposition [ theorem : stabilization_ss ] that can be satisfied with ,x_{2}=\left[\begin{array}{cc } 1.9255 & 0\\ 0 & 0.7628 \end{array}\right],x_{3}=\left[\begin{array}{cc } 1.1393 & 0\\ 0 & 0.1044 \end{array}\right],\ ] ] ,y_{2}=\left[\begin{array}{cc } 8.2797 & -3.4325\\ \end{array}\right],y_{3}=\left[\begin{array}{cc } -6.0384 & 0.5429 \\ \end{array}\right].\ ] ] thus the system is pre - stabilized in the sense of ( [ eq : msqs ] ) , and the state - feedback gains are given by ,k_{2}=\left[\begin{array}{cc } 4.3 & -4.5\\ \end{array}\right],k_{3}=\left[\begin{array}{cc } -5.3 & 5.2 \\ \end{array}\right].\ ] ] we assume the probabilistic state constraints ( [ constraint : univariate_final ] ) with as a pre - specified monitory target policy at each .it means that the income ( the state ) is required to meet the target ( the range between the red lines given in figure [ fig : ex_statemc ] and figure [ fig : eco_income_compare ] with a probability , which denotes the level of constraint satisfaction .we consider the following parameters for the one - step rhc problem : and using the parameters , we solve , at each time . with these parameterswe obtain as zero for all , which imply that the original problem was feasible and the solution of is equivalent to the solution of .we consider the planning for 5 years of duration , where each unit time represents the period of three months .a sample mode ( ) evolution is given in figure [ fig : ex_mode ] , which shows a sample evolution of an economic situation ( normal , bloom or slump ) .a corresponding optimal cost is shown in figure [ fig : ex_input ] .suppose , if the values of are not zero and larger ( ) to make the constraints ( [ constraint : slack_final ] ) feasible , then this would make the values of , because all the remaining terms of the objective function in are positive . ] ] to observe the probabilistic state constraints ( [ constraint : univariate_final ] ) qualitatively , we performed monte carlo simulations for 50 runs and the incomes are shown in figure [ fig : ex_statemc ] .one can observe occasional constraint violations , because we consider the constraint satisfaction probabilistically . to compare the probabilistic constraints ( [ constraint : univariate_final ] ) , ( [ constraint : multivariate_final ] ), we also solved with the joint probabilistic constraints ( [ constraint : multivariate_final ] ) and the incomes are shown in figure [ fig : ex_state_joint ] that are obtained via monte carlo simulations . 
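the monte carlo experiment described above can be sketched as below: the pre-stabilized closed loop is simulated repeatedly (here with the offset nu_k set to zero) and the empirical frequency of state-constraint violations is compared with 1 - xi. the closed-loop matrices, the constraint and the mode transitions are placeholders, since the macroeconomic parameters are only partially reproduced in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder closed-loop matrices A + B*K for three modes and a placeholder constraint.
A_cl = [np.array([[0.80, 0.10], [0.00, 0.90]]),
        np.array([[0.70, 0.00], [0.20, 0.85]]),
        np.array([[0.90, 0.05], [0.10, 0.75]])]
G = np.array([[1.0, 0.0], [-1.0, 0.0]])
h = np.array([4.0, 1.0])

def one_run(steps=20):
    """Fraction of steps in one closed-loop run where G x <= h is violated."""
    x, mode, violations = np.array([0.5, 0.5]), 0, 0
    for _ in range(steps):
        x = A_cl[mode] @ x + 0.1 * rng.standard_normal(2)
        mode = rng.choice(3)                   # placeholder mode transition
        violations += int(np.any(G @ x > h))
    return violations / steps

rates = [one_run() for _ in range(50)]         # 50 Monte Carlo runs, as in the text
print(np.mean(rates))                          # to be compared with 1 - xi
```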
from figure[ fig : ex_statemc ] and [ fig : ex_state_joint ] , one can observe more constraint violations with individual probabilistic constraints ( [ constraint : univariate_final ] ) than ( [ constraint : multivariate_final ] ) , which is stated in section [ sec : prob_constraints ] . in figure[ fig : ex_state_joint ] , we obtain , for some , which is due to smaller feasibility region of ( [ constraint : chi_square ] ) compared to ( [ constraint : det_univariate ] ) that can be observed by comparing the right hand sides of ( [ constraint : chi_square ] ) and ( [ constraint : det_univariate ] ) ..5 .5 to compare the probabilistic state constraints ( [ constraint : univariate_final ] ) with different values of qualitatively , we perform monte carlo simulations of 50 runs for values of ( 0.95 , 0.5 , 0.3 ) separately by keeping the remaining parameters same as above , and the incomes are shown in figure [ fig : eco_income_compare ] .observe that the larger the level of constraint satisfaction , the better the incomes meet the required target .for this experiment , we obtained as zero , for all , when is 0.5 and 0.3 ; for some , when is 0.95 .it implies that the higher the level of constraint satisfaction , the more the chance of becoming the original infeasible .this can also be observed from ( [ constraint : det_univariate ] ) , where the larger values of makes the right hand side of ( [ constraint : det_univariate ] ) smaller , thus reducing its feasibility region ., width=340,height=226 ]we considered a receding horizon control of discrete - time state - dependent jump linear systems subject to additive stochastic unbounded disturbance with probabilistic state constraints .we used an affine state - feedback control , and synthesized feedback gains that guarantee the mean square boundedness of the system by solving a linear matrix inequality problem off - line .we obtained sufficient deterministic conditions to satisfy probabilistic constraints by utilizing inverse cumulative distribution function , and consequently converted the overall receding horizon problem as a tractable deterministic optimization problem .although , it is difficult to guarantee the recursive feasibility in the case of stochastic unbounded disturbance , we attempted to resolve this issue with an addition of slack variable to the obtained constraints with a penalty in the objective function .we performed simulations a macroeconomic system to verify the proposed methodology .[ sec : appendix ]consider the given system where is defined in ( [ eq : p_tran ] ) and from remark [ remark : input_measurable ] , is a stochastic process with the property that for each , the vector is -measurable and \leq\delta<\infty$ ] .+ one knows that there exist matrices and , with , for such that ( [ lmi1 ] ) and ( [ lmi2 ] ) are verified .consider .let , , and .one has -v(x_{k},\theta_{k } ) \\ & = x_{k}^{t}\left(\tilde{a}_{\theta_{k}}^{t}\upsilon_{(\theta_{k},x_{k})}\tilde{a}_{\theta_{k}}-p_{\theta_{k}}\right)x_{k}+2x_{k}^{t}\tilde{a}_{\theta_{k}}^{t}\upsilon_{(\theta_{k},x_{k})}\mathbb{e}\left[w_{k}|{\mathcal{f}_{k}}\right ] + 2\nu_{k}^{t}b_{\theta_{k}}^{t}\upsilon_{(\theta_{k},x_{k})}\tilde{a}_{\theta_{k}}x_{k}\\ & \quad + 2\nu_{k}^{t}b_{\theta_{k}}^{t}\upsilon_{(\theta_{k},x_{k})}\mathbb{e}\left[w_{k}|{\mathcal{f}_{k}}\right]+\nu_{k}^{t}b_{\theta_{k}}^{t}\upsilon_{(\theta_{k},x_{k})}b_{\theta_{k}}\nu_{k}+\mathbb{e}\left[w_{k}^{t}\upsilon_{(\theta_{k},x_{k})}w_{k}|{\mathcal{f}_{k}}\right],\\ & \leq-\mu\vert 
x_{k}\vert^{2}+2x_{k}^{t}\tilde{a}_{\theta_{k}}^{t}\upsilon_{(\theta_{k},x_{k})}\mathbb{e}\left[w_{k}|{\mathcal{f}_{k}}\right ] + 2\nu_{k}^{t}b_{\theta_{k}}^{t}\upsilon_{(\theta_{k},x_{k})}\tilde{a}_{\theta_{k}}x_{k}+2\nu_{k}^{t}b_{\theta_{k}}^{t}\upsilon_{(\theta_{k},x_{k})}\mathbb{e}\left[w_{k}|{\mathcal{f}_{k}}\right]\\ & \quad + \alpha_{1}\vert\nu_{k}\vert^{2}+\alpha_{2}\mathbb{e}\left[w_{k}^{t}w_{k}|{\mathcal{f}_{k}}\right],\end{aligned}\ ] ] where and . because the random vector is independent of we obtain -v(x_{k},\theta_{k})\\ & \leq-\mu\vert x_{k}\vert^{2}+2x_{k}^{t}\tilde{a}_{\theta_{k}}^{t}\upsilon_{(\theta_{k},x_{k})}\mathbb{e}\left[w_{k}\right]+2\nu_{k}^{t}b_{\theta_{k}}^{t}\upsilon_{(\theta_{k},x_{k})}\tilde{a}_{\theta_{k}}x_{k}+2\nu_{k}^{t}b_{\theta_{k}}^{t}\upsilon_{(\theta_{k},x_{k})}\mathbb{e}\left[w_{k}\right]\\ & \quad + \alpha_{1}\vert\nu_{k}\vert^{2}+\alpha_{2}\mathbb{e}\left[w_{k}^{t}w_{k}\right],\\ & \leq-\mu\vert x_k\vert^{2}+2\vert\nu_{k}\vert\vert b_{\theta_{k}}\vert\vert\upsilon_{(\theta_{k},x_{k})}\vert\vert\tilde{a}_{\theta_{k}}\vert\vert x_{k}\vert+\alpha_{1}\vert\nu_{k}\vert^{2}+n\alpha_{2}.\end{aligned}\ ] ] from the young s inequality , ; note that one has where .this yields -v(x_{k},\theta_{k } ) & \leq-(\mu-\kappa)\vert x_{k}\vert^{2}+(\alpha_{1}+\kappa^{-1}\beta^{2})\vert\nu_{k}\vert^{2}+n\alpha_{2}.\end{aligned}\ ] ] take .we have & \leq(1-c_{2}^{-1}(\mu-\kappa))v(x_{k},\theta_{k})+(\alpha_{1}+\kappa^{-1}\beta^{2})\vert\nu_{k}\vert^{2}+n\alpha_{2},\label{equa7}\end{aligned}\ ] ] where , with the constraint .let .taking expectation on both sides of ( [ equa7 ] ) , we get \leq & q\mathbb{e}\left[v(x_{k},\theta_{k})\right]+(\alpha_{1}+\kappa^{-1}\beta^{2})\mathbb{e}\left[\vert\nu_{k}\vert^{2}\right]+n\alpha_{2}.\end{aligned}\ ] ] we obtain recursively & \leq q^{k}\mathbb{e}\left[v(x_{0},\theta_{0})\right]+(\alpha_{1}+\kappa^{-1}\beta^{2})\sum\nolimits _ { p=1}^{k-1}q^{k - p-1}\mathbb{e}\left[\vert\nu_{p}\vert^{2}\right ] + n\alpha_{2}\sum\nolimits _ { p=1}^{k-1}q^{k - p-1},\\ & \leq q^{k}\mathbb{e}\left[v(x_{0},\theta_{0})\right]+\frac{n\alpha_{2}+\delta(\alpha_{1}+\kappa^{-1}\beta^{2})}{1-q}.\end{aligned}\ ] ] finally , we obtain & \leq\frac{c_{2}}{c_{1}}q^{k}\|x_{0}\|^{2}+\frac{n\alpha_{2}+\delta(\alpha_{1}+\kappa^{-1}\beta^{2})}{c_{1}(1-q)},\end{aligned}\ ] ] where .hence the proof is complete. the given probabilistic constraint ( [ constraint : multivariate ] ) , let then ( [ constraint : prob_intheproof ] ) is equivalent to since is a polyhedron , it is difficult to obtain the closed form of the above integral .so , we consider an inscribed ellipsoidal approximation of , .it is reasonable since the level curves of multivariate gaussian distribution are ellipsoids .consider an ellipsoid ( since the covariance of is identity matrix ) , where denotes the chi - square cumulative distribution function with degrees of freedom .the above inequality can be satisfied with value of by utilizing the maximization of a liner functional over an ellipsoidal set , can be ensured by , , | in this article , we consider a receding horizon control of discrete - time state - dependent jump linear systems , particular kind of stochastic switching systems , subject to possibly unbounded random disturbances and probabilistic state constraints . due to a nature of the dynamical system and the constraints , we consider a one - step receding horizon . 
using the inverse cumulative distribution function, we convert the probabilistic state constraints to deterministic constraints and obtain a tractable deterministic receding horizon control problem. we consider the receding horizon control law to consist of a linear state feedback and an admissible offset term. we ensure mean-square boundedness of the state variable by solving linear matrix inequalities off-line, and solve the receding horizon control problem on-line for the control offset terms. we illustrate the overall approach on a macroeconomic system.
radio frequency identification ( rfid ) is a promising technology for the proliferation of internet of things ( iot ) applications , and it can be used to detect and identify the items in the proximity . due to their costeffective , durable , and energy efficient operation , rfid technology has been used in wide range of applications such as asset management , access control , public safety , localization , and tracking . among these , enabling high accuracy localization for massively deployed iot devices carries critical importance for a diverse set of iot applications .localization using radio frequency ( rf ) signals has been actively researched in the literature over the past decades .outdoor localization is mostly handled with global positioning system ( gps ) technology whereas indoor localization requires alternative approaches since gps needs a line - of - sight connection between user equipment and satellites .moreover , massive deployment of iot devices necessitates energy and cost efficient localization methods for prolonged durations .the rfid technology hence becomes a promising alternative for cost - effective , energy efficient indoor identification and localization for massively deployed iot . an ultra high frequency ( uhf ) rfid communication is fundamentally different from the conventional rf communication since it has two distinct links : the forward ( power - up ) and the reverse ( backscatter ) link .the forward link powers the passive rfid tags and the reverse link carries the information of tags .ability to power - up tags in the forward link enables _ battery - less _ operation of rfid tags , which is a major advantage of rfid systems for low - power iot applications . in general , there are two configurations for uhf rfid systems : 1 ) monostatic configuration , and 2 ) bistatic configuration . in the monostatic configuration ,a single reader antenna transmits the continuous wave , which powers up the passive tag , and subsequently receives the backscattered information signal from the tag . in the bistatic configuration the transmission and receptionare handled by different reader antennas as shown in fig .[ fig : iotframework ] .these antennas might be co - located ( i.e. , at same location , closely spaced ) or dislocated ( at separate locations ) .a particular challenge with both configuration is that complex , directional , and three dimensional rfid propagation models need to be explicitly taken into account to accurately characterize the real - world forward / backward propagation channels . 
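to fix ideas on the forward/reverse link structure, the sketch below evaluates a simplified backscatter link budget in which each link sees a one-way free-space loss; the paper's 3-d directional gain model is not reproduced, and the carrier frequency, gains and backscatter loss are assumed placeholder values. a monostatic reader is the special case in which the forward and reverse distances (and antenna gains) coincide.

```python
import numpy as np

def free_space_loss_db(d, freq_hz=866e6):
    """One-way free-space path loss in dB (placeholder for the 3-D model)."""
    c = 3.0e8
    return 20.0 * np.log10(4.0 * np.pi * d * freq_hz / c)

def backscatter_rss_dbm(p_tx_dbm, g_tx_dbi, g_rx_dbi, g_tag_dbi,
                        d_forward, d_reverse, backscatter_loss_db=5.0):
    """Received power at the reverse antenna: transmit power plus all antenna
    gains (the tag gain enters twice), minus the forward and reverse path
    losses and an assumed modulation/backscatter loss."""
    return (p_tx_dbm + g_tx_dbi + g_rx_dbi + 2.0 * g_tag_dbi
            - free_space_loss_db(d_forward) - free_space_loss_db(d_reverse)
            - backscatter_loss_db)

# Monostatic (same antenna, 3 m) versus bistatic (2 m forward, 4 m reverse).
print(backscatter_rss_dbm(30.0, 6.0, 6.0, 2.0, 3.0, 3.0))
print(backscatter_rss_dbm(30.0, 6.0, 6.0, 2.0, 2.0, 4.0))
```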
in this paper, we use sophisticated and realistic 3d path - loss and radiation models to study fundamental lower bounds on the localization accuracy of received signal strength ( rss ) based uhf rfid localization systems for both monostatic and bistatic configurations .the main contributions of this work are as follows : 1 ) cramer - rao lower bound ( crlb ) on the localization accuracy are derived in closed - form considering an enhanced rss model , using the _ directional _ and _ 3d _ radiation pattern from uhf rfid reader antennas , and the concept of _ localization coverage _ ; 2 ) tag and reader sensitivity is incorporated into analytic derivations both for monostatic and bistatic scenarios , to derive _ localizability _ and _ localization coverage _ metrics ; 3 ) extensive computer simulations are carried out to compare the localization accuracy of the maximum likelihood ( ml ) technique with the crlbs , considering directional radiation patterns and using different configurations for rfid reader antennas .our analysis and simulation results show that for certain scenarios , using bistatic antenna configuration as in fig .[ fig : iotframework ] may increase the average localization coverage by when compared to monostatic rfid configuration .another important parameter in the antenna configurations is the elevation angle . especially with lower transmit powers, it affects the localization coverage and accuracy .corner placement of antennas for with mw gives localization coverage , while and results in .our results for the specific rfid configuration show that it is possible to locate a tag within meter error with a probability of with corner placement of antennas , whereas this probability drastically reduces to when side placement is used for with bistatic configuration .the rest of this paper is organized as follows .literature review for rss - based localization in passive uhf rfid systems is provided in section [ sect : litrev ] . in section [sect : systemmodel ] , the system model is described in detail which involves a 3d radio propagation model for rfid systems .the concept of localizability is defined , as well as localization coverage percentage in section [ sect : covareas ] .section [ sect : crlb ] derives the crlbs and the maximum likelihood estimator ( mle ) based on the likelihood function for an rfid tag s location for the considered rfid scenario .numerical results are provided in section [ sect : nresults ] , and concluding remarks are given in section [ sect : conclusion ] ..35 .55 [ fig : system ] although there are several studies in the literature that investigate rss - based localization with rfid technology , fundamental lower bounds on rfid - based wireless localization are relatively unexplored . in , authors used a mobile robot with rfid reader antennas to generate map of an indoor environment with rfid tags on the walls .after the mapping phase , the robot may locate itself inside the building based on the closest tag information . in the landmarc localization technique introduced in ,reference rfid tags are used for implementing rss - based indoor localization method , where fixed - location reference tags with known locations are used to localize the tags . in , authors improve landmarc approach to tackle with multipath effects and rf interference .a probabilistic rfid map - based technique with kalman filtering is used to enhance the location estimation of the rfid tags in . 
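to make the reference-tag idea concrete, below is a minimal weighted k-nearest-neighbour sketch in the spirit of landmarc: the target tag is compared with the reference tags in rss space and its position is taken as a weighted average of the k closest references. the reference layout, rss values and weighting are illustrative assumptions.

```python
import numpy as np

def knn_locate(rss_target, rss_refs, ref_positions, k=4):
    """Weighted kNN position estimate from RSS fingerprints.

    rss_target    : (n_antennas,) RSS vector of the tag to locate
    rss_refs      : (n_refs, n_antennas) RSS vectors of the reference tags
    ref_positions : (n_refs, 2) known reference-tag coordinates
    """
    e = np.linalg.norm(rss_refs - rss_target, axis=1)   # distance in signal space
    nearest = np.argsort(e)[:k]
    w = 1.0 / (e[nearest] ** 2 + 1e-9)                  # closer in RSS -> larger weight
    w /= w.sum()
    return w @ ref_positions[nearest]

# Toy example: 3 reader antennas, 4 reference tags at the corners of a 4 m square.
refs = np.array([[-50.0, -60.0, -70.0], [-60.0, -55.0, -65.0],
                 [-70.0, -60.0, -55.0], [-65.0, -70.0, -50.0]])
pos = np.array([[0.0, 0.0], [0.0, 4.0], [4.0, 0.0], [4.0, 4.0]])
print(knn_locate(np.array([-58.0, -57.0, -66.0]), refs, pos, k=3))
```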
another approach to localize the rfid tags is studied in , which uses the phase difference information of backscattered signal of the rfid tags . in , authors consider a multipath environment to derive the crlbs on the position error of an rfid based wireless localization system .geometry of the deterministic multipath components and the interfering diffuse multipath components are considered in the backscatter channel model .typically a simple path - loss model is used for rfid propagation models in the existing literature , which employs free - space path - loss signal strength model .these models are not capable of accurately capturing the radiation pattern of rfid reader antennas since they are highly directional .there are also several experimental studies in the literature related to rss - based uhf rfid localization systems . in , an experimentation with passive uhf rfid systemis conducted to investigate the relationship between rss and distance .recently in , crlb of rss - based localization are derived considering a frequency dependent path - loss propagation model , where the model explicitly depends on the transmit power level and the transmission frequency .accuracy of several localization techniques are compared to crlb with given path - loss model via simulations and experiments . in , authors used -nearest neighbor ( knn ) algorithm to estimate the location of the target tag from rss information . an experiment involving four antennas and seventy tags is conducted , which resembles to the simulation scenario in our manuscript . it is shown that power control techniques may significantly improve localization accuracy .effects of multipath propagation and signal scattering are considered in for passive uhf rfid localization , using mle and linear least square techniques .a localization algorithm using the differences of rss values from various tags under same conditions is also proposed .its performance , which is shown to outperform the knn algorithm used in landmarc . a two - parameter path - loss model for uhf rfid systems is constructed in , which shows that the rss of rfid systems are slightly more stable than wifi rss values , and this yields more precise location estimates for rfid rss - based localization . in our earlier work, we have studied the bounds on rfid localization for monostatic rfid configuration . in this study ,our additional contributions include : 1 ) use of bistatic antenna configuration and different antenna placement which provides a more generalized framework , 2 ) use of an enhanced rss model with lognormal distributed noise which yields different crlb formulations , 3 ) incorporation of reader antenna and tag sensitivity into theoretical analysis , 4 ) study of _ localization coverage _ for rfid tags , outside of which they can not be localized with a reasonable accuracy , and 5 ) extensive new simulations to study the effects of various parameters and configurations .in the rest of this paper , we consider the rfid localization scenario as shown in fig . [fig : system ] .in particular , fig . [fig : system](a ) illustrates a monostatic antenna configuration , where the reader antenna is both the transmitter and the receiver . 
on the other hand , the bistatic antenna configuration is shown in fig .[ fig : system](b ) , where one antenna transmits the power - up signal for rfid tag , and the other antenna receives the backscattered signal from the tag .we will consider the more general case of bistatic antenna configuration , and study the monostatic configuration as a special case .for the considered scenario , let rfid reader antennas be mounted on the walls , located at a height of meters from the ground for the antenna . as shown in fig .[ fig : system](b ) , rfid reader antennas and ( which are the forward and reverse antennas , respectively ) are tilted by an angle and , respectively , with reference to the azimuth plane .the goal is to localize an rfid tag , which is located at a distance below a reader antenna .the total backscattered received power at a bistatic configuration of reader antenna and antenna , which are located at and , respectively while the position of the tag is , is given by : which can be written in logarithmic scale as = 20\log_{10}\big(\tau\mu_{\rm t}\rho_{\rm l}p_{\rm tx}g^2_{\rm t}|h_ih_j\gamma|^2\big)\nonumber\\&+20\log_{10}\big(g_{\rm r}^i\big)+20\log_{10}\big(g_{\rm r}^j\big)\nonumber\\ & + 20\log_{10}\big(l(d_i)\big)+20\log_{10}\big(l(d_j)\big),\label{eq : preceived2}\end{aligned}\ ] ] where is a coefficient that quantifies the specific data encoding modulation details that can be calculated using power density distribution of the tag s signal . according to the epcglobal c1g2 specifications , any tag in the interrogation zone of the readercan send back its information by reflecting the incoming continuous wave .the power transfer efficiency ] denote the unknown location of the tag , assuming that the received power in log scale at an rfid reader antenna is subject to gaussian noise .consider that the observations of received power in from different rfid antennas mounted on the walls are stacked in a vector ] is the fisher information matrix ( fim ) for , = \begin{bmatrix } \textbf{i}_{11 } & \textbf{i}_{12 } \\\textbf{i}_{21 } & \textbf{i}_{22 } \end{bmatrix } , \label{eq : fim}\ ] ] whose elements are as derived in - ._ proof : _ see appendix [ sect : derivation ] . an example derivation of the crlb for the special case of for all is explained in detail in appendix [ sect : derivationexample ] .while the crlb gives a lower bound on the localization rmse , an effective estimator is needed to find an rfid tag s location as accurate as possible , ideally with an rmse close to the crlb . in here, we will define a simple mle estimator for comparison purposes with the crlb . using the likelihood function defined in, the mle can be formulated as follows having a closed form solution for the mle in is not mathematically tractable due to the complexity of the directional antenna radiation pattern as captured through - . in particular ,due to entangled sines and cosines , after equating differentiation of the likelihood function as in to zero , one can not obtain a closed form solution .thus our problem could be solved with mle grid search , which can be represented as follows in our computer simulations in section [ sect : nresults ] , we consider a densely sampled grid of nearly uniformly spaced points .the granularity of the grid is set to cm. then , the mle solution corresponds to the grid position that maximizes the likelihood function in and can be found using exhaustive search . 
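the grid-search mle described above can be sketched as follows. with i.i.d. gaussian noise of equal variance on the db-scale readings, maximising the log-likelihood over candidate positions reduces to minimising the squared residuals between observed and predicted rss; the mean-rss model (which in the paper includes the 3-d directional gains, path losses and sensitivities) is left abstract here, and the grid cell size and room limits are placeholder values.

```python
import numpy as np

def grid_search_mle(rss_obs, mean_rss_model, cell=0.05, xlim=(0.0, 6.0), ylim=(0.0, 6.0)):
    """Exhaustive grid search for the tag position.

    rss_obs        : observed RSS vector (dB), one entry per antenna pair
    mean_rss_model : callable (x, y) -> predicted mean RSS vector (dB)
    """
    xs = np.arange(xlim[0], xlim[1] + cell, cell)
    ys = np.arange(ylim[0], ylim[1] + cell, cell)
    best, best_cost = None, np.inf
    for x in xs:
        for y in ys:
            r = rss_obs - mean_rss_model(x, y)
            cost = float(r @ r)                 # equal-variance Gaussian log-likelihood
            if cost < best_cost:
                best, best_cost = (x, y), cost
    return best
```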
to reduce complexity, the mle solution is found by a constrained search over the region that is defined by the number of rss measurements and corresponding antennas .when there are only two rss measurements available , the search is conducted only over the positions where . as it is stated in section [ subsect : coverageareas ] , a grid location with only two rss measurements is still localizable , although the accuracy is relatively limited when compared to locations where more than two rss measurements are available .based on our numerical results that will be shown in section [ sect : nresults ] , overall localization accuracy is still acceptable .accuracy of the mle will be compared with the crlb in various scenarios in the next section ..passive uhf rfid system parameters . [ cols="^,^ " , ]numerical results are provided to validate analytic derivations with computer simulations and to compare the performance of the mle with the crlb for rfid based iot localization .the simulation parameters for the passive uhf rfid system is given in table [ table : parameter ] .as stated in section [ sect : crlb ] , the received power at the rfid reader antenna is subject to lognormal noise .the noise variance is adopted from the statistical models in , which were derived from rfid propagation measurements .our computer simulation considers rfid antennas that are installed in a square shaped room with meters width , and the height of the reader antennas are meters above floor level .the channel is assumed to be frequency flat slow fading channel in our system .there are two antenna placement configurations , one is placing the antennas to centers of side walls which is referred as ` side ' , and the second is placing them on the corners of the room which is referred as ` corner ' in figures . the reader uses circularly polarized antennas which have a radiation pattern as defined in , and the tag antennas are assumed to be vertically polarized .the height of the tag is assumed to be known and meter .elevation angles of reader antennas are defined as , , and .elevation angles lower than are not considered due to lack of localization coverage for those angles . in fig .[ fig : coveragep ] , localization coverage percentage in is illustrated for different elevation angles , antenna placement configurations , and transmit power levels for monostatic ( fig .[ fig : coveragep](a ) ) and bistatic ( fig .[ fig : coveragep](b ) ) antenna configuration .the localization coverage is below for monostatic cases other than side .the coverage percentage for monostatic configuration increases rapidly with increasing transmit power from on the average for mw to for mw transmit power .localization coverage for bistatic cases show improvement with increased transmit power as well .the mean localization coverage percentage for mw is , while increasing transmit power to mw substantially boosts it to .the elevation angle also plays a critical role in localization coverage of the system . in monostatic and bistatic configurations , is superior to other angles for both corner and side placement of antennas . 
in general, the coverage is increased with increased elevation angle .corner placement of the antennas is better in bistatic configuration , whereas in monostatic configuration side placement has larger coverage area in general .the corner placement of the antennas covers of the area for bistatic configuration on the average for all available transmit powers , whereas side placement enables to localize the tags in of the area .things are different for monostatic case , where corner placement has coverage , while side placement achieves better performance with .this is expected since side placement increases the overlap possibility of monostatic antenna coverages with less distances between antennas , whereas corner placement exploits the radiation coverage with increased distances between antennas .0.49 , meter , for , and mw.,title="fig : " ] [ fig : mle45 m ] 0.49 , meter , for , and mw.,title="fig : " ] [ fig : mle45b ] + 0.49 , meter , for , and mw.,title="fig : " ] [ fig : crlb45 m ] 0.49 , meter , for , and mw.,title="fig : " ] [ fig : crlb45b ] in fig .[ fig : t45mc ] , average mle and crlb rmse for monostatic and bistatic configurations with , for mw at each possible tag location is given .the localization coverage for monostatic configuration is just above , while in bistatic configuration it is above as represented in fig .[ fig : coveragep ] .monostatic configuration has localization coverage above for only side placement of antennas with , thus they are not represented in median localization rmse results which they do not have . the median localization rmse of crlb and mle are compared in fig .[ fig : pi4all](a ) , for elevation angle of .monostatic configuration is not in the results since it does not have a coverage above as in fig .[ fig : coveragep ] .median rmse of crlb for side placement of antennas begin with meters at mw and gets as low as meters , while corner placement has lower median error in general from meters at mw to meters at mw . as expected , mle gets closer performance to the crlb as transmit power increases .median rmse of mle for side placement of antennas begin with meters at mw to meters at mw , while corner placement does better with meters at mw , and meters at mw . in fig .[ fig : pi4all](b ) , performance of side placement , and in fig .[ fig : pi4all](c ) , performance of corner placement is are shown . in fig . [fig : pi4all](b ) , the localization probability of a tag with mle below an error of meter for monostatic configuration with side placement and mw is , while for bistatic configuration with same parameters it gets to .the cdf values of crlb for those are and , respectively . in fig .[ fig : pi4all](c ) , the localization probability of a tag with mle below an error of meter for monostatic configuration with corner placement and mw is , while for bistatic configuration with same parameters it gets to .the cdf values of crlb for those are and , respectively .the side placement of antennas has better performance with monostatic mle compared to corner placement , while bistatic performance substantially lower .increasing elevation angle to helps to decrease median localization rmse and improve localization performance .the median localization rmse of crlb and mle are compared in fig .[ fig : pi3all](a ) , for elevation angle of . as shown in fig[ fig : coveragep ] , bistatic configuration is always above in localization coverage . 
in fig .[ fig : pi3all](a ) , median rmse of crlb for side placement of antennas begin with meters at mw and gets as low as meters , while corner placement has lower median error in general from meters to meters at mw . similar to , mle converges to crlb as transmit power increases .median rmse of mle for side placement of antennas begin with meters at mw , which reduces to meters at mw , while corner placement does better with meters and meters , respectively . in fig .[ fig : pi3all](b ) cdf of localization rmse for side placement is shown for side placement with .the localization probabilities of a tag below an error of meter for monostatic and bistatic configuration are and , respectively , while their crlb are and , respectively . in fig .[ fig : pi3all](c ) , the localization probability of a tag with mle below an error of meter for monostatic configuration with corner placement and mw are and , while their crlb are and , respectively .side placement of antennas increase the performance of monostatic configuration while degrading bistatic configuration performance similar to . in fig .[ fig : pi2all](a ) , the median localization rmse of crlb and mle are compared for elevation angle of with side and corner placement of antennas .median rmse of crlb for side placement of antennas begin with meters at mw and gets as low as meters , while corner placement has lower median error in general from meters to meters at mw .similar to and , mle converges to crlb as transmit power increases .median rmse of mle for side placement of antennas begin with meters at mw and reduce to meters at mw , while corner placement does better with meters and meters , respectively . in fig .[ fig : pi2all](b ) cdf of localization rmse for side placement is represented .the localization probabilities of a tag below an error of meter for monostatic and bistatic configuration are and , respectively .the crlb for those are and , respectively . in fig .[ fig : pi2all](c ) , the localization probability of a tag with mle and crlb is shown with respect to localization rmse .the probability of having an error below meter for monostatic configuration with corner placement and mw are and , while the cdf of crlb for those are and , respectively .side placement of antennas increase the performance of monostatic configuration slightly while degrading bistatic configuration performance substantially .in general , configurations with larger elevation angle results better localization coverage and lower localization rmse . in fig .[ fig : pi4all](a ) , the median localization rmse for has much higher values compared to in fig .[ fig : pi3all](a ) and in fig .[ fig : pi2all](a ) , for example , at mw localization rmse is not available for since its localization coverage is all below for either corner and side placement of antennas , while and have acceptable accuracies .especially has median localization rmse of meters for both side and corner configuration . at all elevation angles , corner placement of antennas has better localization coverage for bistatic configuration at mw .monostatic configuration does better with side placement of antennas , since in that case the coverage of antennas overlaps in larger areas .increasing transmit power not only increases the localization coverage , but also reduces the localization error . 
as a conclusion ,an elevation angle larger than is crucial for localization coverage and accuracy as well as corner placement of antennas with transmit power at mw which is the eirp limit in epc gen2 protocol of uhf rfid systems .in this paper , fundamental limits on the iot localization accuracy of a passive uhf rfid tag is studied considering realistic propagation models for reader antennas .our results show that high accuracy of localization does not only depend on the transmit power , but also depends on the use of right elevation angle and antenna placement and the use of bistatic configuration in localization system . in our simulationsit is shown that among the considered elevation angles , yields the best results for the given deployment scenario , since it maximizes the received power , results in largest localization coverage for iot and minimizes the localization error .we observed that bistatic localization coverage drops with the use of side placement of antennas , while it increases monostatic localization coverage .using bistatic configurations improves the probability of localizing the tag with higher accuracies when compared with monostatic configurations .the best results are achieved with bistatic configuration and side placement of the antennas .this work was made possible by the national science foundation grant ast1443999 .the statements made herein are solely the responsibility of the authors .in this appendix we will show derivation of crlb through obtaining fim .individual elements of the fim in can be calculated using the likelihood function in as follows : , \label{eq : fim_element } \end{aligned}\ ] ] where is the -th element of the fim for . as in , using the fim element in can be derived as =\frac{1}{\sigma^2}\sum_{i=1}^{n}\sum_{j=1}^{n}\bigg(\frac{\partial\hat{{p } } _ { ij}}{\partial { \rm x}_m}\times\frac{\partial\hat{{p } } _ { ij}}{\partial { \rm x}_n}\bigg).\label{eq : expectfim } \end{aligned}\ ] ] note that is in logarithmic scale .derivative of each element in received power is calculated separately since it can be written as summation of different functions in logarithmic scale .partial derivative of can be represented as the ( unknown ) location of the tag ( ) does not affect the parameters of received power , and hence the resulting partial derivative of is then given by this appendix we will derive the crlb for parameters and for .the gain function in for those particular values of and becomes first derivative of with respect to , for , is where then for in , and can be solved as the same solution for is given in the path loss function does not change with and , and it only depends on the distance between the reader antenna and the tag . then , the derivative of with respect to and is as follows based on these derivations , using , the crlb for any location can be calculated with known set and for with given parameters and . in fig .[ fig : t45mc](c ) and fig .[ fig : t45mc](d ) , crlb for monostatic and bistatic configurations respectively are calculated for any possible location of tag .l. yan , y. zhang , l. yang , and h. ning , _ the internet of things : from rfid to the next - generation pervasive networked systems _ , ser .wireless networks and mobile communications.1em plus 0.5em minus 0.4em taylor & francis , 2008 .e. welbourne , l. battle , g. cole , k. gould , k. rector , s. raymer , m. balazinska , and g. borriello , `` building the internet of things using rfid : the rfid ecosystem experience , '' _ ieee internet computing _ , vol . 
13 , no . 3 , pp .4855 , may 2009 .k. akkaya , i. guvenc , r. aygun , n. pala , and a. kadri , `` iot - based occupancy monitoring techniques for energy - efficient smart buildings , '' in _ wireless communications and networking conference workshops ( wcncw ) , 2015 ieee _ , march 2015 , pp . 5863 .x. jia , q. feng , t. fan , and q. lei , `` rfid technology and its applications in internet of things ( iot ) , '' in _ proc .consumer electronics , communications and networks ( cecnet ) _ , apr .2012 , pp . 12821285 .s. wamba and e. w. ngai , `` importance of the relative advantage of rfid as enabler of asset management in the healthcare : results from a delphi study , '' in _ proc .hawaii int .system science ( hicss ) _2012 , pp . 28792889 .a. al - ali , f. aloul , n. aji , a. al - zarouni , and n. fakhro , `` mobile rfid tracking system , '' in _ proc .information and communication technologies : from theory to applications _ , apr .2008 , pp . 14 .n. patwari , j. ash , s. kyperountas , a. hero , r. moses , and n. correal , `` locating the nodes : cooperative localization in wireless sensor networks , '' _ ieee sig .proc . mag ._ , vol . 22 , no . 4 , pp5469 , july 2005 .s. gezici , z. tian , g. giannakis , h. kobayashi , a. molisch , h. poor , and z. sahinoglu , `` localization via ultra - wideband radios : a look at positioning aspects for future sensor networks , '' _ ieee sig .proc . mag ._ , vol . 22 , no . 4 , pp .7084 , july 2005 . i. guvenc , s. gezici , and z. sahinoglu , `` fundamental limits and improved algorithms for linear least - squares wireless position estimation , '' _ wireless comm . and mobile computing _ , vol . 12 , no . 12 , pp . 10371052 , 2012 .l. geng , m. bugallo , a. athalye , and p. djuric , `` real time indoor tracking of tagged objects with a network of rfid readers , '' in _ proc .european signal processing conference ( eusipco ) _ , aug .2012 , pp . 205209 .m. moreno , m. zamora , j. santa , and a. skarmeta , `` an indoor localization mechanism based on rfid and ir data in ambient intelligent environments , '' in _ proc .innovative mobile and internet services in ubiquitous computing ( imis ) _ , july 2012 , pp .805810 .d. hahnel , w. burgard , d. fox , k. fishkin , and m. philipose , `` mapping and localization with rfid technology , '' in _ proc .robotics and automation _ , vol . 1 , apr .2004 , pp . 10151020 vol.1 .a. bekkali , h. sanson , and m. matsumoto , `` rfid indoor positioning based on probabilistic rfid map and kalman filtering , '' in _ proc .wireless and mobile computing _ , oct .2007 , pp . 2121. b. s. iftler , a. kadri , and i. guvenc , `` experimental performance evaluation of passive uhf rfid systems under interference , '' in _ proc .on rfid tech . and appl ., rfid - ta _ , sept .2015 , pp . 8186 .x. zheng , h. liu , j. yang , y. chen , r. martin , and x. li , `` a study of localization accuracy using multiple frequencies and powers , '' _ ieee trans .parallel and distributed syst . _ , vol . 25 , no . 8 , pp . 19551965 , aug .akre , x. zhang , s. baey , b. kervella , a. fladenmuller , m. zancanaro , and m. fonseca , `` accurate 2-d localization of rfid tags using antenna transmission power control , '' in _ wireless days ( wd ) , 2014 ifip _ , nov 2014 , pp . 16 .d. lieckfeldt , j. you , and d. timmermann , `` exploiting rf - scatter : human localization with bistatic passive uhf rfid - systems , '' in _ ieee int .conf . on wireless and mobile computing ,netw . and comm .2009 , pp .179184 .m. hasani , e .- s .lohan , l. 
sydanheimo , and l. ukkonen , `` path - loss model of embroidered passive rfid tag on human body for indoor positioning applications , '' in _ proc .ieee rfid tech . andappl . conf .( rfid - ta ) _ , sept .2014 , pp . 170174 .a. bekkali , s. zou , a. kadri , m. crisp , and r. penty , `` performance analysis of passive uhf rfid systems under cascaded fading channels and interference effects , '' _ ieee trans .wireless commun ._ , vol . 14 , no . 3 , pp . 14211433 , mar | passive radio - frequency identification ( rfid ) systems carry critical importance for internet of things ( iot ) applications due to their energy harvesting capabilities . rfid based position estimation , in particular , is expected to facilitate a wide array of location based services for iot applications with low - power requirements . in this paper , considering monostatic and bistatic configurations and 3d antenna radiation pattern , we investigate the accuracy of received signal strength based wireless localization using passive ultra high frequency ( uhf ) rfid systems . the cramer - rao lower bound ( crlb ) for the localization accuracy is derived , and is compared with the accuracy of maximum likelihood estimators for various rfid antenna configurations . numerical results show that due to rfid tag / antenna sensitivity , and the directional antenna pattern , the localization accuracy can degrade at _ blind _ locations that remain outside of the rfid reader antennas main beam patterns . in such cases optimizing elevation angle of antennas are shown to improve localization coverage , while using bistatic configuration improves localization accuracy significantly . beamforming , bistatic , crlb , iot , localization , maximum likelihood estimation , monostatic , position estimation , public safety , radiation pattern , uhf rfid . |
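the appendix above reduces the position error bound to the trace of the inverse fim assembled from partial derivatives of the mean received powers over all reader-antenna pairs . a minimal numerical sketch of that computation is given below ; it is not the paper's implementation : the two-way log-distance power model , the transmit power , the noise level and the antenna coordinates are illustrative placeholders ( the model used in the paper additionally includes the 3d antenna gain pattern ) , and the gradients are taken by central finite differences rather than the closed-form expressions derived in the appendix .

import numpy as np

def mean_rx_power_db(tag_xy, tx_xy, rx_xy, p_tx_dbm=23.0, n_pl=2.0, c0_db=-30.0):
    # placeholder two-way log-distance model; the paper's model additionally
    # includes the 3d antenna gain pattern, which is omitted in this sketch
    d1 = np.linalg.norm(tag_xy - tx_xy)
    d2 = np.linalg.norm(tag_xy - rx_xy)
    return p_tx_dbm + 2.0 * c0_db - 10.0 * n_pl * (np.log10(d1) + np.log10(d2))

def root_crlb_2d(tag_xy, antennas, sigma_db=3.0, h=1e-5):
    # fim_mn = (1/sigma^2) * sum_ij dP_ij/dx_m * dP_ij/dx_n, gradients by central differences
    fim = np.zeros((2, 2))
    for tx in antennas:
        for rx in antennas:            # all tx/rx pairs: monostatic (tx == rx) and bistatic
            grad = np.zeros(2)
            for m in range(2):
                e = np.zeros(2)
                e[m] = h
                grad[m] = (mean_rx_power_db(tag_xy + e, tx, rx)
                           - mean_rx_power_db(tag_xy - e, tx, rx)) / (2.0 * h)
            fim += np.outer(grad, grad)
    fim /= sigma_db ** 2
    return np.sqrt(np.trace(np.linalg.inv(fim)))   # root of the position error bound (m)

antennas = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # corner placement
print(root_crlb_2d(np.array([4.0, 3.0]), antennas))

restricting the double loop to tx == rx pairs gives the monostatic bound , so the same routine can be used to compare the two configurations discussed above .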
membrane proteins are fundamental for life , and their structures and dynamics are essential for their biological functions . about 30 % of proteins encoded in genomes are estimated to be of membrane proteins by bioinformatics. although membrane - protein folding has been studied extensively by experiments, only about 2 % in whole known structures in pdb are membrane proteins , because biomembrane environment makes crystallization very difficult. thus , simulation studies are getting more important ( for previous simulations , see , for instance , ) .however , simulations often suffer from sampling insufficiency and the efficient sampling methods like generalized - ensemble algorithms and/or the reduction of the size of systems are required . in particular , replica - exchange method ( rem ) and its extensions are often used in generalized ensemble algorithms due to their efficiency , parallelization ease , and usability ( for reviews , see , e.g. , refs. ) .one of useful approaches to reduce system sizes is to employ an implicit membrane model , which mimics some elements of membrane properties such as dielectric profile , chain order , pressure profile , and intrinsic curvature by parameters for electrostatic solvent free energy. while these methods are mainly based on the free energy difference between solvent and solute , simpler implicit membrane model was introduced previously , where transmembrane helices keep a helix structure and are always restricted within membrane regions during folding , which greatly reduces the effort for the search in the conformational space during folding processes. this model assumed that the native structure of membrane proteins can be predicted by helix - helix interactions between transmembrane helices with fixed helix structures , and that the membrane environment constraints the regions where helices can exist ( namely , within membranes ) and stabilizes transmembrane helix structures .this model is supported by many experimental data such as those leading to the two - stage model , in which each helix structure is formed first , and they aggregate each other by helix - helix packing in membrane protein folding to reach the native conformation ( for a review , see ref . ) . the previous method could predict the native structures by the rem simulation using known native helix structures ( for a review , see ref . ) . however ,if the native structures consist of distorted helix structures , the previous prediction method will not work because the method treated helix structures as rigid bodies .it is actually known from experimental structures in pdb that transmembrane helices are distorted or bent in about 25 % of all transmembrane helix structures . therefore ,in this article , we propose a new treatment of helix structures by taking into account helix distortions and kinks instead of treating them as rigid bodies .we tested our new prediction method for native structures .our test systems consist of the case with only ideal helix structures and that with a distorted helix structure .this article is organized as follows . in section 2 , we explain the details of our methods .the potential energy function used for our new models and the method to introduce the helix kinks are described . 
in section 3 , we show the results of the rem simulation applied to glycophorin a and phospholamban .after we check that rem simulation are properly performed , the free energy minimum states are identified by the principal component analysis .finally , section 4 is devoted to the conclusions .we first review our previous method .only the transmembrane helices are used in our simulations , and loop regions of membrane proteins as well as lipid and water molecules are neglected .our assumptions are that a role of water is to push the hydrophobic transmembrane regions of membrane proteins into the lipid bilayer and that a major role of lipid molecules is to prepare a hydrophobic environment and construct helix structures in the transmembrane regions .loop regions of membrane proteins are often outside the membrane and we assume that they do not directly affect the structure of transmembrane regions . due to the difference in surface shapes of helices and lipids , the stabilization energy forhelix - helix packing will be larger than that for helix - lipid packing .therefore , water , lipids , and loop - region of proteins are not treated explicitly in our simulations , although the features of membrane boundaries are taken into account by the constraint conditions below .we update configurations with a rigid translation and rotation of each -helix and torsion rotation of side - chains by monte carlo ( mc ) simulations .we use mc method although we can also use molecular dynamics in principle .there are 2 + kinds of mc move sets , where is the total number of transmembrane helices in the protein , and is the total number of dihedral angles in the side - chain of helices .we add the following three elementary harmonic constraints to the original potential energy function .the constraint function is given by where each term on the right - hand side is defined as follows : ^ 2 , \label{const - ene1}\end{aligned}\ ] ] ^ 2\right .\nonumber \\ & + \left.k_2~ \theta \left ( \left| z^{\rm u}_{i}-z^{\rm u}_{0 } \right| -d^{\rm u } \right ) \left [ \left| z^{\rm u}_{i}-z^{\rm u}_{0 } \right| -d^{\rm u } \right]^2 \right\ } , \label{const - ene2}\end{aligned}\ ] ] ^ 2 .\label{const - ene3}\end{aligned}\ ] ] is the energy that constrains pairs of adjacent helices along the amino - acid chain not to be apart from each other too much ( loop constraints ) . is the distance between the c atom of the c - terminus of the -th helix and the c atom of the n - terminus of the -th helix , and and are the force constant and the central value constant of the harmonic constraints , respectively , and is the step function : this term has a non - zero value only when the distance becomes longer than .only the structures in which the distance between neighboring helices in the amino - acid sequence is short are searched because of this constraint term . is the energy that constrains helix n - terminus and c - terminus to be located near membrane boundary planes . here, the z - axis is defined to be the direction perpendicular to the membrane boundary planes . is the force constant of the harmonic constraints . and are the z - coordinate values of the c atom of the n - terminus or c - terminus of the -th helix near the fixed lower membrane boundary and the upper membrane boundary , respectively . and are the fixed lower boundary z - coordinate value and the upper boundary z - coordinate value of the membrane planes , respectively . 
and are the corresponding central value constants of the harmonic constraints .this term has a non - zero value only when the c atom of the n - terminus or c - terminus of the -th helix are apart more than ( or ) .this constraint energy was introduced so that the helix ends are not too much apart from the membrane boundary planes . is the energy that constrains all c atoms within the sphere ( centered at the origin ) of radius . is the distance of c atoms from the origin , and and are the force constant and the central value constant of the harmonic constraints , respectively .this term has a non - zero value only when c atoms go out of this sphere and is introduced so that the center of mass of the molecule stays near the origin .the radius of the sphere is set to a large value in order to guarantee that a wide conformational space is sampled .these constraints are considered to be a simple implicit membrane model which mimics membrane environment during membrane protein folding .moreover , all constraints limit the conformational space of proteins to improve sampling and are useful when we use limited computational resources . in summary , this procedure is consistent with the two - stage model , and it assumes that side - chain flexibility is essential in their folding . because backbone structures of main chain are treated as rigid bodies in the previous method , the method can not be applied if transmembrane helices are distorted . however , most of transmembrane helix structures in pdb have distorted or bent helix structures .we , therefore , need to treat the deformations of backbone helix structures during simulations .namely , the and torsion rotations and concerted rotation of backbone are used to reproduce the distorted helix structures of experimental structures from the initial ideal helix structures in monte carlo move sets . here , we also update configurations with a rotation of torsion angles of backbones by directional manipulation and concerted rotation. there are 2 + + + kinds of mc moves now , where is the total number of torsion angles in the helix backbones , and is the total number of the combination of seven successive backbone torsion angle by the concerted rotation in the helix backbone .one mc step in this article is defined to be an update of one of these degrees of freedom , which is accepted or rejected according to the metropolis criterion . in order to keep helix conformations of the distortions ,we introduce the fourth constraint term as follows : where is the newly - introduced energy term which constrains dihedral angles of main chains within bending or kinked helix structures from ideal helix structures and prevent them from bending and distortions too much . and are the main - chain torsion angles of the -th residue . 
and are the fixed reference values of the harmonic constraint , and are the force constants , and are the central values of the harmonic constraint .we now explain the replica - exchange method briefly .this method prepares non - interacting replicas at different temperatures .while conventional canonical mc simulation is performed for each replica , temperature exchange between pairs of replicas corresponding to temperatures is attempted at a fixed interval based on the following metropolis criterion .let the label ( = 1 , , ) correspond to the replica index and label ( = 1 , , ) to the temperature index .we represent the state of the entire system of replicas by } , \cdots , x_{m(m)}^{[m ] } \right\} ] are the set of coordinates of replica ( at temperature ) , and is the permutation of . the boltzmann - like probability distribution for state is given by })]}.\ ] ] we consider exchanging a pair of temperatures and , corresponding to replicas and : } , \cdots , x_n^{[j ] } , \cdots \right\ } \rightarrow \nonumber \\ & x^\prime = \left\ { \cdots , x_m^{[j ] } , \cdots , x_n^{[i ] } , \cdots \right\ } .\end{aligned}\ ] ] the transition probability of metropolis criterion is given by } \mid x_n^{[j ] } ) \nonumber \\ & = { \rm min}\left(1 , \frac{w_{{\rm rem } } ( x^\prime)}{w_{{\rm rem } } ( x)}\right ) \nonumber \\ & = { \rm min}(1,\exp(- \delta ) ) , \end{aligned}\ ] ] where } ) - e(q^{[i ] } ) ) $ ] . because each replica reaches various temperatures followed by replica exchange , the rem method performs a random - walk in temperature space during the simulation .expectation values of physical quantities are given as functions of temperatures by solving the multiple - histogram reweighting equations. the density of states and dimensionless helmholtz free energy are obtained by solving the following equations iteratively : and where and be the energy histogram and the total number of samples obtained of temperature , respectively .after we obtained at each temperature , the expectation value of a physical quantity at any temperature is given by where are the set of coordinates at temperature obtained from the trajectories of the simulation .we analyze the simulation data by the principal component analysis ( pca). the structures are superimposed on an arbitrary reference structure , for example , the native structure from pdb .the variance - covariance matrix is defined by where and . are cartesian coordinates of the -th atom , and is the total number of atoms .this symmetric 3 3 matrix is diagonalized , and the eigenvectors and eigenvalues are obtained . for this calculation , we used the r program package. the first superposition is performed to remove large eigenvalues from the translations and rotations of the system , because we want to analyze the internal differences of structures .therefore , this manipulation results in the smallest value close to zero for the six eigenvalues corresponding to translations and rotations of the center of geometry .the eigenvalues are ordered in the decreasing order of magnitude .thus , the first , second , and third principal component axes are defined as the eigenvectors corresponding to the largest , second largest , and third largest eigenvalues , respectively .the -th principal component of each sampled structure is defined by the following inner product : where is the ( normalized ) -th eigenvector .the mc program is based on charmm macromolecular mechanics program, and replica - exchange monte carlo method was implemented in it . 
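a minimal sketch of the temperature-exchange step described above is given below . the acceptance rule is the metropolis criterion min(1 , exp(-delta)) with delta built from the inverse temperatures and the potential energies of the two replicas ; the replica energies in the example are random placeholders , only neighbouring temperature pairs are attempted , and the temperature values are illustrative .

import numpy as np

def attempt_swap(energies, beta, perm, m, rng):
    # try to exchange the two replicas currently holding neighbouring temperatures m and m+1;
    # perm[k] holds the index of the replica simulated at temperature k
    i, j = perm[m], perm[m + 1]
    delta = (beta[m] - beta[m + 1]) * (energies[j] - energies[i])
    if delta <= 0.0 or rng.random() < np.exp(-delta):   # metropolis: min(1, exp(-delta))
        perm[m], perm[m + 1] = j, i
    return perm

rng = np.random.default_rng(0)
temps = np.array([300.0, 333.0, 371.0, 413.0])        # illustrative subset of temperatures
beta = 1.0 / (0.0019872 * temps)                      # 1/(k_B T), k_B in kcal/(mol K)
perm = np.arange(len(temps))
energies = rng.normal(-100.0, 10.0, size=len(temps))  # placeholder replica potential energies
for m in range(len(temps) - 1):
    perm = attempt_swap(energies, beta, perm, m, rng)
print(perm)

because each exchange only permutes temperature labels , the canonical simulation of every replica can proceed unchanged between exchange attempts , which is what makes the method easy to parallelize .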
in this work ,we studied two membrane proteins : glycophorin a and phospholamban .both proteins are registered in orientation of proteins in membrane ( opm). the former has a dimer of an almost ideal helix structure in pdb ( pdb code : 1afo ) .the number of amino - acid residues in the helix is 18 , and the sequence is identical and tliifgvmagvigtilli .the other has a single transmembrane helix structure in pdb ( pdb code : 1fjk ) .the number of amino - acid residues in the helix is 25 , and the sequence is lqnlfinfclilifllliciivmll .the n - terminus and the c - terminus of each helix were blocked with the acetyl group and the n - methyl group , respectively . in the previous works , a 13-replica rem mc simulation of glycophorin a was performed with 13 replicas with the following temperatures : 200 , 239 , 286 , 342 , 404 , 489 , 585 , 700 , 853 , 1041 , 1270 , 1548 , and 1888 k. this simulation predicted the structures close to the native one successfully , the backbones structures were fixed to the ideal helix structures . in the present simulation , the flexibility of backbone helix structures is newly taken into account , and 16 replicas were used with the following temperatures : 300 , 333 , 371 , 413 , 460 , 512 , 571 , 635 , 707 , 787 , 877 , 976 , 1087 , 1210 , 1347 , and 1499 k. the total number of mc steps was 60,000,000 . for phospholamban , 16 replicas were also used with the following temperatures : 300 , 340 , 386 , 438 , 497 , 564 , 640 , 727 , 825 , 936 , 1062 , 1205 , 1368 , 1553 , 1762 , and 2000 k. the total number of mc steps was 100,000,000 .the above temperatures were chosen so that all acceptance ratios of replica exchange are almost uniform and sufficiently large for computational efficiency .the highest temperature was chosen sufficiency high so that no trapping in local - minimum - energy states occurs in both simulations .replica exchange was attempted once at every 1000 mc steps for glycophorin a and 100 mc steps for phospholamban , respectively .we used the charmm19 parameter set ( polar hydrogen model ) for the original potential energy of the system. no cutoff was introduced to the non - bonded terms .each structure was first minimized subjected to harmonic restraint on all the heavy atoms .the value of the dielectric constant was set as = 1.0 , as in the previous works. because previous studies showed that this value was better for the predictions of transmembrane helix structures than that of = 4.0 , although = 4.0 is close to the lipid environment of electrostatic potential effects .this may be due to the fact that few lipid molecules lie between helices in native transmembrane structures . for concerted rotation we selected the backbone atoms except for those in cysteine residues .we selected 6 or 7 continuous bonds from the first atom along backbone for the driver torsion .third bond and fifth bond were allowed to rotate following the driver bonds .the number of degrees of freedom in total was equal to 190 in glycophorin a and 132 in phospholamban .we set for glycophorin a and for phospholamban ( kcal / mol)/ , , ( kcal / mol)/ , ( kcal / mol)/ , , ( kcal / mol)/degrees , degrees , degrees , degrees , and degrees for our simulations . 
for membrane thickness parameters, we set , , and for glycophorin a , and , , and for phospholamban .for pca analyses , 60,000 and 100,000 conformational data were chosen in a fixed interval at each temperature from the rem simulation for glycophorin a and phospholamban , respectively .we used the pdb structures ( pdb codes : 1afo for glycophorin a and 1fjk for phospholamban ) as the reference structures to judge the prediction ability .we first examine how the replica - exchange simulation performed . fig .[ 1afo - integrate](a ) shows the time series of the replica index at the lowest temperature of 300 k. we see that the minimum temperature visited different replicas many times during the rem simulation , and we observe a random walk in the replica space .the complementary picture is the temperature exchange for each replica .[ 1afo - integrate](b ) shows the time series of temperatures for one of the replicas ( replica 11 ) .we see that replica 11 visited various temperatures during the rem simulation .we observe random walks in the temperature space between the lowest and highest temperatures .other replicas behaved similarly .[ 1afo - integrate](c ) shows the corresponding time series of the total potential energy for replica 11 .we see a strong correlation between time series of temperatures ( fig .[ 1afo - integrate](b ) ) and that of potential energy ( fig .[ 1afo - integrate](c ) ) , as is expected .we next examine how widely the conformational space was sampled during the rem simulation .we plot the time series of the root mean - square deviation ( rmsd ) of all the c atoms from the experimental structure ( pdb code : 1afo ) for replica 11 in fig .[ 1afo - integrate](d ) .when the temperature becomes high , the rmsd takes large values , and when the temperature becomes low , the rmsd takes small values . by comparing figs .[ 1afo - integrate](b ) and [ 1afo - integrate](d ) , we see that there is a positive correlation between the temperature and the rmsd values .the fact that the rmsds at high temperatures are large implies that our simulations did not get trapped in local - minimum potential - energy states .these results confirm that the rem simulation was properly performed .time series of various quantities for the rem simulation of glycophorin a. ( a ) time series of replica index at temperature 300 k. ( b ) time series of temperature change for replica 11 .( c ) time series of total potential energy change for replica 11 .( d ) time series of the rms deviation ( in ) of all the c from the pdb structures for replica 11 . ]table [ 1afo accept ] lists the acceptance ratios of replica exchange between all pairs of nearest neighboring temperatures .we find that the acceptance ratio is high enough ( 0.1 ) in all temperature pairs .[ 1afo - wham](a ) shows the canonical probability distributions of the potential energy obtained from the rem simulation at 16 temperatures .we see that the distributions have enough overlaps between the neighboring temperature pairs .this ensures that the number of replicas was sufficient . in fig .[ 1afo - wham](b ) , the average potential energy and its components , namely , the electrostatic energy , van der waals energy , torsion energy , and constraint energy , are shown as functions of temperature , which were calculated by eq .( [ wham ] ) . because the helices are generally far apart from each other at high temperatures , the energy components , especially electrostatic energy and van der waals energy , are higher at high temperatures . 
at low temperatures , on the other hand, the side - chain packing among helices is expected .we see that as the temperature becomes lower , , , and decrease almost linearly up to 1200 k , and as a result is also almost linearly decreasing up to 1200 k. on the other hand , when the temperature becomes 1200 k , contributes more to the decrease of .this is reasonable , because decreases as a result of side - chain packing and the stability of the conformation increases .note that we used only transmembrane regions in the rem simulation .transmembrane helices are generally considered to be hydrophobic , and helix - helix association is sometimes considered only by vdw packing ( lock - and - key model ) .however , fig .[ 1afo - wham](b ) shows that also changes much as a function of temperature .this implies that electrostatic effects also contribute to the formation of the native protein conformation ..acceptance ratios of replica exchange corresponding to pairs of neighboring temperatures from the rem simulation of glycophorin a.[1afo accept ] [ cols="<,^,<,^ " , ] the following abbreviations are used : str : the number of structures , tote : total potential energy , elec : electrostatic energy , vdw : van der waals energy , dih : dihedral energy , geo : constraint energy ( all in kcal / mol ) , rmsd : root - mean - square deviation of all c atoms ( in ) .( color online ) typical structures of phospholamban in each cluster selected by free energy local minimum state .the purple structure is native structure .the rmsd from the native conformation with respect to the backbone atoms is 1.27 and 2.89 for cluster 1 and cluster 2 , respectively . ]table [ 1fjk allcluster ] lists average quantities of two clusters of similar structures .the rows of cluster 1 and cluster 2 represent various average values for the structures that belong to each cluster .we see that rmsd is as small as 2.06 for cluster 1 , while it is 2.93 for cluster 2 .hence , cluster 1 has very similar structures to the native one .however , it is not the global - minimum free energy state but a local - minimum one , comparing the number of conformations ( str entries in table [ 1fjk allcluster ] ) in both clusters . 
in fig .[ 1fjk - minimumenestr ] , representative structures of each cluster in table [ 1fjk allcluster ] and the structure obtained by solution nmr experiments ( pdb code : 1fjk ) are shown .we confirm that cluster 1 is very similar to the native structure .it is bent at the same position and in the same direction , although the amount of bent is not as much as the native one .cluster 2 is also bent at the same position and about the same amount as the native one , but it has a bend in the opposite direction .hence , the present simulation can predict the position of bend , but it gives both directons of bend as local - minimum free energy states and cluster 2 as the global - minimum one .the present system is a helix monomar , and without interactions with other helices , it seems very difficult to decide the direction of distorsions within the approximation of the present method .we remark that a preliminary rem simulation of bacteriorhodopsin with seven helices predicts correct directions of helix bending ( manuscript in preparation ) .in this article , we introduced deformations of helix structures to the replica - exchange monte carlo simulation for membrane protein structure predictions .the membrane bilayer environment was approximated by restraining the conformational space in virtual membrane region .the sampled helix structures were limited so that helix structures by introducing the restraints on the backbone and angles are not completely destroyed . in order to check the effectiveness of the method , we first applied it to the prediction of a dimer membrane protein , glycophorin a. we successfully reproduced the native - like structure as the global - minimum free energy state .we next applied the method to phospholamban , which has one distorted transmembrane helix structure in the pdb structure .the results implied that a native - like structure was obtained as a local - minimum free energy state .two local - minimum free energy states were found with the same bend position as the native one , but the global - minimum free energy state had an opposite direction of helix bend .therefore , our results seem to imply that the location of bends of helix structures in transmembrane helices are determined by their amino - acid sequence , but the direction and amount of distortion of helices are dependent on the interactions with surrounding lipid molecules , which we represented only implicitly .our next targets will be more complicated membrane proteins with multiple transmembrane helices such as g protein coupled receptors . our preliminary results for bacteriorhodopsin show that native - like structures with the correctly bent helices can be predicted by our method .some of the computations were performed on the supercomputers at the institute for molecular science , at the supercomputer center , institute for solid state physics , university of tokyo , and center for computational sciences , university of tsukuba .this work was supported , in part , grants - in - aid for scientific research ( a ) ( no .25247071 ) , for scientific research on innovative areas ( dynamical ordering & integrated functions ) , program for leading graduate schools integrative graduate education and research in green natural sciences , and for the computational materials science initiative , and for high performance computing infrastructure from the ministry of education , culture , sports , science and technology ( mext ) , japan . 
| we propose an improved prediction method of the tertiary structures of -helical membrane proteins based on the replica - exchange method by taking into account helix deformations . our method allows wide applications because transmembrane helices of native membrane proteins are often distorted . in order to test the effectiveness of the present method , we applied it to the structure predictions of glycophorin a and phospholamban . the results were in accord with experiments . |
the paper is devoted to the numerical solution of small - strain quasi - static elastoplastic problems .such a problem consists of the constitutive initial value problem ( civp ) and the balance equation representing the principle of virtual work .a broadly exploited and universal numerical / computational concept includes the following steps : * time - discretization of civp leading to an incremental constitutive problem ; * derivation of the constitutive and consistent tangent operators ; * substitution of the constitutive ( stress - strain ) operator into the balance equation leading to the incremental boundary value problem in terms of displacements ; * finite element discretization and derivation of a system of nonlinear equations ; * solving the system using a nonsmooth variant of the newton method .civp satisfies thermodynamical laws and usually involves internal variables such as plastic strains or hardening parameters .several integration schemes for numerical solution of civp were suggested . for their overview , we refer , e.g. , and references introduced therein . if the implicit or trapezoidal euler method is used then the incremental constitutive problem is solved by the elastic predictor / plastic corrector method .the plastic correction leads to the return - mapping scheme .we distinguish , e.g. , implicit , trapezoidal or midpoint return - mappings depending on a chosen time - discretization ( * ? ? ?* chapter 7 ) . in this paper, we assume that the plastic flow direction is generated by the plastic potential , .if is smooth then the corresponding plastic flow direction is uniquely determined by the derivative of and consequently , the plastic flow rule reads as follows , e.g. ( * ? ? ?* chapter 8) : here , , , , and denotes the plastic strain rate , the plastic multiplier rate , the stress tensor and the hardening thermodynamical forces , respectively .the corresponding return - mapping scheme is relatively straightforward and leads to solving a system of nonlinear equations .a difficulty arises when is _nonsmooth_. mostly , it happens if the yield surface contains singular points , such as apices or edges .then the function is rather pseudo - potential than potential and its derivative need not exist everywhere . in such a case , the rule ( [ eqn_flow_rule ] ) is usually completed by some additive formulas depending on particular cases of and in an ad - hoc manner .for example , the implementation of the mohr - coulomb model reported in ( * ? ? ?* chapter 6 , 8) employs one , two , or six plastic multipliers , depending on the location of on the yield surface . since the stress tensor is unknown in civp one must blindly guess its right location .moreover , for each tested location , one must usually solve an auxilliary system of nonlinear equations whose solvability is not guaranteed in general ._ these facts are evident drawbacks of the current return - mapping schemes . _ in associative plasticity, it is well - known that the plastic flow rule ( [ eqn_flow_rule ] ) together with a hardening law and loading / unloading conditions can be equivalently replaced by the principle of maximum plastic dissipation within the constitutive model .this alternative formulation of civp does not require special treatment for nonsmooth and enables to solve civp by techniques based on mathematical programming . 
in particular , if the implicit or trapezoidal euler method is used then the incremental constitutive problem can be interpreted by a certain kind of the closest - point projection . for some nonassociative models ,civp can be re - formulated using a theory of bipotentials that leads to new numerical schemes .these alternative definitions of the flow rule enable a variational re - formulation of the initial boundary value elastoplastic problem .consequently , solvability of this problem can be investigated ( see , e.g. , ) .therefore , the corresponding numerical techniques are usually also correct from the mathematical point of view . on the other hand ,such a numerical treatment is not so universal and its implementation is more involved / too complex in comparison with standard procedures of computational inelasticity .the approach pursued in this paper builds on the subdifferental formulation of the plastic flow rule , e.g. ( * ? ? ?* section 6.3.9 ) , for nonsmooth . here, denotes the subdifferential of at with respect to the stress variable .if is convex at least in vicinity of the yield surface then this definition is justified , e.g. , by ( * ? ? ?* corollary 23.7.1 ) and is valid even when is not smooth at . on the first sight , it seems that ( [ inclusion_flow_rule ] ) is not convenient for numerical treatment due to the presence of the multivalued flow direction .the main goal of this paper is to show that the _ opposite is true _ , by demonstrating that the implicit return - mapping scheme based on ( [ inclusion_flow_rule ] ) leads to solving a just one system of nonlinear equations regardless whether the unknown stress tensor lies on the smooth portion of the yield surface or not at least for a wide class of models with nonsmooth plastic pseudo - potentials . using this technique , we eliminate the blind guessing andthus considerably simplify the solution scheme .moreover , the new technique enables to investigate some useful properties of the constitutive operator , like uniqueness or semismoothness , that are not obvious for the current technique .first of all , we illustrate the new technique on a simple 2d projective problem that mimics the structure of an incremental elastoplastic constitutive problem .consider the convex set and define the projection of a point as follows : the scheme of the projection is depicted in figure [ fig_projection ] . 
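the derivation that follows reduces this projection to a single scalar equation for the multiplier by rewriting the multivalued optimality condition with a positive-part function . as a preview , a minimal numerical sketch of that reduction is given below ; the concrete set k = { x in r^2 : |x_1| + x_2 <= 0 } is only an illustrative choice with the same structure as the one sketched in figure [ fig_projection ] ( a corner at the origin and a subgradient component ranging over [-1,1 ] ) and is not taken from the text .

import numpy as np

def project(y):
    # projection of y onto K = {x : |x1| + x2 <= 0} (illustrative convex set with a corner at 0)
    if abs(y[0]) + y[1] <= 0.0:          # y already feasible
        return y.copy(), 0.0
    # KKT: x = y - lam*(w, 1) with w in sign(x1); |x1| = max(|y1| - lam, 0) turns the
    # inclusion into the single scalar equation max(|y1| - lam, 0) + y2 - lam = 0
    lam = 0.5 * (abs(y[0]) + y[1]) if y[1] < abs(y[0]) else y[1]
    x1 = np.sign(y[0]) * max(abs(y[0]) - lam, 0.0)   # hits the corner when lam >= |y1|
    return np.array([x1, y[1] - lam]), lam

for y in (np.array([2.0, 1.0]), np.array([0.3, 5.0]), np.array([-1.0, -2.0])):
    x, lam = project(y)
    print(y, "->", x, "lambda =", lam)

the same positive-part device is what later collapses the smooth-portion and apex cases of the return mapping into one system of equations .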
clearly , the function is convex in , nondifferentiable at and if then .conversely , if it follows from the karush - kuhn - tucker conditions and ( [ motivation0 ] ) that the projective problem can be written as follows : _find and the lagrange multiplier : where _ [ -1,1 ] , & w_2^*=0 , \end{array } \right.\ ] ] to find a solution to ( [ motivationb2 ] ) , it is crucial to rewrite the inclusion ( [ motivationb2]) as an equation .observe that where denotes a positive part of a function .this crucial transformation will be derived in detail in section [ sec_dp ] on an analogous elastoplastic example .thus ( [ motivationb2 ] ) leads to the following system of equations : since ( [ motivationb3]) implies , the system of three nonlinear equations reduces to a single one consequently , can be found in the closed form as from which one can easily compute by ( [ motivationb3]) and ( [ motivationb3]) .the presented idea is systematically extended on some elastoplastic models .this paper , part i , is focused on isotropic models containing : yield surfaces with one or two apices ( singular points ) laying on the hydrostatic axis ; plastic pseudo - potentials that are independent of the lode angle ; nonlinear isotropic hardening ( optionally ) .such models are usually formulated by the haigh - westergaard coordinates .further , the implicit euler discretization of civp is considered and thus two types of return on the yield surface within the plastic correction are distinguished : return to the smooth portion of the yield surface ; return to the apex ( apices ) .the paper is organized as follows .section [ sec_preliminaries ] contains some preliminaries related to invariants of the stress tensor and semismooth functions .section [ sec_dp ] is devoted to the drucker - prager model including the nonlinear isotropic hardening .although the plastic corrector can not be found in closed form , the new technique enables to a priori decide about the return type and prove existence , uniqueness and semismoothness of the implicit constitutive operator. the consistent tangent operator is also introduced . in section [ sec_jg ], we derive similar results for the perfect plastic part of the jirsek - grassl model . in section [ sec_general ] ,the new technique is extended on an abstract model written by the haigh - westergaard coordinates . in particular , within the plastic correction , we formulate a unique system of nonlinear equations which is common for the both type of the return .it can lead to a more correct and/or simpler solution scheme in comparison with the current technique .section [ sec_realization ] is devoted to numerical realization of the incremental boundary value elastoplastic problem using the semismooth newton method . in section [ sec_experiments ] , illustrative numerical examples related to a slope stability benchmark are considered .here , limit load is analyzed by an incremental method depending on a mesh type and mesh density for the drucker - prager and jirsek - grassl models . within this paper , second order tensors , matrices and vectorsare denoted by bold letters . as usual ,small letters are used for vectors and capitals for matrices ( see section [ sec_realization ] ) .further , the fourth order tensors are denoted by capital blackboard letters , e.g. 
, or .the symbol means the tensor product .we also use the following notation : and for the space of symmetric , second order tensors .consider a stress tensor and its splitting into the volumetric and deviatoric parts : here , , , and denote the identity second order tensor , the fourth order deviatoric projection tensor , the hydrostatic pressure , and the deviatoric stress , respectively . _the haigh - westergaard coordinates _ are created by the invariants , and , where clearly , and ] .it holds : is a bounded and smooth function for any , ; when ; , when .we will also use the following derivatives : ,\quad \frac{\partial r_e}{\partial{\mbox{\boldmath}}}=-r_e'(\cos\theta)\sin\theta\frac{\partial \theta}{\partial{\mbox{\boldmath}}}. \label{inv_der2}\ ] ] notice that the derivatives of , , and do not exist when .further , is not differentiable when satisfies either or . on the other hand, has derivatives for such stresses . for purposes of this paper , it is crucial to derive the subdifferential of at when : if then by ( [ inv_der1]) .it is readily seen that regardless or not .semismoothness was originally introduced by mifflin for functionals .qi and j. sun extended the definition of semismoothness to vector - valued functions to investigate the superlinear convergence of the newton method .we introduce a definition of _ strongly semismooth _ functions . to this end , consider finite dimensional spaces and with the norms and , respectively . in the context of this paper ,the abstract spaces represent either subspaces of or the space .let be locally lipschitz function in a neighborhood of some and denote the generalized jacobian in the clarke sense .we say that is strongly semismooth at if 1 . is directionally differentiable at , 2 . for any , , and any , [ def_semi ]notice that the estimate ( [ semi_1 ] ) is called the quadratic approximate property in or the strong -semismoothness in . in literature, there exists several equivalent definitions of strongly semismooth functions , see .for example the condition ( [ semi_1 ] ) can be replaced with where is the subset of , is frchet differentiable and denotes the directional derivative of at a point and a direction .we say that is strongly semismooth on an open set if is strongly semismooth at every point of . since it is difficult to straightforwardly prove ( [ semi_1 ] ) or ( [ semi_3 ] ) , we summarize several auxilliary results .firstly , piecewise smooth ( ) functions with lipschitz continuous derivatives of selected functions belong among strongly semismooth functions . especially , we mention the max - function , , .further , scalar product , sum , compositions of strongly semismooth functions are strongly semismooth . finally , we will use the following version of the implicit function theorem .let be a locally lipschitz function in a neighborhood of , which solve .let denote the generalized derivatives of at with respect to the variables .if is of maximal rank , i.e. the following implication holds , then there exists an open neighborhood of and a function such that is locally lipschitz continuous in , and for every in , .moreover , if is strongly semismooth at , then strongly semismooth at .[ implicit ] the semismoothness of constitutive operators in elastoplasticity has been studied e.g. 
in .namely in , one can find an abstract framework how to investigate it for operators in an implicit form .however , the framework can not be straightforwardly used for models investigated in this paper .therefore , we introduce the following auxilliary result .let , , , , , and be the functions introduced in section [ sec_invariants ] .further , ler be strongly semismooth functions and assume that vanishes for any , .define , then the functions , , and are strongly semismooth in .[ prop_semi ] since the functions and are bounded and have lipschitz continuous derivatives in , it is easy to see that the functions , , and are locally lipschitz continuous in and strongly semismooth for any , . therefore , it remains to show strong semismoothness at , . to this end, we show that ( [ semi_3 ] ) holds for , , and at such .let be such that .then hence , and {\mbox{\boldmath}}_e(\cos\theta({\mbox{\boldmath}})){=}o(\|{\mbox{\boldmath}}\|^2),\\ { \mbox{\boldmath}}'({\mbox{\boldmath}}+{\mbox{\boldmath}};{\mbox{\boldmath}})-{\mbox{\boldmath}}'({\mbox{\boldmath}};{\mbox{\boldmath}})&=&[\hat p'({\mbox{\boldmath}}+{\mbox{\boldmath}};{\mbox{\boldmath}})-\hat p'({\mbox{\boldmath}};{\mbox{\boldmath}})]{\mbox{\boldmath}}+[\hat\varrho'({\mbox{\boldmath}}+{\mbox{\boldmath}};{\mbox{\boldmath}})-\hat\varrho'({\mbox{\boldmath}};{\mbox{\boldmath}})]{\mbox{\boldmath}}({\mbox{\boldmath}}){=}o(\|{\mbox{\boldmath}}\|^2),\end{aligned}\ ] ] since the functions , satisfy ( [ semi_3 ] ) by the assumption .the function introduced in proposition [ prop_semi ] has the same scheme as a mapping between trial and unknown stress tensors for models introduced in section [ sec_dp][sec_general ] . here, the trial stress is represented by and the unknown stress is in the form .therefore , it is sufficient to prove only semismoothness of the scalar functions , representing invariants of the unknown stress tensor .the semismoothness of has been derived to prove theorem [ th_semi_jg ] .we consider the elastoplastic problem containing the drucker - prager criterion , a nonassociative plastic flow rule and a nonlinear isotropic hardening . within a thermodynamical framework with internal variables ,we introduce the corresponding constitutive initial value problem , see : 1 ._ additive decomposition of the infinitesimal strain tensor on elastic and plastic parts _ : 2 ._ linear isotropic elastic law between the stress and the elastic strain _ : where denotes the bulk , and shear moduli , respectively .3 . _ non - linear isotropic hardening _ : here denotes an isotropic ( scalar ) hardening variable , is the corresponding thermodynamical force and is a nondecreasing , strongly semismooth function satisfying .drucker - prager yield function _ : here , the parameters are usually calculated from the friction angle using a sufficient approximation of the mohr - coulomb yield surface and denotes the initial cohesion ._ plastic pseudo - potential_. here denotes a parameter depending on the dilatancy angle ._ nonassociative plastic flow rule _ where is a multiplier and denotes the subdifferential of the convex function at . using ( [ inv_der1 ] ) , ( [ rho_subgrad ] ) and ( [ potential_function ] ) , the flow rule ( [ flow_rule ] ) can be written as consequently by ( [ split ] ) , ( [ elastic_law ] ) and ( [ rho_subgrad_prop ] ) , 7 ._ associative hardening law : _. _ loading / unloding criterion : _ then the elastoplastic constitutive initial value problem reads as follows : _ given the history of the strain tensor , ]_. 
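for later reference , a minimal sketch of two ingredients used repeatedly below is given : the linear isotropic elastic law written with the bulk and shear moduli , and the haigh-westergaard coordinates ( p , rho , theta ) of a stress tensor . the formulas follow the standard definitions p = tr(sigma)/3 , rho = ||dev sigma|| = sqrt(2 j2) and cos 3theta = (3 sqrt(3)/2) j3 / j2^{3/2} ; the strain tensor and the moduli in the example are arbitrary illustrative values .

import numpy as np

def elastic_stress(eps, K, G):
    # linear isotropic law: sigma = K*tr(eps)*I + 2G*dev(eps)
    I = np.eye(3)
    dev = eps - np.trace(eps) / 3.0 * I
    return K * np.trace(eps) * I + 2.0 * G * dev

def haigh_westergaard(sig):
    # standard invariants: p = tr(sigma)/3, rho = ||dev sigma||, theta from J2 and J3
    p = np.trace(sig) / 3.0
    s = sig - p * np.eye(3)
    rho = np.linalg.norm(s)                          # = sqrt(2*J2)
    J2, J3 = 0.5 * np.sum(s * s), np.linalg.det(s)
    cos3t = 0.0 if J2 == 0.0 else np.clip(1.5 * np.sqrt(3.0) * J3 / J2 ** 1.5, -1.0, 1.0)
    return p, rho, np.arccos(cos3t) / 3.0

eps = np.array([[1e-3, 2e-4, 0.0], [2e-4, -5e-4, 0.0], [0.0, 0.0, 1e-4]])
sig = elastic_stress(eps, K=1.0e5, G=0.5e5)          # illustrative moduli
print(haigh_westergaard(sig))

note that theta is undefined when rho vanishes , which is exactly the situation handled by the subdifferential flow rule below .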
we discretize civp using the implicit euler method . to this endwe assume a partition of the pseudo - time interval and fix a step .for the sake of brevity , we omit the index and write , , and .further , we define the following trial variables : , and .then the discrete elastoplastic constitutive problem for the -step reads as follows : _ given , and .find , and satisfying : _ notice that the remaining input parameter for the next step , , can be computed using the formula after finding a solution to problem ( [ k - step_problem ] ) .we standardly use the elastic predictor / plastic corrector method for solving ( [ k - step_problem ] ) . *elastic predictor * applies when then the triplet is the solution to ( [ k - step_problem ] ) .* plastic corrector * applies when ( [ trial_admissibility ] ) does not hold . then and ( [ k - step_problem ] ) reduces into since the functions and depend on only through the variables and , it is natural to reduce a number of uknowns in problem ( [ k - step_problem_plast ] ) . to this end , we split ( [ k - step_problem_plast]) into the deviatoric and volumetric parts : where , denotes the deviatoric stress , and the hydrostatic stress related to , respectively . using ( [ inv_der1]) , the equality ( [ flow_dev ] ) yields \triangle\lambda g\sqrt{2}\hat{{\mbox{\boldmath}}}&\mbox{if } & \varrho= 0. \end{array}\right .\label{flow_s_k}\ ] ] denote and recall that for by ( [ rho_subgrad ] ). then from ( [ flow_s_k ] ) we obtain following the arguments developed in section 1.1 , we now rewrite ( [ flow_rho ] ) as follows : notice that ( [ flow_rho ] ) and ( [ flow_rho_improve ] ) are equivalent .further from ( [ flow_s_k]) , we standardly have : the following theorem summarizes and completes the derived results .let .if is a solution to problem ( [ k - step_problem_plast ] ) and , , , then is a solution to the following system : conversely , if is a solution to ( [ k - step_problem_plast_red ] ) then is the solution to ( [ k - step_problem_plast ] ) where [ lem_auxilliary_problem ] notice that the knowledge of the subdifferential of enables us to formulate the plastic corrector problem as a unique system of nonlinear equations in comparison to the current technique introduced in .moreover , one can eliminate the unknowns similarly as for the current return - mapping scheme of this model .inserting of ( [ k - step_problem_plast_red]) into ( [ k - step_problem_plast_red]) leads to the nonlinear equation where using ( [ yield_function ] ) .we have the following solvability result .let .then there exists a unique solution , , of the equation .furthermore , problems ( [ k - step_problem_plast_red ] ) , ( [ k - step_problem_plast ] ) and ( [ k - step_problem ] ) have also unique solutions .in addition , if then and .conversely , if then and .[ th_solvability_dp ] from ( [ q_dp ] ) and the assumptions on , it is readily seen that is a continuous and decreasing function .further , as and . 
therefore , the equation has just one solution in .if then .otherwise , .the rest of the proof follows from theorem [ lem_auxilliary_problem ] and the elastic prediction .the second part of theorem [ th_solvability_dp ] is very useful from the computational point of view : one can a priori decide whether return to the smooth portion of the yield surface happens or not .this is the main difference in comparison with the current return - mapping scheme .the improved return - mapping scheme reads as follows .* return to the smooth portion * 1 ._ necessary and sufficient condition _ : and ._ find _ : 3 . _* return to the apex * 1 ._ necessary and sufficient condition _ : .find _ : 3 . nonlinear equations ( [ lambda_problem1 ] ) and ( [ lambda_problem2 ] ) can be solved by the newton method .then it is natural to use the initial choice , for ( [ lambda_problem1 ] ) , and ( [ lambda_problem2 ] ) , respectively . in case of perfect plasticity , , or linear hardening , , , equations ( [ lambda_problem1 ] ) and ( [ lambda_problem2 ] )are linear and thus can be found in the closed form . solving the problem ( [ k - step_problem ] ), we obtain a nonlinear and implicit operator between the stress tensor , , and the strain tensor , .the stress - strain operator , , also depends on and through the trial variables . to emphasize this fact we write . from the results introduced in section [ subsec_const_sol ] , we have , where where is the solution to ( [ lambda_problem1 ] ) , ( [ lambda_problem2 ] ) in ( [ s_def]) , and ( [ s_def]) , respectively , i.e. . the function is strongly semismooth in with respect to .[ th_semi_dp ] we use the framework introduced in section [ sub_semi ] . consider the function satisfying if , otherwise . applying theorem [ implicit ] on the implicit function , one can easily find that the function is strongly semismooth .consequently , the functions and are strongly semismooth . since , we obtain strong semismoothness of the functions and using proposition [ prop_semi ] . notice that is not smooth if or or if has not derivative at .we introduce the derivative under the assumption that any of these conditions does not hold .set . using ( [ inv_der1 ] ) , ( [ inv_der2 ] ) ,( [ elastic_law ] ) and the chain rule , we obtain the following auxilliary derivatives : we distinguish three possible cases : 1 .let ( _ elastic response _ ) . then clearly , 2 .let and ( _ return to the smooth surface _ ) .then the derivative of ( [ sigma_smooth ] ) reads applying the implicit function theorem on ( [ lambda_problem1 ] ) , we obtain hence , 3 .let ( _ return to the apex _ ) .then the derivative of ( [ sigma_apex ] ) yields applying the implicit function theorem on ( [ lambda_problem2 ] ) , we obtain hence , the derivatives ( [ deriv_elast])([deriv_apex ] ) define the consistent tangent operator , .it is readily seen that the tangent operator is symmetric if , i.e. for the associative plasticity . 
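in the perfect plastic case the scalar equations above are linear and the plastic corrector admits a closed form . a minimal sketch is given below ; it is not tied to the exact constants of the paper : it assumes the common normalization f = sqrt(j2) + eta*p - xi*c for the yield function and g = sqrt(j2) + eta_bar*p for the plastic pseudo-potential ( the parameters computed from the friction and dilatancy angles are replaced by illustrative values ) . the a priori decision between the smooth and the apex return follows the criterion stated above : the deviatoric norm obeys a positive-part formula , and the apex return keeps only the volumetric part of the stress .

import numpy as np

def dp_return_mapping(eps, K, G, eta, eta_bar, xi_c):
    # elastic predictor / plastic corrector for perfect Drucker-Prager plasticity,
    # assuming f = sqrt(J2) + eta*p - xi_c and g = sqrt(J2) + eta_bar*p
    I = np.eye(3)
    sig_tr = K * np.trace(eps) * I + 2.0 * G * (eps - np.trace(eps) / 3.0 * I)
    p_tr = np.trace(sig_tr) / 3.0
    s_tr = sig_tr - p_tr * I
    rho_tr = np.linalg.norm(s_tr)
    f_tr = rho_tr / np.sqrt(2.0) + eta * p_tr - xi_c
    if f_tr <= 0.0:                                   # elastic step
        return sig_tr
    dlam = f_tr / (G + K * eta * eta_bar)             # candidate smooth return (closed form)
    if np.sqrt(2.0) * G * dlam < rho_tr:              # return to the smooth portion
        rho = rho_tr - np.sqrt(2.0) * G * dlam        # rho = [rho_tr - sqrt(2) G dlam]^+
        p = p_tr - K * eta_bar * dlam
        return (rho / rho_tr) * s_tr + p * I
    return (xi_c / eta) * I                           # return to the apex: s = 0, f(p, 0) = 0

eps = np.array([[2e-3, 0.0, 0.0], [0.0, -1e-3, 0.0], [0.0, 0.0, -1e-3]])
print(dp_return_mapping(eps, K=1.0e5, G=0.5e5, eta=0.5, eta_bar=0.2, xi_c=10.0))

with nonlinear hardening the closed forms are replaced by a newton solve of the scalar equation for the plastic multiplier , but the structure of the decision between the two return types is unchanged .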
for purposes of section [ sec_realization ] ,it is useful to extend the definition of for nondifferential points .for example , one can write ( \ref{deriv_smooth } ) & \mbox{if}&q_{tr}(0)>0,\;q_{tr}\left({\varrho^{tr}}/{g\sqrt{2}}\right)<0,\\[2pt ] ( \ref{deriv_apex } ) & \mbox{if } & q_{tr}\left({\varrho^{tr}}/{g\sqrt{2}}\right)\geq0 , \end{array } \right.\ ] ] where in ( [ deriv_smooth ] ) , ( [ deriv_apex ] ) is the derivative from left of at .notice that jirsek - grassl model was introduced in .it is a plastic - damage model proposed for complex modelling of concrete failure .the model has been further developed .for example , unteregger and hofstetter have improved a hardening law and used the model in rock mechanics . for the sake of simplicity, we only consider a perfect plastic part of this model to illustrate the suggested idea and improve the implicit return - mapping scheme .the whole plastic part of the jirsek - grassl model can be included to an abstract model studied in the next section .the perfect plastic model contains the yield function proposed in : where is the friction parameter and is the uniaxial compressive strength .the invariants , and were introduced in section [ sec_preliminaries ] .notice that the couple defines the apex of the yields surface generated by the function .the yield surface is not smooth only at this apex .scheme of the yield surface can be found in .further , the following plastic pseudo - potential is considered : where the subdifferential of consists of the following directions : where is defined by ( [ rho_subgrad ] ) and the -step of the incremental constitutive problem received by the implicit euler method reads as follows ._ given and .find and satisfying : ,\quad \hat{{\mbox{\boldmath}}}\in\partial\varrho({\mbox{\boldmath}}),\\[5pt ] \triangle\lambda\geq0,\quad \hat f(p({\mbox{\boldmath}}),\varrho({\mbox{\boldmath}}),\varrho_e({\mbox{\boldmath}}))\leq0,\quad \triangle\lambda\hat f(p({\mbox{\boldmath}}),\varrho({\mbox{\boldmath}}),\varrho_e({\mbox{\boldmath}}))=0 .\end{array } \right\ } \label{k - step_problem_jg}\ ] ] _we solve this problem again by the elastic predictor / plastic corrector method . within the plastic correction, we define the trial variables , , , , , and associated with and obtain the following result .let .if is a solution to problem ( [ k - step_problem_jg ] ) and , , , then is a solution to the following system : \varrho=\left[\varrho^{tr}-\triangle\lambda 2g\left(\frac{3\varrho}{\bar f_c^2}+\frac{m_0}{\sqrt{6}\bar f_c}\right)\right]^+,\\[5pt ] \hat f(p,\varrho,\varrho r_e^{tr})=0 .\end{array } \right\ } \label{k - step_problem_plast_jg}\ ] ] conversely , if is a solution to ( [ k - step_problem_plast_jg ] ) then is the solution to ( [ k - step_problem_jg ] ) where p{\mbox{\boldmath } } & \mbox{if } & \varrho^{tr}\leq\triangle\lambda 2g\left(\frac{3\varrho}{\bar f_c^2}+\frac{m_0}{\sqrt{6}\bar f_c}\right).\\ \end{array } \right.\ ] ] [ lem_auxilliary_problem_jg ] to prove theorem [ lem_auxilliary_problem_jg ] we use the same technique as in section [ subsec_const_sol ] .it is based on the splitting the stress tensor on the deviatoric and volumetric parts , and on using linear dependence between and to reduce a number of unknowns .in particular , we have for .consequently , we obtain ( [ k - step_problem_plast_jg]) , and also for using ( [ theta ] ) . 
finally , notice that as .indeed , as and the function is bounded .analogously to the drucker - prager model , one can analyze existence and uniqueness of a solution to problem ( [ k - step_problem_plast_jg ] ) , and a priori decide whether the return to the smooth portion of the yield surface happens or not . to this end , we define implicit functions and such that ^+=0,\ ] ] respectively , for any .the following lemma is a consequence of the implicit function theorem .the functions and are well - defined in .further , is smooth and decreasing in , is decreasing in the interval and its closed form reads as follows : [ lem_jg ] now , consider the function , let .then there exists a unique solution , , of the equation .furthermore , problems ( [ k - step_problem_plast_jg ] ) and ( [ k - step_problem_jg ] ) have also unique solutions .in addition , if then and .conversely , if then and .[ th_solvability_jg ] since and in , the functions , are decreasing in this interval . for ,these functions vanish .therefore , from ( [ q_jg ] ) and lemma [ lem_jg ] , it is follows that is a continuous and decreasing function in .furthermore , as and .hence , the equation has a unique solution in . if then , .the rest of the proof follows from theorem [ lem_auxilliary_problem_jg ] and the elastic prediction .although the function is implicit the decision criterion introduced in theorem [ th_solvability_jg ] can be found in closed form . since , hence , . using the definitions of and , we have by theorem [ lem_auxilliary_problem_jg ] and theorem [ th_solvability_jg ] , the return - mapping scheme reads as follows. * return to the smooth portion * 1 ._ necessary and sufficient condition _ : and .2 . _ find _ , and : \varrho+\triangle\lambda 2g\left(\frac{3\varrho}{\bar f_c^2}+\frac{m_0}{\sqrt{6}\bar f_c}\right)-\varrho^{tr}=0,\\[5pt ] \frac{3}{2}\left(\frac{\varrho}{\bar f_c}\right)^2+m_0\left(\frac{\varrho r_e^{tr}}{\sqrt{6}\bar f_c}+\frac{p}{\bar f_c}\right)-1=0 .\end{array } \right\ } \label{lambda_problem1_jg}\ ] ] 3 .* return to the apex * 1 ._ necessary and sufficient condition _ : .set _ system ( [ lambda_problem1_jg ] ) of nonlinear equations can be solved by the newton method with the initial choice , , .it was shown that the system has a unique solution subject to and . without these conditions, one can not guarantee existence and uniqueness of the solution to ( [ lambda_problem1_jg ] ) .solving the problem ( [ k - step_problem_jg ] ) , we obtain a nonlinear and implicit operator between the stress tensor , , and the strain tensor , .the stress - strain operator , , also depends on through the trial stress . to emphasize this fact we write .we have where the function is defined by ( [ q_jg ] ) and , are components of the solution to ( [ lambda_problem1_jg ] ) .the function is strongly semismooth in with respect to .[ th_semi_jg ] consider the function satisfying if , otherwise . to apply theorem [ implicit ] on the implicit function ,it is necessary to show that is strongly semismooth w.r.t .the variables .this follows from ( [ hat_varrho ] ) and proposition [ prop_semi ] .the rest of the proof coincides with the proof of theorem [ th_semi_dp ] . if or then is not smooth .we derive the derivative under the assumption that any of these conditions does not hold . if ( elastic response ) then . if ( return to the apex ) then ( vanishes ) .let and , i.e. , return to the smooth portion happens .then the derivative can be found as follows . 1 .find the solution to ( [ lambda_problem1_jg ] ) .2 . 
use ( [ inv_der1 ] ) , ( [ inv_der2 ] ) , ( [ elastic_law ] ) and the chain rule and compute : ,\quad \frac{\partial r_e^{tr}}{\partial{\mbox{\boldmath}}}=-r_e'(\cos\theta^{tr})\sin\theta^{tr}\frac{\partial \theta^{tr}}{\partial{\mbox{\boldmath}}}.\ ] ] 3 . compute : \frac{\partial \varrho}{\partial{\mbox{\boldmath}}}\\[5pt ] \frac{\partial \triangle\lambda}{\partial{\mbox{\boldmath } } } \end{array } \right)=\left ( \begin{array}{c c c } 1+\triangle\lambda k\frac{m_g''(p)}{\bar f_c } & 0 & k\frac{m_g'(p)}{\bar f_c}\\ 0 & 1+\triangle\lambda \frac{6g}{\bar f_c^2 } & 2g\left(\frac{3\varrho}{\bar f_c^2}+\frac{m_0}{\sqrt{6}\bar f_c}\right)\\ \frac{m_0}{\bar f_c } & \frac{3\varrho}{\bar f_c^2}+\frac{m_0r_e^{tr}}{\sqrt{6}\bar f_c } & 0 \end{array } \right)^{-1}\left(\begin{array}{c } \frac{\partial p^{tr}}{\partial{\mbox{\boldmath}}}\\[5pt ] \frac{\partial \varrho^{tr}}{\partial{\mbox{\boldmath}}}\\[5pt ] -\frac{m_0\varrho}{\sqrt{6}\bar f_c}\frac{\partial r_e^{tr}}{\partial{\mbox{\boldmath } } } \end{array}\right ) .\label{linearized_system}\ ] ] notice that the matrix in ( [ linearized_system ] ) arises from linearization of ( [ lambda_problem1_jg ] ) around the solution .the matrix is invertible since its determinant is negative .4 . compute for numerical purposes , we use the following generalized consistent tangent operator : ( \ref{deriv_smooth_jg } ) & \mbox{if}&q_{tr}(0)>0,\;q_{tr}\left(\sqrt{6}\bar f_c\varrho^{tr}/2gm_0\right)<0,\\[2pt ] \mathbb o & \mbox{if } & \qquad \qquad \quad q_{tr}\left(\sqrt{6}\bar f_c\varrho^{tr}/2gm_0\right)\geq0 . \end{array } \right.\ ] ]the aim of this section is an extension of theorem [ lem_auxilliary_problem ] and [ lem_auxilliary_problem_jg ] on a specific class of elastoplastic models that are usually formulated in the haigh - westergaard coordinates .we consider an abstract model containing the isotropic hardening and the plastic flow pseudo - potential ._ given the history of the strain tensor , ]_. further , we have the following assumptions on ingredients of the model : 1 . where is increasing with respect to and , convex and continuously differentiable at least in vicinity of the yield surface .2 . where is an increasing function with respect to , convex and twice continuously differentiable at least in vicinity of the yield surface .3 . is a nondecreasing , continuous and strongly semismooth function satisfying .4 . is a positive function .invariants , , and are the same as in section [ sec_preliminaries ] .notice that the assumptions on and guarantee convexity of and using properties of introduced in .let then one can write the plastic flow rule as follows : ,\quad \hat{{\mbox{\boldmath}}}\in\partial\varrho({\mbox{\boldmath } } ) .\label{flow_rule2_abstract}\ ] ] the -th step of the incremental constitutive problem received by the implicit euler method reads as follows . 
_ given , and .find , and satisfying : ,\quad \hat{{\mbox{\boldmath}}}\in\partial\varrho({\mbox{\boldmath}}),\\[3pt ] \bar{\varepsilon}^{p}=\bar{\varepsilon}^{p , tr}+\triangle\lambda \ell({\mbox{\boldmath}},\kappa),\\[3pt ] \triangle\lambda\geq0,\quad f({\mbox{\boldmath}},h(\bar{\varepsilon}^{p}))\leq0,\quad \triangle\lambda f({\mbox{\boldmath}},h(\bar{\varepsilon}^{p}))=0 .\end{array } \right\ } \label{k - step_problem_abstract}\ ] ]_ if we use the elastic predictor / plastic corrector method then we derive the following straightforward extension of theorem [ lem_auxilliary_problem ] and theorem [ lem_auxilliary_problem_jg ] within the plastic correction .let .if is a solution to problem ( [ k - step_problem_abstract ] ) then , , is a solution to the following system : \varrho=\left[\varrho^{tr}-\triangle\lambda2g\hat g_\varrho(p,\varrho)\right]^+,\\[3pt ] \bar{\varepsilon}^{p}=\bar{\varepsilon}^{p , tr}+\triangle\lambda\hat\ell\left(p,\varrho,\varrho\tilde r(\cos \theta^{tr})\right),\\[3pt ] \hat f\left(p,\varrho,\varrho r_e(\cos \theta^{tr}),h(\bar{\varepsilon}^{p})\right)=0 .\end{array } \right\ } \label{k - step_problem_abstract_red}\ ] ] conversely , if is a solution to ( [ k - step_problem_abstract_red ] ) then solves ( [ k - step_problem_abstract ] ) , where p{\mbox{\boldmath } } & \mbox{if } & \varrho=0 . \end{array } \right .\label{sigma_k_abstract}\ ] ] [ lem_auxilliary_problem_abstract ] notice that it is generally impossible to a priori decide about the type of the return as in the models introduced above . to be in accordance with the current approach introduced e.g. in can split ( [ k - step_problem_abstract_red ] ) into the following two systems : and guess which of these systems provides an admissible solution . beside the blind guessing , the current approach has another drawback : it can happen that ( [ k - step_problem_abstract_red ] ) has a unique solution and mutually one of the systems ( [ f_equality_apex ] ) , ( [ f_equality_smooth ] ) does not have any solution or have more than one solution .therefore , we recommend to solve ( [ k - step_problem_abstract_red ] ) directly by a nonsmooth version of the newton method with the standard initial choice , , and .consider an elasto - plastic body occupying a bounded domain with the lipschitz continuous boundary .it is assumed that , where and are open and disjoint sets . on ,the homogeneous dirichlet boundary condition is prescribed .surface tractions of density are applied on and the body is subject to a volume force .notice that the above defined stress , strain and hardening variables depend on the spatial variable , i.e. , etc .let ^ 3\ |\;{\mbox{\boldmath } } = { \mbox{\boldmath}}\ \mbox{on } \gamma_d \right\}$ ] denote the space of kinematically admissible displacements . under the infinitesimal small strain assumption, we have substitution of the stress - strain operator into the principle of the virtual work leads to the following problem at the -th step : where and are the prescribed volume , and surface forces at , respectively . after finding a solution ,the remaining unknown fields important for the next step can be computed at the level of integration points .problem can be standardly written as the operator equation in the dual space to : , where since we plan to use the semismooth newton method , we also introduce the operator as follows : to discretize the problem in space we use the finite element method. 
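At each integration point of the finite element discretization described next, the reduced constitutive system ( [ k - step_problem_abstract_red ] ) is solved by the nonsmooth Newton iteration recommended above (the same iteration specializes to system ( [ lambda_problem1_jg ] ) with unknowns p, varrho and the plastic multiplier). The following Python sketch is only an illustration: the hydrostatic (p-) equation is an assumption mirroring the varrho-equation, the starting point (trial values with a zero multiplier increment) is the assumed "standard initial choice", and the model ingredients g_p, g_rho, ell, f_hat, r_e, r_tilde and h are user-supplied placeholders.

import numpy as np

def plus(x):
    # the [.]^+ operator appearing in the reduced system
    return x if x > 0.0 else 0.0

def local_nonsmooth_newton(p_tr, rho_tr, eps_p_tr, cos_th_tr,
                           K, G, g_p, g_rho, ell, f_hat, r_e, r_tilde, h,
                           tol=1e-12, max_it=50):
    """Nonsmooth Newton iteration for the reduced system, unknowns
    x = (p, rho, eps_p_bar, delta_lambda); all material functions are
    user-supplied and the p-equation below is an assumed form."""
    def F(x):
        p, rho, eps_p, dlam = x
        return np.array([
            p - (p_tr - dlam * K * g_p(p, rho)),                        # assumed form
            rho - plus(rho_tr - dlam * 2.0 * G * g_rho(p, rho)),
            eps_p - eps_p_tr - dlam * ell(p, rho, rho * r_tilde(cos_th_tr)),
            f_hat(p, rho, rho * r_e(cos_th_tr), h(eps_p)),
        ])

    x = np.array([p_tr, rho_tr, eps_p_tr, 0.0])   # assumed standard initial choice
    for _ in range(max_it):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        # forward-difference quotient used as a stand-in for an element of the
        # generalized (slanting) Jacobian of the nonsmooth residual
        J = np.zeros((4, 4))
        for j in range(4):
            xp = x.copy()
            xp[j] += 1e-8
            J[:, j] = (F(xp) - r) / 1e-8
        x = x - np.linalg.solve(J, r)
    return x        # converged (p, rho, eps_p_bar, delta_lambda)

With this local solver in place, we return to the spatial discretization.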
then the space is approximated by a finite dimensional one , .if linear simplicial elements are not used then it is also necessary to consider a suitable numerical quadrature on each element .let , , denote the approximation of operators , , , respectively , and , , be their algebraic counterparts. then the discretization of problem leads to the system of nonlinear equations , , and the semismooth newton method reads as follows : 1.2 initialization : find : compute set .if is strongly semismooth in then is strongly semismooth in .notice that the strong semismoothness is an essential assumption for local quadratic convergence of this algorithm .in numerical examples introduced below , we observe local quadratic convergence when the tolerance is sufficiently small .in particular , we set .the improved return - mapping schemes in combination with the semismooth newton method have been partially implemented in codes sifel and matsol . here , for the sake of simplicity, we consider the slope stability benchmark ( * ? ? ?* page 351 ) for the presented models , the drucker - prager ( dp ) and the jirasek - grassl ( jg ) ones .the benchmark is formulated as a plane strain problem .we focus on : a ) incremental limit analysis and b ) dependence of loading paths on element types and mesh density . for purposes of such an experiment ,special matlab codes have been prepared to be transparent .these experimental codes are available in together with selected graphical outputs .the geometry of the body is depicted in figure [ fig.mesh_p1 ] or [ fig.mesh_q2 ] .the slope height is 10 m and its inclination is . on the bottom , we assume that the body is fixed and , on the left and right sides , zero normal displacements are prescribed . the body is subjected to self - weight .we set the specific weight / m with being the mass density and the gravitational acceleration .such a volume force is multiplied by a scalar factor , .the loading process starts from .the gravity load factor , , is then increased gradually until collapse occurs .the initial increment of the factor is set to 0.1 . to illustrate loading responses we compute settlement at the corner point on the top of the slope depending on .as in ( * ? ? ?* page 351 ) , we set , , and , where denotes the cohesion for the perfect plastic model . hence , and . in comparison to , we use the presented models instead of the mohr - coulomb one . the remaining parameters for these models will be introduced below .we analyze the problem for linear triangular ( ) elements and eight - pointed quadrilateral ( ) elements . in the latter case ,-point gauss quadrature is used . for each element type, a hierarchy of four meshes with different densities is considered .the -meshes contain 3210 , 12298 , 48126 , and 190121 nodal points , respectively .the -meshes consist of 627 , 2405 , 9417 , and 37265 nodal points , respectively .the coarsest meshes for and elements are depicted in figure [ fig.mesh_p1 ] and [ fig.mesh_q2 ] .let us complete that the mesh in figure [ fig.mesh_p1 ] is uniform in vicinity of the slope and consists of right isoscales triangles with the same diagonal orientation .further , it is worth mentioning that the -meshes are chosen much more finer in vicinity of the slope than their -counterparts within the same level .the drucker - prager parameters , and are computed from the friction angle , , and the dilatancy angle , , as follows : at first , we introduce results obtained for the model with associative perfect plasticity . 
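For completeness, the boxed semismooth Newton algorithm above can be summarized in algebraic form by the following sketch. It is only schematic: the residual and generalized tangent returned by the assembly routines, as well as the relative-increment stopping test and its tolerance, are assumptions standing in for details not reproduced here.

import numpy as np

def incremental_newton(u_init, residual, tangent, tol=1e-12, max_it=30):
    """One incremental step of the discretized problem solved by the
    semismooth Newton method. residual(u) returns the algebraic counterpart
    of the discrete operator (internal minus external forces) and tangent(u)
    the generalized tangent matrix assembled from the consistent tangent
    operators at the integration points; both are assumed to be provided by
    the finite element assembly."""
    u = u_init.copy()
    for i in range(1, max_it + 1):
        du = np.linalg.solve(tangent(u), -residual(u))
        u = u + du
        if np.linalg.norm(du) <= tol * max(np.linalg.norm(u), 1.0):
            return u, i
    return u, max_it

We now turn to the associative perfectly plastic case announced above.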
in such a case , , and .the received loading curves for the investigated meshes and elements are depicted in figure [ fig.load_path_h_p1 ] and [ fig.load_path_h_q2 ] .although -meshes are much finer , we observe more significant dependence of the curves on the mesh density for -elements than for -elements .also computed limit load factors are greater and tend more slowly to a certain value as for -meshes than for -meshes .the expected limit value is 4.045 as follows from considerations introduced in . using the finest and meshes, we receive the values 4.230 , and 4.056 , respectively . in general , higher order elements are recommended when a locking effect is expected . in this example, it can be caused due to the presence of the limit load and/or .on the other hand , the strong dependence on mesh density is influenced by other factors like mesh structure or choice of a model .for example , this dependence is not so significant for the jirasek - grassl model ( see the next subsection ) .further , in , there is theoretically justified and illustrated that the dependence of the limit load on the mesh density is minimal for bounded yield surfaces and that an approximation of unbounded yield surfaces by bounded ones ( the truncation ) leads to a lower bound of the limit load . for illustration , we add figure [ fig.multiplier_perf_plas ] and [ fig.displacement_perf_plas ] with plastic multipliers and total displacements at collapse , respectively .the figures are in accordance with literature .to compare the current return - mapping scheme with improved one , we have also considered the nonassociative model with nonlinear hardening where here , represents the initial slope of the hardening function and the material response is perfect plastic for sufficiently large values of the hardening variable .this nonassociative model yields a slightly lower values of the limit load factors and also the other results are very similar to the associative model .the related graphical outputs are available in ( * ? ? ?* ss - dp - nh ) .further , in vicinity of the limit load , we have observed lower rounding errors for the improved return - mapping scheme and thus lower number of newton steps is necessary to receive the prescribe tolerance than for the current scheme. however , the computational time for both schemes are practically the same since return to the apex happens only on a few elements lying in vicinity of the yield surface . to be the simplified jirasek - grassl ( jg ) model applicable for the investigated soil material we fit its parameters using the associative perfect plastic drucker - prager ( dp ) model as follows : , , , , , and .recall that implies .further the value of corresponds to the uniaxial compressive strength computed from the drucker - prager model . to eliminate the influence of the exponential term in the function , the value of is chosen sufficiently large .then the model is insensitive on and one can vanish it .finally , we require the same flow direction for both the models under the uniaxial compressive strength .since the yield function in the jg model is normalized in comparison to the dp model it is convenient to introduce the following relation between the plastic multipliers : , where is a scale factor .then the values of and are determined from the following equations : to be positive , must be greater than . to be in accordance with results of the dp model , we set . 
, the limit load factor is underestimated and for greater values of the limit load factor is overestimated .] comparison of yield surfaces ( in the meridean plane ) and flow directions for the dp and jg models is illustrated in figure [ fig_model_comparison ] . here , the fixed value is used for vectors representing the flow directions .loading curves for the investigated , meshes and the jg model are depicted in figure [ fig.load_path_h_jg ] and [ fig.load_path_h_q2_jg ] .we observe much faster convergence of the -loading curves than for the dp model .moreover , the results for and elements are comparable .the computed values of the limit load factor on the finest and meshes are 4.124 , and 4.107 , respectively .the main idea of this paper is that the subdifferential formulation of the plastic flow rule is also useful for computational purposes and numerical analysis .namely , it has been shown that such an approach improves the implicit return - mapping scheme for non - smooth plastic pseudo - potentials as follows . *the unique system of nonlinear equations is solved regardless on a type of the return .* it can be a priori determined the type of the return from a given trial state for some models ( without knowledge of the solution ) .* the scheme can be more correct than the current one , and its form enables to study properties of constitutive operators like existence , uniqueness and semismoothness . in this paper ( part i ), the new technique has been systematically built on a specific class of models containing singularities only along the hydrostatic axis . beside an abstract model , two particular modelshave been studied : the drucker - prager and the simplified jirasek - grassl model. however , the presented idea seems to be more universal .for example , it has been successfully used for the mohr - coulomb model in `` part ii '' .the authors would like to thank to pavel marlek for generating the quadrilateral meshes with midpoints .this work has been supported by the project 13 - 18652s ( ga cr ) and the european regional development fund in the it4innovations centre of excellence project ( cz.1.05/1.1.00/02.0070 ) .armero f , prez - foguet a. on the formulation of closest - point projection algorithms in elastoplasticity - part i : the variational structure ._ international journal for numerical methods in engineering _ 2002 ; * 53 * : 297 - 329 . de angelis f , taylor rl .an efficient return mapping algorithm for elastoplasticity with exact closed form solution of the local constitutive problem , _ engineering computations _ 2015 ; * 32 * : 2259 - 2291 .de saxc g. the biponential method , a new variational and numerical treatment of the dissipative laws of materials . in : _ 10th .int . conf . on mathematical and computer modelling and scientific computing _ ,pp . 1 - 6 , 1995 . hjiaj m , fortin j , de saxc g. a complete stress update algorithm for the non - associated drucker - prager model including treatment of the apex ._ international journal of engineering science _2003 * 41 * : 1109 - 1143 . prez - foguet a , armero f. on the formulation of closest - point projection algorithms in elastoplasticity - part ii : globally convergent schemes . _international journal for numerical methods in engineering _ 2002 ; * 53 * : 331 - 374. willam kj , warnke ep .constitutive model for the triaxial behavior of concrete .i. : concrete structure subjected to triaxial stresses .19 of iabse report , international association of bridge and structural engineers _ , zurich , 1974 , pp . 
1 - 30 . zouain n. some variational formulations of non - associated hardening plasticity ._ mechanics of solids in brasil _ 2009 , h.s .da costa mattos and m. alves eds ., _ brasilian society of mechanical sciences and engineering _ , pp .503 - 512 .isbn 978 - 85 - 85769 - 43 - 7 . | the paper is devoted to the numerical solution of elastoplastic constitutive initial value problems . an improved form of the implicit return - mapping scheme for nonsmooth yield surfaces is proposed that systematically builds on a subdifferential formulation of the flow rule . the main advantage of this approach is that the treatment of singular points , such as apices or edges at which the flow direction is multivalued involves only a uniquely defined set of non - linear equations , similarly to smooth yield surfaces . this paper ( part i ) is focused on isotropic models containing : yield surfaces with one or two apices ( singular points ) laying on the hydrostatic axis ; plastic pseudo - potentials that are independent of the lode angle ; nonlinear isotropic hardening ( optionally ) . it is shown that for some models the improved integration scheme also enables to a priori decide about a type of the return and investigate existence , uniqueness and semismoothness of discretized constitutive operators in implicit form . further , the semismooth newton method is introduced to solve incremental boundary - value problems . the paper also contains numerical examples related to slope stability with available matlab implementation . keywords : elastoplasticity , nonsmooth yield surface , multivalued flow direction , implicit return - mapping scheme , semismooth newton method , limit analysis |
physical objects in the world appear differently depending on the scale of observation / measurement .take the tree as an example , meaningful observations range from molecules at the scale of nanometers , to leaves at centimeters , to branches at meters , and to forest at kilometers .this inherent property is ubiquitous and holds equally true for natural language . on the one hand ,concepts are meaningful only at the right resolution , for instance , named entities usually range from unigram ( e.g. , new " ) to bigram ( e.g. , new york " ) , to multigram ( e.g. , new york times " ) , and even to a whole long sequence ( e.g. , a song name another lonely night in new york " ) . on the other hand, our understanding of natural language depends critically on the scale at which it is examined , for example , depending on how much detailed we would like to get into a document , our knowledge could range from a collection of _ keywords _ " , to a sentence sketch named _ title _ " , to a paragraph summary named _ abstract _ " , to a page long _ introduction _ " and finally to the _ entire content_.the notion of scale is fundamental to the understanding of natural language , yet it was largely ignored by existing models for text representation , which include simple bag - of - word ( bow ) or unigram language model ( lm ) , n - gram or higher order lms , and other more advanced text / language models . one key problem with many of these models is their inflexibility they capture the semantic structure rather rigidly at only a single resolution ( e.g. , -gram with a single fixed value of ) .however , which scale is appropriate for a specific task is usually unknown a priori and in many cases even not homogeneous ( e.g. , a document may contain named entities of different length ) , making it impossible to capture the right meanings with a fixed single scale .scale space theory is a well - established and promising framework for multi - resolution representation , developed primarily by the computer vision and signal processing communities with complimentary motivations from physics and bio - vision .the key idea is to embed a signal into the _ scale space _, i.e. , to represent it as a family of progressively smoothed signals parameterized by a continuous variable of _ scale _ , where fine - resolution detailed structures are progressively suppressed by the convolution of the original signal with a smoothing kernel ( i.e. , a low pass filter with certain properties ) . in this paper, we adapt the scale - space model from image to text signals , proposing a novel framework that enables multi - resolution representation for documents .the adaptation poses substantial challenges as the structure of the semantic domain is nontrivially complicated than the spatial domains in traditional image scale space .we show how this can be made possible with a set of assumptions and simplifications . the scale - space model for textnot only provides new perspectives for how text analysis tasks can be formulated and addressed , but also enables well - established computer vision tools to be adapted and applied to text processing , e.g. , matching , segmentation , description , interests points detection , and classification . 
to stimulate further investigation in this promising direction, we initiate a couple of instantiations to demonstrate how this model can be used in a variety of nlp and text analysis tasks to make things easier , better , and most importantly , scale - invariant .the notion of scale space is applicable to signals of arbitrary dimensions .let us consider the most common case , where it is applied to 2-dimensional signals such as images .given an image , its scale - space representation is defined by : where denotes the convolution operator , and is a smoothing kernel ( i.e. , a low pass filter ) with a set of desired properties ( i.e. , the scale - space axioms ) .the bandwidth parameter is referred to as scale parameter since as increases , the derived image will become gradually smoother ( i.e. , blurred ) and consequently more and more fine - scale structures will be suppressed .it has been shown that the gaussian kernel is the unique option that satisfies the conditions for _ linear scale space _ : the resultant linear scale space representation can be obtained equivalently as a solution to the diffusion ( heat ) equation with initial condition , where denotes the laplace operator which in a 2-dimensional spatial space corresponds to .if we view as a heat distribution , the equation essentially describes how it diffuses from initial value , , in a homogeneous media with uniform conductivity over time .as we can imagine , the distribution will gradually approach uniform and consequently the fine - scale structure of will be lost .scale - space theory provides a formal framework for handling the multi - scale nature of both the physical world and the human perception . since its introduction in 1980s, it has become the foundation of many computer vision techniques and been widely applied to a large variety of vision / image processing tasks . in this paper, we show how this powerful tool can be adapted and applied to natural language texts .a straightforward step towards textual sale space would be to represent texts in the way as image signal . in this section ,we show how this can be made possible .other alternative signal formulations will be discussed in the followed section .let be our vocabulary consisting of words , given a document comprised of a finite -word sequence , without any information loss , we can characterize as a 2d binary matrix , with the -th entry indicates whether or not the -th vocabulary word is observed at the -th position , i.e. : , where if and 0 otherwise .hereafter , we will refer to the -axis as _ spatial domain _( i.e. , positions in the document , ) , and -axis as the _ semantic axis _ ( i.e. , indices in the vocabulary , ) .this representation provides an image analogy to text , i.e. , a document is equivalent to a black - and - white image except that here we have _ one spatial _ and _ one semantic _ domains , , instead of two spatial domains , .interestingly , scale - space representation can also be motivated by this binary model from a slightly different perspective , as a way of robust density estimation .we have the following definition : definition 1 . _ a 2d text model _ _ is a probabilistic distribution over the joint spatial - semantic space _ : , , .this 2d text model defines the probability of observing a semantic word at a spatial position . 
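A minimal sketch of the word-level binary signal just described: given a tokenized document and a fixed vocabulary, one builds the binary matrix whose entries mark which vocabulary word occurs at which position. The orientation of the matrix (vocabulary rows, position columns) and the treatment of out-of-vocabulary tokens are conventions chosen for this example only.

import numpy as np

def word_level_2d(tokens, vocab):
    """Binary word-position signal: entry (i, j) equals 1 iff the j-th token
    of the document is the i-th vocabulary word (rows = semantic axis,
    columns = spatial axis)."""
    index = {w: i for i, w in enumerate(vocab)}
    f = np.zeros((len(vocab), len(tokens)))
    for j, w in enumerate(tokens):
        if w in index:                    # out-of-vocabulary tokens are dropped
            f[index[w], j] = 1.0
    return f

# toy illustration
tokens = "new york city in new york".split()
vocab = ["new", "york", "city", "time"]
f = word_level_2d(tokens, vocab)          # shape (4, 6); "in" is out of vocabulary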
the binary matrix representation ( after normalization )can be understood as an estimation of with kernel density estimators : where is the -th column vector of an identity matrix , denotes the -th row vector and the -th column vector .note that here the dirac impulse kernels is used , i.e. , words are unrelated either spatially or semantically .this contradicts the common knowledge since neighboring words in text are highly correlated both semantically and spatially .for instance , observing the word new " indicates a high likelihood of seeing the other word york " at the next position . as a result, it motivates more reliable estimate of by using smooth kernels such as gaussian , which , as we will see , leads exactly to the gaussian filtering used in the linear scale - space theory .the 2d binary matrix described above is not the only option we can work with in scale space . generally speaking, any vector , matrix or even tensor representation of a document can be used as a signal upon which scale space filtering can be applied . in particular, we use the following in the current paper : _ word - level 2d _ signal , , is the binary matrix we described in [ sec : word2d ] .it records the spatial position for each word , and is defined on the joint spatial - semantic domains ._ bag - of - word 1d _ signal is the bow representation , i.e. , the 2d matrix is collapsed to a 1d vector .since the spatial axis is wiped out , this signal is defined on the semantic domain alone ._ sentence - level 2d _ signal is a compromise between word - level 2d and the bow signals . instead of collapsing the spatial dimension for the whole document , we do it for each sentence . as a result , this signal , , records the position of each sentence ; for a fixed position , records the bow of the corresponding sentence ._ topic 1d _ signal , , is composed of the topic embedding of each sentence and defined on the spatial domain only .assume we have trained a topic model ( e.g. , latent dirichlet allocation ) on a universal corpus in advance , this signal is obtained by applying topic inference to each sentence and recording the topic embedding , where is the dimensionality of the topic space .topic embedding is beneficial since it endows us the ability to address synonyms and polysemy .also note that the semantic correlation may have been eliminated and consequently semantic smoothing is no longer necessary .in other words , although is a matrix , we would rather treat it as a vector - variate 1d signal .all these textual signals involve either a semantic domain or both semantic and spatial domains . in the following ,we investigate how scale - space filtering can be applied to these domains respectively .spatial filtering has long been popularized in signal processing , and was recently explored in nlp by .it can be achieved by convolution of the signal with a low - pass spatial filter , i.e. , . for texts ,this amounts to borrowing the occurrence of words at one position from its neighboring positions , similar to what was done by a cache - based language model . in order not to introduce spurious information , the filter need to satisfy a set of scale - space axioms .if we view the positions in a text as a spatial domain , the gaussian kernel or its discrete counterpart are singled out as the unique options that satisfy the set of axioms leading to the linear scale space , where denotes the modified bessel functions of integer order . 
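In practice, spatial smoothing of the word-level signal is an ordinary one-dimensional convolution along the position axis. The sketch below uses the sampled continuous Gaussian in place of the discrete Bessel-function kernel, which is a simplification adequate for illustration.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def spatial_smooth(f, t):
    """Spatial scale-space filtering: each vocabulary row of f
    (vocabulary x positions) is convolved with a Gaussian of variance t
    along the position axis."""
    return gaussian_filter1d(f, sigma=np.sqrt(t), axis=1, mode="constant")

# progressively coarser views of the same document signal
f1 = spatial_smooth(f, 1.0)
f8 = spatial_smooth(f, 8.0)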
alternatively ,if we view the position as a time variable as in the markov language models , a poisson kernel is more appropriate as it retains temporal causality ( i.e. , inaccessibility of future data ) .semantic filtering attempts to smooth the probabilities of seeing words that are semantically correlated .in contrast to the spatial domain , the semantic domain has some unique properties .the first thing we notice is that , as semantic coordinates are nothing but indices to the dictionary , we can permute them without changing the semantic meaning of the representation .we refer to this property as _permutation invariance_. semantic smoothing has been extensively explored in natural language processing .classical smoothing methods , e.g. , laplacian and dirichlet smoother , usually shrink the original distributions to a predefined reference distribution .recent advances explored local smoothing where correlated words are smoothed according to their interrelations defined by a semantic network . given a semantic graph , where two correlated words and are connected with weight , semantic smoothing can be formulated as solving a graph - based optimization problem : where defines the tradeoff , weights the importance of the node .interestingly , the solution to eqn.([eq7 ] ) is simply the convolution of the original signal with a specific kernel ) . ] , i.e. , .compared with spatial filtering , semantic filtering is , however , more challenging .in particular , the semantic domain is heterogeneous and not shift - invariant the degree of correlation depends on both coordinates and rather than their difference . as a result , kernels that provably satisfy scale - space axioms are no longer feasible . to this end, we simply set aside these requirements and define kernels in terms of the dissimilarity between a pair of words and rather than their direct difference , that is , , where we use to denote semantic kernel to distinguish from spatial kernels .for gaussian , this means .scale is vital for the understanding of natural language , yet it is nontrivial to determine which scale is appropriate for a specific task at hand in advance . as a matter of fact, the best choice usually varies from task to task and from document to document . even within one document , it could be heterogeneous , varying from paragraph to paragraph and sentence to sentence .for the purpose of automatic modeling , there is no way to decide _ a priori _ which scale fits the best .more importantly , it might be impossible to capture all the right meanings at a single scale .therefore , the only reasonable way is to simultaneously represent the document at multiple scales , which is exactly the notion of _ scale space_. scale space representation embeds a textual signal into a _continuous _ scale - space , i.e. , by a family of progressively smoothed signals parameterized by continuous scale parameters .in particular , for a 2d textual signal , we have : where the 2d smoothing kernel is separable between spatial and semantic domains , i.e. , note that we have two continuous scale parameters , the spatial scale and the semantic scale . the case for 1d signals are even simpler as they only involve one type of kernels ( spatial or semantic ) . for a 1d spatial signal , we have , and for a semantic signal , . andif is a vector - variate signal , we just apply smoothing to each of its dimensions independently .[ [ example . 
] ] example .+ + + + + + + + as an example , figure [ fig1 ] shows four samples , , from the scale - space representation of a synthetic short text _ new york times offers free iphone 3 g as gifts for new customers in new york _ " , where , the two scales are set equal for ease of explanation and is obtained based on the word - level 2d signal .we use a vocabulary containing 12 words ( in order ) : new " , york " , time " , free " , iphone " , gift " , customer " , apple " , egg " , city " , service " and coupon " , where the last four words are chosen because of their strong correlations with those words that appear in this text .the semantic graph is constructed based on pairwise mutual information scores estimated on the rcv1-v2 corpus as well as a large set of web search queries .the ( 0,0)-scale sample , or the original signal , is a binary matrix , recording precisely which word appears at which position .the smoothed signals at ( 1,1 ) , ( 2,2 ) and ( 8,8)-scales , on the other hand , capture not only short - range spatial correlations such as bi - gram , tri - gram and even higher orders ( e.g. , the named entities new york " and new york times " ) , but also long - range semantic dependencies as they progressively boost the probability of latent but semantically related topics , e.g. , iphone " apple " , customer " service " , free " and gift " coupon " , new " and iphone " egg " ( due to the online electronics store ` newegg.com ` ) .the scale - space representation creates a new dimension for text analysis . besides providing a multi - scale representation that allows texts to be analyzed in a scale - invariant fashion, it also enables well - established computer vision tools to be adapted and applied to analyzing texts .the scale space model can be used in nlp and text mining in a variety of ways . to stimulate further research in this direction, we initiate a couple of instantiations . in this section ,we show how to make text classification scale - invariant by exploring the notion of _ scale - invariant text kernel _ ( sitk ) . given a pair of documents , and , at any fixed scale , the representation induces a single - scale kernel , where denotes any inner product ( e.g. , frobenius product , gaussian rbf similarity , jensen - shannon divergence ) .this kernel can be made scale - invariant via the expectation : =\int_0^\infty k_s(d , d^\prime)q(s)ds,\label{eq10}\end{aligned}\ ] ] where is a probabilistic density over the scale space with and , which in essence characterizes the distribution of the most appropriate scale . can be learned from data via a em procedure or in a bayesian framework if our belief about the scale can be encoded into a prior distribution . as an example, we show below one possible formulation . given a training corpus , where is a document and its label, our goal in text classification is to minimize the expected classification error . to simplify matters ,we assume a parametric form for .particularly , we use the gamma distribution due to its flexibility .moreover , we propose a formulation that eliminates the dependence on the choice of the classifier , which approximately minimizes the bayes error rate , i.e. 
: \\ \end{split}\ ] ] where is a heuristic margin ; , called nearest - hit " , is the nearest neighbor of with the same class label , whereas , the nearest - miss " , is the nearest neighbor of with a different label , and the distance .this above formulation can be solved via a em procedure .alternatively , we can discretize the scale space ( preferably in log - scale ) , i.e. , , and optimize a discrete distribution directly from the same formulation . in particular , if we regularize the -norm of , eq([eq11 ] ) will become a convex optimization with a close - form solution that is extremely efficient to obtain : where ^\top ] with entry , and denotes the positive - part operator .[ [ experiments . ] ] experiments .+ + + + + + + + + + + + we test the scale - invariant text kernels ( sitk ) on the rcv1-v2 corpus with focus on the 161,311 documents from ten leaf - node topics : ` c11 , c24 , c42 , e211 , e512 , gjob , gpro , m12 , m131 ` and ` m142 ` .each text is stop - worded and stemmed .the top 20k words with the highest dfs ( document frequencies ) are selected as vocabulary ; all other words are discarded .the semantic network is constructed based on pairwise mutual information scores estimated on the whole rcv1 corpus as well as a large scale repository of web search queries , and further sparsified with a cut - off threshold .we implemented the sentence - level 2d , the lda 1d signals and bow 1d for this task .for the first two , the documents are normalized to the length of the longest one in the corpus via bi - linear interpolation .we examined the classification performance of the svm classifiers that are trained on the _ one - vs - all _ splits of the training data , where three types of kernels ( i.e. , linear ( frobenius ) , rbf gaussian and jensen - shannon kernels ) were considered .the average test accuracy ( i.e. , micro - averaged f1 ) scores are reported in table [ tab1 ] . as a reference , the results by bow representations with tf or tfidf attributes are also included .for all the three kernel options , the scale - space based sitk models significantly ( according to -test at level ) outperform the two bow baselines , while the sentence level sitk performs substantially the best with 7.8% accuracy improvement ( i.e. , 56% error reduction ) ..text classification test accuracy .we compared five models : the bag - of - word vector space models with tf or tfidf attributes , and the scale - invariant text kernels with bow 1d ( sitk.bow ) , lda 1d ( sitk.lda ) and sentence - level 2d ( sitk.sentence ) textual signal .best results are highlighted in * bold*.[tab1 ] [ cols="<,^,^,^",options="header " , ] the extrema ( i.e. , maxima and minima ) of a signal and its first a few derivatives contain important information for describing the structure of the signal , e.g. , patches of significance , boundaries , corners , ridges and blobs in an image .scale space model provides a convenient framework to obtain the extrema of a signal at different scales .in particular , the extrema in the -th derivative of a signal is given by the zero - crossing in the -the derivative , which can be obtained at any scale in the scale space conveniently via the convolution of the original signal with the derivative of the gaussian kernel , i.e. 
: since gaussian kernel is infinitely differentiable , the scale - space model makes it possible to obtain local extrema / derivatives of a signal to arbitrary orders even when the signal itself is undifferentiable .moreover , due to the non - enhancement of local extrema " property , local extrema are created _ monotonically _ as we decrease the scale parameter . in this section ,we show how this can be used to detect keywords from a document in a _ hierarchical _ fashion .the idea is to work with the word - level 2d signal ( other options are also possible ) and track the extrema ( i.e. , patterns of significance ) of the scale - space model through the zero - crossing of its first derivative to see how extrema progressively emerge as the scale goes from coarse to finer levels .this process reduces the scale - space representation to a simple ternary tree in the scale space , i.e. , the so - called _ interval tree _ " in . since defines a probability over the spatial - semantic space , it is straightforward to interpret the identified intervals as keywords .this algorithm therefore yields a _ keyword tree _ that defines topics we could perceive at different levels of granularities from the document . [[ experiments.-2 ] ] experiments . + + + + + + + + + + + + as an illustrative example , we apply the hierarchical keywording algorithm described above to the current paper .the keywords that emerged in order are as follows : scale space " kernel " , signal " , text " smoothing " , spatial " , semantic " , domains " , gaussian " , filter " , text analysis " , natural language " , word " . in the previous section , we show how semantic keywords can be extracted from a text in a hierarchical way by tracking the extrema of its scale space model . in the same spirit , here we show how topic boundaries in a text can be identified by tracking the extrema of the first derivative .text segmentation is an important topic in nlp and has been extensively investigated previously .many existing approaches , however , are only able to identify a flat structure , i.e. , all the boundaries are identified at a flat level .a more challenging task is to automatically identify a _ hierarchical _table - of - content style structure for a text , that is , to organize boundaries of different text units in a tree structure according to their topic granularities , e.g. , chapter boundaries at the top - level , followed in order by boundaries of sections , subsections , paragraphs and sentences as the level of depth increases .this can be achieved conveniently by the _ interval tree _ and _ coarse - to - fine tracking _ idea presented in . in particular ,if we keep tracking the extrema of the 1st order derivatives ( i.e. , rate of changes ) by looking at the points satisfying : due to the monotonicity nature of scale space representation , such contours are closed above but open below in the scale space .they naturally illustrate how topic boundaries appear progressively as scale goes finer . andthe _ exact localization _ of a boundary can be obtained by tracking back to the scale .also note that this algorithm , unlike many existing ones , does not require any supervision information .[ [ experiments.-3 ] ] experiments . 
+ + + + + + + + + + + + as an example , we apply the hierarchical segmentation algorithm to the current paper .we use the sentence level 2d signal .let denote the vector , where the semantic scale is fixed to a constant , and the semantic index enumerates through the whole vocabulary .we identify hierarchical boundaries by tracking the zero contours ( where denotes -norm ) to the scale , where the length of the projection in scale space ( i.e. , the vertical span ) reflects each contour line s topic granularity , as plotted in figure [ fig2 ] ( top ) . as a reference , the velocity magnitude curve ( bottom ) , and the true boundaries of sections ( red - dashed vertical lines in top figure ) and subsections ( green - dashed )are also plotted .as we can see , the predictions match the ground truths with satisfactorily high accuracy .this paper presented scale - space theory for text , adapting concepts , formulations and algorithms that were originally developed for images to address the unique properties of natural language texts .we also show how scale - space models can be utilized to facilitate a variety of nlp tasks .there are a lot of promising topics along this line , for example , algorithms that scale up the scale - space implementations towards massive corpus , structures of the semantic networks that enable efficient or even close - form scale - space kernel / relevance model , and effective scale - invariant descriptors ( e.g. , named entities , topics , semantic trends in text ) for texts similar to the sift feature for images . | scale - space theory has been established primarily by the computer vision and signal processing communities as a well - founded and promising framework for multi - scale processing of signals ( e.g. , images ) . by embedding an original signal into a family of gradually coarsen signals parameterized with a continuous scale parameter , it provides a formal framework to capture the structure of a signal at different scales in a consistent way . in this paper , we present a scale space theory for text by integrating semantic and spatial filters , and demonstrate how natural language documents can be understood , processed and analyzed at multiple resolutions , and how this scale - space representation can be used to facilitate a variety of nlp and text analysis tasks . |
binary systems are among the best known sources of gravitational waves . specially interestingare the double neutron star systems ( nsns systems ) , the binaries formed by a neutron star and a black hole ( bhns systems ) and the binaries of black holes ( bhbh systems ) , because their emissions have a high probability of being detected in the near future .besides , it is well known that binaries lose energy and momentum via emission of gravitational waves , which cause reductions in their orbital distances , consequently increasing the orbital frequencies . moreover , for systems in circular orbits , the frequency of the emitted waves is twice the orbital frequency .the process continues until the systems reach the coalescing phase , where the systems leave the periodic regime and start the merging phase . on the other hand ,concerning the study of the gravitational radiation itself , the stochastic backgrounds are of special interest .backgrounds can be generated when , for example , there is a superposition of signals of several sources , resulting in smooth - shaped spectra spanning a wide range of frequencies . in particular ,we are concerned in this paper with the backgrounds generated by a population of coalescing compact binaries formed from redshifts ranging from up to , i.e. , cosmological binaries .we can find in the literature some very interesting works on this issue , such as , where the authors calculated the backgrounds generated by coalescing nsns systems . generally speaking , they used monte carlo techniques to simulate the extragalactic population of compact binaries . besides , these authors considered the time evolution of the orbital frequency by using the delay time " ( that is , the interval of time between the formation and the coalescence of the systems ) in their calculations . in our method , we consider the time evolution in an explicit form , where the equation describing the evolution of the orbital frequency is taken at each instant of life of a given binary . in zhuet al. , the authors consider the spectra generated by coalescing bhbh systems . in this paper , they assume average quantities for the energy emissions of single sources ; on the other hand , in our calculations we consider all the values for the orbital parameters a system can have , in the form of distribution functions . still concerning the spectra generated by coalescing bhbh systems , marassi et al . adopted an updated version of the seba population synthesis code , in which the masses of the black holes range from and . roughly speaking , the various papers cited above show results that are characterized by spectra with frequencies ranging from and and with maximum amplitudes located in the interval ranging from to . as shown below, the backgrounds we generated have similar forms to the ones we mentioned , though our results show , in general , higher amplitudes .this difference will be discussed timely .further , as it will be shown , our method has the very useful characteristic of being numerically simple , in the sense that it does not demand a heavy computational work . 
in this paper, the spectra generated by the coalescing binaries will be calculated by means of where represents the dimensionless amplitude of the spectrum , is the observed frequency , is the amplitude of the signals generated by each source and is the differential rate of generation of gravitational waves .it is worth pointing out that ( [ bg ] ) was deduced from an energy - flux relation .in fact , in a paper by de araujo et al. , the authors gave a detailed derivation of this equation , showing its robustness .actually , one can use ( [ bg ] ) in the calculation of different types of stochastic backgrounds , provided that one knows and the corresponding to the case one is dealing with . here , has the form where is the emitted frequency that , in this case , is the frequency emitted by a coalescing system ( note that and are related to each other by [ redshift ] ) , is the reduced mass and is the total mass .the differential rate in writing in following form where is the comoving volume element , is the redshift and and are the mass distribution functions of the components of the systems . in fact , for neutron stars such distributions are given by dirac s delta functions , since we are considering that all these objects have the same mass of ; for black holes we use the function \mbox{,}\ ] ] where is given in solar mass units .the term is known from cosmology , and the problem of determining comes down to the calculation of . at this point , one could ask in what the present study differs from the previous ones .we will see that the main difference has to do with the application of a new method to calculate developed in our previous papers .the paper is organized as follows : in section [ sec2 ] , we show the main steps to obtain ; then , with ( [ bg ] ) , and ( [ source ] ) at hand , we calculate for the three families of compact binaries , which are explained in section [ sec3 ] ; in section [ sec4 ] we present the results and discuss , in particular , the detectability of the backgrounds studied here by the interferometric detectors laser interferometer space antenna ( lisa ) ( now evolved lisa ( elisa ) ) , big bang observer ( bbo ) , deci - hertz interferometer gravitational wave observatory ( decigo ) , advanced laser interferometer gravitational wave observatory ( aligo ) , einstein telescope ( et ) , and the cross - correlation of pairs of aligos and et ; finally , in section [ sec5 ] we present our conclusions .we present here the main steps for the calculation of ( we refer the reader to refs . for details ) .we start by writing in the form where the expression in the right - hand side is the formation rate of systems per comoving volume that reach the frequency at the instant ( see the derivation in appendix [ apena ] and also in ref. ) .also , refers to the instant of birth of the systems . in ( [ dr2 ] ) , is the frequency distribution function of the binaries , which has the form ( we refer the reader to appendix [ apenb ] and ref. for the derivation of this equation ) : ^{1/3}\nu^{-11/3}\nu_{0}^{2}\mbox{exp}\left[\frac{-(r-\bar{r})^{2}}{2\sigma^{2}}\right ] , \label{dist2}\end{aligned}\ ] ] where is the initial frequency ; , and are constants given in table [ tab2 ] ; and is the orbital distance , related to by means of kepler s third law . 
on the other hand , is the binary formation rate density that , for nsns systems is given bys here , refers to the redshift of birth of the systems , which is related to via the usual expression that can be found in any textbook of cosmology ; and is the star formation rate density ( sfrd ) .there are , in the literature , many different proposals to the sfrd , although they do not differ from each other very significantly . herewe adopt , as a fiducial one , that given by springel and hernquist , namely where , , and with fixing the normalization .besides , in ( [ bfrd ] ) , ( see ) is the mass fraction of stars that is converted into neutron stars , where is the fraction of binaries that survive to the second supernova event ; gives the fraction of massive binaries ( that is , those systems where both components can generate supernovae ) and is the mass fraction of progenitors that originates neutron stars , which , in the present case , is calculated by where is the salpeter mass distribution with and .numerically , we have , and . now , considering ( [ bfrd ] ) , we can write ( [ dr2 ] ) in the form where , following the notation adopted by zhu et al , is calculated by {0}.\ ] ] in this context , following wu et al , is given by where is the local coalescence rate and has the form {z=0}dt_{0}.\ ] ] for bhns and bhbh systems , we have similar expressions , but with different values of .such values can be estimated by means of the results found in belczynski et al. , where these authors claim that the population of binaries is formed by % of nsns , % of bhbh and % of bhns binaries .therefore , and can be related to by means of these proportions .it is worth mentioning that belczynski et al. studied compact binaries with merger times lower than ; that is , they considered coalescing binaries .basically , in the simulations they used , the binaries are formed through several different channels .specifically , nsns systems are formed through different channels , where there is a predominance of channels containing hypercritical accretion between a low - mass helium giant and its companion neutron star . on the other hand , bhns and bhbh systemsare formed through just for and three channels , respectively , where there is a moderate predominance of mass transfer events .in this section , we present the calculations of the spectra generated by nsns , bhbh and bhns systems .although we are using ( [ bg ] ) for the three cases , the calculations are different for each family of binaries .therefore , we will show the calculations separately .neutron stars , according to theories of stellar evolution and observations ( see , e.g. , ref . ) have characteristic masses that fall in a narrow interval around .so , in this paper we are considering that all neutron stars have masses of , which is a realistic choice , and at the same time , a simplification .one could ask how the results would be affected by the choice of the neutron star ( ns ) equation of state ( eos ) .since we are considering the coalescing phase , the results depend mainly on the mass of the nss .nsns systems are characterized by a specific coalescence frequency that , according to , e.g. , ref . 
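As an aside, the progenitor fraction entering the calculation above can be evaluated by a short numerical integration of the Salpeter mass function. The normalization convention (per unit mass of formed stars) and the mass limits used below (0.1-125 solar masses overall, 8-25 solar masses for neutron-star progenitors) are illustrative assumptions only; the values actually adopted in the text are not reproduced here.

import numpy as np
from scipy.integrate import quad

def salpeter_lambda(x=1.35, m_low=0.1, m_up=125.0, m1=8.0, m2=25.0):
    """Number of neutron-star progenitors formed per unit mass of stars for a
    Salpeter IMF phi(m) ~ m^{-(1+x)}:
        lambda = int_{m1}^{m2} phi(m) dm / int_{m_low}^{m_up} m phi(m) dm.
    All mass limits are illustrative assumptions (solar masses)."""
    phi = lambda m: m ** (-(1.0 + x))
    num, _ = quad(phi, m1, m2)
    den, _ = quad(lambda m: m * phi(m), m_low, m_up)
    return num / den

print(salpeter_lambda())   # about 6e-3 progenitors per solar mass formed, with these limits

With a choice of limits of this kind, the fraction is of order a few times 10^-3 per solar mass of formed stars.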
, may be considered as for a pair of two nss .the choice of the eos is certainly important for the subsequent phase of evolution of the system , when the merger phase takes place .although there is just a value for the coalescence frequency , the spectrum will be spread over a wide range of frequencies .this behavior is due to the cosmic redshift , because systems emitting at the same frequency , but at different redshifts , will generate signals with different observed frequencies , obeying where , in this case , we are considering .moreover , as we are considering that the redshift has the minimum and maximum values given by and , respectively , the observed frequencies will have minimum and maximum values of and , respectively .since the masses of the components of the systems and their coalescing frequencies are the same , the spectra will only depend on the redshift .however , from ( [ redshift ] ) one notices that there will be just one value of for each value of , such that ( [ bg ] ) must be handled in a particular way .first , let us rewrite ( [ bg ] ) in the form where , and is the value of redshift corresponding to each observed frequency by means of ( [ redshift ] ) .now , in order to calculate ( [ coalnsns1 ] ) , can be written in the following form where is the value of the comoving volume at and is the dirac s delta function .therefore , substituting ( [ coalnsns2 ] ) in ( [ coalnsns1 ] ) and integrating , we have the study of the coalescence of bhns systems is more complicated than the case of nsns systems , because the frequency of coalescence depends on the mass of the black hole .it is usually assumed that the coalescence occurs when the neutron star reaches the innermost stable circular orbit ( isco ) of the black hole .so , using kepler s third law and recalling that the emitted frequency is twice the orbital frequency , we have where is the radius of the isco of the black hole , which is related to the schwarzschild radius , namely substituting ( [ schw ] ) in ( [ kepler ] ) we have since we are considering that black holes have masses in the interval , the emitted frequencies will have minimum and maximum values given by and , respectively .moreover , considering ( [ redshift ] ) and ( [ massbh ] ) , one notes that for each value of , one has continuous ranges of values for and .however , in the case of bhns systems , ( [ bg ] ) must be integrated over but not over , since these variables are not independent .moreover , for a given value of one needs to determine the maximum and minimum limits of integration , which are given by when or , we set and , respectively . once the interval of integration is set , the variables and in the integral can be written as functions of by solving ( [ massbh ] ) and ( [ redshift ] ) . in addition , note that in this case one needs to consider in the calculation of ( [ bg ] ) the distribution function given by ( [ blackhole ] ) .as a result , we have in this case , the frequency of coalescence depends on the values of the two components of the system .following marassi et al. , is given by where is the symmetric mass ratio and is the total mass .the polynomial coefficients are , and . according to ( [ bhbh1 ] ) , for each value of , there will be a continuous set of pairs of values for and that satisfies this equation .moreover , the masses are not independent , in fact they are related to each other by ( [ bhbh1 ] ) .therefore , we should integrate over one of the masses ( or one parameter which describes both masses . 
)however , it is not possible to solve ( [ bhbh1 ] ) analytically , i.e , to write the masses as functions of .therefore , we need to use approximations . first , consider that and ( which we refer as to and ) are related to each other by where the variable is greater than or equal to one . substituting ( [ bhbh2 ] ) in ( [ bhbh1 ] ) we can write the masses and as functions of and , namely \\\nonumber m_{2}&=&\frac{c^{3}}{g}\frac{1}{\pi \nu}\left[\frac{a_{0}k^{2}}{(1+k)^{5}}+\frac{b_{0}k}{(1+k)^{5}}+\frac{c_{0}}{1+k}\right].\end{aligned}\ ] ] a suitable approximation for ( [ bhbh1 ] ) is given by since for all values of and .considering that , we use ( [ bhbh4 ] ) in order to estimate a first value for ; next , we take this pair of values for the masses and calculate a first value for ; we then correct the value for using with this new value of , we repeat the above process : we calculate again and use ( [ bhbh5 ] ) , bearing in mind that this process may be performed an arbitrary number of times in order to yield more accurate values for . in the cases where we have at the end of the process , we consider and perform an analogous process to find out . with the pair at hand ,we calculate the maximum value of : thus , in order to cover all possible values of and , we consider that , in ( [ bhbh3 ] ) , ranges form one to . finally , ( [ bg ] ) can be written as follow usual , in the literature , we represent the backgrounds in terms of the strain amplitude , which is given by and also in terms of the energy density parameter , which reads ( see , e.g. , ref . ) the spectra are shown in fig .[ fig1 ] and fig .[ fig2 ] for and , respectively .note that these spectra are mainly compared to the sensitivity curves of et and aligo , since the spectra are in their frequency bands .the sensitivity curves of the proposed space - based antennas lisa , elisa , bbo , decigo are also shown , but given their frequency bands they can not detect the spectra studied here . from fig .[ fig1 ] , one notices that the background generated by the three families of compact binaries are below the sensitivity curves of the interferometric detectors . besides , one notices that the background generated by the bhns systems have higher amplitudes when compared to the ones generated by nsns and bhbh systems . on the other hand ,the spectrum corresponding to bhbh systems has the lowest amplitudes .since our calculations depend on some parameters and functions , it is worth investigating how our results are affected by different choices of these quantities .first , let us consider the masses of the components : in ( [ source ] ) , if we multiply both masses by a factor of , will be multiplied by a factor of .important variations in the amplitudes would occur only if or . from ( [ bg ] ) , ( [ bfrd ] ) and ( [ strain ] ) ,one notices that and .therefore , if ones multiply or by , say , a factor of , the amplitudes shown in fig . [ fig1 ] will increase by a factor of .therefore , for realistic scenarios , different choices for and would have small effects on the amplitudes of the backgrounds . comparing fig .[ fig2 ] with similar studies found in the literature ( see , e.g. , refs . ) , one sees a good agreement concerning the shapes of the spectra , although our results show higher amplitudes .for example , in one sees that at for the backgrounds generated by nsns systems , while for our corresponding spectrum , shown in fig .[ fig1 ] , we have at the same frequency . 
comparing our results for bhbh systems ( see fig . [ fig2 ] ) with the results found in , one can note some similarities : the amplitudes increase until a maximum value in the range and then they have a sharp decrease . concerning the amplitudes , we have a maximum value of , while in zhu et al the value is . marassi et al also study backgrounds generated by bhbh binaries . these authors discuss different models , and the resulting spectra present maximum amplitudes ranging from for a frequency band around . it is worth mentioning that , generally speaking , the spectra are model dependent . therefore , different assumptions lead to different backgrounds . in ref . , for example , the population of binaries is such that the maximum probability of coalescence is around . therefore , for there is a relatively small proportion of coalescing systems emitting ; in our calculations we do not consider such a behavior . this difference in the proportion of systems at lower redshifts could explain our higher amplitudes as compared to the ones of ref . . although the spectra ( signals ) shown in fig . [ fig1 ] are below the sensitivity curves of the detectors , it could well be possible to detect them by correlating the outputs of two or more detectors . for the correlation of two interferometers , the detectability of a given signal can be quantified by means of the so - called signal - to - noise ratio ( s / n ) , namely : where and are the spectral noise densities , is the integration time , and is the overlap reduction function , which depends on the relative positions , spatial orientation , and distances of the detectors ; and is given by ( [ omega ] ) . in table [ tab1 ] one can see the s / n for the three families of compact binaries , in particular for pairs of aligos and et .

system & aligo & et
nsns & &
bhns & &
bhbh & &
[ tab1 ]

from table [ tab1 ] , one notices that et could in principle detect the backgrounds , with the spectrum generated by bhns systems having the highest probability of detection ; for pairs of aligos , the low values of the s / n ratio indicate a non - detection . in this paper , we calculate the stochastic background of gravitational waves generated by coalescing compact binaries , using a new method developed in our previous studies . we show that , of the three spectra considered in this paper , the one generated by bhns systems has the highest amplitudes , while the background by bhbh systems shows the lowest amplitudes . moreover , one notices slight differences in the forms of the spectra , which are due to the different methods used to calculate them by means of ( [ bg ] ) . we found that the backgrounds calculated here would not be detected by interferometric detectors such as ligo and et , although thanks to the cross - correlation of signals et could , in principle , detect such signals . particularly , we found that the spectrum generated by bhns systems has the highest s / n ratio , while the one corresponding to bhbh systems presents the lowest s / n . concerning the dependence of our results on the parameters used in the calculations , we found that the masses of the components of the binaries , as well as and , do not strongly influence the backgrounds . besides , a particular choice for the ns eos does not affect the results either . we compared the spectra studied here with some interesting results found in the literature . one notices similarities in their shapes , namely : maximum frequencies of and maximum amplitudes in the range .
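the detectability estimate above can be reproduced numerically with a short sketch . we use a commonly quoted form of the cross - correlation signal - to - noise integral , snr^2 = ( 9 h_0^4 / 50 pi^4 ) t \int df gamma^2 ( f ) omega^2 ( f ) / [ f^6 p_1 ( f ) p_2 ( f ) ] , which we believe corresponds to the expression quoted above ; the noise spectral densities , overlap reduction function , and background spectrum used below are crude placeholders , not the actual detector curves or the spectra computed in this paper .

```python
# sketch of the cross-correlation snr integral; all spectra below are toy placeholders.
import numpy as np

H0 = 2.2e-18          # hubble constant in 1/s (~67 km/s/mpc)
T = 3.15e7            # one year of integration, in seconds

f = np.linspace(10.0, 1000.0, 5000)            # hz
omega_gw = 1e-9 * (f / 100.0)**(2.0 / 3.0)     # placeholder power-law background
gamma = np.exp(-f / 300.0)                     # placeholder overlap reduction function
P1 = P2 = 1e-47 * (1.0 + (100.0 / f)**4)       # placeholder noise spectral densities, 1/hz

integrand = gamma**2 * omega_gw**2 / (f**6 * P1 * P2)
snr = np.sqrt(9.0 * H0**4 / (50.0 * np.pi**4) * T * np.trapz(integrand, f))
print(f"cross-correlation snr (toy numbers) ~ {snr:.3g}")
```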
roughly speaking , these characteristics are common to the three families of binaries . on the other hand , our amplitudes given in terms of are in general higher than the ones found in the literature by roughly one order of magnitude . we concluded that such a difference is mainly due to the assumed population characteristics . therefore , generally speaking , the spectra are model dependent . efde would like to thank capes for support and jcna would like to thank fapesp and cnpq for partial support . finally , we thank the referee for the careful reading of the paper , the criticisms , and the very useful suggestions which greatly improved our paper . in this appendix , we present the main steps of the derivation of . for further details , we refer the reader to refs . . we derive by means of an analogy with a problem of statistical mechanics . in this problem , the aim is to calculate the number of particles that reach a given area in a time interval , i.e. , the objective is to calculate the flux of particles . basically , this flux is calculated by counting the particles inside the volume , adjacent to the area , that are moving towards with velocity , where obeys a distribution function . hence , the flux is obtained by integrating over all the positive values of . with some modifications , this method can be used to determine . first , we substituted the spatial coordinate by the frequency and the velocity by the time variation of the frequency , which is defined by . therefore , the number of systems in the interval adjacent to a particular frequency is given by considering that the distribution gives the number of systems which have in the interval , the number of systems in and with values of in the interval is given by notice that the denominator of the term between parentheses in ( [ back2 ] ) is the total number of systems . now , using the function given by ( [ xi ] ) and changing the differential by means of the chain rule , ( [ dmu ] ) assumes the form where , and are the masses of the components of the system and is the initial frequency . so , carrying out a change of variables via , where was associated with the variable in , one has . finally , it would be necessary to perform a further coordinate transformation in order to write as a function of the emitted frequency . such a transformation , calculated by means of , is trivial . besides , is written as a function of by means of kepler s third law . | gravitational waves are perturbations in the spacetime that propagate at the speed of light . the study of such a phenomenon is interesting because many cosmological processes and astrophysical objects , such as binary systems , are potential sources of gravitational radiation and can have their emissions detected in the near future by the next generation of interferometric detectors . concerning the astrophysical objects , an interesting case is when there are several sources emitting in such a way that there is a superposition of signals , resulting in a smooth spectrum which spans a wide range of frequencies , the so - called stochastic background . in this paper , we are concerned with the stochastic backgrounds generated by compact binaries ( i.e. binary systems formed by neutron stars and black holes ) in the coalescing phase . in particular , we obtain such backgrounds by employing a new method developed in our previous studies .
the work to be presented in this article lies at the boundary between physics and electrical engineering and aims at the experimental implementation of practical hardware technology for quantum computation .at this point in time , one of the main objectives in the quantum computing community is to build and prototype tools for scalable architectures that may lead to the realization of a universal quantum computer . in this project, we undertake the task of implementing an extensible wiring method for the operation of a quantum processor based on solid - state devices , e.g. , superconducting qubits .possible experimental solutions based on wafer bonding techniques or coaxial through - silicon vias as well as theoretical proposals have recently addressed the wiring issue , highlighting it as a priority for quantum computing .building a universal quantum computer will make it possible to execute quantum algorithms , which would have profound implications on scientific research and society . for a quantum computer to be competitive with the most advanced classical computer, it is widely believed that the qubit operations will require error rates on the order of or less .achieving such error rates is only possible by means of quantum error correction ( qec ) algorithms , which allow for the implementation of fault tolerant operations between logical qubits .a logical qubit is realized as an ensemble of a large number of physical qubits ( on the order or larger ) , where each physical qubit behaves as an effective quantum - mechanical two - level system . among various qec algorithms ,the most practical at present is the surface code algorithm .the surface code requires only nearest - neighbor interactions between physical qubits on a two - dimensional lattice , one- and two - qubit physical gates , and qubit measurement with error rates below approximately .both fast gates and measurements are indispensable to run quantum algorithms efficiently ; execution times on the order of tens of nanoseconds for physical operations ( e.g. , a gate or measurement operation ) are highly desirable and represent the state - of - the - art in qubit technology today . under these conditions , to factorize a -bit number using shor s algorithm requires more than million physical qubits ( i.e. , approximately logical qubits storing meaningful data ; note that these logical qubits occupy less than of the quantum computer , the remainder of which is used to generate special states facilitating computation ) , with an overall computation time of .quantum computing architectures can be implemented using photons , trapped ions , spins in molecules and quantum dots , spins in silicon , and superconducting quantum circuits .the last are leading the way for the realization of the first surface code logical qubit , which is one of the priorities in the quantum computing community at present . recently, several experiments based on superconducting quantum circuits have demonstrated the principles underlying the surface code .these works have shown a complete set of physical gates with fidelities beyond the surface code threshold , the parity measurements necessary to detect quantum errors , and have realized a classical version of the surface code on a one - dimensional array of nine physical qubits .notably , the planar design inherent to the superconducting qubit platform will make it possible to implement large two - dimensional qubit arrays , as required by the surface code . 
despite all these accomplishments ,a truly scalable qubit architecture has yet to be demonstrated .wiring is one of the most basic unsolved scalability issues common to most solid - state qubit implementations , where qubit arrays are fabricated on a chip .the conventional wiring method based on wire bonding suffers from fundamental scaling limitations as well as mechanical and electrical restrictions .wire bonding relies on bonding pads located at the edges of the chip .given a two - dimensional lattice of physical qubits on a square chip , the number of wire bonds that can be placed scales approximately as ( bonds for each chip side ) .wire bonding will thus never be able to reach the required law according to which physical qubits scale on a two - dimensional lattice .furthermore , for large , wire bonding precludes the possibility of accessing physical qubits in the center region of the chip , which is unacceptable for a physical implementation of the surface code . in the case of superconducting qubits , for example , qubit control and measurement are typically realized by means of microwave pulses or , in general , pulses requiring large frequency bandwidths . by their nature, these pulses can not be reliably transmitted through long sections of quasi - filiform wire bonds .in fact , stray capacitances and inductances associated with wire bonds as well as the self - inductance of the bond itself limit the available frequency bandwidth , thus compromising the integrity of the control and measurement signals .additionally , the placement of wire bonds is prone to errors and inconsistency in spacing . in this work, we set out to solve the wiring bottleneck common to almost all solid - state qubit implementations .our solution is based on suitably packaged _ three - dimensional micro wires _ that can reach any area on a given chip from above .we define this wiring system as the _ quantum socket_. the wires are coaxial structures consisting of a spring - loaded inner and outer conductor with diameters of and , respectively , at the smallest point and with a maximum outer diameter of .the movable section of the wire is characterized by a maximum stroke of approximately , allowing for a wide range of on - chip mechanical compression .all wire components are non magnetic , thereby minimizing any interference with the qubits .the three - dimensional wires work both at room temperature and at cryogenic temperatures as low as .the wires test - retest reliability is excellent , with marginal variability over hundreds of measurements .their electrical performance is good from dc to at least , with a contact resistance smaller than and an instantaneous impedance mismatch of approximately .notably , the coaxial design of the wires strongly reduces unwanted crosstalk , which we measured to be at most for a realistic quantum computing application . in a recent work ,seven sequential stages necessary to the development of a quantum computer were introduced . at this time , the next stage to be reached is the implementation of a single logical qubit characterized by an error rate that is at least one order of magnitude lower than that of the underlying physical qubits . in order to achieve this task , a two - dimensional lattice of physical qubits with an error rate of at most is required . 
in the case of superconducting qubitssuch a lattice can be realized on a chip area of ( the largest square that can be diced from a standard inch wafer ) and wired by means of a quantum socket .it is feasible to further miniaturize the three - dimensional wires so as to achieve a wire density of .this would allow the manipulation of physical qubits and , possibly , the realization of simple fault tolerant operations .furthermore , the wires could serve as interconnect between a quantum hardware layer fabricated on one chip and a classical hardware layer realized on a separate chip , just above the quantum layer .the classical hardware would be used to manipulate the qubits and could be implemented by means of rapid single - flux - quantum ( rsfq ) digital circuitry .the implications of our cryogenic micro - wiring method go beyond quantum computing applications , providing a useful addition to the packaging industry for research applications at low temperatures . with this work ,we demonstrate that the laborious and error - prone wire bonding technique can be substituted by the simple procedure of inserting the chip into a sample box equipped with three - dimensional wires .our concept will expedite sample packaging and , thus , experiment turn over even for research not directly related to quantum information .this research article is organized as follows . in sec .[ the : quantum : socket : design ] , we introduce the quantum socket design , with special focus on the three - dimensional wires ( cf . subsec .[ three : dimensional : wires ] ) , the microwave package ( cf . subsec .[ microwave : package ] ) , and the package holder ( cf .[ package : holder ] ) . in the same section ,we show microwave simulations of a bare three - dimensional wire , of a wire connected to a pad on a chip , and of an entire microwave package ( cf .[ microwave : simulations ] ) . in sec .[ the : quantum : socket : implementation ] , we show the physical implementation of all components described in sec .[ the : quantum : socket : design ] . in this section ,we describe the materials used in the quantum socket and show how the socket is assembled .in particular , we describe the magnetic and thermal properties of the quantum socket components ( cf .[ magnetic : properties ] and subsec .[ thermal : properties ] , respectively ) as well as the spring characterization ( cf .[ spring : characterization ] ) .moreover , we discuss in detail the quantum socket alignment procedure ( cf . subsec .[ alignment ] ) . in sec .[ characterization ] , we present a variety of measurements used to characterize the quantum socket operation .we first focus on a four - port measurement used to estimate a wire contact resistance ( cf .[ four : point : measurements ] ) .we then show a series of microwave measurements on samples with different geometries and materials , both at room temperature and at .these measurements comprise two - port scattering parameter ( s - parameter ) experiments ( cf .[ two : port : scattering : parameters ] ) , time - domain reflectometry ( tdr ) analysis ( cf . subsec .[ time : domain : reflectometry ] ) , and signal crosstalk tests ( cf .[ signal : crosstalk ] ) . in sec .[ applications : to : superconducting : resonators ] , we show an application of the quantum socket relevant to superconducting quantum computing , where the socket is used to measure aluminum ( al ) superconducting resonators at a temperature of approximately . 
finally , in sec .[ conclusions ] , we envision an extensible quantum computing architecture where a quantum socket is used to connect to a lattice of superconducting qubits and comment on the possibility to use the socket in conjunction with rsfq electronics .the development of the quantum socket required a stage of meticulous micro - mechanical and microwave design and simulations .it was determined that a spring - loaded interconnect the three - dimensional wire was the optimal method to electrically access devices lithographically fabricated on a chip and operated in a cryogenic environment .an on - chip contact pad geometrically and electrically matched to the bottom interface of the wire can be placed easily at any desired location on the chip as part of the fabrication process , thus making it possible to reach any point on a two - dimensional lattice of qubits .the contact pad is realized as a thin metallic film deposited on a dielectric substrate ; arbitrary film thicknesses can be deposited .the rest of the sample is fabricated on chip following a similar process .the coaxial design of the wire provides a wide operating frequency bandwidth , while the springs allow for mechanical stress relief during the cooling process .the three - dimensional wires used in this work take advantage of the knowledge in the existing field of microwave circuit testing . however , reducing the wire dimensions to a few hundred micrometers and using it to connect to quantum - mechanical micro - fabricated circuits at low temperatures resulted in a significant extension of existing implementations and applications . in this section , we will describe the design of the three - dimensional wires , of the microwave package used to house the wires , and of the microwave package holder .additionally , we will show a set of microwave simulations of the main components of the quantum socket .figure [ figure01:bejanin ] shows the design of the quantum socket components .figure [ figure01:bejanin ] ( a ) displays a model of a three - dimensional wire .the coaxial design of the wire is visible from the image , which features a wire long when uncompressed .the wire is characterized by an inner cylindrical pin of diameter and an outer cylindrical body ( the electrical ground ) of diameter at its narrowest region ; this region is the bottommost section of the wire and , hereafter , will be referred to as the wire _ contact head _ ( cf . the inset of fig . [ figure01:bejanin ] ( a ) , as well as the dashed box on the left of fig . 
[ figure02:bejanin ] ( a ) ) .the contact head terminates at the wire _bottom interface _ ; this interface is designed to mate with a pad on a chip ( cf .[ figure02:bejanin ] ( b ) and ( c ) ) .the outer body includes a rectangular aperture , the _ tunnel _ , to prevent shorting the inner conductor of an on - chip coplanar waveguide ( cpw ) transmission line ; the transmission line can connect the pad with any other structure on the chip .two different tunnel dimensions were designed , with the largest one reducing potential alignment errors .these errors can result in undesired short - circuit connections to ground .the tunnel height was in both cases , with a width of or .the internal spring mechanisms of the wire allow the contact head to be compressed ; the maximum stroke was designed to be , corresponding to a working stroke of .the outer body of the three - dimensional wire is an m male thread used to fix the wire to the lid of the microwave package ( cf .[ figure01:bejanin ] ( b ) and ( d ) ) .the thread is split into two segments of length and that are separated by a constriction with outer diameter .the constriction is necessary to assemble and maintain in place the inner components of the three - dimensional wire .a laser - printed marker is engraved into the top of the outer body .the marker is aligned with the center of the tunnel , making it possible to mate the wire bottom interface with a pad on the underlying chip with a high degree of angular precision .the grooves partially visible on the bottom of fig .[ figure01:bejanin ] ( a ) are used to torque the wire into a thread in the package s lid .figure [ figure02:bejanin ] ( a ) shows a lateral two - dimensional cut view of the three - dimensional wire .two of the main wire components are the inner and outer barrel , which compose part of the inner and outer conductor .the inner conductor barrel is a hollow cylinder with outer and inner diameters of and ( indicated as part iv in fig . [ figure02:bejanin ] ( a ) ) , respectively .this barrel encapsulates the inner conductor spring . the outer conductor barrel is a hollow cylinder as well , in this case with an inner diameter of ( parts ii and vii ) .three polytetrafluoroethylene ( ptfe ) disks serve as spacers between the inner and outer conductor ; such disks contribute marginally to the wire dielectric volume , the majority of which is air or vacuum . 
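as a quick check on the coaxial cross section just described , the standard formula z_0 = ( 59.96 / sqrt ( eps_r ) ) ln ( b / a ) ohm relates the characteristic impedance to the inner - conductor diameter a and the inner diameter b of the outer conductor . the sketch below evaluates it ; the diameters and effective permittivities are illustrative assumptions ( the wire dielectric is mostly vacuum , with thin ptfe spacers ) , not the exact design values .

```python
# sketch of the standard coaxial-line impedance formula; dimensions are assumptions.
import math

def coax_impedance(b_outer_um, a_inner_um, eps_r=1.0):
    """characteristic impedance (ohm) of a coaxial line with the given diameters."""
    return 59.96 / math.sqrt(eps_r) * math.log(b_outer_um / a_inner_um)

# assumed diameters in micrometres: a diameter ratio of ~2.3 gives ~50 ohm in vacuum
print(f"vacuum section : {coax_impedance(580, 250):.1f} ohm")
# the same geometry fully filled with ptfe (eps_r ~ 2.1) would drop well below 50 ohm,
# which is one reason the dielectric spacers are kept thin
print(f"ptfe-filled    : {coax_impedance(580, 250, eps_r=2.1):.1f} ohm")
```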
the outer spring is housed within the outer barrel towards its back end , just before the last ptfe disk on the right - hand side of the wire .the _ back end _ of the wire is a region comprising a female thread on the outer conductor and an inner conductor barrel ( cf .dashed box on the right - hand side of fig .[ figure02:bejanin ] ( a ) ) .the inner conductor tip is characterized by a conical geometry with an opening angle of .such a sharp design was chosen to ensure that the tip would pierce through any possible oxide layer forming on the contact pad metallic surface , thus allowing for a good electrical contact .figure [ figure02:bejanin ] ( c ) shows the design of a typical on - chip pad used to make contact with the bottom interface of a three - dimensional wire .the pad comprises an inner and outer conductor , with the outer conductor being grounded .the pad in the figure was designed for a silver ( ag ) film of thickness .a variety of similar pads were designed for gold ( au ) and al films with thickness ranging between approximately and .the pad inner conductor is a circle with diameter that narrows to a linear trace ( i.e. , the inner conductor of a cpw transmission line ) by means of a raised - cosine taper .the raised cosine makes it possible to maximize the pad area , while minimizing impedance mismatch .as designed , the wire and pad allow for lateral and rotational misalignment of and , respectively .the substrate underneath the pad is assumed to be silicon ( si ) with a relative permittivity .the dielectric gap between the inner and outer conductor is in the circular region of the pad ; the outer edge of the dielectric gap then follows a similar raised - cosine taper as the inner conductor .the pad characteristic impedance is designed to be .the microwave package comprises three main parts : the lid ; the sample holder ; the grounding washer .the package is a parallelepiped with a height of and with a square base of side length .the chip is housed inside the sample holder .all these components mate as shown in fig .[ figure01:bejanin ] ( b ) and ( c ) . in order to connect a three - dimensional wire to a device on a chip , the wire is screwed into an m female thread that is tapped into the lid of the microwave package , as depicted in fig .[ figure01:bejanin ] ( b ) .the pressure applied by the wire to the chip is set by the depth of the wire in the package .the wire stroke , thread pitch , and alignment constraints impose discrete pressure settings ( cf . appendix [ wire : compression ] ) . in the present implementation of the quantum socket ,the lid is designed to hold a set of six three - dimensional wires , which are arranged in two parallel rows . in each row , the wires are spaced by from center to center , with the two rows being separated by a distance of .a square chip of lateral dimensions is mounted in the sample holder in a similar fashion as in ref .the outer edges of the chip rest on four protruding lips , which are wide .hereafter , those lips will be referred to as the _ chip recess_. for design purposes , a chip thickness of is assumed .correspondingly , the chip recess is designed so that the top of the chip is above the adjacent surface of the chip holder , i.e. , the depth of the recess is ( cf . 
fig .[ figure01:bejanin ] ( c ) ) .the outer edges of the chip are pushed on by a spring - loaded grounding washer .the chip protrusion ensures a good electrical connection between chip and washer , as shown in fig .[ figure01:bejanin ] ( c ) .the grounding washer was designed to substitute the large number of lateral bonding wires that would otherwise be required to provide a good ground to the chip ( as shown , for example , in fig .6 of ref .the washer springs are visible in fig . [ figure01:bejanin ] ( b ) , which also shows a cut view of the washer .the washer itself is electrically grounded by means of the springs as well as through galvanic connection to the surface of the lid .the four feet of the washer , which can be seen in the cut view of fig .[ figure01:bejanin ] ( b ) , can be designed to be shorter or longer .this makes it possible to choose different pressure settings for the washer . after assembling the package ,there exist two electrical cavities ( cf .[ figure01:bejanin ] ( d ) ) : one above the chip , formed by the lid , washer , and metallic surface of the sample ( _ upper cavity _ ) , and one below the chip , formed by the sample holder and metallic surface of the sample ( _ lower cavity _ ) .the hollow cavity above the sample surface has dimensions .the dimensions of the cavity below the sample are .the lower cavity helps mitigate any parasitic capacitance between the chip and the box ( ground ) .additionally , it serves to lower the effective permittivity , increasing the frequency of the substrate modes ( cf .[ microwave : simulations ] ) .a pillar of square cross section with side length of is placed right below the chip at its center ; the pillar touches the bottom of the chip , thus providing mechanical support .the impact of such a pillar on the microwave performance of the package will be described in subsec .[ microwave : simulations ] .a channel with a cross - sectional area of connects the inner cavities of the package to the outside , thus making it possible to evacuate the inner compartments of the package .this channel meanders to prevent external electromagnetic radiation from interfering with the sample .the three - dimensional wires , which are screwed into the microwave package , must be connected to the qubit control and measurement electronics .in addition , for cryogenic applications , the package must be thermally anchored to a refrigeration system in order to be cooled to the desired temperature .figure [ figure01:bejanin ] ( d ) shows the mechanical module we designed to perform both electrical and thermal connections . 
in this design , each three - dimensional wire is connected to a _ screw - in micro connector _ , which is indicated by an arrow in fig . [ figure01:bejanin ] ( b ) and is shown in detail in fig . [ figure02:bejanin ] ( d ) . one end of the micro connector comprises a male thread and an inner conductor pin that mate with the back end of the three - dimensional wire . the other end of the micro connector is soldered to a coaxial cable . the micro connector is necessary because the high temperatures generated by soldering a coaxial cable directly to the wire back end would damage some of the inner wire components . the end of each coaxial cable opposite to the three - dimensional wire is soldered to a sub - miniature push - on ( smp ) connector . the smp connectors are bolted to a horizontal plate attached to the microwave package by means of two vertical fixtures , as shown in fig . [ figure01:bejanin ] ( d ) . the vertical fixtures and the horizontal plate constitute the package holder . the package holder and microwave package form an independent assembly . a horizontal mounting plate , designed to interface with the package holder , houses a set of matching smp connectors . the mounting plate is mechanically and , thus , thermally anchored to the mixing chamber ( mc ) stage of a dilution refrigerator ( dr ) . this design significantly simplifies the typical mounting procedure of a sample box to a cryostat , since the package holder and microwave package can be conveniently assembled remotely from the dr and attached to it just prior to commencing an experiment . the three - dimensional wires , the transition between the wire and the on - chip pad as well as the inner cavities of the fully - assembled microwave package were extensively simulated by means of the high frequency three - dimensional full - wave electromagnetic field simulation software ( hfss ) by ansys , inc . the results for the electromagnetic field distribution at a frequency of approximately , which is a typical operation frequency for superconducting qubits , are shown in fig . [ figure03:bejanin ] . figure [ figure03:bejanin ] ( a ) shows the field behavior for a bare three - dimensional wire . the field distribution resembles that of a coaxial transmission line except for noticeable perturbations at the dielectric ptfe spacers . figure [ figure03:bejanin ] ( b ) shows the transition region . this is a critical region for signal integrity since abrupt changes in physical geometry cause electrical reflections . in order to minimize such reflections , an impedance - matched pad was designed . however , this leads to a large electromagnetic volume in proximity of the pad , as seen in fig . [ figure03:bejanin ] ( b ) , possibly resulting in parasitic capacitance and crosstalk .

lower cavity configuration & first mode & second mode & third mode
vacuum & & &
vacuum with pillar & & &
si & & &
si with pillar & & &
[ table01:bejanin ]

in addition to considering the wire and the transition region , the electrical behavior of the inner cavities of the package was studied analytically and simulated numerically . as described in subsec . [ microwave : package ] , the metallic surface of the chip effectively divides the cavity of the sample holder into two regions : a vacuum cavity above the metal surface and a cavity partially filled with dielectric below the metal surface . the latter is of greatest concern as the dielectric acts as a perturbation to the cavity vacuum , thus lowering the box modes .
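a rough numerical illustration of this mode lowering is the following sketch , which evaluates the familiar resonance formula for an empty rectangular cavity and then applies a simple quasi - static dielectric correction ; the correction is an assumption made here only for illustration ( the analytical estimate actually used is recalled in the next paragraph ) , and the dimensions and permittivity are placeholder values rather than the exact package dimensions .

```python
# sketch: lowest mode of an (empty) rectangular cavity and a crude estimate of how a
# silicon slab covering it lowers that mode. the quasi-static correction
# f ~ f0 / sqrt(1 + (eps_r - 1) * d / h) is an illustrative assumption, and the
# dimensions below are placeholders, not the actual package dimensions.
import math

c = 2.998e8          # speed of light, m/s

def rectangular_cavity_mode(a, b, h, m=1, n=1, p=0):
    """resonance frequency (hz) of the (m, n, p) mode of an a x b x h cavity."""
    return 0.5 * c * math.sqrt((m / a)**2 + (n / b)**2 + (p / h)**2)

a = b = 15e-3        # assumed lateral cavity dimensions, m
h = 3e-3             # assumed cavity height, m
d = 0.55e-3          # assumed substrate thickness, m
eps_r = 11.45        # relative permittivity of silicon

f0 = rectangular_cavity_mode(a, b, h)
f_loaded = f0 / math.sqrt(1.0 + (eps_r - 1.0) * d / h)
print(f"empty cavity : {f0 / 1e9:.1f} ghz, with si slab (quasi-static) : {f_loaded / 1e9:.1f} ghz")
```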
for a simple rectangular cavity, the frequency of the first mode due to this perturbation can be found as , where is the frequency of the unperturbed mode , the relative permittivity of the dielectric , the substrate thickness , and the cavity height . from eq .( [ equation:01 ] ) , we estimated this box mode to be . however , considering the presence of the pillar , the three - dimensional wires , etc . , we had to use numerical simulations to obtain a more accurate estimate of the lowest box modes .the results for the first three modes are reported in table [ table01:bejanin ] .discounting the pillar , the analytical and simulated values are in good agreement with each other .the addition of the support pillar significantly lowers the frequency of the modes .in fact , it increases the relative filling factor of the cavity by confining more of the electromagnetic field to the dielectric than to vacuum .given the dimensions of this design , the pillar leads to a first mode which could interfere with typical qubit frequencies . in spite of this ,the pillar was included in the final design in order provide a degree of mechanical support .note that the pillar can alternatively be realized as a dielectric material , e.g. , ptfe ; a dielectric pillar would no longer cause field confinement between the top surface of the pillar and the metallic surface of the chip .the physical implementation of the main components of the quantum socket is displayed in fig .[ figure04:bejanin ] .in particular , fig . [ figure04:bejanin ] ( a ) shows a macro photograph of a three - dimensional wire .the inset shows a scanning electron microscope ( sem ) image of the wire contact head , featuring the version of the tunnel .this wire was cycled approximately ten times ; as a consequence , the center conductor of the contact head , which had a conical , sharp shape originally , flattened at the top .the metallic components of the wire were made from bronze and brass ( cf .[ magnetic : properties ] ) , and all springs from hardened beryllium copper ( becu ) . except for the springs ,all components were gold plated without any nickel ( ni ) adhesion underlayer .the estimated mean number of cycles before failure for these wires is approximately .figure [ figure04:bejanin ] ( b ) displays the entire microwave package in the process of locking the package lid and sample holder together , with a chip and grounding washer already installed .as shown in the figure , two rows of three - dimensional wires , for a total number of six wires , are screwed into the lid with pressure settings as described in appendix [ wire : compression ] ; each wire is associated with one on - chip cpw pad .the four springs that mate with the grounding washer feet are embedded in corresponding recesses in the lid ; the springs are glued in these recesses by way of vibra - tite ( from nd industries , inc . ) , a medium - strength thread locker that works well at low temperatures .all package components were made from high - purity al .figure [ figure04:bejanin ] ( c ) shows a picture of the assembled microwave package attached to the package holder ; the entire structure is attached to the mc stage of a dr .all parts of the assembly were made from high thermal conductivity c10100 oxygen - free electrolytic ( ofe ) copper alloy . 
] .the parts were polished to a mirror finish before being gold plated .the coaxial cables between the screw - in micro connectors and the smp connectors are from the ez form cable corporation , model ez 47-cu - tp ( ez 47 ) .the smp connectors , also from ez form , are models smp bulkhead jack for inch coaxial cables ( smp 047 ; installed in the package holder horizontal plate ) and smp bulkhead plug with limited detent for inch cables ( smp 086 ; installed in the mounting plate attached to the mc stage of the dr ) .all smp connectors were custom - made non - magnetic connectors . in the remainder of this section , we will discuss the magnetic and thermal properties of the materials used to implement the quantum socket as well as the spring characterization and the alignment procedure .an important stage in the physical implementation of the quantum socket was the choice of materials to be used for the three - dimensional wires , the microwave package , and the package holder .in fact , it has been shown that non - magnetic components in proximity of superconducting qubits are critical to preserve long qubit coherence .the three - dimensional wires are the closest devices to the qubits .for this reason , all their components should be made using non - magnetic materials .due to machining constraints , however , alloys containing ferromagnetic impurities ( iron ( fe ) , cobalt ( co ) , and ni ) had to be used . for the outer conductor components we used brass , which is easy to thread ; the chosen grade was iso cuzn21si3p ( en cw724r ) . for the inner conductor components ,brass cw724r did not meet the machining requirements .consequently , we decided to use copper alloy ( phosphor bronze ) grade din 2.1030 - cusn8 ( en cw453k ) .the chemical composition for these two materials is reported in table [ table04:bejanin ] of appendix [ the : quantum : socket : magnetism ] . the dielectric spacers were made from ptfe and the rest of the components from hardened becu ; both materials are non - magnetic .the weight percentage of ferromagnetic materials is non - negligible for both cw453k and cw724r .thus , we performed a series of tests using a zero gauss chamber ( zgc ) in order to ensure both materials were sufficiently non - magnetic .the results are given in appendix [ the : quantum : socket : magnetism ] and show that the magnetic impurities should be small enough not to disturb the operation of superconducting quantum devices .the microwave package and grounding washer were made from high - purity al alloy 5n5 ( purity ) provided by laurand associates , inc .the very low level of impurities in this alloy assures minimal stray magnetic fields generated by the package itself , as confirmed by the magnetic tests discussed in appendix [ the : quantum : socket : magnetism ] .the thermal conductance of the three - dimensional wires is a critical parameter to be analyzed for the interconnection with devices at cryogenic temperatures .low thermal conductivity would result in poor cooling of the devices , which , in the case of qubits , may lead to an incoherent thermal mixture of the qubit ground state and excited state .even a slightly mixed state would significantly deteriorate the fidelity of the operations required for qec .it has been estimated that some of the qubits in the experiment of ref . , which relies solely on al wire bonds as a means of thermalization , were characterized by an excited state population . 
among other possible factors , it is believed that this population was due to the poor thermal conductance of the al wire bonds . in fact , these bonds become superconductive at the desired qubit operation temperature of , preventing the qubits from thermalizing and , thus , from being initialized in with high fidelity . in order to compare the thermal performance of an al wire bond with that of a three - dimensional wire , we estimated the heat transfer rate per kelvin of a wire , , using a simplified coaxial geometry . at a temperature of , we calculated . at the same temperature , the heat transfer rate per kelvin of a typical al wire bond was estimated to be ( cf . appendix [ thermal : conductance : of : a : three : dimensional : wire ] for more details ) . a very large number of al wire bonds would thus be required to obtain a thermal performance comparable to that of a single three - dimensional wire . another critical step in the physical implementation of the quantum socket was to select springs that work at cryogenic temperatures . in fact , the force that a wire applies to a chip depends on these springs . this force , in turn , determines the wire - chip contact resistance , which impacts the socket s dc and microwave performance . among various options , we chose custom springs made from hardened becu . to characterize the springs , their compression was assessed at room temperature , in liquid nitrogen ( i.e. , at a temperature ) , and in liquid helium ( ) . note that a spring working at is expected to perform similarly at a temperature of . a summary of the thermo - mechanical tests is reported in appendix [ thermo : mechanical : tests ] . the main conclusion of the tests is that the springs do not break ( even after numerous temperature cycles ) and have similar spring constants at all measured temperatures . ( caption of fig . [ figure05:bejanin ] : ... and are indicated in ( a ) by means of magenta bars . ( c)-(d ) al pad before and after a cooling cycle to . center conductor dragging due to cooling is indicated by a green bar . the magenta dashed line in ( a ) indicates tunnel ( i.e. , rotational ) alignment for the ag pad . the images ( dark field ) were taken with an olympus mx61 microscope at magnification , manual exposure , and exposure time . note that the geometries for the pads in panels ( a ) and ( b ) are optimized for a ag film and , thus , are slightly different from those for the pads in panels ( c ) and ( d ) , which are designed for an al film . ) in order to implement a quantum socket with excellent interconnectivity properties , it was imperative to minimize machining errors and mitigate the effects of any residual errors . these errors are mainly due to : dicing tolerances ; tapping tolerances of the m - threaded holes of the lid ; tolerances of the mating parts for the inner cavities of the lid and sample holder ; tolerances of the chip recess . these errors can cause both lateral and rotational misalignment and become likely worse when cooling the quantum socket to low temperatures . more details on alignment errors can be found in appendix [ alignment : errors ] . the procedure to obtain an ideal and repeatable alignment comprises three main steps : optimization of the contact pad and tunnel geometry ; accurate and precise chip dicing ; accurate and precise package machining .
for the quantum socket described in this work, the optimal tunnel width was found to be .this maintained reasonable impedance matching , while allowing greater cpw contact pad and tapering dimensions .the contact pad width and taper length were chosen to be and .these are the maximum dimensions allowable that accommodate the geometry of the wire bottom interface for a nominal lateral and rotational misalignment of and , respectively . in order to select the given pad dimensions, we had to resort to a matched raised - cosine tapering .the majority of the chips used in the experiments presented here was diced with a dicing saw from the disco corporation , model dad3240 . to obtain a desired die length ,both the precision of the saw stage movement and the blade s kerf had to be considered .for the dad3240 saw , the former is , whereas the latter changes with usage and materials . for the highest accuracy cut, we measured the kerf on the same type of wafer just prior to cutting the actual die . in order to achieve maximum benefit from the saw ,rotational and lateral alignment dicing markers were incorporated on the wafer .such a meticulous chip dicing procedure is only effective in conjunction with a correspondingly high level of machining accuracy and precision .we used standard computer numerical control ( cnc ) machining with a tolerance of thou ( ) , although electrical discharge machining can be pursued if more stringent tolerances are required .following the aforementioned procedures we were able to achieve the desired wire - pad matching accuracy and precision , which resulted in a test - retest reliability ( repeatability ) of over instances .these figures of merit were tested in two steps : first , by micro imaging several pads that were mated to a three - dimensional wire ( cf .[ on : chip : pad : micro : imaging ] ) ; second , by means of dc resistance tests ( cf .[ dc : resistance : tests ] ) .micro imaging was performed on a variety of different samples , four of which are exemplified in fig .[ figure05:bejanin ] .the figure shows a set of micro images for ag and al pads ( details regarding the fabrication of these samples are available in appendix [ sample : fabrication ] ) .figure [ figure05:bejanin ] ( a ) and ( b ) show two ag pads that were mated with the three - dimensional wires at room temperature .panel ( a ) shows a mating instance where the wire bottom interface perfectly matched the on chip pad .panel ( b ) shows two mating instances that , even though not perfectly matched , remained within the designed tolerances .notably , simulations of imperfect mating instances revealed that an off - centered wire does not significantly affect the microwave performance of the quantum socket .finally , panels ( c ) and ( d ) display two al pads which were both mated with a wire one time .while the pad in ( c ) was operated only at room temperature , the pad in ( d ) was part of an assembly that was cooled to for approximately three months .the image was taken after the assembly was cycled back to room temperature and shows dragging of the wire by a few tens of micrometers .such a displacement can likely be attributed to the difference in the thermal expansion of si and al ( cf . appendix [ alignment : errors ] ) . 
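the magnitude of this cooldown - induced dragging can be estimated from the differential thermal contraction of the aluminum package and the silicon chip between room temperature and cryogenic temperatures . the sketch below uses ballpark literature values for the integrated contractions ( roughly 0.4 % for al and 0.02 % for si ) and an assumed length scale of half the chip size ; these numbers are approximate assumptions , not values taken from the measurements .

```python
# sketch: differential thermal contraction between the aluminium package and the
# silicon chip from ~295 k to ~4 k. the integrated contractions are ballpark literature
# values and the length scale is an assumption (half of a ~15 mm chip).
dl_over_l_al = 4.1e-3     # total contraction of aluminium, approximate
dl_over_l_si = 0.2e-3     # total contraction of silicon, approximate
length = 7.5e-3           # assumed relevant length scale, m

displacement = length * (dl_over_l_al - dl_over_l_si)
print(f"expected differential displacement ~ {displacement * 1e6:.0f} micrometres")
```

the result is of order a few tens of micrometres , consistent with the dragging observed in the micro images .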
as a diagnostic tool , micro images of a sample already mounted in the sample holder after a mating cycle can be obtained readily by means of a handheld digital microscope .

metal & & & & & & & & & & &
au & & & & & & & & & & &
au & & & & & & & & & & &
au & & & & & & & & & & &
au & & & & & & & & & & &
au & & & & & & & & & & &
au & & & & & & & & & & &
ag & & & & & & & & & & &
al & & & & & & & & & & &
[ table02:bejanin ]

in contrast to the micro imaging tests , which require the removal of the microwave package s lid , dc resistance tests can be performed _ in situ _ at room temperature after the package and package holder have been fully assembled . these tests were performed on all devices presented in this work , including au , ag , and al samples . the typical test setup comprises a microwave package with two three - dimensional wires each mating with an on - chip pad . the two pads are connected by means of a cpw transmission line with series resistance . the back end of the wires is connected to a coaxial cable ending in a microwave connector , similar to the setup in figs . [ figure01:bejanin ] ( d ) and [ figure04:bejanin ] ( c ) . the dc equivalent circuit of this setup can be represented by way of a four - terminal pi network . the circuit comprises an input `` '' and output `` '' terminal , two terminals connected to a common ground `` , '' an input - output resistor with resistance , and two resistors to ground with resistance and . the and terminals correspond to the inner conductor of the two microwave connectors . the outer conductor of both connectors is grounded . the resistance is that of the center conductor of the cpw transmission line , including the contact resistance for each wire - pad interface and the series resistance of the wire s and coaxial cable s inner conductor , . the resistances and are those of the path between each center conductor and ground and include the resistance of the inner and outer conductor of the various coaxial cables and wires as well as any wire - pad contact resistance . ideally , these ground resistances should be open circuits . in reality , they are expected to have a finite but large value because of the intrinsic resistance of the si wafers used as a substrate . the design parameters , electrical properties , measurement conditions as well as the measured values of , , and for various au , ag , and al samples are reported in table [ table02:bejanin ] . the resistances were probed by means of a multimeter from the fluke corporation , model 289 . measuring resistances significantly different from the expected values means that either a lateral or rotational misalignment occurred . the resistances for some au samples were also measured at to verify whether a good room temperature alignment persisted in cryogenic conditions . the cold measurements were realized by dunking the package holder into liquid nitrogen .
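the pi - network picture introduced above , together with the simple contact - resistance bound discussed in the next paragraph , can be summarized in a few lines . the sketch below computes the two - terminal resistances a multimeter would read for given element values , and the per - contact upper bound obtained by subtracting the estimated trace resistance from the measured input - output resistance ; all numerical values are placeholders , not the measured ones .

```python
# sketch: interpreting the dc resistance tests. the pi network has a centre-trace
# resistance r_io and two resistances to ground, r_ig and r_og; a multimeter between
# two terminals reads the direct element in parallel with the series path through the
# other two elements. the example values are placeholders, not measured data.
def parallel(a, b):
    return a * b / (a + b)

def two_terminal_readings(r_io, r_ig, r_og):
    """expected multimeter readings (ohm) for the three terminal pairs of the pi network."""
    return {
        "in-out": parallel(r_io, r_ig + r_og),
        "in-gnd": parallel(r_ig, r_io + r_og),
        "out-gnd": parallel(r_og, r_io + r_ig),
    }

def contact_resistance_bound(r_measured, r_trace, r_series_per_port=0.0):
    """upper bound (ohm) on the wire-pad contact resistance per contact."""
    return (r_measured - r_trace - 2.0 * r_series_per_port) / 2.0

if __name__ == "__main__":
    print(two_terminal_readings(r_io=5.0, r_ig=2e3, r_og=2e3))
    print(f"r_contact <= {contact_resistance_bound(3.2, 2.8):.2f} ohm (toy numbers)")
```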
the measured value of for the ag samples is larger than the estimated trace resistance by .this simple result makes it possible to find an upper bound value for the contact resistance , .a more accurate estimate of the contact resistance based on four - point measurements will be described in subsec.[four : point : measurements ] .the dc resistance testing procedure presented here will be useful in integrated - circuit quantum information processing , where , for example , cpw transmission lines can serve as qubit readout lines .these tests can be expanded to encompass different circuit structures such as the qubit control lines utilized in ref .the three - dimensional wires are multipurpose interconnects that can be used to transmit signals over a wide frequency range , from dc to . these signals can be : the current bias used to tune the transition frequency of a superconducting qubit ; the gaussian - modulated sinusoidal or the rectangular pulses that , respectively , make it possible to perform xy and z control on a qubit ; the continuous monochromatic microwave tones used to read out a qubit state or to populate and measure a superconducting resonator . in general , the wires can be used to transmit any baseband modulated carrier signal within the specified frequency spectrum , at room and cryogenic temperatures . in this section ,we report experimental results for a series of measurements aiming at a complete electrical characterization of the quantum socket at room temperature and at approximately ( i.e. , in liquid nitrogen ) .first , we performed four - point measurements to estimate the contact resistance of a three - dimensional wire .second , we measured the s - parameters of a wire at room temperature .third , we measured the s - parameters of the quantum socket with an au sample at room temperature and at and an ag sample at room temperature .fourth , we realized time - domain measurements of the quantum socket .last , we performed four - port s - parameter measurements in order to assess the socket crosstalk properties .the wire - pad contact resistance is an important property of the quantum socket .in fact , a large would result in significant heating when applying dc bias signals and rectangular pulses , thus deteriorating qubit performance . in order to assess for the inner and outer conductor of a three - dimensional wire, we performed four - point measurements using the setup shown in the inset of fig .[ figure06:bejanin ] . using this setup, we were able to measure both the series resistance of the wire and the contact resistance .this allows us to estimate the overall heating that could be generated during a qubit experiment .the setup comprises a microwave package with a chip entirely coated with a thick al film ; no grounding washer was used .the package featured three three - dimensional wires , of which two were actually measured ; the third wire was included to provide mechanical stability .the package was attached to the mc stage of a dr and connected to a set of phosphor bronze twisted pairs .the twisted pairs were thermally anchored at all dr stages and connected at room temperature to a precision source - measure unit ( smu ) from keysight technologies inc . , model b2911a .we measured the resistance between the inner conductor of a wire and ground , .this resistance comprises the inner conductor wire resistance in series with the inner conductor contact resistance and any resistance to ground , . 
note that , at the operation temperature of the experiment ( ) , al is superconducting and , thus , the metal resistance can be neglected . figure [ figure06:bejanin ] shows the current - voltage ( i - v ) characteristic curve for . with increasing bias currents , the contact resistance results in hot - spot generation leading to a local breakdown of superconductivity . for sufficiently high bias currents , superconductivity breaks down completely . at such currents , the observed hysteretic behavior indicates the thermal limitations of our setup . note , however , that these currents are at least one order of magnitude larger than the largest bias current required in typical superconducting qubit experiments . in order to estimate from the i - v characteristic curve , we selected the bias current region from to and fitted the corresponding slope . we obtained . this value , which represents an upper bound for the wire resistance and the wire - pad contact resistance , , is significantly larger than that associated with al wire bonds . in future versions of the three - dimensional wires we will attempt to reduce the wire - pad contact resistance by rounding the tip of the center conductor , stiffening the wire springs , using a thicker metal film for the pads , and plating the contact pads with au or titanium nitride . we note , however , that even a large value of the wire and/or wire - pad contact resistance will not significantly impair the quantum socket microwave performance . ( caption of fig . [ figure06:bejanin ] : the sweeps were conducted by both increasing ( red ) and decreasing ( blue ) the applied current between and . the voltage measurements were delayed by and averaged over . the displayed data is averaged over sweeps . the shaded region indicates two standard deviations . the dashed black lines indicate the region ( ) for which the resistance value was found using linear regression . the origin of the hystereses is explained in the main text . the inset shows the circuit diagram of the device under test , including all resistors measured by means of the four - point measurement . the position of the pad is indicated by an arrow . ) the s - parameter measurements of a bare three - dimensional wire were realized by means of the setup shown in the inset of fig . [ figure07:bejanin ] ( a ) . the device under test ( dut ) comprises a cable assembly attached to a three - dimensional wire by means of a screw - in micro connector . the cable assembly is made of an approximately long semi - rigid coaxial cable ez 47 , which is soldered to an ez form custom - made sub - miniature type a ( sma ) male connector , model 705538 - 347 . the other end of the coaxial cable is soldered to the screw - in micro connector . the sma connector of the dut is connected to one port of a vector network analyzer ( vna ) from keysight , model pna - l n5230a by means of a flexible coaxial cable . the bottom interface of the wire is connected to a 2.92 mm end launch connector from southwest microwave , inc . , model 1092 - 01a-5 , which then connects to the other port of the vna through a second flexible coaxial cable . the 2.92 mm adapter is characterized by a flush coaxial back plane , which mates with the wire bottom interface well enough to allow for s - parameter measurements up to .
in order to measure the s - parameters of the dut, a two - tier calibration was performed .first , a two - port electronic calibration module ( ecal ) from keysight , model n4691b , with 2.92 mm male connectors was used to set the measurement planes to the end of the flexible cables closer to the dut .second , a port - extension routine was performed to correct for the insertion loss , phase , and delay of the 2.92 mm adapter .this made it possible to set the measurement planes to the ports of the dut .the magnitudes of the measured reflection and transmission s - parameters are displayed in fig .[ figure07:bejanin ] ( a ) .we performed microwave simulations of a three - dimensional wire for the same s - parameters ( cf .[ microwave : simulations ] for the electric field distribution ) , the results of which are plotted in fig . [ figure07:bejanin ] ( b ) .the s - parameters were measured and simulated between and .the s - parameters and show a featureless microwave response , similar to that of a coaxial transmission line .the attenuation at is and the magnitude of the reflection coefficients at the same frequency is and .the phase of the various s - parameters ( not shown ) behaves as expected for a coaxial transmission line .all measurements were performed at room temperature .the s - parameter measurements of a three - dimensional wire indicate a very good microwave performance . however , these measurements alone are insufficient to fully characterize the quantum socket operation . a critical feature that deserves special attention is the transition region between the wire bottom interface and the on - chip cpw pad .it is well - known that transitions can cause significant impedance mismatch and , thus , signal reflection . in quantum computing applications, these reflections could degrade both the qubit control and readout fidelity .figure [ figure08:bejanin ] shows a typical setup for the characterization of a wiring configuration analogous to that used for qubit operations .the setup comprises a vna from keysight , model pna - x n5242a , with ports and connected to a pair of flexible coaxial cables from huber+suhner ag , model sucoflex 104-pe ( with sma male connectors ) . in order to calibrate the measurement ,the flexible cables were first connected to a two - port ecal module from keysight , model n4691 - 60006 , featuring 3.5 mm female connectors .these cables were then connected to the sma female bulkhead adapter at the input and output ports of the dut shown in fig .[ figure08:bejanin ] .the dut incorporates a microwave package with a pair of three - dimensional wires , which address one cpw transmission line on an au or ag chip .the microwave package was attached to the package holder , as described in subsec .[ package : holder ] and sec .[ the : quantum : socket : implementation ] ( cf . also figs .[ figure01:bejanin ] ( d ) and [ figure04:bejanin ] ( c ) ) . 
for the measurements in this section, however , the smp adapters were substituted by sma female - female bulkhead adapters .the transmission line geometrical dimensions and wire pressure settings are reported in table [ table02:bejanin ] ; only the au samples and the ag samples were characterized at microwave frequencies .the back end of each three - dimensional wire is connected to one end of an ez 47 cable by means of the screw - in micro connector described in subsec .[ package : holder ] ; the other end of the ez 47 cable is soldered to an sma male connector .one of the ez cables is long and the other is long ; the longer cable ( output ) is connected directly to the sma bulkhead adapter , whereas the shorter cable ( input ) is prolonged using an sma female - male adapter and , then , connected to the bulkhead adapter .these bulkhead adapters are the reference planes ii and xii associated with the input and output ports of the dut , respectively , as shown in fig .[ figure08:bejanin ] .we performed a two - port s - parameter measurement of the dut from to .we selected an intermediate frequency ( if ) bandwidth , a constant excitation power ( dbm ) , and or measurement points for the au and ag samples , respectively .the measurement results at room temperature for the au and ag samples are shown in figs .[ figure09:bejanin ] ( a ) and [ figure10:bejanin ] ( a ) , respectively .the results for the au sample at are shown in fig .[ figure09:bejanin ] ( b ) .the s - parameter measurements of the au sample show that the quantum socket functions well at microwave frequencies , both at room temperature and at .since most of the mechanical shifts have already occurred when cooling to , this measurement allows us to deduce that the socket will continue functioning even at lower temperatures , e.g. , .the au sample , however , is characterized by a large value of , which may conceal unwanted features both in the transmission and reflection measurements .therefore , we prepared an ag sample that exhibits a much lower resistance even at room temperature .the behavior of the ag s - parameters is similar to that of a transmission line or coaxial connector .for example , is approximately ; as a reference , for a high - precision sma connector at the same frequency .the presence of the screw - in micro connector can occasionally deteriorate the microwave performance of the quantum socket .in fact , if the micro connector is not firmly tightened , a dip in the microwave transmission is observed . at room temperature ,it is straightforward to remove the dip by simply re - tightening the connector when required . 
on the contrary , for the measurements at and for any other application in a cryogenic environment assuring that the micro connector is properly torqued at all timescan be challenging .figure [ figure09:bejanin ] ( b ) , for example , shows the s - parameters for an au sample measured at .a microwave dip appeared at approximately , with a bandwidth of approximately .the inset in fig .[ figure09:bejanin ] ( b ) displays the phase angle of between and , showing that the dip is unlikely a lorentzian - type resonance ( more details in the supplemental material at http://www.supplemental-material-bejanin ) .note that the dip is far from the typical operation frequencies for superconducting qubits .additionally , as briefly described in sec .[ conclusions ] , we will remove the screw - in micro connector from future generations of the three - dimensional wires .figure [ figure10:bejanin ] ( b ) shows a simulation of the s - parameters for the ag sample , for the same frequency range as the actual measurements . while there are visible discrepancies between the measured and simulated s - parameters , the latter capture well some of the characteristic features of the microwave response of the dut .in particular , the measured and simulated reflection coefficients display a similar frequency dependence .it is worth mentioning that we also simulated the case where the wire bottom interface is not perfectly aligned with the on - chip pad ( results not shown ) .we considered lateral misalignments of and rotational misalignments of .this allowed us to study more realistic scenarios , such as those shown in fig .[ figure05:bejanin ] .we found that the departure between the misaligned and the perfectly aligned simulations was marginal .for example , the transmission s - parameters varied only by approximately . in appendix [microwave : parameters ] , we show a set of microwave parameters obtained from the measured s - parameters for the au sample at room temperature and at and for the ag sample at room temperature .these parameters make it possible to characterize the input and output impedance as well as the dispersion properties of the quantum socket . in tdr measurements , a rectangular pulse with fast rise time and fixed lengthis applied to a dut ; the reflections ( and all re - reflections ) due to all reflection planes in the system ( i.e. , connectors , geometrical changes , etc . )are then measured by way of a fast electrical sampling module .the reflections are , in turn , related to the impedances of all of the system components .thus , tdr makes it possible to estimate any impedance mismatch and its approximate spatial location in the system .tdr measurements were performed on the dut shown in fig .[ figure08:bejanin ] , with the same au or ag sample as for the measurements in subsec .[ two : port : scattering : parameters ] .as always , the au sample was measured both at room temperature and at , whereas the ag sample was measured only at room temperature .the tdr setup is analogous to that used for the s - parameter measurements , with the following differences : the dut input and output reference planes were extended to include the sucoflex flexible coaxial cables ( i.e. 
, these cables were not calibrated out ) ; when testing the dut input port , the output port was terminated in a load with impedance and vice versa when testing the dut output port . the tdr measurements were realized by means of a sampling oscilloscope from teledyne lecroy , model waveexpert 100h ; the oscilloscope features an electrical sampling module with bandwidth and a tdr step generator , model st-20 . the generated signal is a voltage square wave characterized by a nominal pulse rise time of , amplitude of , pulse width of , and pulse repetition rate of . the voltage reflected by the dut , $v_{\mathrm{r}}(t)$ , is acquired as a function of time by means of the sampling module . this time is the round - trip interval necessary for the voltage pulse to reach a dut reflection plane and return back to the sampling module . the measured quantity is the reflection coefficient , given by \[ \rho ( t ) = \frac{ v_{\mathrm{r}} ( t ) }{ v_{\mathrm{i}} } , \] where $v_{\mathrm{i}}$ is the amplitude of the incident voltage square wave ( converting the time into distance is only possible with detailed knowledge of geometries and materials for all regions of the dut ; since this information is not known to a high degree of accuracy , we prefer to express all measured quantities as a function of time ) . from eq . ( [ equation:02 ] ) , we can obtain the first - order instantaneous impedance as \[ z ( t ) = z_{0} \, \frac{ 1 + \rho ( t ) }{ 1 - \rho ( t ) } , \] where $z_{0}$ is the reference impedance of the measurement system . figure [ figure11:bejanin ] shows the instantaneous impedance for the dut with the au sample at room temperature and at ; the measurement refers to the input port of the dut , including a flexible cable . the figure inset shows the room temperature data for a shorter time interval . this corresponds to a space interval beginning at a point between planes iv and v and ending at a point between planes vii and viii ( cf . [ figure08:bejanin ] ) . figure [ figure12:bejanin ] ( a ) shows the instantaneous impedance for the ag sample at room temperature . figure [ figure12:bejanin ] ( b ) displays the data in ( a ) for a time interval corresponding to a space interval beginning at a point between planes iv and v and ending at a point between planes x and xi ; as a reference , the au data are overlaid with the ag data . [ caption of fig . [ figure12:bejanin ] : ( a ) tdr impedance of the setup in fig . [ figure08:bejanin ] . ( b ) zoom in of ( a ) addressing the three - dimensional wire and the transition region between the wire and the cpw transmission line ( blue ) ; the room temperature au data ( red ) is also displayed as a reference . ] for the au sample , the first main reflection plane ( plane ii ) is encountered at . the second main reflection plane ( plane v ) appears after relative to the first plane , at . from that time instant and for a span of approximately , the tdr measurement corresponds to of the three - dimensional wire itself . the maximum impedance mismatch between the ez form cable and the three - dimensional wire is approximately . the third main reflection plane ( plane vii ) corresponds to the transition region ; for the au sample , it is impossible to identify features beyond this plane owing to the large series resistance of the on - chip cpw transmission line . from empirical evidence , the impedance of a lossy line with series resistivity increases linearly with the length of the line . in fact , for the au sample we measured an impedance step across the cpw transmission line of approximately at room temperature and at . these steps are approximately the values reported in table [ table02:bejanin ] .
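a minimal numerical version of this post - processing , assuming the textbook relation between the reflection coefficient and the first - order instantaneous impedance , is sketched below ; the incident amplitude , reference impedance , and waveform are placeholders rather than the values used in the experiment .

```python
# minimal sketch of the tdr post-processing: form the reflection coefficient
# from the reflected and incident voltages, then the first-order instantaneous
# impedance z(t) = z0 * (1 + rho) / (1 - rho). all numbers are assumptions.
import numpy as np

z0 = 50.0                                        # assumed reference impedance (ohm)
v_inc = 0.25                                     # assumed incident step amplitude (V)
t = np.linspace(0.0, 20e-9, 2001)                # round-trip time axis (s)
v_refl = 2e-3 * np.exp(-((t - 9e-9) / 0.5e-9) ** 2)   # hypothetical reflected voltage (V)

rho = v_refl / v_inc                             # reflection coefficient vs time
z_inst = z0 * (1.0 + rho) / (1.0 - rho)          # first-order instantaneous impedance
print(f"maximum mismatch: {np.max(np.abs(z_inst - z0)):.2f} ohm")
```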
in order to obtain a detailed measurement of the impedance mismatch beyond the transition region, we resorted to the tdr measurements of the dut with the much less resistive ag sample .first , we confirmed that of the input three - dimensional wire for the ag sample is consistent with the tdr measurements of the au sample ; this is readily verified by inspecting fig .[ figure12:bejanin ] ( b ) .the three - dimensional wire is the structure ending at the onset of the large impedance step shown by the au overlaid data .the structure spanning the time interval from to is associated with the input transition region , the cpw transmission line , and the output transition region .the output three - dimensional wire starts at , followed by the ez form coaxial cable , which finally ends at the sma bulkhead adapter at . the maximum impedance mismatch associated with the transition regions and the cpw transmission line is .notably , this mismatch is smaller than the mismatch between the three - dimensional wire and the coaxial cable .this is an important result .in fact , while it would be hard to diminish the impedance mismatch due to the transition region , it is feasible to further minimize the wire mismatch by creating accurate lumped - element models of the wire and use them to minimize stray capacitances and/or inductances .it is worth comparing of the quantum socket with that of a standard package for superconducting qubits , where wire bonds are used to make interconnections between a printed circuit board and the control and measurement lines of a qubit on a chip .a detailed study of the impedance mismatch associated with wire bonds is found in ref . , where the authors have shown that a long wire bond ( of length between and ; typical length in most applications ) can lead to an impedance mismatch larger than ( cf .s3 in the supplementary information of ref . ) ; on the contrary , a short wire bond ( between and ; less typical ) results in a much smaller mismatch , approximately . in terms of impedance mismatch the current implementation of the quantum socket , which is limited by the mismatch of the three - dimensional wires , lies in between these two extreme scenarios .crosstalk is a phenomenon where a signal being transmitted through a channel generates an undesired signal in a different channel .inter - channel isolation is the figure of merit that quantifies signal crosstalk and that has to be maximized to improve signal integrity .crosstalk can be particularly large in systems operating at microwave frequencies , where , if not properly designed , physically adjacent channels can be significantly affected by coupling capacitances and/or inductances . in quantum computing implementations based on superconducting quantum circuits ,signal crosstalk due to wire bonds has been identified to be an important source of errors and methods to mitigate it have been developed . however, crosstalk remains an open challenge and isolations ( opposite of crosstalk ) lower than are routinely observed when using wire bonds .the coaxial design of the three - dimensional wires represents an advantage over wire bonds .the latter , being open structures , radiate more electromagnetic energy that is transferred to adjacent circuits .the former , being enclosed by the outer conductor , limit crosstalk due to electromagnetic radiation . in realistic applications of the quantum socket, the three - dimensional wires must land in close proximity of several on - chip transmission lines . 
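to pick out the mismatch of an individual section ( e.g. , the transition region ) from a trace like the ones above , one can simply restrict the impedance data to the corresponding round - trip time window , as in the toy sketch below ; the trace and the window boundaries are illustrative only .

```python
# minimal sketch: maximum impedance mismatch of one section of a tdr trace,
# obtained by restricting to that section's round-trip time window.
# the trace and the window limits are hypothetical.
import numpy as np

t = np.linspace(0.0, 20e-9, 2001)
z_inst = 50.0 + 1.5 * np.exp(-((t - 9e-9) / 0.5e-9) ** 2)   # toy impedance trace (ohm)

window = (t > 8e-9) & (t < 11e-9)          # assumed span of the transition region
mismatch = np.max(np.abs(z_inst[window] - 50.0))
print(f"maximum mismatch in the selected window: {mismatch:.2f} ohm")
```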
in order to study inter - channel isolation in such scenarios, we designed a special device comprising a pair of cpw transmission lines , as shown in the inset of fig .[ figure13:bejanin ] ( a ) .one transmission line connects two three - dimensional wires ( ports and ) , exactly as for the devices studied in subsecs .[ two : port : scattering : parameters ] and [ time : domain : reflectometry ] ; the other line , which also connects two three - dimensional wires ( ports and ) , circumvents the wire at port by means of a cpw semicircle . the distance between the semicircle and the wire outer conductor is designed to be as short as possible , . the chip employed for the crosstalk tests is similar to the ag sample used for the socket microwave characterization and was part of a dut analogous to that shown in fig .[ figure08:bejanin ] .the dc resistances of the center trace of the and transmission lines were measured and found to be and , respectively ( note that the transmission line is long ) .all dc resistances to ground and between the two transmission lines were found to be on the order of a few kilohms , demonstrating the absence of undesired short circuit paths . a four - port calibration and measurement of the dutwere conducted by means of the ecal module and pna - x .we selected a frequency range from to , , dbm , and . among the s - parameters , fig .[ figure13:bejanin ] ( a ) shows the magnitude of the transmission coefficients and , along with the magnitude of the crosstalk coefficients , and .the results show that the isolation in the typical qubit operation bandwidth , between and , is larger than .note that the crosstalk coefficients shown in fig .[ figure13:bejanin ] ( a ) include attenuation owing to the series resistance of the ag transmission lines . the actual isolation ,due only to spurious coupling , would thus be smaller by a few decibels .figure [ figure13:bejanin ] ( b ) shows the microwave simulations of the crosstalk coefficients , which agree reasonably well with the experimental results .these simulations are based on the models explained in subsec .[ microwave : simulations ] . from simulations, we believe the isolation is limited by the crosstalk between the cpw transmission lines , instead of the three - dimensional wires .note that the peaks at approximately correspond to an enhanced crosstalk due to a box mode in the microwave package .the peaks appear in the simulations , which are made for a highly conductive package , and may appear in measurements performed below , when the al package becomes superconductive .for the room temperature measurements shown in fig .[ figure13:bejanin ] ( a ) , these peaks are smeared out due to the highly lossy al package .-wave resonators .the grounding washer , with its four protruding feet , is placed above the chip covering the chip edges .the marks imprinted by the bottom interface of the three - dimensional wires on the al pads are noticeable .more detailed images of these marks are shown in fig .[ figure05:bejanin ] . ]thus far , we have shown a detailed characterization of the quantum socket in dc and at microwave frequencies , both at room temperature and at . 
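a short sketch of how the isolation numbers quoted in this subsection follow from the measured crosstalk coefficients is given below ; the complex values and the line attenuation used to correct them are placeholders .

```python
# minimal sketch: isolation (in dB) from measured crosstalk coefficients, with
# an optional correction for the series-resistance attenuation of the ag lines.
# all values are hypothetical placeholders.
import numpy as np

s31 = 1.0e-3 * (1.0 + 0.2j)        # hypothetical crosstalk coefficient, port 1 -> 3
s41 = 5.0e-4 * (1.0 - 0.1j)        # hypothetical crosstalk coefficient, port 1 -> 4
line_loss_db = 2.0                 # assumed attenuation of the resistive line (dB)

iso_31 = -20.0 * np.log10(np.abs(s31))
iso_41 = -20.0 * np.log10(np.abs(s41))
# the isolation due to spurious coupling alone is smaller by the line attenuation
print(iso_31 - line_loss_db, iso_41 - line_loss_db)
```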
in order to demonstrate the quantum socket operation in a realistic quantum computing scenario , we used a socket to wire a set of superconducting cpw resonators cooled to approximately in a dr . we were able to show an excellent performance in the frequency range from to , which is the bandwidth of our measurement apparatus . the experimental setup is described in appendix [ dilution : refrigerator : setup ] and shown in fig . [ figure19:bejanin ] . figure [ figure14:bejanin ] shows a macro photograph of a chip housed in the sample holder ; the chip is the al sample described in subsec . [ alignment ] , with geometrical and dc electrical parameters reported in table [ table02:bejanin ] . the sample comprises a set of three cpw transmission lines , each connecting a pair of three - dimensional wire pads ; multiple shunted cpw resonators are coupled to each transmission line . in this section , we will focus only on transmission line three and its five resonators . the transmission line has a center conductor width of and gap width of , resulting in a characteristic impedance of approximately . the resonators are -wave resonators , each characterized by a center conductor of width and a dielectric gap of width . the open end of the resonators runs parallel to the transmission line for a length , providing a capacitive coupling ; a ground section separates the gaps of the transmission line and resonators ( cf . s2 of the supplemental material at http://www.supplemental-material-bejanin ) . the nominal resonance frequency as well as all the other resonator parameters are reported in table [ table03:bejanin ] . [ table03:bejanin : nominal and fitted parameters for resonators 1 - 5 ] a typical dr experiment employing the quantum socket consists of the following steps . first , the sample is mounted in the microwave package , which has already been attached to the package holder ( cf . [ package : holder ] and sec . [ the : quantum : socket : implementation ] ) . second , a series of dc tests is performed at room temperature . the results for a few al samples are reported in table [ table02:bejanin ] . third , the package holder assembly is characterized at room temperature by measuring its s - parameters . the results of such a measurement are shown in fig . [ figure15:bejanin ] ( a ) . fourth , the package holder is mounted by means of the smp connectors to the mc stage of the dr and a measurement is performed . the results ( magnitude only ) are shown in fig . [ figure15:bejanin ] ( b ) in the frequency range between and . fifth , the various magnetic and radiation shields of the dr are closed and the dr is cooled down . sixth , during cooldown the measurement is repeated first at and , then , at the dr base temperature of approximately . the results are shown in figs . [ figure15:bejanin ] ( b ) and ( c ) , respectively . note the appearance of a shallow dip at approximately , probably due to a screw - in micro connector becoming slightly loose while cooling ( cf . [ two : port : scattering : parameters ] ) . it is important to mention that in the next generation of three - dimensional wires we will eliminate the screw - in micro connector , since we believe we found a technique to overcome the soldering issues detailed in subsec . [ package : holder ] ( cf . [ conclusions ] for a brief description ) .
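the characteristic impedance quoted earlier in this section for the cpw geometry can be estimated with the standard quasi - static conformal - mapping formula ; the sketch below uses assumed widths and substrate permittivity ( not the actual design values , which are omitted in the text ) and a thick - substrate , zero - thickness - metal approximation .

```python
# minimal sketch: quasi-static characteristic impedance of a cpw on a thick
# substrate, z0 = (30*pi/sqrt(eps_eff)) * K(k')/K(k), with k = w/(w+2s).
# widths and permittivity are assumptions, not the design values.
import numpy as np
from scipy.special import ellipk   # note: scipy's ellipk takes the parameter m = k**2

w = 15e-6        # assumed center-conductor width (m)
s = 9e-6         # assumed gap width (m)
eps_r = 11.45    # assumed si relative permittivity

k = w / (w + 2.0 * s)
eps_eff = (1.0 + eps_r) / 2.0
z0 = 30.0 * np.pi / np.sqrt(eps_eff) * ellipk(1.0 - k**2) / ellipk(k**2)
print(f"cpw characteristic impedance: {z0:.1f} ohm")   # roughly 50 ohm for these numbers
```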
at the base temperature ,all five resonators are clearly distinguishable as sharp dips on the relatively flat microwave background of the measurement network .we then select a narrower frequency range around each resonator and make a finer measurement .for example , fig .[ figure15:bejanin ] ( d ) shows the magnitude and phase of the resonance dip associated with resonator number .the s - parameters of each resonator were measured with the pna - x power set to m. considering that the total input channel attenuation at room temperature is at , the power at the resonator input is approximately m ( a few higher when cold ) .the normalized inverse transmission coefficient was fitted as in ref .this procedure makes it possible to accurately estimate both the internal and the rescaled coupling quality factors of a resonator .the fit results are shown in table [ table03:bejanin ] .the plot of the fits for the magnitude and phase of for resonator are overlaid with the measured data in fig .[ figure15:bejanin ] ( d ) .the real and imaginary parts of for the same resonator , as well as the associated fit , are shown in fig .s3 in the supplemental material at http://www.supplemental-material-bejanin .figure [ figure16:bejanin ] shows an extensible quantum computing architecture where a two - dimensional square lattice of superconducting qubits is wired by means of a quantum socket analogous to that introduced in this work .the architecture comprises three main layers : the quantum hardware ; the shielding interlayer ; the three - dimensional wiring mesh . as shown in fig .[ figure16:bejanin ] ( a ) , the quantum hardware is realized as a two - dimensional lattice of superconducting qubits with nearest neighbor interactions .the qubits are a modified version of the xmon presented in ref .each qubit is characterized by seven arms that make it possible to connect it to one xy and one z control line as well as one measurement resonator and four inter - qubit coupling resonators .we name this type of qubit the _ heptaton_. the inter - qubit coupling is mediated by means of superconducting cpw resonators that allow the implementation of control z ( cz ) gates between two neighboring qubits .a set of four heptatons can be readout by way of a single cpw transmission line connected to four cpw resonators , each with a different resonant frequency .figure [ figure16:bejanin ] also shows the on - chip pads associated with each three - dimensional wire . in the supplemental material at http://www.supplemental-material-bejanin , we propose a more general surface code architecture where each qubit can be measured by means of two different resonators , one with frequency above and the other with frequency below all coupling resonator frequencies . assuming a pitch between two adjacent three - dimensional wires of, the lateral dimension of one square cell having four heptatons at its edges is .the three distances , , and between wire pads and resonators leading to this quantity are indicated in fig .[ figure16:bejanin ] ( b ) .it is thus possible to construct a two - dimensional lattice of heptatons on a square chip with lateral dimension .a square chip is the largest chip that can be diced from a standard inch wafer. this will allow the implementation of a logical qubit based on the surface code , with at least distance five .in this architecture , the coupling resonators act as a _ coherent spacer _ between pairs of qubits , i.e. 
, they allow a sufficient separation to accommodate the three - dimensional wires , while maintaining qubit coherence during the cz gates . additionally , these resonators will help mitigate qubit crosstalk compared to architectures based on direct capacitive coupling between adjacent qubits ( cf . ref .in fact , they will suppress qubit - mediated coupling between neighboring control lines .it is worth noting that adjacent coupling resonators can be suitably designed to be at different frequencies , thus further diminishing qubit - mediated crosstalk .implementing a large qubit chip with a lateral dimension of presents significant challenges to the qubit operation at microwave frequencies .a large chip must be housed in a large microwave package , causing the appearance of box modes that can interfere with the qubit control and measurement sequences . moreover, a large chip will inevitably lead to floating ground planes that can generate unwanted slotline modes .all these parasitic effects can be suppressed by means of the shielding interlayer , as shown in fig .[ figure16:bejanin ] ( a ) .this layer can be wafer bonded to the quantum layer . through holes and cavities on the bottom part of the layercan be readily fabricated using standard si etching techniques .the holes will house the three - dimensional wires whereas the cavities will accommodate the qubit and resonator structures on the quantum hardware .large substrates also generate chip modes that , however , can be mitigated using buried metal layers and through vias .the three - dimensional wires to be used for the qubit architecture will be an upgraded version of the wires used in this work . in particular , the m thread will be removed and the wires will be inserted in a dedicated substrate ( cf . fig .[ figure16:bejanin ] ( a ) ) ; additionally , the screw - in micro connector will be substituted by a direct connection to a subminiature push - on sub - micro ( smps ) connector ( not shown in the figure ) . in future applications of the quantum socket, we envision an architecture where the three - dimensional wires will be used as interconnect between the quantum layer and a classical control / measurement layer .the classical layer could be realized using rsfq digital circuitry .for example , high - sensitivity digital down - converters ( ddcs ) have been fabricated based on rsfq electronics .such circuitry is operated at very low temperatures and can substitute the room temperature electronics used for qubit readout .note that cryogenic ddc chips with dimensions of less than can perform the same operations presently carried out by room temperature microwave equipment with an overall footprint of .recent interest in reducing dissipation in rsfq electronics will possibly enable the operation of the classical electronics in close proximity to the quantum hardware .we also believe it is feasible to further miniaturize the three - dimensional wires so that the wire outer diameter would be on the order of . assuming a wire - wire pitch also of, it will therefore be possible to realize a lattice of wires connecting to qubits arranged on a two - dimensional qubit grid with dimensions of .this will allow the implementation of simple fault tolerant operations between a few tens of logical qubits .matteo mariantoni and the digital quantum matter laboratory acknowledge the alfred p. 
sloan foundation as well as the natural sciences and engineering research council of canada ( nserc ) , the ministry of research and innovation ( mri ) of ontario , and the canadian microelectronics corporation ( cmc ) microsystems .corey rae h. mcrae acknowledges a waterloo institute for nanotechnology ( win ) nanofellowship-2013 and jrmy h. bjanin a provost s entrance award of the university of waterloo ( uw ) .we also acknowledge fruitful discussions with john m. martinis and the martinis group , eric bogatin , and william d. oliver , as well as the group of adrian lupacu for their assistance in the deposition of al films and nathan nelson - fitzpatrick and the uw s quantum nanofab facility for their support .we thank dario mariantoni and alexandra v. bardysheva for help with a few figures .in this appendix , we discuss the pressure settings of the three - dimensional wires . in the current implementation of the quantum socket, the pressure exerted by the three - dimensional wires on the chip is controlled by the installation depth of the wire in the lid .this depth depends on the number of rotations used to screw the wire into the m-threaded hole of the lid .since the wire s tunnel has to be aligned with the corresponding on - chip pad , a discrete number of wire pressure settings is allowed .for the package shown in fig .[ figure01:bejanin ] ( b ) and fig . [ figure04:bejanin ] ( b ) , the minimum length an unloaded wire has to protrude from the ceiling of the lid s internal cavity to touch the chip surface is ( cf .[ figure01:bejanin ] ( c ) ) . for a maximum wire stroke ,the maximum length an unloaded wire can protrude from the cavity ceiling without breaking when loaded is .the first allowed pressure setting , with wire and pad perfectly aligned , is for .the pitch for an m screw is .hence , five pressure settings are nominally possible , for mm , with .we found the ideal pressure setting to be for , corresponding to a nominal mm ; the actual average setting for wires was measured to be , with standard deviation due to the machining tolerances . for greater depths we experienced occasional wire damage ; lesser depths were not investigated .possible effects on the electrical properties of the three - dimensional wires due to different pressure settings will be studied in a future work .cccccccccc [ 3mm][0mm]material & cu & sn & zn & fe & ni & pb & p & si & others + [ 3mm][0mm]cw724r & & & rest & & & & & & al , manganese + [ 3mm][0mm]cw453k & rest & & & & & & & - & + [ table04:bejanin ] in this appendix , we describe the measurement setup used to characterize the magnetic properties of the materials used in the quantum socket and present the main results . additionally , we give an estimate of the strength of the magnetic field caused by one three - dimensional wire inside the microwave package .the zgc used in our tests comprises three nested cylinders , each with a lid with a central circular hole ; the hole in the outermost lid is extended into a chimney that provides further magnetic shielding .the walls of the zgc are made of an alloy of ni and fe ( or mu - metal alloy ) with a high relative permeability .the alloy used for the chamber is a co - netic aa alloy and is characterized by a dc magnetic permeability at , , and an ac magnetic permeability at and at , . 
as a consequence , the nominal magnetic field attenuation lies between and .the zgc used in our tests was manufactured by the magnetic shield corporation , model zg-209 .the flux gate magnetometer used to measure the magnetic field is a three - axis dc milligauss meter from alphalab , inc ., model mgm3axis .its sensor is a parallelepiped at the end of a long cable ; the orientation of the sensor is calibrated to within and has a resolution of ( i.e. , ) over a range of ( i.e. , ) .the actual attenuation of the chamber was tested by measuring the value of the earth s magnetic field with and without the chamber in two positions , vertical and horizontal ; inside the chamber the measurements were performed a few centimeters from the chamber base , approximately on the axis of the inner cylinder . in these and all subsequent tests , the magnetic sensor was kept in the same orientation and position .the results are reported in table [ table05:bejanin ] , which shows the type of measurement performed , the magnitude of the measured magnetic field , and the attenuation ratio .the maximum measured attenuation was in the horizontal position .ccc [ 3mm][0mm]measurement & & + [ 0mm][2 mm ] & & + [ 3mm][0mm]vertical position , background field & & - + [ 3mm][0mm]vertical position , with zgc & & + [ 3mm][0mm]horizontal position , background field & & - + [ 3mm][0mm]horizontal position , with zgc & & + [ table05:bejanin ] the zgc characterization of table [ table05:bejanin ] also serves as a calibration for the measurements on the materials used for the quantum socket . in these measurements , each test sample was positioned approximately away from the magnetic sensor .the results , which are reported in table [ table06:bejanin ] , were obtained by taking the magnitude of the calibrated field of each sample .the calibrated field itself was calculated by subtracting the background field from the sample field , component by component .note that the background and sample fields were on the same order of magnitude ( between and ) , with background fluctuations on the order of .thus , we recorded the maximum value of each , , and component . considering that the volume of the measured samples is significantly larger than that of the actual quantum socket components , we are confident that the measured magnetic fields of the materials should be small enough not to significantly disturb the operation of superconducting quantum devices . as part of our magnetism tests , we measured a block of approximately of 5n5 al in the zgc ; as shown in table [ table06:bejanin ], the magnitude of the magnetic field was found to be within the noise floor of the measurement apparatus and a pull of .we found magnetic fields with the same order of magnitude as in table [ table06:bejanin ] . ] .cc [ 3mm][0mm]material & + [ 0mm][2 mm ] & + [ 3mm][0mm]cw724r & + [ 3mm][0mm]cw453k & + [ 3mm][0mm]al 5n5 & + [ table06:bejanin ] a simple geometric argument allows us to estimate the actual magnetic field due to one three - dimensional wire , without taking into account effects due to superconductivity ( most of the wire is embedded in an al package , which is superconductive at qubit operation temperatures ) .we assume that one wire generates a magnetic field of mg ( i.e. , the maximum field value in table [ table06:bejanin ] ; this is a large overestimate considering the tested samples had volumes much larger than any component in the wires ) and is a magnetic dipole positioned mm away from a qubit . 
the field generated by the wire at the qubitwill then be , where is the distance at which the field was measured in the zgc ; thus , .assuming an xmon qubit with a superconducting quantum interference device ( squid ) of dimensions , the estimated magnetic flux due to the wire threading the squid is .this is approximately three orders of magnitude smaller than a flux quantum ; typical flux values for the xmon operation are on the order of .in this appendix , we describe the method used to estimate the thermal performance of a three - dimensional wire and compare it to that of an al wire bond .note that at very low temperature , thermal conductivities can vary by orders of magnitude between two different alloys of the same material .the following estimate can thus only be considered correct to within approximately one order of magnitude .thermal conductivity is a property intrinsic to a material . to characterize the cooling performance of a three - dimensional wire , we instead use the heat transfer rate ( power ) per kelvin difference , which depends on the conductivity .the power transferred across an object with its two extremities at different temperatures depends on the cross - sectional area of the object , its length , and the temperature difference between the extremities . since the cross section of a three - dimensional wire is not uniform , we assume the wire is made of two concentric hollow cylinders .the cross - sectional area of the two cylinders is calculated by using dimensions consistent with those of a three - dimensional wire .the inner and outer hollow cylinders are assumed to be made of phosphor bronze and brass alloys , respectively .the thermal conductivities of these materials at low temperatures are determined by extrapolating measured data to .the al wire bonds are assumed to be solid cylinders with diameter . in the superconducting state, the thermal conductivity of al can be estimated by extrapolating literature values .the heat transfer rate per kelvin difference is calculated by multiplying the thermal conductivity with the cross - sectional area and dividing by the length of the thermal conductor .the heat transfer rate per kelvin difference of a three - dimensional wire is calculated by summing the heat transfer rate per kelvin difference of the inner conductor to that of the outer conductor and is found to be at . at the same temperature , the heat transfer rate per kelvin of a typical al wire bondis estimated to be ( cf .table [ table07:bejanin ] ) , much lower than for a single three - dimensional wire .note that , instead of al wire bonds , gold wire bonds can be used .these are characterized by a higher thermal conductivity because they remain normal conductive also at very low temperatures .however , al wire bonds remain the most common choice because easier to use .ccccc [ 3mm][0 mm ] & & & & + [ 0mm][2 mm ] & & & & + [ 3.5mm][0mm]inner cond .& & & & + [ 3.5mm][0mm](phos bronze ) & & & & + [ 3.5mm][0mm]outer cond . 
& & & & + [ 3.5mm][0mm](brass ) & & & & + [ 3.5mm][0mm]wire bond & - & & & + [ 3.5mm][0mm](al ) & & & & + [ table07:bejanin ]cccccc [ 3mm][0mm]spring type & & & & & + [ 0mm][2 mm ] & & & & & + [ 3mm][0mm]fe-113 225 & & & & & + [ 3mm][0mm]fe-112 157 & & & & & + [ 3mm][0mm]fe-50 15 & & & & & + [ table08:bejanin ] in this appendix , we discuss the performance of the springs used in three - dimensional wires at various temperatures .the three types of tested springs are called fe-113 225 , fe-112 157 , and fe-50 15 and their geometric characteristics are reported in table [ table08:bejanin ] .we ran temperature cycle tests by dunking the springs repeatedly in liquid nitrogen and then in liquid helium without any load . at the end of each cycle, we attempted to compress them at room temperature .we found no noticeable changes in mechanical performance after many cooling cycles .subsequently , the springs were tested mechanically by compressing them while submerged in liquid nitrogen or helium .the setup used for the compressive loading test of the springs is shown in movie 4 of the supplemental material at http://www.supplemental-material-bejanin , which also shows a properly functioning spring immediately after being cooled in liquid helium . in these tests , we only studied compression forces because in the actual experiments the three - dimensional wires are compressed and not elongated .the compression force was assessed by means of loading the springs with a mass .the weight of the mass that fully compressed the spring determined the spring compression force .the compression force of each spring is reported in table [ table08:bejanin ] .we observed through these tests that the compression force is nearly independent of the spring temperature , increasing only slightly when submerged in liquid helium .assuming an operating compression , we expect a force between and for the inner conductor and between and for the outer conductor of a three - dimensional wire at a temperature of. note that we chose spring model fe-113 225 for use with the grounding washer .in this appendix , we provide more details about alignment errors .figure [ figure17:bejanin ] shows a set of micro images for au and ag samples . the au pads in panels ( a ) and ( b ) were mated two times at room temperature ; the three - dimensional wires used to mate these pads featured the smaller tunnel ( width ) .the pad dimensions were and .noticeably , in panel ( a ) the wire bottom interface matched the contact pad in both mating instances , even though the matching was affected by a rotational misalignment of approximately with respect to the transmission line longitudinal axis .in panel ( b ) the inner conductor landed on the dielectric gap in the second mating instance . in our initial design , a perfect match required that the die dimensions should be at most thou smaller than the dimensions of the chip recess , as machined . in the case ofthe sample holder used to house the au samples , the chip recess side lengths were , , , and .the au samples were diced from a si wafer using a dicing saw from disco , model dad-2h/6 , set to obtain a die . 
due to the saw inaccuracies ,the actual die dimensions were , significantly smaller than the chip recess dimensions .this caused the die to shift randomly between different mating instances , causing alignment errors .as described in the main text , in order to minimize such errors a superior disco saw was used , in combination with a disco electroformed bond hub diamond blade model zh05-sd 2000-n1 - 50-f e ; this blade corresponds to a nominal kerf between and .additionally , we used lateral markers spaced with increments of that allowed us to cut dies with dimensions ranging from to , well within the machining tolerances of the sample holder .after machining , the actual inner dimensions of each sample holder were measured by means of a measuring microscope .the wafers were then cut by selecting the lateral dicing markers associated with the die dimensions that fit best the holder being used .figure [ figure17:bejanin ] ( c ) shows a successful alignment for six ag pads on the same chip ; the chip is mounted in a sample holder with grounding washer .all three main steps for an ideal and repeatable alignment ( cf .subsec . [ alignment ] ) were followed .figure [ figure17:bejanin ] ( d ) shows the distinctive marks left by the grounding washer on an ag film .the marks are localized towards the edge of the die ; the washer covered approximately of ag film .this indicates a good electrical contact at the washer - film interface . in conclusion ,it is worth commenting some of the features in fig .[ figure05:bejanin ] ( d ) in the main text .the figure clearly shows dragging of a three - dimensional wire due to cooling contractions .in fact , for the al chip recess an estimate of the lateral contraction length from room temperature to can be obtained as , where is the integrated linear thermal expansion coefficient for al 6061-t6 at from refs . and is the room temperature length of the recess side can be accurately estimated from the data at http://www.cryogenics.nist.gov/mpropsmay/6061%20aluminum/6061_t6aluminum_rev.htm . ] .note that the sample holder is actually made from al alloy 5n5 ; however , different al alloys contract by approximately the same quantity . 
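the contraction estimate used here amounts to multiplying the integrated expansion coefficient by the room - temperature length ; a one - line numerical version with assumed ( not the actual ) values is given below , and the same form of estimate is applied to the si substrate in the next paragraph .

```python
# minimal sketch of the contraction estimate: delta_l = alpha_int * l_rt.
# the coefficient and length below are assumptions standing in for the
# elided values in the text.
alpha_int = 4.2e-3    # assumed integrated contraction of al from ~295 K to ~4 K (dL/L)
l_rt = 15e-3          # assumed room-temperature recess side length (m)
delta_l = alpha_int * l_rt
print(f"estimated lateral contraction: {delta_l * 1e6:.0f} um")
```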
for the si sample substrate , the lateral contraction length from room temperature to is approximately given by , where the integrated linear thermal expansion coefficient at was found in table 2 of ref . below , the thermal expansion of both materials is negligible for our purposes and , thus , the estimate can also be considered to be valid at . in this appendix , we outline the fabrication processes for the samples used to test the quantum socket . a set of samples was made by liftoff of a ag film , which was grown by means of electron beam physical vapor deposition ( ebpvd ; from intlvac canada inc . , model nanochrome ii ) on a inch float - zone ( fz ) si wafer of thickness . the superconducting samples were made by etching a al film that was deposited by ebpvd on a fz si wafer . last , two sets of test samples were made by etching au films of thickness and with a ti adhesion underlayer in both sets . the films were grown by ebpvd on a inch czochralski ( cz ) undoped si wafer of thickness . the ag samples were required to reduce the series resistance of the cpw transmission lines ( cf . subsecs . [ two : port : scattering : parameters ] , [ time : domain : reflectometry ] , and [ signal : crosstalk ] ) . fabricating such a relatively thick film necessitated a more complex process as compared to that used for the au and al samples . the ag samples were fabricated with a thick resist tone reversal process . the wafer was spun with an az p4620 positive tone resist to create a resist thickness of , then soft baked for at . because the resist layer is so thick , a rehydration step of was necessary before exposure . optical exposure was performed for in a mask aligner from süss microtec ag , model ma6 , in soft contact with a photomask . after exposure the sample was left resting for at least so that any nitrogen created by the exposure could dissipate . the tone reversal bake was done for in an oven set to , filled with ammonia gas . the sample then underwent a flood exposure for and was developed in az 400k for . subsequently , of ag was deposited and liftoff of the resist was performed in acetone for with ultrasonic agitation . in this appendix , we present a set of microwave parameters that help further analyze the performance of the quantum socket . these parameters were obtained from the measured s - parameter data of figs . [ figure09:bejanin ] and [ figure10:bejanin ] ( a ) and are shown in fig . [ figure18:bejanin ] . the complex input impedance can be obtained from the frequency - dependent impedance matrix , which in turn is derived from the measured s - parameters ; its magnitude is shown in fig . [ figure18:bejanin ] ( a ) . the input voltage standing wave ratio ( vswr ) was obtained from and is displayed in fig . [ figure18:bejanin ] ( b ) . the phase delay was calculated as and is displayed in fig . [ figure18:bejanin ] ( c ) . finally , the group delay was obtained from and is displayed in fig . [ figure18:bejanin ] ( d ) . the derivative in eq . ( [ equation:08 ] ) was evaluated numerically by means of central finite differences with 6th order accuracy .
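the post - processing described in this appendix can be sketched as follows ; here the input impedance is computed from the reflection coefficient alone ( whereas the text works through the two - port impedance matrix ) , and the s - parameters , reference impedance , and delays are synthetic placeholders .

```python
# minimal sketch: input impedance, vswr, phase delay, and group delay from
# synthetic s-parameters; the group delay uses a 6th-order central-difference
# derivative of the unwrapped phase, mirroring the procedure in the text.
import numpy as np

z0 = 50.0
freq = np.linspace(0.1e9, 10e9, 2001)
s11 = 0.05 * np.exp(2j * np.pi * freq * 30e-12)      # hypothetical reflection
s21 = 0.95 * np.exp(-2j * np.pi * freq * 300e-12)    # hypothetical transmission

z_in = z0 * (1.0 + s11) / (1.0 - s11)                # complex input impedance
vswr = (1.0 + np.abs(s11)) / (1.0 - np.abs(s11))     # input standing wave ratio

phase = np.unwrap(np.angle(s21))
phase_delay = -phase / (2.0 * np.pi * freq)

df = freq[1] - freq[0]
coeff = np.array([-1.0, 9.0, -45.0, 0.0, 45.0, -9.0, 1.0]) / 60.0
dphi_df = np.full_like(phase, np.nan)
for i in range(3, phase.size - 3):                   # interior points only
    dphi_df[i] = np.dot(coeff, phase[i - 3:i + 4]) / df
group_delay = -dphi_df / (2.0 * np.pi)
```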
the data in fig .[ figure18:bejanin ] ( d ) were post - processed using smoothing .note that the output impedance and vswr were also evaluated and resembled the corresponding input parameters .the input and output impedances as well as the vswrs indicate a good impedance matching up to approximately .the phase and group delays , which are directly related to the frequency dispersion associated with the quantum socket , indicate minimal dispersion .this is expected for a combination of coaxial structures ( the three - dimensional wires ) and a cpw transmission line .thus , we expect wideband control pulses to be transmitted without significant distortion in applications with superconducting qubits ( cf . supplemental material at http://www.supplemental-material-bejanin for further details about microwave pulse transmission ) .the experimental setup used to measure the superconducting cpw resonators is shown in fig .[ figure19:bejanin ] .the low - temperature system is a cryogen - free dr from bluefors cryogenics ltd . , model bf - ld250 .the dr comprises five main temperature stages , where microwave components and samples can be thermally anchored : the rt , , , still ( ) , cold plate ( cp ; ) , and mc stage .we will describe the setup following the input signal through the various temperature stages , from port to the input port of the microwave package ( where the resonator sample is mounted ) and from the output port of the package to port .the two ports are connected to the pna - x , which serves as both the microwave source and readout apparatus .port is connected to the rt stage of the dr with sucoflex flexible cables followed by a series of two semi - rigid coaxial cables from ez form , model ez 86-cu - tp / m17 ( each approximately long , with silver - coated copper center conductor , solid ptfe dielectric , and tin - plated seamless copper outer conductor ) . except for the pna - x ports , which feature connectors , all the connectors and bulkhead adaptersare sma type .in particular , the rt stage of the dr features a set of hermetic sma bulkhead adapters from huber+suhner , model 34_sma-50 - 0 - 3/111_n , with a tested leak rate for helium-4 lower than .the dr stages , all the way to the mc stage , are connected by the series of five semi - rigid coaxial cables from coax co. , ltd ., model sc-219/50-ss - ss ( with stainless steel ( sus304 ) center and outer conductor and solid ptfe dielectric ; the cable lengths from rt to mc are : , , , , and , respectively ) .the cables are thermalized to the dr stages by way of cryogenic attenuators from the xma corporation - omni spectra , model 2082 - 6418-xx - cryo , where xx is the attenuation level in db ; for each stage between rt and mc , we chose , and , respectively .the input signals are filtered by means of a low - pass filter from marki microwave , inc . 
, model flp-0960 - 2s , with bandpass from dc to .the filter is heat sunk at the mc stage by anchoring it to a hardware module , which is bolted to the mc stage .the filter module , and similarly all the other modules used to heat sink microwave components in the dr , are made from c10100 ofe copper alloy .a non - magnetic semi - rigid coaxial cable ez 86-cu - tp / m17 connects the output port of the marki filter to an smp 086 connector on the mounting plate ; the cable is long and enters the a4k shield through one of the chimneys on the shield lid .the shield , which is thermalized to the mc stage , is characterized by a dc relative permeability close to at .the smp 086 connector is mated to the input port of the dut shown schematically in fig .[ figure08:bejanin ] .the dut used in the dr features smp 047 connectors in lieu of sma connectors .the dut when connected to the mounting plate is shown in fig .[ figure01:bejanin ] ( d ) and fig . [ figure04:bejanin ] ( c ) .the output port of the dut is then connected to a series of two cryogenic circulators from raditek inc ., model radc-4.0 - 8.0-cryo-4 - 77k - s3 - 1wr - b ( with magnetic shielding ) by means of a semi - rigid superconducting coaxial cable from coax co. , model sc-219/50-nb - nb , of length .the circulators are thermalized to the mc stage and are connected to each other by means of a semi - rigid superconducting coaxial cable from coax co. , model sc-219/50-nb - nb , of length ; the spare port of each circulator is terminated with an xma cryogenic load , model 2001 - 7010 - 02-cryo , which is thermalized to the mc stage .the output port of the second circulator is connected by way of a long sc-219/50-nb - nb cable to a third circulator at the still stage ( the spare port is terminated with a load thermalized to the still ) . a long sc-219/50-nb - nb cable connects the output port of the third circulator to a cryogenic microwave amplifier from low noise factory ab , model lnf - lnc1_12a .the amplifier , which is thermalized to the stage , is characterized by a nominal gain of approximately and a noise temperature of at an operating temperature of in the to frequency range .finally , the amplifier output port is connected to the and rt stages by a series of two sc-219/50-ss - ss cables of length and , respectively ; the cables are thermalized to the stage by means of a xma attenuator .two ez form copper cables in series , followed by sucoflex flexible cables , complete the network to port .the input channel described here is one of three equivalent channels dedicated to resonator measurements .the three channels share the output line ; this is possible thanks to a microwave switch from radiall , model r573.423.605 , which is operated at the mc stage .the switch is located after the dut but before the two mc circulators ( two of the three input channels and the switch are not shown in fig .[ figure19:bejanin ] ) .the switch has six inputs and one output , making it possible to further extend the number of input microwave channels . | quantum computing architectures are on the verge of scalability , a key requirement for the implementation of a universal quantum computer .
the next stage in this quest is the realization of quantum error correction codes , which will mitigate the impact of faulty quantum information on a quantum computer . architectures with ten or more quantum bits ( qubits ) have been realized using trapped ions and superconducting circuits . while these implementations are potentially scalable , true scalability will require systems engineering to combine quantum and classical hardware . one technology demanding imminent efforts is the realization of a suitable wiring method for the control and measurement of a large number of qubits . in this work , we introduce an interconnect solution for solid - state qubits : _ the quantum socket_. the quantum socket fully exploits the third dimension to connect classical electronics to qubits with higher density and better performance than two - dimensional methods based on wire bonding . the quantum socket is based on spring - mounted micro wires _ the three - dimensional wires _ that push directly on a micro - fabricated chip , making electrical contact . a small wire cross section ( ) , nearly non - magnetic components , and functionality at low temperatures make the quantum socket ideal to operate solid - state qubits . the wires have a coaxial geometry and operate over a frequency range from dc to , with a contact resistance of , an impedance mismatch of , and minimal crosstalk . as a proof of principle , we fabricated and used a quantum socket to measure superconducting resonators at a temperature of . quantum error correction codes such as the surface code will largely benefit from the quantum socket , which will make it possible to address qubits located on a two - dimensional lattice . the present implementation of the socket can be readily extended to accommodate a quantum processor with a qubit lattice , which would allow the realization of a simple quantum memory . |
murphy s law is not a law in the formal sense ; instead it is an interesting satirical statement : _ if something can go wrong then it will definitely go wrong _ .the origin of the statement is unknown ( even murphy himself ! ) , yet it is not very hard to find it relevant in various situations , either in its rudimentary form or in the form of some derivative .popular science often relates this statement to the second law of thermodynamics , which states that the entropy of the universe will always increase with time .the analogy between these two seemingly very dissimilar statements seems to crop up from the idea that left to themselves , things tend to get disorganized with time and murphy comes into the picture , albeit in a modified form , stating that bad will eventually turn worse . in the realm of philosophy these statements may seem connected , however from a scientific viewpoint they are clearly unrelated , as ( _ i _ ) one is a universal law , whereas the other , a rather ambiguous statement ; ( _ ii _ ) the second - law strictly talks about energy dissipation in physical processes and their irreversibility , whereas murphy s statement ( ` law ' is a misnomer in this case ) is just a popular imagination , and does not even remotely relate to any of the physical or mathematical concepts to rely upon .surprisingly , given these discrepancies , we still find it very interesting when the law is formulated as a mathematically equivalent statement . in this paper, we try to _ actually _ relate murphy s statement with the second law of thermodynamics , and draw inspirations from energy and entropy .we check the correctness of the statement from a scientific standpoint , by first constructing a mathematically equivalent statement using concepts from mathematical logic and probability theory .next , we understand the physical essence of the terms ` rightness ' and ` wrongness ' from the context of energy and time , using the second law and principle of least action . finally , we relate our findings to check the truthfulness of murphy s statement .we observe that what initially seemed to be completely unrelated statements - the second law and murphy s statement - _ indeed _ have some interesting tales to tell .murphy s statement is more of a psychological statement having philosophical connotations of ` rightness ' and ` wrongness ' .a statement like this is not only hard to evaluate but even more challenging to prove ( or disprove ) .the first step in any proof strategy lies in the rigorous formulation of the statement which is under scrutiny , and hence it is necessary that we try to formulate this rather anecdotal philosophy into a logical one .doing so lets us set limits to the premise of the statement , as well as to its domain of application .however , in order to proceed we need to make some underlying assumptions .the idea of using propositional logic to build a mathematically coherent statement , even before attempting to prove or disprove it , is a necessity .whereas , assigning an additional element of probability to the statement(s ) is actually non - intuitive , because ( _ i _ ) murphy s speaks about occurrence of events and in mathematics , occurrence is always associated with _ chance _ ; ( _ ii _ ) the presence of the intensifier , _ ` definitely ' _ , in the linguistic construction of the statement itself , further strengthens our intuition of associating the statement(s ) with finite values of probability . 
logically speaking , murphy s statement can be broadly divided into two distinct components : * 1*. ( _ if _ ) something can go wrong , and * 2*. ( _ then _ ) it will definitely go wrong .in propositional logic , this can be viewed as a causal chain of statements , , where is the first component , the second , and causes .such an argument is known as _ modus ponnens _ which asserts that the occurrence of is dependent upon the occurrence of , or if happens , definitely happens. it can be represented mathematically as .it was noted earlier that the above statements have an inherent probability associated with them .the presence of an intensifier , ` definitely ' further affirms the occurrence of the latter half of the statement , irrespective of the magnitude of the chances of occurring of the former half .mathematically , if the probability of occurrence of is _ finite _ ( remember , the probability that occurs is also finite ) then occurs _ almost surely _ , _i.e. _ , with probability one ( note that , even though we are interested in the happening of something which is not desired ) . what we earlier stated as a _modus ponnens _ , interestingly transforms into _ modus tollens _ once we associate finite probabilities to the occurrence of the events ( _ modus ponnens _ is the law of affirming by affirming , therefore we affirm in order to justify , and murphy s statement affirms by associating finite chances of occurrence of ) .according to the law of contrapositive : ( _ modus tollens _ ) , , or if probability of occurrence of is _ finite _ then occurs _ almost surely _ and did not occur _ almost surely _ enforces the fact that did not have a finite chance of occurring in the first place , or .this is the form of murphy s statement that we will adhere to throughout this paper .also , in the following sections of this paper , we will analyze murphy s statement using fundamental laws of physics , the least action principle and the second law of thermodynamics .it may seem puzzling at first as to how energy and entropy come into the picture , but a deeper inspection will reveal the intimate connection that murphy s statement holds with respect to a system , process and entropy . the connection between murphy s statement and thermodynamics is the central theme of this paper . as proposed earlier , when we talk about murphy s statement we refer to the probability of occurrence of any event in space and time .we thus make a transition from a system - theoretic perspective to a process - theoretic one , and once we do that we inevitably include the second law into the picture .where , thermodynamics is a branch of science dealing with energy interactions with systems , energy conversion and energy degradation ; it has also given birth to one of the most fundamental laws of nature , known as the second law or the entropy principle .the second law deals with processes in nature , and acts as a standard test to justify the occurrence or nonoccurrence of any phenomenon .every process ( phenomenon ) in nature can be visualized as a flow of energy . when this flow is spontaneous , _ i.e. _ , the driving force behind the flow is the presence of gradients ( like , energy , temperature , pressure , concentration , etc . 
) , then such a flow is realized with an equivalent increase in entropy .the flow of energy during any arbitrary process when coupled with the dimension of time gives rise to the notion of _ action _ , formally represented as , where the pair , is the generalized position - momentum pair , is the time , is the change in energy along a specific path ( in this case ) , and are the kinetic and potential energies respectively and , the lagrangian .there are several interpretations of the action principle , for example , the laws of physics when formulated in terms of the action principle identify any natural process as the one in which energy differences are leveled off in the least possible time ( maupertius formulation ) ; among all existing possible paths , the path of least action is the one along which a natural process must proceed ( hamiltonian formulation ) . on the energy landscape , there is a unique trajectory that directs ( energy ) flows along the lines of steepest descent .a classical example of this principle can be observed in case of the brachistochrone problem .the principle of least action , thus dictates the existence of a unique path out of numerous possible trajectories ( see figure 1 ) . while some are thermodynamically feasible , others are not . from a thermodynamic perspective decrease in actionis equivalent to an increase in entropy ( we restrict ourselves to non self - organizing systems and processes ) , or along a trajectory , both the quantities can be represented along mutually orthogonal directions but with opposite signs , .murphy s statement vaguely mentions processes , but stresses more on their qualitative aspect of ` rightness ' and ` wrongness ' .it does not talk about the system , per se , undergoing the process , nor does it go into the details of the mechanism underlying the process ; it basically concentrates only on the qualitative aspects of the outcomes .in order to test the correctness of the statement , it is absolutely necessary to look into the details of the process , as focusing on the finer details of the process will enable us to justify the outcomes .a deeper inspection is needed when we talk about processes and trajectories ( pathways to achieve a particular outcome during a process ) .when we talk about murphy s statement , we must invariably refer to processes and the trajectories ( pathways ) along which these processes occur ( flow ) . in order to model a physical phenomenon into a thermodynamic one , we must explicitly distinguish between a system , surroundings and the set of processes that the system undergoes . 
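the expressions for the action and the lagrangian referred to earlier in this passage appear to have been lost in extraction ; the standard definitions being described are restated below , with the usual ( assumed ) symbols .

```latex
\[
  \mathcal{S} \;=\; \int_{t_1}^{t_2} L\bigl(q,\dot{q},t\bigr)\,\mathrm{d}t ,
  \qquad
  L \;=\; T - V ,
\]
% with (q, p) the generalized position--momentum pair; the abbreviated
% (maupertuis) form of the action reads  \int p \,\mathrm{d}q .
% the qualitative claim made above, that a decrease in action accompanies an
% increase in entropy along a spontaneous trajectory, can be written in a
% hedged shorthand as  \mathrm{d}\mathcal{S} < 0 \;\leftrightarrow\; \mathrm{d}S_{\mathrm{ent}} > 0 .
```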
in the algebraic formulation of variational thermodynamics ,these terms are defined in a detailed , mathematically rigorous yet an abstract fashion .roughly speaking , a system is a set of all those elements which are of current interest , and there exists a boundary that separates the system from its surroundings , where surroundings ( relative to the system ) refer to everything else in the universe except the system and the system boundary .a process is a change of state of a subset of system elements or the system itself in time .mathematically , a process can be represented as a map that captures the changes incurred by the system or a subset of a system .we define a system undergoing a process by a tuple , ( ) , where the elements of are the system elements / constituents / agents , and the elements of are processes .we identify an event as a _ spontaneous _ process in space - time if it proceeds along the path that reduces action and increases entropy .the principle of least action is that tool which helps us in finding that particular trajectory out of a set of uncountable possibilities along which an event must proceed . using the standard notation, we define action for an arbitrary element in a system undergoing an arbitrary process , as , where , the set of all system elements in , and , the set of all arbitrary processes , . for any arbitrary process , there exist numerous possibilities or trajectories , .we define the set to be the set of all such trajectories , for the corresponding process , and ( , , and are index sets ) .it is now crucial to define what exactly is meant by ` right ' and ` wrong ' events as this is the central idea of murphy s statement .we define two subsets of trajectories , and , such that both .the trajectories which are thermodynamically feasible form the set , , and the ones which are not , form the set , . therefore , the elements of are either labeled by or ( for representation purpose only ) .the right trajectories are denoted by , and the wrong ones by . clearly , the subsets , and partition the set , , so and ( see figure 2 ) .it is not very hard to see that the tuple , ( ) forms a -algebra be a collection of sets of , then ( , ) , ( , ) , , , and finally and .so , ( ) is a -algebra . ] where is a collection of sets of . we can define a probability measure , , such that and , then the triple , ( ) forms a probability space . , and the action ( for particle ) undergoing process along trajectory , .,scaledwidth=40.0% ]as stated above , every element in is a possible trajectory , and it is not surprising that the set is unbounded because there exist infinitely many possible trajectories or sequence of trajectories for a given outcome ( feynman s path integral formulation argument ) .furthermore , we associate some _ finite _ ( however small or large ) probability to every trajectory ( trajectories are basically the worldlines of a particular outcome ) .thus , without any loss of generality , we can assume that the set , , is _ countably _ infinite , whereas the set , , is finite .it is even more interesting to note that the set , is , in fact , _uncountable_. since the elements of the set represent all those trajectories which are thermodynamically infeasible , we can make any arbitrary combination of ( or a mix of and , because it does nt matter if a process is thermodynamically feasible in a later stage if it is infeasible in the very first instant ! ) , and always come up with a new infeasible trajectory , say . 
as the set is infinite , we come up with an argument similar to cantor s diagonalization argument for uncountability of reals , and prove that is uncountable because for every increment in , we can always find a new element , based on the combination of the diagonal elements ( in this case , ) and this will happen infinitely many times , _i.e. _ , for the physical explanation of the above argument , we consider the case of a chemical reaction where substrates are converted into products . in order that the reaction is thermodynamically feasible, a minimum energy barrier ( energy hill ) called activation energy , , has to be surpassed .those trajectories along which this energy threshold is successfully realized can be visualized as the ` right ' trajectories and vice - versa .even the so called ` right ' trajectories have further thermodynamic constraints , so basically , we can refine our intuition and come up with a finite number of possibilities , or maybe in some exceptional cases , _ atmost _ countable ` right ' possibilities and _ always _ uncountable ` wrong ' possibilities .let be an action map such that , , in this case , , and ] .let be a collection of subsets of , then the tuple forms a -algebra .the interval , ] , and would be countable in number , whereas the ` wrong ' trajectories would cluster around $ ] , and would be uncountably many .the subset , in the transformed sample space , is denoted by .similarly , the subset is denoted by , and and . on assigning the probability measure to the sets , and , we observe , since is countable , and , as is uncountable .what we just proved above is that no matter what , the probability that a ` wrong ' trajectory is chosen is one , whereas the probability of choosing a ` right ' trajectory is zero , which in fact , supports murphy s statement ! and .,scaledwidth=70.0% ] there is a fallacy in the above argument .when we say there is a finite probability of any event to happen in some way , there is _ also _ a finite probability that the event might happen in some other way , that we and also murphy s statement ignore . as with any natural process , it can happen in infinitely many ways ( as seen above ) , and the _ few _ so - called ` correct ' trajectories get ` lost ' in the multitude of the uncountably many ` wrong ' ones , in spite of having finite chances of occurring .we define two sequences , and .the sequence , lies in the neighborhood of , and the sequence , , in the neighborhood of ( and ) . from our initial assumption , and both finite for _ every _ in the respective sequences , and . based on our definition for ` right ' and ` wrong ' events , the terms in the sequences, and will again be infinitely many .following our intuitive idea , we can easily foresee the probability , for every element in the sequence , takes the form , , for every ( is not a distribution function , strictly because for some processes , the probability might decay , as , while for others , as , and so on ) .we can see that however small the probability for any process to occur along a specific trajectory , be , it is finite .since the probability for every trajectory is decaying with an increment in , the summation of the probabilities , ( because as we go nearer to the trajectories in the neighborhood of , their probability will decrease at an even faster rate , making the probability of the sum of infinitely many such trajectories finite ) . 
from the borel - cantelli lemma, we can say that the probability of occurrence of these trajectories infinitely many times will be zero , and there are infinitely many of them .so , the probability of their occurrence is zero .conversely , for the sequence , , the probability distribution takes the form of , where for some , while the probability is _ exactly _ for finitely many trajectories .thus , the summation of probabilities in this case , diverges because diverges . from the converse of the lemma ,the probability of occurrence of these trajectories infinitely often will be one , and there are infinitely many of them and that such a trajectory ( from the set , ) will be chosen _almost surely_. we reflect on our findings based on the logic behind murphy s statement , and observe that although ` wrong ' trajectories have finite probability , they do not occur , which is a contradiction to the proposition that , ( with finite probabilities ) .murphy s statement often serves as an anecdote in many real - life circumstances . due to its affinity towards the negative aspects of any situation ,it has often been misinterpreted to hold a resemblance with the second law of thermodynamics . in this paper, we actually tried to relate both these statements and found quite striking results . as we mentioned earlier, murphy s statement holds several philosophical connotation , s and appears as a generalized take on various things happening around us all the time . in order to test its scientific validity, we need to construct it as a chain of mathematically consistent statements and derive some logical inference from them .any proof is invalid if the statements to be proved or disproved are themselves logically incoherent and mathematically not sound .once this is achieved ( in section 2.1 ) , we need to outline a proper proof strategy .this is where the complexity associated with murphy s statement increases , the reasons being : ( _ i _ ) it deals with physical , observable processes , ( _ ii _ ) the outcomes of the processes could be anything , and depending upon external perturbations the outcomes and even the processes may change their course of action , and ( _ iii _ ) every outcome is plausible , _ i.e. _ , every process has a finite chance of occurring and along numerous pathways leading to numerous finite possible outcomes . in order to deal with these uncertainties , we made assumptions like : any process or any outcome always has some energy associated with it , and any observable outcome can be achieved in several ways , which we call trajectories or pathways . in order to deal with a complex situation like this , we look for inspiration in the most fundamental laws of nature , and thereby propose that those outcomes , which are thermodynamically feasible shall be achieved along those trajectories that minimize action and maximize entropy . using mathematical tools from probability theory and logic , we claim to disprove murphy s statement in this paper . 
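for completeness , the two directions of the borel - cantelli lemma invoked in the argument above are restated here ; this is textbook material , not notation recovered from the source .

```latex
\[
  \sum_{n=1}^{\infty} \Pr(A_n) < \infty
  \;\Longrightarrow\;
  \Pr\Bigl(\limsup_{n\to\infty} A_n\Bigr) = 0 ,
\]
\[
  \sum_{n=1}^{\infty} \Pr(A_n) = \infty
  \;\text{ with the } A_n \text{ independent}
  \;\Longrightarrow\;
  \Pr\Bigl(\limsup_{n\to\infty} A_n\Bigr) = 1 .
\]
% the second (converse) direction requires independence of the events, an
% assumption the argument above appears to use implicitly.
```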
during the course of disproving the statement , we clearly showed the fallacy which is often the mis - interpreted truth behind the statement ( section 3.1 ) .we saw that there always exists an equivalent finite chance of an event to occur in different ways when it occurs in a certain way , which murphy s statement generally ignores .we would further like to add that the intensifier , ` definitely ' in the construction of murphy s statement enforces the fact that if something can go wrong , it will go wrong for sure .but we have seen earlier that the probability of something happening along the wrong way , as well as along the right way is finite , which raises a situation of _ post - hoc _ fallacy in the statement .this type of logical fallacy is known as _ post - hoc ergo propter hoc _ .a similar situation is observed in the case of the law of large numbers and the central limit theorem . according to the law of large numbers ( lln ) , given sufficiently large number of trials , the average of the results thus obtained , shall converge ( almost surely ) to the expected value or mean of the experiment .similarly , the central limit theorem ( clt ) states that the arithmetic mean of a sufficiently large number of iterates of independent random variables ( each with a well - defined mean and variance ) will be approximately normally distributed , regardless of the underlying distribution .both clt and lln seem to miss out on a crucial piece of information , _i.e _ , the information of the _ individual _ outcomes or the _ individual _ distributions respectively !thus , murphy s statement seems to be a far - fetched generalization of clt and lln in daily life .however , we must not forget that murphy s statement has a physical essence associated with it ( _ process _ ) , and therefore needs to be analyzed on a different scale .since it also deals with large number of outcomes and events , a statistical approach does sound good but a statistical formulation often fails to capture certain fine - tuned characteristics of the system . our methodology , thus holds strong promises in the study of complex systems , where every motion of a particle counts and little perturbations may drive the system from an organized state to a state of chaos . during the formulation of murphy s statement, we also focused more on the process and the trajectories , and less on the system and its constituting elements .it is not necessary for the system or a system element to be a physical particle , rather it is immaterial and irrespective of the physical definition ( or topology ) of the system for our reasoning to hold . by a process, we refer to the change in state of the system , which is the same as the change in outcome(s ) due to the system undergoing a process in time .as far as our assumption - any process is associated with energy flows on the energy landscape - holds good , the central idea of the paper will hold true .further , we mathematically prove that nature orders the occurrence of processes and their outcomes based on energy and entropy considerations . 
thus a reinterpretation of murphy s statement would read as : irrespective of all the possible outcomes and their qualitative aspects of ` rightness ' and ` wrongness ' , anything that shall happen will definitely happen along the way that minimizes action .the author would like to thank the reviewers for their valuable feedback highlighting the merits and drawbacks of this paper .their constructive criticism and suggestions have made this paper much more informative and interesting to read .the author would also like to thank his colleagues , sheetal surana and catherine nipps , for their inputs and lengthy discussions on the topic of this paper . | murphy s law is not a law in the formal sense , yet popular science often compares it with the second law of thermodynamics as both statements point toward a more disorganized state with time . in this paper , we first construct a mathematically equivalent statement for murphy s law and then disprove it using the intuitive idea that energy differences will level off along the paths of steepest descent , or along trajectories of least action . * pacs * : 89.20.-a , 89.75.-k , 05.70.-a , 02.30.-f + * keywords * : murphy s law , principle of least action , second law of thermodynamics , probability measure |
the evolutionary prisoner s dilemma game ( pdg ) and the snowdrift game ( sg ) have become standard paradigms for studying the possible emergence of cooperative phenomena in a competitive setting .physicists find such emergent phenomena fascinating , as similar cooperative effects are also found in interacting systems in physics that can be described by some minimal models , e.g. models of interacting spin systems .these games are also essential in the understanding of coexistence of ( and competition between ) egoistic and altruistic behavior that appear in many complex systems in biology , sociology and economics. the basic pdg consists of two players deciding simultaneously whether to cooperate ( c ) or to defect ( d ) .if one plays c and the other plays d , the cooperator pays a cost of while the defector receives the highest payoff ( ) .if both play c , each player receives a payoff of .if both play d , the payoff is .thus , the pdg is characterized by the ordering of the four payoffs , with . in a single round of the game , it is obvious that defection is a better action in a fully connected ( well - fixed ) population , regardless of the opponents decisions .modifications on the basic pdg are , therefore , proposed in order to induce cooperations and to explain the wide - spread cooperative behavior observed in the real world .these modifications include , for example , the iterated pdg , spatially extended pdg and games with a third strategy .the snowdrift game ( sg ) , which is equivalent to the hawk - dove or chicken game , is a model somewhat favorable for cooperation .it is best introduced using the following scenario .consider two drivers hurrying home in opposite directions on a road blocked by a snowdrift .each driver has two possible actions to shovel the snowdrift ( cooperate ( c ) ) or not to do anything ( not - to - cooperate or defect " ( d ) ) .if they cooperate , they could be back home earlier and each will get a reward of . shovelling is a laborious job with a total cost of .thus , each driver gets a net reward of .if both drivers take action d , they both get stuck , and each gets a reward of .if only one driver takes action c and shovels the snowdrift , then both drivers can also go home .the driver taking action d ( not to shovel ) gets home without doing anything and hence gets a payoff , while the driver taking action c gets a sucker " payoff of .the sg refers to the case when , leading to the ranking of the payoffs .this ordering of the payoffs _ defines _ the sg .therefore , both the pdg and sg are defined by a payoff matrix of the form and they differ only in the ordering of and .it is this difference that makes cooperators persist more easily in the sg than in the pdg . in a well - mixed population ,cooperators and detectors coexist . due to the difficulty in measuring payoffs and the ordering of the payoffs accurately in real world situations where game theory is applicable , the sd has been taken to be a possible alternative to the pdg in studying emerging cooperative phenomena .the present work will focus on two aspects of current interest . 
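the payoff values in the description above were evidently stripped during extraction ; for reference , the parametrization commonly used for these two games is sketched below , where the letters and example entries are assumptions rather than values recovered from the source .

```latex
% payoff to the row player:
\[
  \begin{array}{c|cc}
       & \mathrm{C} & \mathrm{D} \\ \hline
  \mathrm{C} & R & S \\
  \mathrm{D} & T & P
  \end{array}
\]
% prisoner's dilemma:  T > R > P > S ,  e.g.  R = b - c,\; T = b,\; S = -c,\; P = 0
% snowdrift game:      T > R > S > P ,  e.g.  R = b - c/2,\; T = b,\; S = b - c,\; P = 0 ,
%                      with  b > c > 0 .
```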
in many circumstances ,the connections in a competing population are better modelled by some networks providing limited interactions than a fully - connected network .previous studies showed that different spatial structures might lead to different behaviors .for example , it has been demonstrated that spatial structures would promote cooperation in the pdg , but would suppress cooperation in the sg .there are other variations on the sg that resulted in improved cooperation . here, we explore the effects of an underlying network on the evolutionary sg in a population in which there exists an additional type of players .the latter is related to the fact that real - world systems usually consist of people who would adopt a strategy other than just c and d. for example , there may be people who do not like to participate in the competition and would rather take a small but fixed payoff ._ studied the effects of the presence of such persons , called loners , in a generalization of the pdg called the public goods game(pgg ) .motivated by these works of hauert _ et al . _ , we study the effects of risk averse loners in the evolutionary sg . in our model , evolution or adaptation is built in by allowing players to replace his character or strategy by that of a better - performing connected neighbor .we focus on both the steady state and the dynamics , and study how an underlying network structure affects the emergence of cooperation .it is found that in a fully - connected network , the c - players and d - players _ can not _ coexist with the loners . in a square lattice , however , cooperators are easier to survive . depending on the payoffs , there are situations in which c - players , d - players and loners can coexist . in sec . [ sec : model ] , the evolutionary sg with loners in a population with connections is presented . in sec .[ sec : simulation results and discussions ] , we present detailed numerical results in fully - connected networks and in square lattices , and discuss the physics of the observed features .the effects of noise are also discussed .we summarize our results in sec . [sec : conclusion ] .we consider an evolutionary snowdrift game in which the competitions between players are characterized by the payoff matrix here , each player takes on one of three possible characters or strategies : to cooperate ( c ) , to defect ( d ) , or to act as a loner ( l ) .the matrix element gives the payoff to a player using a strategy listed in the left hand column when the opponent uses a strategy in the top row . in the basic sg , it is useful to assign so that the payoffs can be characterized by a single parameter representing the cost - to - reward ratio . in terms of , we have , , , and . a competition involvinga loner leads to a payoff for both players . here, we explore the range of .spatial networking effects and evolutions are incorporated into the sg as follows . at the beginning of the game ,the players are arranged onto the nodes of a network and the character of each player is assigned randomly among the choices of c , d , and l. our discussion will be mainly on fully - connected graphs and regular lattices . in a fully - connected network ,every player is connected to all other players . in a square lattice ,a player is linked only to his four nearest neighbors .numerical studies are carried out using monte carlo simulations as reported in the work of szab _ et al . _ ( see also refs .the evolution of the character of the players is governed by the following dynamics . 
at any time during the game , each player competes with all the players that he is linked to and hence has a payoff . a randomly chosen player reassesses his own strategy by comparing his payoff with the payoff of a randomly chosen connected neighbor . with probability = \frac{1}{1 + \exp\left([p(i ) - p(j)]/k\right)},\ ] ] the player adopts the strategy of player .otherwise , the strategy of player remains unchanged . here is a noise parameter that determines the likelihood that player replaces his strategy when he meets someone with a higher payoff . for ,a player is almost certain to replace ( not to replace ) his strategy when he meets someone with a better ( worse ) payoff . for large , a player has a probability of to replace his strategy , regardless of whether is better or worse than . in a fully connected network ,a player s character may be replaced by any player in the system . in a square lattice ,a player s character can only be replaced by one of his four connected neighbors .as the game evolves , the fractions of players with the three characters also evolve .these fractions are referred to as frequencies .depending on the parameters and , the cooperator frequency , defector frequency , and loner frequency take on different values in the long time limit .we performed detailed numerical studies on our model . the number of players in the system is taken to be . in getting the fraction of players of different characters in the long time limit, we typically average over the results of monte - carlo time steps per site ( mcs ) , after allowing mcs for the system to reach the long time limit .averages are also taken over 10 initial configurations of the same set of parameters .figure 1 shows the results for fully connected networks .a value of is taken .the cooperator frequency , defector frequency , and loner frequency are obtained as a function of the cost - to - benefit ratio for three different values of the loner s payoff , , and . in the absence of loners , and in a fully connected network . from figure 1 , the loners extinct for a range of values of in which the behavior is identical to the basic sg . for ,the loners invade the whole population and both cooperators and defectors disappear .this is similar to the results in the pdg and in the pgg . in a fully connected network ,the three characters _ can not _ coexist .this is in sharp contrast to the rock - scissors - paper game on a fully connected network in which the three strategies coexist .we obtained numerically .the result is shown in figure 1(d ) as a curve in the - parameter space .it is found that follows the functional form , which will be explained later .the curve represents a phase boundary that separates the - space into two regions .the region below ( above ) the curve corresponds to a phase in which cooperators and defectors ( only loners ) coexist ( exist ) .we also studied the temporal evolution in both phases , i.e. , for and .taking , for example , .figure 2 shows , and in the first mcs .the initial frequencies are for all three characters . for values of deep into either phase ( see fig .2 ) , the transient behavior dies off rapidly and the extinct character typically vanishes after mcs . 
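a minimal python sketch of the dynamics just described is given below ; the replacement probability is the fermi - type rule quoted in the text , while the lattice size , the single - parameter payoff parametrization ( cost - to - reward ratio r plus loner payoff sigma ) and all parameter values are illustrative assumptions rather than values from the paper .

```python
# minimal sketch of the evolutionary snowdrift game with loners on a square
# lattice; parameter values below are illustrative, not taken from the paper.
import numpy as np

L = 50                          # linear lattice size (assumed)
r, sigma, K = 0.4, 0.3, 0.1     # cost-to-reward ratio, loner payoff, noise (assumed)
steps = 100 * L * L             # number of monte carlo attempts (assumed)

# payoff matrix with rows/cols ordered (C, D, Loner); entries follow the
# single-parameter snowdrift parametrization assumed above
PAY = np.array([[1.0, 1.0 - r, sigma],
                [1.0 + r, 0.0, sigma],
                [sigma, sigma, sigma]])

rng = np.random.default_rng(0)
S = rng.integers(0, 3, size=(L, L))   # random initial strategies

def payoff(S, x, y):
    """Total payoff of site (x, y) against its four nearest neighbours."""
    s = S[x, y]
    total = 0.0
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        total += PAY[s, S[(x + dx) % L, (y + dy) % L]]
    return total

for _ in range(steps):
    # pick a random player i and a random connected neighbour j
    x, y = rng.integers(0, L, size=2)
    dx, dy = [(1, 0), (-1, 0), (0, 1), (0, -1)][rng.integers(0, 4)]
    xn, yn = (x + dx) % L, (y + dy) % L
    pi, pj = payoff(S, x, y), payoff(S, xn, yn)
    # fermi-type replacement probability W = 1 / (1 + exp[(P(i) - P(j)) / K])
    if rng.random() < 1.0 / (1.0 + np.exp((pi - pj) / K)):
        S[x, y] = S[xn, yn]

freqs = np.bincount(S.ravel(), minlength=3) / S.size
print("f_C, f_D, f_L =", freqs)
```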
in the phase where c and d coexist , and oscillate slightly with time in the long time limit , due to the dynamical nature of the game .it is noted that for , the strategies compete for a long while and the transient behavior lasts for a long time .this slowing down behavior is typical of that near a transition .the behavior of follows from the rule of character evolution . in a fully - connected network ,all c - players have the _ same _ payoff and all d - players have the _ same _ payoff .these payoffs depend on , , and at each time step .the payoff for a loner is at all time , for a system with . for small , decays exponentially with time if and are both greater than .in addition , the phase with only non - vanishing and is achieved by having . for this phase in the long time limit , and . together with ( since in the phase under consideration ) , the condition implies and .these results are identical to the basic sg ( without loners ) in a fully connected network .the validity of this solution requires ( and hence ) , which is equivalent to .this is exactly the phase boundary shown in figure 1(d ) .the behavior of the game in a square lattice is expected to be quite different , due to the restriction that a player can only compete with his connected neighbors .we carried out simulations on square lattices with periodic boundary conditions .figure 3(a)-(c ) shows , and for three different values of the loner payoff .the results for the spatial sg ( without loners ) on a square lattice is also shown ( solid lines in figure 3(a ) and 3(b ) ) for comparison .a value is used .several features should be noted . for ,the loners eventually vanish with and take on the mean values in the spatial sg without loners .this behavior is similar to that in fully connected networks . for , however , the behavior is different from that in fully connected networks . here ,c , d , and l characters coexist . above , drops with to a finite value , leaving rooms for to increase with .the cooperator frequency remains finite above .therefore , the cooperator frequency or the cooperative level in the system as a whole is significantly _ improved _ by the presence of loners . for , increasing the payoff of loners leads to a higher cooperator frequency and lower defector frequency . reading out for different values of , we get the phase boundary as shown in figure 3(d ) that separates a region characterized by the coexistence of three characters and a region in which only c and d coexist .the results indicate that , due to the restriction imposed by the spatial geometry that a player can only interact with his four nearest neighbors , it takes a certain non - vanishing value of for loners to survive even in the limit of .the behavior is therefore different from that in a fully connected network for which the boundary is given by .note that there exists a region of small values of in which the steady state consists of a uniform population of c strategy ( see fig .3(a ) and fig .3(d ) ) . for small ,loners are easier to survive , when compared with the fully connected case . 
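the fully - connected boundary referred to above survives only with its formulas stripped ; a hedged reconstruction of the algebra , using the single - parameter payoffs assumed in the code sketch above , is given below . it reproduces the standard well - mixed snowdrift equilibrium and suggests one plausible form of the boundary ; the expression actually quoted in the source may differ .

```latex
% per-neighbour payoffs in a well-mixed state with loner frequency f_L = 0:
\[
  P_{\mathrm{C}} = f_{\mathrm{C}} \cdot 1 + f_{\mathrm{D}}(1 - r) ,
  \qquad
  P_{\mathrm{D}} = f_{\mathrm{C}}(1 + r) + f_{\mathrm{D}} \cdot 0 .
\]
% imposing P_C = P_D together with f_C + f_D = 1 gives the basic-SG result
\[
  f_{\mathrm{C}} = 1 - r , \qquad f_{\mathrm{D}} = r , \qquad
  P_{\mathrm{C}} = P_{\mathrm{D}} = (1 - r)(1 + r) = 1 - r^{2} .
\]
% this coexistence solution can resist invasion by loners (payoff \sigma per
% neighbour) only while \sigma < 1 - r^2, which would place the boundary at
\[
  \sigma_{c}(r) = 1 - r^{2}
  \quad\text{i.e.}\quad
  r_{c}(\sigma) = \sqrt{1 - \sigma}\, .
\]
% (a hedged reconstruction; the functional form stated in the source was lost
%  in extraction and need not coincide with this expression.)
```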
putting these results together , the phase diagram ( see fig .3(d ) ) for a square lattice , therefore , shows three different phases .the most striking effect of the spatial structure is that cooperators now exist in every phase .interestingly , we found that the phase boundary in figure 3(d ) can be described quantitatively as follows .we _ assume _ that the survival of loners is related to the cooperator frequency .in particular , loner survival requires the cooperator frequency to drop below a certain level and that this value is the same in a square lattice as in a fully connected network .that is to say , we assume that loners could survive , for a given value of and , only when . numerical results also indicate that when all loners extinct , and follow the results in a spatial sg without loners .this is shown as the solid line in figure 3(a ) . therefore ,for a given value of , we can simply read out the value of such that from the results in spatial sg in a square lattice . for different values of , this procedure results in the dashed line shown in figure 3(d ) which describes the phase boundary quite well .figure 4 shows the temporal dependence of , , and in a square lattice for two values of at . for ( fig .4(a ) ) , which corresponds to a case in which only cooperators and defectors coexist , the number of loners decay rapidly in time , typically within 100 mcs . after the transient behavior, the cooperator and defector frequencies only oscillate slightly about their mean values .this behavior is similar to that in the c and d coexistence phase in figure 1(d ) for fully - connected networks . for ( fig .4(b ) ) , which corresponds to a case with the three characters coexist , the long time behavior of , and is oscillatory .similar behavior has been found in the rock - scissors - paper game and in the voluntary pdg . due to the dynamical nature of character evolution, there are continuous replacements of one character by another and this oscillatory behavior is expected .the major difference between a square lattice and a fully - connected network is that in a fully - connected network , each player competes with all other players . as a result , there are only three payoffs in the system one for each type of player , at each time step .the loners , for example , have a constant payoff of , while the cooperators and defectors have payoffs that depend on and .once is higher than the payoffs of cooperators and defectors , the number of loners grows until they take over the whole population . in a square lattice, however , each player has a payoff that depends on his character _ and _ the detail of his neighborhood , i.e. , the characters of his four connected neighbors .this implies that the c - players and d - players in a square lattice may have different payoffs depending on the characters of his connected neighbors .the loners have a constant payoff of .the non - uniform payoffs among c - players and d - players in a lattice allow some c and d players to coexist with the loners , by evolving to spatial local configurations that favor their survivals . since the adaptive rule is related to the payoff of each character , it will be interesting to compare the payoffs in a spatial sg without and with loners .figure 5(a ) shows the mean payoffs of cooperators and defectors as a function of in a spatial sg in a square lattice _ without _ loners .the averaged payoff over all players is also shown . 
for small ,there is a phase with all c players and the payoff is 4 for each of the c players . for large , there is a phase with all d players and the payoff is zero . for intermediate where c and d players coexist ,the mean payoff drops gradually with . in a spatial sg _ with _loners ( fig .5(b ) ) , it is observed that the mean payoffs basically follow that in figure 5(a ) in the phase where loners are completely replaced .when loners can survive , the presence of these loners increases the payoffs of both the remaining cooperators and defectors .the loners themselves have a payoff of in a 2d square lattice .the cooperators payoff is enhanced once loners survive and the increase follows the same form as the increase in the loner frequency with ( compare the circles in fig .5(b ) with the squares in fig .3(c ) in the range of when loners survive ) .when loners survive , the payoff averaged over all players is significantly _ enhanced _ due to their presence .this is similar to what was found in the voluntary pdg .all the results reported so far are for the case of .this corresponds to a case where the player is highly likely to replace his character when he meets a better - performing player . in figure 6, we show the effects of the noise parameter for a fixed . as increases, the step - like structure in as a function of becomes less obvious and is gradually suppressed in the limit .the most important effect of a 2d square lattice is that each player is restricted to interact with his four neighbors .take a player of character , he will only encounter a finite number of configurations for which he is competing in .for example , his four neighbors may consist of 4 c - players ; 3 c - players and 1 d - player or 1-loner , etc .each of these configurations corresponds to a . in a square lattice , therefore , there will be a finite number of payoffs for a c - player , depending on the characters of the neighbors .similarly , there are finite number of payoffs for a d - player .the loners always get a payoff of .for , the adaptive mechanism is strictly governed by the ordering of these payoffs .the distribution of players in a square lattice will then evolve in time according to how the payoffs are ordered . in the long timelimit , only a few favorable local configurations will survive and the number of players in each of these favorable configurations is high . 
as one increases slightly , the ordering of the finite number of payoffs may not change .therefore , will not change with until we reach certain values of at which the ordering of the payoffs changes .this gives rise to the more sudden changes in as observed at some values of and it is the reason for having step - like features in and for small values of .as the noise parameter increases , the adaptive mechanism is less dependent on the exact ordering of the payoffs .therefore , the changes in with become more gradual as increases .interestingly , less obvious step - like structures in are also observed in the spatial sg without loners in 2d lattices with a larger coordination number .this is also related to the picture we just described .a lattice with more neighbors will give a higher number of neighborhood configurations and hence more values of the payoffs .more configurations also imply that the number of players encountering a certain configuration is smaller .thus , the number of players involved in a change in the ordering of the payoffs as changes is smaller .this has the effect of making the drop in gradual .therefore , increasing for a given fixed coordination number is similar in effect to increasing the coordination number for fixed .we studied the effects of the presence of loners in the snowdrift game in fully - connected networks and in square lattices . in a fully - connected network , either cooperators live with defectors or loners take over the whole population .the condition for loners to take over is found to be .this result can be understood by following the payoffs of each strategy . in a fully - connected network , the payoffs of the strategies are particularly simple in that they depend only on the strategy frequencies at the time under consideration , with each type of player having the same payoff .in a square lattice , the spatial sg with loners behaves quite differently .it is found that the cooperators can survive in the full parameter space covering and .depending on the values of these parameters , there are three possible phases : a uniform c - player population , c - players and d - players coexist , and coexistence of the three characters .the underlying lattice thus makes the survival of cooperators easier .the presence of loners is also found to promote the presence of cooperators .the average payoff among all players is also found to be enhanced in the presence of loners .we discussed the influence of a square lattice in terms of the payoffs of the players . in a square lattice , spatial restriction is imposed on the players in that a player can only interact with his four nearest neighbors .this leads to a payoff that depends not only on the character of the player but also on the local environment in which he is competing .the players in the local environment , in turn , are also competing in their own local environments .this leads to clustering or aggregation of players in the square lattice into configurations favored by the payoffs .the dependence of the frequencies on in a square lattice then reflects the change in preferred configurations as is changed .we also studied the effects of the noise parameter in the adaptive mechanism .it is found that as the noise parameter increases , the change of the frequencies with becomes more gradual .this is related to the importance of the ordering of the many payoffs in the adaptive mechanism .as the noise parameter increases , the exact ordering of the payoffs becomes less important and the change in the frequencies becomes more gradual . in closing , we note that it will be interesting to further investigate the effects of loners in the snowdrift game in networks of other structures . among them are the re - wiring of regular lattices into small - world or random networks , and scale - free networks .this work was supported in part by the national natural science foundation of china under grant nos .70471081 , 70371069 , and 10325520 , and by the scientific research foundation for the returned overseas chinese scholars , state education ministry of china .one of us ( p.m.h . ) acknowledges the support from the research grants council of the hong kong sar government under grant no .cuhk-401005 and from a direct research grant at cuhk . | the effects of an additional strategy or character called loner in the snowdrift game are studied in a well - mixed population or fully - connected network and in a square lattice . the snowdrift game , which is a possible alternative to the prisoner s dilemma game in studying cooperative phenomena in competing populations , consists of two types of strategies , c ( cooperators ) and d ( defectors ) . in a fully - connected network , it is found that either c lives with d or the loners take over the whole population . in a square lattice , three possible situations are found : a uniform c - population , c lives with d , and the coexistence of all three characters . the presence of loners is found to enhance cooperation in a square lattice by enhancing the payoff of cooperators . the results are discussed in terms of the effects of restricting a player to compete only with his nearest neighbors in a square lattice , as opposed to competing with all players in a fully - connected network . |
cliques are highly - interconnected subgraphs ( complete graphs ) , and appear dominantly in networks which describe wide - ranging complex systems occurring from the level of cells to society . and , the cliques are actively investigated in recent years because of provisions of important insights to information processing , hierarchical modularity , and community structures .for instance , in gene regulatory networks , small cliques correspond to the feed - forward loop which is one of the network motifs .the motifs play an important role in gene regulation , and are regarded as building blocks of life .furthermore , the cliques are a representation for clusters , communities , and groups because there are edges among persons as nodes if there are friendships , partnerships , and _ etc . _ among the persons in social networks . therefore , the cliques help to detect community structures in social networks . again , in protein - protein interaction networks , the cliques are powerful tools for understanding evolution of proteins and functional predictions of proteins having unknown function because proteins which have same functions tend to interact . motivated by these breakthroughs , recent efforts have taken place to analytically evaluate the abundance of subgraphs , including cliques , based on statistical mechanics , providing excellent knowledge about the local interaction patterns and the time evolution of the abundance of subgraphs including cliques .these previous works focus on the local information such as the subgraph and clique abundance , and the size of the giant components led by percolation via a class of subgraphs such as the subgraph percolation , the , and the clique percolations . in recent years, however , it has been revealed that real - world networks are constructed by overlapping subgraphs including cliques ; thus it is important to elucidate global structures in networks consisting of cliques .for example , dynamics of a high order emerge by the combined network motifs in gene regulatory networks .in particular , the several power - law statistical properties have been empirically found in real - world complex networks .one of the properties is scale - free connectivity which is characterized by a power - law degree distribution with empirically found .the scale - free connectivity means that a few nodes ( hubs ) integrate a great number of nodes and most of the remaining nodes do not .another of the properties is hierarchical modularity which is characterized by a power - law clustering spectrum with empirically found , and this property suggests a hierarchical structure of the cliques .a clustering spectrum is defined as an average clustering coefficient of nodes with degree , where the clustering coefficient means the density of edges among neighbors of a node .since these properties reflect a global structure of a network , it is significant to clarify relationships between these properties and the global structures of the combined cliques . 
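the exponents and defining relations in this paragraph were lost in extraction ; the standard definitions being referred to are restated below , with assumed symbols for the exponents .

```latex
\[
  P(k) \sim k^{-\gamma}
  \quad\text{(scale-free connectivity, typically } 2 \lesssim \gamma \lesssim 3\text{)},
  \qquad
  C(k) \sim k^{-\alpha}
  \quad\text{(hierarchical modularity, typically } \alpha \approx 1\text{)},
\]
% where the clustering coefficient of a node i with degree k_i and e_i edges
% among its neighbours is  C_i = 2 e_i / [ k_i (k_i - 1) ] , and the clustering
% spectrum C(k) is the average of C_i over nodes of degree k.
```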
in this paper, we propose the -clique network as a powerful tool for understanding global structures of combined highly - interconnected subgraphs .furthermore , we provide the theoretical predictions for well - known statistical properties of -clique networks embedded in a complex network using the degree distribution and the clustering spectrum , and evaluate our theoretical predictions with numerical simulations .the theoretical predictions are established by applying the statistical method in .moreover , we discuss relationships of statistical properties which are observed between several real - world networks and their -clique networks .-clique networks are represented as sets of nodes and edges which are contained in -node cliques , corresponding to -node complete graphs , embedded in an original network .figure [ fig : c_net ] shows a schematic diagram of -clique networks .the original network [ fig . [ fig : c_net ] ( a ) ] has two clique networks [ figs .[ fig : c_net ] ( b ) and ( c ) ] , and the clique networks are expressed as the circled black nodes with black edges .the gray nodes and edges are eliminated because the nodes and edges are affiliated with no cliques . following a procedure ,-clique networks are extracted from an original network .in addition , original networks are equivalent to -clique networks in the absence of isolated nodes corresponding to nodes which have no edges . in this paper , we assume that the original networks have no isolated nodes .we utilize the algorithm based on the network motif detection to find the cliques although finding clique abundance is computationally intractable ( np - hard ) , enumeration of -cliques in a given network can be done in polynomial time if is a constant .-clique networks embedded in the original network ( a ) .the -clique networks [ ( b ) and ( c ) ] are expressed as the circled black nodes with black edges . ]-clique networks embedded in the ba network with and ( shifted for clarity ) . means the average degree .the symbols correspond to the numerical results , and the dashed lines are theoretical predictions given by eq .( [ eq : p_n(k ) ] ) .the solid lines show . ]we consider degree distributions from -clique networks .the degree distribution is defined as the existence probability of nodes with degree which is the number of edges at a node in a -clique network .in addition , denotes the degree distribution from an original network because .in order to establish a theoretical prediction on the degree distribution of -clique networks , we propose an approximation method based on the statistical method in .we assume that the clustering spectrum corresponds to the probability that two neighbors of a node with degree ( ) are linked .first , we consider the probability that an edge on a node with degree is eliminated due to the extraction of -clique networks from an original network . for simplicity, we assume that the probability of an edge to be eliminated from a node is independent from the probability of another edge to be eliminated from the same node and the probability of the same edge to be eliminated from a neighbor . this assumption is a suitable approximation in the case of random graphs because the probability that there is an edge between two nodes is constant .we show that the approximation is also suitable in the case of arbitrary large - scale graphs ( networks ) for large with numerical simulations . 
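before the analytic treatment that follows , a short python sketch of the extraction procedure defined at the beginning of this section may be useful ; it relies on networkx s maximal - clique enumeration rather than the motif - detection algorithm cited in the text , and the graph sizes in the example are arbitrary assumptions .

```python
# sketch: build the n-clique network of a graph G, i.e. the subgraph formed by
# all nodes and edges that belong to at least one complete subgraph of n nodes.
import itertools
import networkx as nx

def n_clique_network(G, n=3):
    H = nx.Graph()
    for Q in nx.find_cliques(G):          # maximal cliques of G
        if len(Q) >= n:
            # every node and every edge inside a maximal clique of size >= n
            # belongs to some n-node clique, so keep them all
            H.add_nodes_from(Q)
            H.add_edges_from(itertools.combinations(Q, 2))
    return H

if __name__ == "__main__":
    # illustrative example on a barabasi-albert graph (sizes are assumptions)
    G = nx.barabasi_albert_graph(10000, 5, seed=1)
    H3 = n_clique_network(G, n=3)
    print("original:", G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
    print("3-clique:", H3.number_of_nodes(), "nodes,", H3.number_of_edges(), "edges")
```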
here , we focus on a subset which consists a node with degree , neighboring nodes and edges among these nodes. then , the edge on the node with degree belongs to -cliques which are formed with the probability , where .that is , the probability that the edge is not contained in one of -cliques is $ ] .since the edge is eliminated if the edge is contained in no -cliques , from the assimptation of independence , the probability can be written as next , we characterize the conditional probability that the degree shifts from to due to the extraction of -clique networks using the probability . the conditional probability can be expressed using the bimodal formula , and we have ^{k^{(n)}}\phi_n(k)^{(k - k^{(n)})}. \label{eq : phi(k|j)}\ ] ] the degree distribution from an -clique network is proportional to the sum of for .therefore , the degree distribution is finally described as where and correspond to the total number of nodes in an original network and in a -clique network , respectively . using and ,the total number of nodes in the -clique network can be estimated by .\label{eq : n_n}\ ] ] in order to confirm the theoretical predictions , we performed numerical simulations for the barabsi - albert ( ba ) network , which provides power - law degree distribution ; with the degree exponent .figure [ fig : deg_ba ] shows the degree distributions of -clique networks embedded in the ba network . as shown in fig .[ fig : deg_ba ] , our theoretical predictions are in good agreement with the numerical results , indicating that the approximation is suitable .in addition , the different degree distributions are observed between the -clique networks and the original network .the degree at a node shifts due to the extraction -clique networks from an original network . here , we consider the theoretical predictions for the shifts with the statistical properties from an original network . using the probability [ eq .( [ eq : phi(k ) ] ) ] that an edge is eliminated due to the extraction of -clique networks , the expectation value of the degree at a node in a -clique network can be written as .\label{eq : k'}\ ] ] the probability is dependent on the clustering spectrum as shown in eq .( [ eq : phi(k ) ] ) .since it is empirically found that the spectrum follows the power law in most complex networks , we assume the power - law spectrum ; hence .moreover , we use the feature of napier s number , for large , to rewrite the probability [ eq .( [ eq : phi(k ) ] ) ] . in doingsuch we have , \label{eq : phi(k)_2}\ ] ] where in particular , the probability is independent of the degree when , and the proportional relationship between and is satisfied . in order to confirm the theoretical prediction , we performed numerical simulations for the ba network .figure [ fig : chg_deg ] shows the shift of the degree at a node due to the extraction of the -clique networks . 
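several equations in the derivation above survive only as fragments ( e.g. the exponent pattern ^{k^{(n)}}\phi_n(k)^{(k - k^{(n ) } ) } ) ; a hedged latex reconstruction consistent with those fragments is given below . the exact form of the edge - elimination probability \phi_n(k ) in the source can not be recovered here and is left unspecified .

```latex
% conditional probability that a node of original degree k retains degree
% k^{(n)} after extraction, assuming each of its edges is removed
% independently with probability \Phi_n(k):
\[
  \Phi\bigl(k^{(n)} \mid k\bigr)
  = \binom{k}{k^{(n)}}
    \bigl[1 - \Phi_n(k)\bigr]^{k^{(n)}} \, \Phi_n(k)^{\,k - k^{(n)}} .
\]
% degree distribution of the n-clique network as a mixture over the original
% degree distribution P(k), with N and N_n the node counts before and after:
\[
  P_n\bigl(k^{(n)}\bigr)
  \;\approx\; \frac{N}{N_n} \sum_{k \ge k^{(n)}} \Phi\bigl(k^{(n)} \mid k\bigr)\, P(k) ,
  \qquad
  N_n \;\approx\; N \sum_{k} P(k)\,\bigl[1 - \Phi_n(k)^{k}\bigr] .
\]
% (hedged: the sum for P_n excludes k^{(n)} = 0, i.e. isolated nodes, which is
%  why the normalization N_n appears; the source's exact expressions may differ.)
```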
as shown in fig .[ fig : chg_deg ] , our theoretical prediction is in good agreement with the numerical results .figure [ fig : phi_k ] shows the probability which is obtained from the extraction of -clique networks .assume that , and are about 0.02 and 0.1 with least - square method , respectively .we give the theoretical prediction with these values .as shown in eq .( [ eq : phi(k)_2 ] ) , declines exponentially with , indicating that a degree of a high - degree node tends to stay .the prediction is in agreement with the numerical results .-clique networks from the ba network with and .the symbols correspond to the numerical results , and the dashed lines are given by eq .( [ eq : k ] ) .the solid lines show . ]is obtained from the extraction of -clique networks from the ba network with and .the symbols correspond to the numerical results , and the dashed lines are given by eq .( [ eq : phi(k)_2 ] ) . ] in the case of , however , the agreements are weak in fig .[ fig : chg_deg ] and [ fig : phi_k ] .there are two reasons .one is the assumption of independence . in scale - free network ,low - degree nodes tend to connect to high - degree nodes . as shown in fig .[ fig : phi_k ] , the probability that an edge on the high - degree node is eliminated is very small .for this reason , real for small tends to be smaller than eq .( [ eq : phi(k)_2 ] ) . therefore , real tends to be larger than our theoretical prediction .another is fluctuation in clustering spectra . in the case of scale - free networks ,the fluctuation is large for small , and is contrary small for large because of heterogeneous connectivity . and , the probability that a -clique is formed is described as .that is , the error increases with .therefore , our theoretical prediction tends to be in weak agreement in the case of large and small .the clustering spectrum of the ba network is independent of the degree .that is , . according to eq .( [ eq : d_n ] ) , we predict that the shifts of the degree follow the nonlinear relationship because of the nonzero ; for example , and . as shown in fig .[ fig : chg_deg ] , our prediction is in agreement with the numerical results ..network sizes , average degrees , and characteristic exponents of the investigated real - world networks and the ba network .the exponents and are extracted using the maximum likelihood estimation and the analytical approximation ; thus , respectively . [ cols="<,^,^,^,^,^,^",options="header " , ] -clique networks embedded in the investigated networks ( shifted for clarity ) .the solid lines show in the each main panel .the exponents are provided from table [ table : real_net ] , respectively .the each inset shows the shift of the degree due to the extraction of -clique network .in the each inset , the solid lines correspond to .( a ) internet ( as level ) , ( b ) metabolic network of _e. coli _ , and ( c ) protein - protein interaction network of yeast . ]we discuss statistical properties of -clique networks embedded in a network with power - law statistical properties . here , we focus on the scale - free connectivity which is one of the well - known power - law statistical properties and is defined as a power - law degree distribution , . in networks with scale - free connectivity , we predict that forms of the degree distributions are invariant between the 3-clique and the original network when .this is because the proportional relationship between the degrees at nodes in the original network and in the -clique networks , is satisfied under this condition . 
in order to verify our prediction, we investigate the degree distributions from -clique networks embedded in several real - world networks with scale - free connectivity : the autonomous system representation of the internet , the metabolic network of _ escherichia coli _ , and the protein - protein interaction network of yeast .these real - world networks have hierarchical modularity , indicating the power - law clustering spectra ; hence , with .in addition , we also consider the ba network , which does not have hierarchical modularity , for comparison .we summarize the networks size , the average degrees , and the exponents characterizing each network in table [ table : real_net ] .the exponents from the real - world networks with hierarchical modularity are almost one ( see also table [ table : real_net ] ) . therefore , we expect that the forms of the degree distributions are invariant between the 3-clique and the original network because .figure [ fig : real_deg ] shows the degree distributions of -clique networks embedded in the real - world networks. as expected , the forms of the degree distributions are invariant between the original and the 3-clique network because of the proportional relationship between and ( see the insets in fig .[ fig : real_deg ] ) .in contrast , the exponent from the ba network is equivalent to zero ( see also table [ table : real_net ] ) because of there is no hierarchical modularity .therefore we predict that the power - law degree distribution from an original network is variant due to the extraction of the -clique network ( because ) .figure [ fig : deg_ba ] shows the degree distributions of -clique networks embedded in ba networks . as expected , the form of the degree distribution is variant between the 3-clique network and the original network because of the nonlinear relationship between and ( fig .[ fig : chg_deg ] ) .in this paper , we have provided theoretical predictions using the approximation method for the degree distribution of a -clique network and the shifts of the degree due to the extraction of the -clique network .moreover , we performed numerical simulations and show that the numerical results are in good agreement with our theoretical predictions , indicating that the approximation method is suitable . furthermore , we have found that the power - law degree distributions are identical between the 3-clique and the original networks in the scale - free networks with hierarchical modularity using our theoretical predictions .we have only focused on the power - law degree distributions in this paper . however , because of the proportional relationship between and , the converse holds for the other power - law statistical properties which are observed in real - world networks : the hierarchical modularity and the assortativity .we have confirmed that the power - law statistical properties are invariant between the 3-clique networks and the original networks , although there is no space for the showing of the data .the invariance of the statistical properties implies that structural properties are identical between -clique and original networks .in addition , from these results , we expect that the -clique networks are constructed by the same mechanisms as the original networks with hierarchical modularity .in contrast , we have found that the 3-clique network embedded in the ba network which does not have hierarchical modularity has different statistical properties from the original network . 
that is , the structural properties are different between -clique and original networks in the ba network .we believe that these results provide new insights into global structures of combined network motifs , community structures in social and biological networks . in this paper , expressly , we found structural properties are identical between 3-clique networks and original networks .this lets us expect that 3-clique networks are constructed by the same design principles as the original networks with hierarchical modularity , and it implies that the clique networks help to understand design principles and global structures of combined significant subgraphs which reflect community and functional modules in networks .for example , it is believed that most real - world networks are constructed by the preferential attachment .because of a structural identity between 3-clique networks and original networks , we expect that the clique networks are also constructed by the same preferential attachment as the original networks .this mechanism suggests the preferential attachment of cliques .actually , it is reported that there is a preferential attachment of community in social networks . in biological networks , furthermore , cliques correspond to functional modules such as network motifs .in particular , 3-node clique , which denotes the network motifs such as the feedforward loop and so on , appears frequently . from our result , we expect that a network which consists of network motifs only is constructed by the same preferential attachment as an original network .if so , the motifs may concentrate on hubs .actually , the concentration of motifs has been found by the network analysis . in this manner, we believe that we can find new structural properties and new insights into design principles of networks via an analysis of clique networks . and , our theoretical predictions may help the analysis and its interpretation .in biological networks , especially , since it is difficult to discuss network formation processes because of no ancestral networks , we believe that the analysis help to understand design principles of networks .in addition , we may establish more realistic growing network models via the analysis .this work was partially supported by grant - iazn - aid no.18740237 from mext ( japan ) . | we propose the -clique network as a powerful tool for understanding global structures of combined highly - interconnected subgraphs , and provide theoretical predictions for statistical properties of the -clique networks embedded in a complex network using the degree distribution and the clustering spectrum . furthermore , using our theoretical predictions , we find that the statistical properties are invariant between -clique networks and original networks for several observable real - world networks with the scale - free connectivity and the hierarchical modularity . the result implies that structural properties are identical between the -clique networks and the original networks . cliques , scale - free networks , hierarchical modularity , real - world networks 89.75.hc , 89.75.da |
in solving many mathematical and physical problems by means of numerical methods one is often challenged to seek derivatives of various functions given in discrete points . in such cases , when it is difficult or impossible to take derivative of a function analytically one resorts to numerical differentiation .it should be noted that there exists a great deal of formulae and techniques of numerical differentiation ( see , for instance , ref . ) . as a rule, the function in question is replaced with the easy - to - calculate function and then it is approximately supposed that .the derivatives of higher orders are computed similarly .therefore , in order to obtain numerical value of the derivative of the considered function it is necessary to indicate correctly the interpolating function .if the values of the function are known in discrete points , the function is usually taken as the polynomial of power . to find the derivative of functions having the intervals both quick and slow variation quasi - uniform nets are used ( see ref .this method has an advantage since constant small mesh width is unfavorable in this case , because it leads to the strong enhancement of the function values table .the problem of the numerical differentiation accuracy is also of interest .the numerical differentiations formulae , taking into account the values of the considered function both at and ( is a point where the derivative is computed ) , are called central differentiation formulae . for instance , the formulae based on stirling interpolating polynomial can be included in this class .such formulae are known to have higher accuracy compared to the formulae , using unilateral values of a function , i.e. , for instance , at .the range of numerical differentiation formulae based on different interpolating polynomials is limited , as a rule , to finite points of interpolation .all available formulae known at the present moment are obtained for a certain concrete limited number of interpolation points ( see refs .it can be explained by the fact that the procedure of the finding of the interpolating polynomial coefficients in the case of the arbitrary number of interpolation points is quite awkward and requires formidable calculations .it is worth mentioning that the procedure of the numerical differentiation is incorrect .indeed , in ref . it was shown that it is possible to select such decreasing error of the function in question which results in the unlimited growth of the error in its first derivative .some recent publications devoted to the numerical differentiation problem should be mentioned ( see , e.g. , ref . ) . in this workthe finite difference formulae for real functions on one dimensional grids with arbitrary spacing were considered .the formulae of central differentiation for the finding of the first and the second derivatives of the functions given in discrete points are derived in this paper .the number of interpolation points is taken to be arbitrary .the obtained formulae for the derivatives calculation do not require direct construction of the interpolating polynomial . 
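as a quick way to reproduce the weight coefficients numerically , the sketch below solves the small linear system that follows from the taylor expansion of the symmetric differences , rather than evaluating the closed - form expressions derived in the paper ; the function names and the test function are illustrative choices . for one and two symmetric points it returns the familiar 3-point and 5-point weights 1/2 and ( 2/3 , -1/12 ) .
....
# a minimal sketch (not the paper's closed-form expressions): the weights of
# the symmetric n-point rule are obtained by cancelling the odd taylor terms,
# and the resulting rule is checked on a function with a known derivative.
import numpy as np

def central_weights(n):
    """weights a_1..a_n with f'(0) ~ (1/h) * sum_m a_m * (f(m*h) - f(-m*h))."""
    m = np.arange(1, n + 1, dtype=float)
    odd = np.arange(1, 2 * n, 2, dtype=float)     # exponents 1, 3, ..., 2n-1
    A = m[np.newaxis, :] ** odd[:, np.newaxis]    # A[i, j] = m_j ** odd_i
    rhs = np.zeros(n)
    rhs[0] = 0.5                                  # only the f'(0) term survives
    return np.linalg.solve(A, rhs)

def first_derivative(f, x0, h, n):
    a = central_weights(n)
    m = np.arange(1, n + 1)
    return np.sum(a * (f(x0 + m * h) - f(x0 - m * h))) / h

print(central_weights(1))                         # [0.5]          (3-point rule)
print(central_weights(2))                         # [2/3, -1/12]   (5-point rule)
print(first_derivative(np.sin, 0.3, 0.1, 4), np.cos(0.3))
....
note that the vandermonde - type system becomes ill - conditioned for a large number of points , so higher - precision arithmetic may be preferable in that regime .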
as an example of the use of the developed method we calculate the first derivative of the function .the obtained result is studied in the limiting case .we examine the spectral characteristics of the weight coefficients sequence of the numerical differentiation formulae for the different number of the interpolation points .the performed analysis can be applied to the studying of the accuracy of the numerical differentiation technique developed in this work .it is found that the derived formulae of numerical differentiation have a high accuracy in a very wide range of spatial frequencies .without the restriction of generality we suppose that the derivative is taken in the zero point , i.e. .let us consider the function given in equidistant points , where and is the constant value .we can pass the interpolating polynomial of the power through these points the values of the function in points of interpolation coinciding with the values of the interpolating polynomial in these points : .let us define as the differences of the values of the function in diametrically opposite points and , i.e. .we can present in the form to find the coefficients , , we have gotten the system of inhomogeneous linear equations with the given free terms .it will be shown below that this system has the single solution .we will seek the solution of the system [ eq .] in the following way where are the undetermined coefficients satisfying the condition thus , the system of equations [ eq . ]is reduced to the equivalent , but more simple system [ eq . ] , in which for each fixed number it is necessary to find the coefficients .let us resolve the system of equations [ eq .] according to the cramer s rule : where in eq .we used the formula for the calculation of the vandermonde determinant . from eq .it follows that the determinant of the system of equations [ eq . ]is not equal to zero , i.e. the system of equations [ eq .] has the single solution .the most simple expression for is obtained in the case of that corresponds to a calculation of the first - order derivative from eq . as well as taking into account eqs .and we get the expression for the coefficients where it should be noted that one can similarly get the expression for the coefficients which is presented in the following way taking into account eqs . - we finally get the formula for the first derivative of the function the algorithm for the computation of the coefficients is presented in the appendix [ append ] and the results for the certain concrete number of the interpolation points are given in tab .[ alphatab ] ..[alphatab ] values of the coefficients . [ cols="<,^,^,^,^,^,^",options="header " , ] note that the expression for the first derivative obtained by this method coincides with the value presented in the refs . for that corresponds to three and five points of interpolation .however , technique developed in this article allows one to calculate the coefficients , and hence the first derivative , for any value of .similar formula can be obtained for the calculation of the second derivative .we give without proof corresponding expression where and the product is introduced in eq . .as an example of the use of the obtained central differentiation formulae we will compute the first derivative of the function at .let us set the value of the mesh width equal to .notice that , as a rule , the less the mesh width the more exact result numerical differentiation gives .we have chosen rather big value of .the eq . 
for this case takes the form in eq .we take that if is an even number , and if is an odd number .let us study the obtained result in the limiting case .first , it is necessary to calculate the value of the product within the limit here we used the known value of infinite product using eqs . and , we find that the expression for the coefficients within the limit is represented in the following way now it is easy to complete the studying of eq . . substituting the result from eq . to eq .and using the known value of infinite series we get that thus , it is shown that the method of the derivatives finding , developed in this paper , gives for the function the value of the first derivative which coincides with the exact analytical one even at rather crude mesh width .in the section [ ffderiv ] of the present work we derived the formula for the finding of the first derivative of the function at . this result can be easily generalized for the case of the arbitrary point . if we set that , and moreover supposing that for and ( see tab .[ alphatab ] ) , then in the considered case eq . reads as follows here the summing is taken over all range of the function involved : .for instance , if the values of the function are set on the limited equidistant collection of elements , then eq . can be rewritten in the form it is worth noticing that in eq .we used the periodicity condition of the weight coefficients fig .[ alphan1 ] presents the example of the weight coefficients of the differentiating sequence . .] thus the first derivative computation of the function at the points , , is reduced to the procedure of the calculation of the mutual correlation function between the finite sequences and .it is known ( see , e.g. , ref . ) that if a function satisfies the dirichlet conditions in the interval , then it can be expanded into the fourier series where the expansion coefficients are presented in the way if the first derivative satisfies the analogous conditions as the function , then the following expression will be valid therefore , form eqs . andit follows that the differentiation procedure is the linear filter with the frequency characteristic : .similarly we receive for the second derivative in this case the the frequency characteristic of the corresponding filter has the form : . according to wiener - khinchin theorem ( see , e.g. , ref . ) the mutual correlation function between the two finite sequences can be calculated with the help of the inverse fourier transform of the mutual spectrum of the considered sequences .thus , if we define that is the complex spectrum of the differentiating sequence , and is the spectrum of the function , then it follows from eq .that where is the complex conjugated quantity with respect to . comparing eqs .and we obtain that the accuracy of the numerical differentiation performed with the use of the various types of the sequences is characterized by the closeness of imaginary parts of their spectra to the linearly growing sequence .the spectra of the sequences are depicted in fig .[ spectra111211 ] for the various values of at . at . ]it can be seen from this figure that for , i.e. 
for the sequence shown in fig .[ alphan1 ] , the imaginary part of the spectrum is the branch of the function .the linearity condition is satisfied only in the vicinity of zero and .however , at the spectrum practically does not differ from the linear one up to .the more close to linear one is the spectrum of the sequence .the difference between the imaginary parts of the spectra of the sequences and the linearly growing sequence are presented in fig .[ log101 ] . versus for different .] the computations have been performed with the accuracy up to , thus the reliable results at have been obtained for , and at for .the presented results demonstrate the high accuracy of the numerical differentiation carried out with the help of the sequences in the wide range of the spatial frequencies .now let us briefly consider the sequences for the calculation of the second derivative , which are given in eq . .their spectral properties can be obtained in the similar manner as we have done it for the case of the sequences and therefore we just present the final results .the spectra of the sequences are shown in fig .[ spectra111212 ] . at . ]it follows from this figure that the closeness of the corresponding spectrum to the parabola in the case of exists only in the vicinity of zero .the spectra at and are close to function in a wider range of ( and respectively ) .the difference between the real parts of the spectra of the sequences and the parabola are depicted in fig .[ log102 ] in the logarithmic scale . versus for different .] this figure again demonstrates the high accuracy of the second derivative computation with the use of the sequences .in conclusion we note that the method of central differentiation formulae finding has been developed in this article .the elaborated technique does not require direct construction of the interpolating polynomial .we have derived simple and convenient expressions for the first and the second derivatives [ eqs . and ] of the function given in discrete points .the number was taken to be arbitrary .in contrast to the results of the ref . , where the recursion relations for the calculation of the weight coefficient being used in numerical differentiation formulae were considered , in the present work the expressions for the considered weight coefficients have been derived in the explicit form for the arbitrary number of interpolation points . as an example of the use of the developed methodwe have calculated the first derivative of the function .the obtained result has been studied in the limiting case .we have examined the spectral characteristics of the weight coefficients sequence of the numerical differentiation formulae for the different number of the interpolation points .the performed analysis has allowed one to study the accuracy of the numerical differentiation carried out with the help of the developed method .it has been found that the derived formulae of numerical differentiation posses the high accuracy in a rather wide range of the spatial frequencies . as it has been shown in this paper , the formulae for the derivatives finding gave correct results in the case of large number of interpolation points .thus , the developed method can be useful in lattice simulation of quantum fields . 
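as a concrete illustration of the spectral check summarized above , the short sketch below evaluates the imaginary part of the frequency response of a differentiating sequence and compares it with the ideal ramp ; the classical 5-point weights ( 2/3 , -1/12 ) , the unit mesh width and the frequency grid are assumptions made only for illustration .
....
# a minimal sketch of the spectral check: the imaginary part of the filter
# response of the antisymmetric weight sequence is compared with the ideal
# differentiator response (a linear ramp in omega, unit mesh width assumed).
import numpy as np

a = np.array([2.0 / 3.0, -1.0 / 12.0])            # a_1, a_2 (5-point rule)
m = np.arange(1, len(a) + 1)
omega = np.linspace(0.0, np.pi, 200)

# imaginary part of the response: 2 * sum_m a_m * sin(omega * m)
response = 2.0 * np.sum(a[:, None] * np.sin(np.outer(m, omega)), axis=0)
error = np.abs(response - omega)

print("max deviation from the ideal ramp for omega <= pi/4:",
      error[omega <= np.pi / 4].max())
....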
to get the exact results at calculations on lattices one has to use nets with the big number of points .derivatives which one encounters in theories of quantum fields , as a rule , do not exceed the second order .therefore , the formulae obtained in this article could be of use in carrying out mentioned above research .this research was supported by grant of russian science support foundation .the author is indebted to sergey i. dvornikov for helpful discussions .in this appendix we present the algorithm for the computation of the coefficients on the matlab 6.5 programming language . | we derived the formulae of central differentiation for the finding of the first and second derivatives of functions given in discrete points , with the number of points being arbitrary . the obtained formulae for the derivative calculation do not require direct construction of the interpolating polynomial . as an example of the use of the developed method we calculated the first derivative of the function having known analytical value of the derivative . the result was examined in the limiting case of infinite number of points . we studied the spectral characteristics of the weight coefficients sequence of the numerical differentiation formulae . the performed investigation enabled one to analyze the accuracy of the numerical differentiation carried out with the use of the developed technique . mathematics subject classification : primary 65d25 ; secondary 65t50 |
the past few years have seen a large increase in the interest for modeling dynamic interactions between individuals . while many real world data contain continuous - time information on the interactions , as e.g. email exchanges between employees in a company or face - to - face contact between individuals measured through sensors , most models are discrete in time .commonly , data are aggregated on predefined time intervals to obtain a sequence of snapshots of interaction random graphs .besides the loss of information induced by data aggregation , the specific choice of the time intervals has a direct impact on the results , which is most often overlooked .thus , developing models of interaction that exploit the continuous - time aspect of the data either called _ longitudinal networks , interaction event data , link streams _ or _ temporal networks _ is an important research issue .statistical methods for the analysis of longitudinal networks form a huge corpus , especially in social sciences and we do not pretend to provide an exhaustive bibliography on this topic .we refer to the very nice and recent review by for a more complete view on temporal networks .a natural way of modeling temporal event data is based on stochastic point processes .an important line of research involves continuous - time markov processes with seminal works on dyad - independent models up to the development of so - called stochastic actor oriented models ( e.g. * ? ? ?* ; * ? ? ?* ) . in these works observations consist in a series of time intervals of interaction and interactionsare assumed to last during the whole corresponding time interval . here , we focus on a rather different setup where each interaction is identified with a time point .furthermore , we consider a model that allows for dependencies of the processes modeling the interactions of pairs of individuals .the analysis of event data is an old and important area in statistics ( see e.g. * ? ? ?generally a multivariate counting process is considered , that counts the number of interactions of each pair of individuals up to time . in counting processeshave been introduced in the context of _ action _ data , which are a set of time - stamped directed interactions between individuals that , in addition , are marked by a label ( representing a behavioral event ) .the model may be viewed as an instance of cox s multiplicative hazard model with time - dependent covariates and constant baseline function . in the same vein , propose a general regression - based modeling of the intensity of non recurrent interaction events .they consider two different frameworks : cox s multiplicative and aalen s additive hazard rates ( see e. g. * ? ? ? propose another variant of cox s multiplicative intensity model for recurrent interaction events where the baseline function is specific to each individual . 
in the abovementioned works a set of statistics is chosen by the user as potential candidates that modulate the interactions .as in any regression framework , the choice of these statistics might raise some issues : increasing their number potentially leads to a high - dimensional problem , and interpretation of the results might be blurred by the correlation between these statistics .the approaches by , , and others are based on conditional poisson processes characterized by random intensities , also known as doubly stochastic poisson processes or cox processes .a particular instance of the conditional poisson process is the hawkes process , which is a collection of point processes with some background rate , where each event adds a nonnegative impulse to the intensity of all other processes . develop a model for spatial - temporal networks with missing information , based on such self - exciting point processes for temporal dynamics combined with a gaussian mixture for the spatial dynamics .similarly , combine temporal hawkes processes with latent distance models for implicit networks that can not be observed directly .clustering individuals based on interaction data represents a well - established technique for taking into account the intrinsic heterogeneity and summarizing information . in the context of dynamic random graphs , where a discrete - time sequence of graphs is observed ,recent approaches propose to generalize the so - called stochastic block model to a dynamic context .stochastic block models posit that each individual belongs to a latent group and interactions between two individuals are conditionally independent of the interactions of any other pair , given the latent groups of the interacting individuals .another attempt to use stochastic block models in the context of interaction events appears in generalizing the approach of by adding discrete latent variables on the individuals . in this work a semiparametric stochastic block model for recurrent interaction events in continuous timeis introduced , to which we refer as the poisson process stochastic block model .this is a stochastic block model where interactions are modeled by conditional inhomogeneous poisson processes , whose intensities only depend on the latent groups of the interacting individuals . in contrast to many other works, we do not rely on a parametric model where intensities are modulated by predefined network statistics , but intensities are modeled and estimated in a nonparametric way .the model is shown to be identifiable .our estimation and clustering approach is a semiparametric version of the variational expectation - maximization algorithm , where the maximization step is replaced by nonparametric estimators of the intensities .semiparametric generalizations of the classical expectation - maximization ( ` em ` ) algorithm have been proposed in many different contexts ( see e.g. 
for semiparametric mixtures or for a semiparametric hidden markov model ) .however , we are not aware of other attempts to incorporate nonparametric estimates in a variational approximation of ` em ` .two versions are developed for the nonparametric part of the model : a histogram approach based on the work of and a kernel estimator based on .for the histogram approach , an integrated classification likelihood criterion is proposed to select the number of latent groups adaptively .synthetic experiments enlighten both the clustering capacities of our method as well as the performance of the nonparametric estimation of the different intensities .moreover , the analysis of several real datasets illustrates the strengths and weaknesses of our approach .the supplementary material , whose references appear as s.xx , provides the proofs of all theoretical results , technical details on the algorithm and more detailed results of the analysis of the real data examples .we are interested in the pairwise interactions of individuals during some time interval ] , that is where \times \mathcal r ] is .we assume that , i.e. there is at most one event at a time . to model the distribution of these observations , every individual is assumed to belong to one out of groups , and the relation between two individuals , that is the way they interact with another , is driven by their group membership .more precisely , let be independent and identically distributed ( latent ) random variables taking values in with non zero probabilities for the moment , is considered to be fixed and known .when no confusion occurs , we also use the notation with such that has multinomial distribution with .now , our poisson process stochastic block model ( ppsbm ) is defined as follows . for every ,the interactions of individuals and , conditional on the latent groups and , are modeled by a conditional inhomogeneous poisson process on ] .we denote the set of permutations of .the parameter of a poisson process stochastic block model is identifiable on ] , but should not be equal almost everywhere .[ prop : ident ] under assumption [ hyp : ident ] , the parameter is identifiable on ] , then both and are identifiable on ] . the procedure is based on a least - squares penalized criterion following the work of . the detailed construction is provided in the supplementary material . for , where is to be chosen ,we denote by the regular partition of ] together with some bandwidth , the intensity is estimated by if and otherwise , where is defined in section [ sec : proc_notation ] .the bandwidth can be chosen adaptively from the data following the procedure proposed by .kernel methods are not always suited to infer a function on a bounded interval as boundary effects may deteriorate their quality .however , it is out of the scope of this work to investigate refinements of this kind . 
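to fix ideas , the following sketch implements a generic weighted kernel estimate of a conditional intensity : every event time is smoothed by a kernel and weighted by the variational probability that the event belongs to a dyad with latent groups ( q , l ) . the gaussian kernel , the variable names and the normalisation by the variational mean number of dyads are assumptions made for illustration ; the estimator used in the paper may differ in such details ( e.g. , boundary handling or the adaptive bandwidth choice ) .
....
# a minimal sketch (hypothetical names) of a weighted kernel intensity
# estimate for the (q, l) group pair, used here to illustrate the idea of
# the nonparametric m-step rather than to reproduce the paper's estimator.
import numpy as np

def kernel_intensity(t_grid, event_times, weights, n_dyads_ql, bandwidth):
    """weights[m] ~ variational prob. that event m belongs to group pair (q, l)."""
    diff = (t_grid[:, None] - event_times[None, :]) / bandwidth
    k = np.exp(-0.5 * diff ** 2) / (np.sqrt(2.0 * np.pi) * bandwidth)
    return (k * weights[None, :]).sum(axis=1) / max(n_dyads_ql, 1e-12)

# toy usage on simulated data
rng = np.random.default_rng(0)
event_times = np.sort(rng.uniform(0.0, 10.0, size=200))
weights = rng.uniform(0.2, 1.0, size=200)          # stand-in for the rho's
t_grid = np.linspace(0.0, 10.0, 101)
alpha_hat = kernel_intensity(t_grid, event_times, weights,
                             n_dyads_ql=5.0, bandwidth=0.5)
print(alpha_hat[:5])
....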
during the implementation of the algorithm ,two issues arise : convergence and initialization .as our algorithm is an iterative procedure , one has to test for convergence .a stopping criterion can be defined based on the current expected complete data log - likelihood }}(\theta^{[s]}) ] output },\alpha^{[s]}) ] defined by for all measurable ] ) .then , under assumption 1 that the intensities are distinct , the corresponding measures are all different and we may recover from the distribution of our counting process the set of values or equivalently the set .in particular , we recover the functions almost everywhere on ] , up to a permutation in .to finish the proof , we need to identify the proportions .note that as we identified the components , we recover from the set of values up to the same permutation as on the s .this concludes the proof .we follow some of the arguments already appearing in the proof of proposition 1 .let ( resp . ) denote the measure whose intensity is ( resp . ) the univariate process is a cox process directed by the random measure that is now distributed as thus the measures and are identifiable from the distribution of , but only up to a permutation .once again , we rather consider the trivariate cox processes directed by the random measures whose distribution in the affiliation case has now five atoms as previously , these five components are identifiable , up to a permutation on .now it is easy to identify the three components for which two marginals have same parameters and the third one has a different parameter .thus , we recover exactly the measures and .this also identifies the corresponding intensities and almost everywhere on ] .moreover , by the factorization property , for every we have ={e}_\tau[z^{i , q}|\mathcal o]{e}_\tau[z^{j , l}|\mathcal o]=\tau^{i , q}\tau^{j , l } .\ ] ] the quantity is thus equal to ] , the variational approximation of the probability that observation corresponds to a dyad with latent groups .it follows that where is the variational ` e`-step consists in maximizing with respect to the s which are constrained to satisfy for all . in other words, we maximize with lagrange multipliers .the partial derivatives are the partial derivatives are null iff and the s satisfy the fixed point equations , with being the normalizing constant . in this part , each intensity is estimated by a piecewise constant function and we propose a data - driven choice of the partition of the time interval ] with partition size .denote the space of piecewise constant functions on .note that the total number of dyads is an upper bound for ( the variational mean number of dyads in group ) .following , we consider the projection estimator of on defined as where the least - squares contrast is defined ( relatively to the counting process ) for all ,dt) ] considered for the estimation of with fixed .adaptive estimation consists in choosing the best estimator among the collection of estimators with defined by .the choice is based on a penalized least - squares criterion of the form for some penalty function that penalizes large partitions . 
following we take for either the collection of regular partitions of ] with intervals of length for ( where and are to be chosen ) .furthermore , the penalty function is given by where denotes the finest partition in the collection , that is in the regular case and in the dyadic case , and denotes the -th interval of partition .denote by the partition that minimizes over .let be the size of partition .then the adaptive estimator of intensity is given by that writes , \quad \hat { \alpha}^{(q , l)}_{\text{hist } } ( t ) = \hat \alpha_{\hat { \mathcal{e}}^{(q , l)}}^{(q , l)}(t ) = \frac1{t\bar{y}^{(q , l)}}\sum_{k=1}^{\hat d^{(q , l ) } } \hat d^{(q , l)}n^{(q , l)}(e_k^{\hat{\mathcal e } } ) \1_{e_k^{\hat{\mathcal e}}}(t).\ ] ] a natural stopping criterion of the variational ` em ` algorithm is based on function defined in .indeed , , where denotes the entropy of the distribution defined as .as our estimation procedure aims at maximizing , the algorithm may be stopped at iteration if the increase of is less than a given threshold , that is when },\tau^{[s+1]})-j(\theta^{[s]},\tau^{[s]})}{j(\theta^{[s]},\tau^{[s]})}\right|<\varepsilon.\ ] ] we use several initializations of the algorithm , relying on different aggregated datasets ( on the whole time interval or on sub - intervals ) and applying a k - means algorithm on the rows of the adjacency matrix of these aggregated datasets .( bold line ) and the inter - group intensity for ( dotted line ) with different shifting parameter . , height=340 ] d. j. daley and d. vere - jones ._ an introduction to the theory of point processes .i_. probability and its applications ( new york ) .springer - verlag , new york , second edition , 2003 . elementary theory and methods . | to model recurrent interaction events in continuous time , we propose an extension of the stochastic block model where each individual belongs to a latent group and interactions between two individuals follow a conditional inhomogeneous poisson process whose intensity is driven by the individuals latent groups . the model is shown to be identifiable and an estimation procedure is proposed based on a semiparametric variational expectation - maximization algorithm . two versions of the method are developed , using either a nonparametric histogram approach ( with an adaptive choice of the partition size ) or kernel intensity estimators . the number of latent groups can be selected by an integrated classification likelihood criterion . finally , we demonstrate the performance of our procedure on synthetic experiments and the analysis of several real datasets illustrates the utility of our approach . * keywords : * dynamic interactions ; expectation - maximization algorithm ; integrated classification likelihood ; link streams ; longitudinal network ; semiparametric model ; stochastic block model ; variational approximation . |
spatial indexing of geographic data has always been an important component of database systems . since the wide - spread adoption of social media and social networks , the size of data to be indexed has grown by multiple orders of magnitude , making even more demand on the efficiency of indexing algorithms and index structures .certain fields of natural sciences also face similar problems : astronomers , for example , have to perform spatial searches in databases of billions of observations where spatial criteria can be arbitrarily complex spherical regions. inspired by astronomical problems , szalay et al . came up with a solution to index spatial data stored in microsoft sql server years before native support of geographic indexing appeared in the product .their indexing scheme is called hierarchical triangular mesh ( htm ) and uses an iteratively refined triangulation of the surface of the sphere to build a quad - tree index . in the present case study, we demonstrate how we applied htm to index real gps coordinates collected from the open data streams of twitter , and performed a huge spatial join on the data to classify the coordinates by the administrative regions of the world .our results show that pre - filtering capabilites of htm are significantly better than that of the built - in spatial index of sql server and that htm renders spatial joins originally thought impossible to be able to be computed in a reasonable time .source code and full queries are available at the following url : http://www.vo.elte.hu/htmpaper .our data set consists of short messages ( tweets ) collected over a period of two years from the publicly available `` sprinkler '' data stream of twitter . more than half of the tweets , over a billion , are geo - tagged .we built a microsoft sql server database of the tweets and wanted to classify the geo - tagged messages by political administrative regions to investigate the geographic embedding of the social network of twitter users .microsoft sql server supports spatial indexing since version 2008 via a library implemented on top of the integrated .net framework runtime ( sql clr ) .the library works by projecting the hemisphere onto a quadrilateral pyramid , then projecting the pyramid onto a square , and tessellating the square using four levels of fix - sized rectangular grids to construct a spatial index .the number of grid cells can be set between and providing a maximal index depth of 32 bits .the index structure itself is materialized as a hidden , but otherwise normal database table containing one row for each cell touched by the geography object being indexed .the table uses 11 bytes for the spatial index which is complemented by the primary key of the indexed object , in our case an additional 10 bytes .the final index size can be controlled by limiting the number of cells stored for each geography object resulting in less effective pre - filtering of spatial matches when the index size is kept small. the hard limit on the number of index entries per geography object is 8192 .similarly to the built - in spatial index of sql server , htm indexing is also implemented in .net . 
by default, htm calculates a 40 bit deep hash , the so called htm i d , from the coordinates .the 40 bit hash length corresponds to an almost uniform resolution of 10 meters on the surface of the earth .htm tessellates two dimensional shapes with small spherical triangles , called trixels .trixels are represented by integer intervals ; all coordinates with an htm i d falling into the interval are guaranteed to be covered by the trixel .as htm is a custom library , we had full control over the index structures , therefore we simply stored the 8 byte htm identifiers in the same table where the tweets were , and built an auxiliary index on the table ordered by the htm identifier .together with the primary key , the auxiliary index size was 18 bytes per row , exactly one row per coordinate pair .we classified tweets using the maps from gadm.org , an open database of global administrative regions of every country in the world .the maps were loaded into the database as geography objects using the built - in geography type . for indexing , however , we decided to use htm .this raised a problem because the htm library was built for astronomical applications where regions on the sphere are better represented as unions of convex shapes contoured by great _ or _ small circles than by vertices of polygons connected by great circles .this union - of - convexes representation is not appropriate for highly detailed complex maps as shapes would need to be decomposed into convexes first , a process that significantly increases the size of the data structures .unfortunately , no code exists to directly compute the htm tessellation of maps in the polygon representation , so we had to use another approach . by combining the htm library and the built - in geographic library of sql server , we determined the approximate htm tessellation of regions up to a given precision by iteratively refining htm trixels at the boundaries .our solution , see algorithm [ lst : alg ] , goes as follows .we construct a coarse tessellation of the region based on the bounding circle , then we intersect each trixel with the region using the built - in functions of sql server . if a trixel is completely inside the region it is added to the result set . similarly , a completely disjoint trixel is discarded .trixels intersecting with the boundary are refined into four sub - trixels and the algorithm is called recursively . passing only the intersection of the original map with the trixel to the recursive call reduces the total runtime of the tessellation significantly .the algorithm uses the maximum depth of htm trixels as a stop condition to limit the resolution of the tessellation but can be easily modified to use an upper limit on the number of trixels instead . also , instead of trying to keep the index tables small , we store every trixel of the tessellation .trixels on the deepest level which intersect with the boundary of the geography object are flagged as `` partial '' .figure [ fig : cali ] illustrates the results of the level 9 htm covering of california with partial trixels in green .retlist t.partial false retlist.add(t ) region2 region.stintersection(t ) t.partial true retlist.add(t ) tlist2 t.refine(t.level+1 ) retlist.addrange ( ) retlist.... 
create table tweet ( i d bigint primary key , htmid bigint , coord geography ) create index ix_tweet_htm on tweet ( htmid ) create spatial index ix_tweet_spatial on tweet ( coord ) create table region ( i d int primary key , geo geography ) create spatial index ix_region_spatial on region ( geo ) create table regionhtm ( i d int , --foreign key to region.id start bigint , end bigint , partial bit ) .... in order to explain the internals of htm indexing , we create the schema of our database with query [ query : schema ] .the htm - based pre - filtering of a spatial join between a table containing gps coordinates and another containing the tessellations of complex regions requires an inner join with a ` between ` operator in the join constraint .query [ query : htmfilter ] is a simplified example of such pre - filtering query. we will refer to these types of queries as _range joins_. as range joins are highly optimized in the database engine , we expect excellent pre - filtering performance .the ` loop join ` hint is added to suggest a query plan that consist of scan of the ` tweet ` table , while doing index seeks on the much smaller ` regionhtm ` table .this is optimal as long as the htm index of the regions can be kept in memory . usingthe built - in spatial index of sql server , pre - filtering of a spatial join can be done with query [ query : sqlfilter ] .it translates into a rather complex execution plan that uses the spatial index for table ` tweet ` only , while calculating the tessellation of the geography objects in table ` region ` during query execution , or vice versa . by specifying query hints, one can tell the server which spatial index to use , but it seems impossible to use the spatial indices on _ both _ tables at the same time .this behavior has a tremendous impact on the performance of spatial joins when using the built - in indices . in case of the sql server geography index ,exact containment testing can be done by simply replacing the function call to ` filter ` with ` stcontains ` , as in query [ query : sqlcontains ] . when using the htm index , points in full trixels are already accurately classified with query [ query : htmfilter ] , only points in partial trixels need further processing to filter out false positive matches .this is done in query [ query : htmcontains ] which again relies on the spatial functions of sql server . also note , that query [ query : htmcontains ] , by referencing the column ` region.geo ` , uses the entire region for testing point containment . in case of computing the spatial join of billions of coordinates with a limited number of complex regions , it is well worth to pre - compute the intersections of partial trixels and regions first and use them for containment testing instead of the whole regions . due to publicational constraints we omit the query but all performance metrics quoted in the paperare measured using the pre - computed intersections of the regions and partial trixels for exact containment testing .we measured the index generation time for the 50 continental states ( plus washington d.c . )of the united states using two different depths ( and grids ) of the sql server geography index and three different depths ( level 12 , 14 and 16 ) of htm . for comparison , the resolution of the sql server index roughly corresponds to a level 12 htm index and the grid resolution corresponds to a level 16 htm index .the benchmarks were run on a 16-core database server with 96 gb memory . 
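before looking at the timings , here is a python sketch of the recursive refinement of algorithm [ lst : alg ] described earlier . the region and trixel objects and their methods ( contains , intersection , is_empty , children ) are hypothetical stand - ins for the sql server geography functions and the htm library calls ; only the control flow mirrors the algorithm in the text .
....
# a sketch of the recursive trixel refinement; the geometry objects are
# hypothetical interfaces, not the actual .net / sql server api.
def tessellate(region, trixel, max_level, out):
    if region.contains(trixel):            # trixel fully inside the region
        out.append((trixel, False))        # False: not a partial trixel
        return
    clipped = region.intersection(trixel)  # analogue of STIntersection
    if clipped.is_empty():                 # trixel disjoint from the region
        return
    if trixel.level >= max_level:          # resolution limit reached
        out.append((trixel, True))         # flag as partial
        return
    for child in trixel.children():        # refine into four sub-trixels
        # recursing on the clipped region keeps the intersection tests cheap
        tessellate(clipped, child, max_level, out)

def htm_cover(region, max_level, initial_trixels):
    out = []
    for t in initial_trixels:              # coarse cover from the bounding circle
        tessellate(region, t, max_level, out)
    return out
....
recursing on the clipped region rather than on the full map is what keeps the repeated intersection tests cheap , as noted above .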
as our dataset fits into memory ,queries are basically cpu - limited .index generation times are summarized in table [ tab : idxgen ] .note , that sql server executed the geography index generation on two threads , while htm ranges were generated on a single thread only . while we have no control over the internals of geography indices , the iterative refining of htm tessellation could be replaced with a smarter , multi - threaded one .also , the size of the geography indices is internally limited to 8192 entries per region , while the htm indices were calculated without pruning , ultimately resulting in much larger index sizes . ....select t.id , h.id , h.partial from tweet t inner loop join regionhtm h on t.htmid between h.start and h.end .... .... select t.id , r.id from tweet t inner join region r on r.geo.filter(t.coord ) = 1 .... [ query : sqlfilter ] .... select tweet.id , region.id from tweet t inner join region r on r.geo.stcontains(t.coord ) = 1 .... .... select tweet.id , regionhtm.id from tweet t inner loop join regionhtm h on t.htmid between h.start and h.end inner join region r on r.id = h.id where h.partial = 0 or r.geo.stcontains(t.coord ) = 1 .... it is also rather instructive to compare the two indexing schemes by the false positive rates of pre - filtering .the results are listed in table [ tab : fp ] .false positives rates of the htm index are significantly lower in all cases , especially for higher index depths . in the case of the sql serverspatial index , increasing the resolution does not help , but the opposite : it just makes things worse , as the number of index rows ( and the resolution of the tessellation ) is limited to a maximum of 8192 cells , insufficiently small for complex maps . strictly limiting index size only helps when the number of shapes to be indexed is large and the shapes are relatively small and simple .when spatial indices fit into the memory , or at least can be read quickly from the disk , pre - filtering using range joins is expected to be significantly faster , even for indices with millions of rows , rather than exact containment testing against complex shapes . to test the performance of point classification , we prepared three samples having approximately 300 thousand , 1 million , 5 million points in each , uniformly sampled from the original database .some tests were also run using the entire data set of more than one billion tweets .the coordinates covered the entire world but the majority of them were within the continental united states .the geographical distribution of the samples is realistic and follows the population density weighted by the local twitter usage rate . 
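before turning to the measured numbers , the toy sketch below shows , outside the database , what the range - join pre - filter computes : locating each point's htm i d among sorted trixel intervals with a binary search . the interval values and point ids are made up , and the sketch assumes the intervals of a single region are disjoint .
....
# a minimal, database-free sketch of interval-based pre-filtering: each point
# id is located among sorted (start, end) trixel intervals of one region.
import numpy as np

starts  = np.array([100, 220, 400, 910])      # trixel interval lower bounds
ends    = np.array([180, 300, 560, 990])      # inclusive upper bounds
partial = np.array([False, True, False, True])

point_ids = np.array([150, 210, 450, 955, 40])

idx = np.searchsorted(starts, point_ids, side="right") - 1
hit = (idx >= 0) & (point_ids <= ends[np.clip(idx, 0, None)])

for pid, ok, i in zip(point_ids, hit, idx):
    if ok:
        print(pid, "matches interval", i, "(partial)" if partial[i] else "(full)")
    else:
        print(pid, "pre-filtered out")
....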
to evaluate index performance , we computed a spatial join between a the samples of gps coordinates with different cardinality and the 51 regions , first with pre - filtering only , then with exact containment test .results of pre - filtering are listed in table [ tab : prefilter ] , while exact containment metrics are shown in table [ tab : total ] .all queries were executed using cold buffers , thus include i / o and cpu times .the spatial join performance of the htm index turned out to be about a hundred times better than the performance of the built - in geography index .pre - filtering itself is about a thousand times faster than the built - in index , and usually could be done in a few seconds for the smaller samples .such short query times are hard to be measured correctly , and values show a significant scatter when the queries are repeated .the two main reasons behind the significantly better performance of htm are : 1 ) htm - based pre - filtering could benefit from the spatial index on both tables , whereas sql server s geography library only used the index for one of the tables and calculated the tessellation for the other table on the fly .2 ) the extensive pruning of index entries resulted in a very high rate of false positives in case of sql server s geography index .because of the pruning , increasing the index resolution could not actually increase the resolution of the tessellation in case of the rather complex maps . by using a significantly larger , but still manageable index table , and by intersecting the trixels of the tessellations with the regions to reduce the complexity of exact containment testing, htm indexing could reduce the cost of spatial joins to a minimum .based on these results , it is clear that running the point classification using only the built - in geography index of sql server index is not a viable solution for any task similar to ours , namely , when the number of points is in the billions range ..index generation time and number of index rows of the regions .[ cols="<,>,>",options="header " , ]in this paper , we investigated the feasibility of efficient classification of gps coordinates of twitter messages by geographic regions using a relational database management system , microsoft sql server 2012 .we evaluated the performance of the built - in spatial indexing technology side by side with a customized solution based on hierarchical triangular mesh ( htm ) indexing .the built - in spatial index was found to be inadequate to perform spatial joins between large sets of gps coordinates ( on the scale of billions ) and complex geographic regions .we showed that our solution , a heuristic combination of existing techniques for handling spatial data in a relational database environment , can easily be a hundred times faster and makes the computation of the aforementioned spatial join available in reasonable time .we pointed out that the strength of htm indexing is the great control the database programmer has on both the index structure and query plans ( via hints ) .we also demonstrated that aggressive pruning of spatial indices is not a good idea when indexing of very complex regions is a requirement , as range - join - based pre - filtering is significantly faster than exact containment testing , even in case of millions of index entries . 
to make exact containment testing even faster ,we pre - computed the intersections of the complex geographic regions and partial htm trixels and use these much smaller shapes to filter out false positives .the authors thank the partial support of the european union and the european social fund through project futurict.hu ( grant no . : tamop-4.2.2.c-11/1/konv-2012 - 0013 ) , and the otka 103244 grants .eitkic 12 - 1 - 2012 - 0001 project was partially supported by the hungarian government , managed by the national development agency , and financed by the research and technology innovation fund and the makog foundation . | we present a case study about the spatial indexing and regional classification of billions of geographic coordinates from geo - tagged social network data using hierarchical triangular mesh ( htm ) implemented for microsoft sql server . due to the lack of certain features of the htm library , we use it in conjunction with the gis functions of sql server to significantly increase the efficiency of pre - filtering of spatial filter and join queries . for example , we implemented a new algorithm to compute the htm tessellation of complex geographic regions and precomputed the intersections of htm triangles and geographic regions for faster false - positive filtering . with full control over the index structure , htm - based pre - filtering of simple containment searches outperforms sql server spatial indices by a factor of ten and htm - based spatial joins run about a hundred times faster . |
future fifth generation ( 5 g ) mobile networks are expected to provide an unprecedented capacity in supporting the rapid growth of mobile data traffic with very limited spectrum resources .new multiple access technique , i.e. , non - orthogonal multiple access ( noma ) , which allow multiple concurrent communications , has been recognized as one of most efficient solutions to fulfill these requirements . recently , several non - orthogonal code division multiple access ( cdma ) schemes , named sparsely spread code division multiple access ( scdma ) , low - density spreading , and sparse code multiple access , have been developed for multiple access channels .all of these techniques rely on sparse signature sequences and near - optimal joint multi - user belief prorogation ( bp ) detections on sparse graphs .we collectively call these techniques scdma. it has demonstrated many advantages with respect to the capacity load and detection complexity over the conventional dense cdma and orthogonal multiple access schemes . in the downlink of a general scdma system, a base station simultaneously communicates with multiple users .data streams for the multiple users are first spread ( encoded ) into vectors by multiplying their signature sequences , which are sparse and the elements are usually selected from a given alphabet set .multiple data streams after spreading are superimposed at the base station and broadcasted to the users over a common channel , i.e. , using the common resources such as time and frequency .a multi - user bp detection is performed at each user to recover the data streams .the performance of scdma detection is mainly determined by a signature matrix that consists of all the users signature sequences as its row vectors .generally , the signature matrix should have a good sparsity , i.e. , without short cycles in the formed factor graph , to achieve a good bp detection performance .theoretically , if its factor graph has no cycles , the bp detection converges to the maximum likelihood ( ml ) detection performance .moreover , the equivalent signal constellation after spreading and superposition should have large euclidean distance which ultimately determines the performance bound of ml detection .this motivates us to design the elements in the signature matrix in scdma .signature design has been investigated for dense spreading in conventional cdma , where an orthogonal or low - correlated sequence set are constructed to maximize an equivalent cdma channel capacity .the problem becomes more complex for sparse spreading in scdma since the design should be implemented under the sparsity constraint of the signature matrix .the problem becomes even more difficult when a two - dimensional modulation scheme is employed as in the scenarios of and .works and show that a user constellation rotation significantly affects detection performance of multi - user superposition codes .convolutional code is employed for each user in and the multiple access scheme is referred to as trellis code multiple access ( tcma ) , which can be regarded as a spatial case of the scenarios in and with unitary spreading length .work considers two - user tcma and designs the user constellation rotation by maximizing an equivalent channel capacity .work considers a general multi - user scdma with a non - trivial spreading length . 
for a given regular factor graph structure, shows that a latin - rectangular signature matrix significantly outperforms a randomly generated signature matrix due to a large minimum code distance property. however , many open research problems , including how to efficiently find an optimal signature matrix with the maximum minimum code distance for an scdma system , how to efficiently estimate the ml detection performance , and how to design signature matrix that works well under both ml and bp detections , are still yet to be resolved . in this paper, we consider a general scdma system with a two - dimensional quadrature amplitude modulation ( qam ) and give a theoretical framework for signature design .we give a formal definition of scdma code distance and a distance enumerator analysis to estimate the ml detection performance . for a given factor graph structure of an scdma code ,we design the optimal signature matrix with the maximum minimum code distance .we construct two scdma code families whose factor graphs have very few short cycles .the constructed scdma codes outperform the existing codes in terms of both word error rate ( wer ) performance and detection complexity .our numerical results show that their bp detections exactly converge to their ml detection performances with few iterations .simulations for turbo - coded scdma systems with variety communication rates are given to verify the validity of our design in more practical applications .the remainder of the paper is organized as follows .section [ sec : model ] describes the scdma system model and introduces three detection algorithms .section [ sec : distance ] defines the scdma code distance and some properties on code distance are shown .section [ sec : design ] gives the optimal signature matrix design for scdma codes .section [ sec : const ] gives two constructions of code families with few short cycles in their factor graph and large minimum code distance .section [ sec : simulation ] gives simulations for our design in both uncoded and turbo - coded scdma systems .section [ sec : conclude ] concludes this paper .figure [ fig : scma ] shows a -user downlink scdma transmitter model at the base station .there are data streams to be transmitted to mobile users . after a forward error correction ( fec ) encoding ,each user s data stream is modulated and spread by multiplying its signature sequence .figure [ fig : scma ] illustrates the spread processing for an individual symbol of each user s data stream .here we consider qam with , where is the imaginary unit .the output after spreading is for , where with or , is called a signature sequence of user .here we considered unitary energy for each nonzero element of the signature sequence .it should be emphasized that the spreading vector is sparse , i.e. , the majority of elements might be 0 .number of nonzero elements in a spreading vector is called an effective spreading length .the users data streams after spreading are superimposed and transmitted over orthogonal channel resources , e.g. , ofdma tones or mimo spatial layers .the transmitted vector is represented as which is referred to as an scdma codeword .note that there is a total of number of scdma codewords corresponding to the different variations of , where is the transpose of a matrix .matrix ] takes the expectation of a random variable. 
therefore , code node outputs a probability message of to data node .the approximate bp detection may work well when the code node degree is large or the noise level is high since at these two cases , interference term is more like gaussian .the processing complexity of the code node reduces to , where is the maximum code node degree .in this section , we first define an scdma code distance and distance enumerator function , which is used to formulate a union bound for ml detection . some propertiesabout scdma code distance enumerator function and the minimum code distance are derived .[ def : d ] distance between two scdma codewords is where is the scdma code set . is called the minimum distance of scdma code . applying ( [ eq : codeword ] ) to ( [ eq : distance ] ) , we obtain the following lemma immediately .[ lem : distance ] let and be the universal set of length vectors over .the minimum distance of the scdma code with spreading signature matrix is where and . to give a global description of the code distance spectrum of an scdma code, we have the following definition .[ eq : ad ] distance enumerator function for an scdma code with signature matrix is where is a dummy variable , and for qam is the cardinality of the code set. equation ( [ eq : enumerator ] ) in fact gives an average distance spectrum for all the codewords in the code set .the distance enumerator function of scdma code can be used to calculate a multi - user union bound developed in .it is an wer upper bound for ml detection .let be the distance enumerator function of an scdma code with signature matrix , where can be regarded as the average number of codeword pairs with distance .the wer under ml detection is upper bounded by + _ union bound : _ note that ( [ eq : union ] ) has different form as that in since the definition of code distance in this work has different form from that in .we give the following properties for the scdma code distance enumerator function and minimum code distance .[ lem : rot1 ] holds for ._ proof : _ equation ( [ eq : rot1 ] ) holds because holds for any .[ lem : rot2 ] holds for any , where is the set of integer numbers ._ proof : _ equation ( [ eq : rot2 ] ) holds because where , holds for any .[ lem : add ] for a give signature matrix , it holds that where and are signature matrices obtained by adding a row ( resource ) and column ( user ) to , respectively .similarly , we can obtain an opposite proposition of lemma [ lem : add ] by deleting a user or resource .[ col : bound ] for a give signature matrix , where is the minimum effective spreading length . _proof : _ equation ( [ eq : bound ] ) is from the fact that is the minimum distance that is achieved by the matrix obtained by delating all the columns of except the one with the minimum effective spreading length . where ^t ] and the other is for signature matrix ] is an optimal signature matrix with . _proof : _ see appendix [ app : two - user ] . using ( [ eq : enumerator ] ) , the distance enumerator function of the two - user scdma code with the optimal signature matrix is calculated as : , which could be used to estimate the wer performance based on the union bound ( [ eq : union ] ) . 
signature matrix ] for , and , for , , is a identity matrix , and is the following permutation matrix this scdma code family has a load of .vectors , should be optimized to achieve the maximum minimum distance .it is easy to see that there is only one length- cycle in its corresponding factor graph .since if we delete one edge in the graph will be cycle - free , we have used the simplified labeling for the remaining tree graph based on corollary [ cor : nontree ] .[ eg : plus46 ] consider in construction [ const : improve ] .the graph will be cycle - free by deleting . through a full search, we obtain the following optimal -user , -resource scdma code : with the minimum distance . if we delete the edges corresponding to in construction [ const : improve ] , the graph will become a tree , and the maximum minimum code distance will reduce to , which is achieved by allocating single - resource optimal signature vector to each row as in example [ eg : tree ] . introducing the part of in construction [ const : improve ] increases the minimum code distance for , and thus , improves the performance of ml detection .[ eg : plus68 ] similarly , by considering in construction [ const : improve ] , we obtain the following optimal -user , -resource scdma code : its load is and the minimum distance is , which achieves the upper bound of corollary [ col : bound ] .the following construction gives a higher load scdma code family .[ const : nontree ] where , \theta^k_j\in [ 0 , \pi/2 ) , k=1, ... ,k-1 ] , and , are permutation matrices .this scdma code family has a load of .permutation matrices should be carefully selected to avoid short cycles , and vectors should be optimized to achieve the maximum minimum distance . since if we delete edges corresponding to , in construction [ const : nontree ]the graph will be cycle - free , we have used the simplified labeling for the remaining tree graph .[ eg:84 ] consider in construction [ const : nontree ] . by selecting and in ( [ eq : p ] ) , the generated factor graph has only one length- cycle . since the graph will be cycle - free by deleting the edge in , using theorem [ thm : nontree ] , we obtain the following optimal signature matrix where and .its load is and the minimum code distance is .if we allocate the single - resource optimal signature vector for each row of in construction [ const : nontree ] , using lemma [ lem : bound ] , we can show that the minimum distance . for ,the minimum distance is , which achieves the upper bound of corollary [ col : bound ] . and optimal codes constructed in example [ eg : tree ] with , under ml detection and their union bounds .the wer of two - user suboptimal signature ] has an asymptotic performance gain of near 2 db over the suboptimal signature of ] , which is used in 3gpp let networks . by puncturing its parity bits ,we can obtain different turbo encoding rates : .for all the simulations , the data stream length for turbo encoding of each user is . for both bp and approximate bp detections , the global decoding iteration ( each global iteration includes a turbo decoding iteration and an scdma iteration ) number is 30 , which is enough for all the considered decodings converge to their best performances. moreover , codes given by example [ eg : tree ] and constructions [ const : improve ] , [ const : nontree ] have irregular effective spreading profile , i.e. , effective spreading lengths for symbols of different users may be different . 
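As a small numerical companion to the angle optimization used in the constructions above (and to the two-user result of Theorem [thm:two-user]), the sketch below sweeps the rotation angle of a two-user, single-resource signature [1, e^{j theta}] over a grid and reports the angle maximizing the minimum code distance. The 4-QAM alphabet, the grid resolution, and the squared-distance convention are assumptions made for illustration; the same search extends directly to the angle vectors appearing in Constructions [const:improve] and [const:nontree], at the cost of a larger search space.

```python
import itertools
import numpy as np

QAM4 = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
# difference alphabet of 4-QAM (9 values including 0)
DIFFS = np.array(sorted({a - b for a in QAM4 for b in QAM4},
                        key=lambda z: (abs(z), z.real, z.imag)))

def min_distance(S):
    """min_{dx != 0} ||S dx||^2 over the difference alphabet (cf. Lemma [lem:distance])."""
    K = S.shape[1]
    best = np.inf
    for dx in itertools.product(DIFFS, repeat=K):
        dx = np.array(dx)
        if np.any(dx != 0):
            best = min(best, float(np.sum(np.abs(S @ dx) ** 2)))
    return best

thetas = np.linspace(0.0, np.pi / 2, 181, endpoint=False)
d_of_theta = [min_distance(np.array([[1.0, np.exp(1j * t)]])) for t in thetas]
i_best = int(np.argmax(d_of_theta))
print(f"best theta on the grid: {thetas[i_best]:.4f} rad, d_min = {d_of_theta[i_best]:.4f}")
```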
to realize user fairness , we alternately use column permutations of a signature matrix so that each user s symbol is spread with equal effective spreading length in average sense .for example , the signature matrix given in example [ eg : plus46 ] has effective spreading length profile for the six users . in our simulations, we divide the modulated symbol stream within a turbo codeword of each user into three sub - streams with equal length .the first sub - streams of the six users are spread based on signature matrix in example [ eg : plus46 ] .for the second and third sub - streams we use permuted matrices and , respectively , where is a column permutation matrix that swaps columns and .the permuted signature matrices have the same distance property with the original matrix but have the effective spreading length profiles and , respectively . by doing this ,the average effective spreading length for each symbol , which is the same for each user , becomes .therefore , the detection error rate of each user will also be the same .figure [ fig : bercoded1 ] illustrates ber of rate- and turbo - coded -user -resource scdma systems under bp and approximate bp ( abp ) detections , where the optimal scdma code obtained in example [ eg : optmat ] and the code with latin - rectangular labeling proposed in are considered .the sum communication rates of these two turbo - coded scdma systems are bit / resource and bit / resource .the rate- turbo - coded scdma system with the optimal scdma code designed in example [ eg : optmat ] has a performance gain of about db over the same rate scdma system with latin - rectangular labeling under both bp and abp decodings .this gain increases if we considered higher rate turbo code , which works at higher regime , i.e. , the gain increases to db for the rate- turbo - coded scdma system .comparing with bp decoding , the performance loss of the abp is about db for rate- turbo - coded scdma system at low regime since the interference term in ( [ eq : interfer ] ) is very similar to gaussian .this performance loss increases to db at the high regime for the rate- turbo coded scdma system .figure [ fig : bercoded2 ] illustrates ber of rate- and turbo - coded scdma systems under bp detection , where the optimal codes obtained in examples [ eg : optmat ] , [ eg : plus46 ] and the code with latin - rectangular labeling are considered .the rate- turbo - coded scdma system with the optimal scdma codes designed in examples [ eg : plus46 ] has slightly better ber than the code designed in examples [ eg : optmat ] and has a performance gain of about db over the code with the latin - rectangular labeling .they have the sum communication rate of bit / resource . for the even higher encoding rate , i.e., a rate- turbo - coded scdma system that works at higher regime , this gain increases and the code in example [ eg : plus46 ] has larger performance gains of about db and db over the code in example [ eg : optmat ] and the code with the latin - rectangular labeling . in this case , the sum communication rate reaches bit / resource . 
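Returning to the user-fairness device described at the beginning of this section (cycling through column permutations of the signature matrix so that every user sees the same effective spreading length on average), the sketch below illustrates the bookkeeping on a placeholder 4-resource, 6-user sparsity pattern; neither the pattern nor the permutations are those of Example [eg:plus46].

```python
import numpy as np

S_pattern = np.array([[1, 0, 1, 0, 1, 0],      # 1 marks a nonzero signature entry (placeholder)
                      [0, 1, 1, 0, 0, 1],
                      [0, 0, 0, 1, 1, 1],
                      [0, 0, 0, 1, 1, 1]])

def spreading_profile(S):
    """Effective spreading length (number of nonzero entries) of each user's column."""
    return S.astype(bool).sum(axis=0)

# one column permutation per sub-stream; chosen here so that the averaged profile is flat
perms = [np.array([0, 1, 2, 3, 4, 5]),
         np.array([4, 5, 0, 1, 2, 3]),
         np.array([2, 3, 4, 5, 0, 1])]
profiles = np.array([spreading_profile(S_pattern[:, p]) for p in perms])
print("per-sub-stream profiles:\n", profiles)
print("average profile over the three sub-streams:", profiles.mean(axis=0))
```

As noted above, column permutations do not change the distance spectrum, so this averaging buys user fairness at no cost in minimum distance.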
figure [ fig : bercoded3 ] compares four pairs of turbo - coded scdma systems : \a ) rate- turbo - coded scdma systems with -user 2-resource optimal tree scdma code constructed in example [ eg : tree ] and -user 4-resource scdma code designed in example [ eg : plus46 ] .their communication rate is bit / resource .\b ) rate- turbo - coded scdma systems with -user 3-resource optimal tree scdma code constructed in example [ eg : tree ] and -user 6-resource scdma code with the optimal scdma code designed in example [ eg : plus68 ] .their communication rate is bit / resource .\c ) rate- turbo - coded scdma systems with -user 2-resource optimal tree scdma code constructed in example [ eg : tree ] and -user 4-resource scdma code designed in example [ eg : plus46 ] .their communication rate is bit / resource .\d ) rate- turbo - coded scdma systems with two - user single - resource optimal scdma code obtained in theorem [ thm : two - user ] and -user 4-resource scdma code designed in example [ eg:84 ] .their communication rate is bit / resource .each pair has the same communication rate but the code with more users has steeper ber cure , better asymptotic ber performance , due to the joint multi - user processing gain .the rate- turbo - coded two - user single - resource optimal scdma code still has a 1 db performance gain over the same rate turbo - coded suboptimal scdma code used in .and ) scdma codes ( the optimal codes obtained in examples [ eg : optmat ] , [ eg : plus46 ] and the code with latin - rectangular labeling ) under bp detection with iterations.,width=3 ] and ) scdma codes ( the optimal codes obtained in examples [ eg : tree ] , [ eg : plus46]-[eg:84 ] , and two - user single - resource scdma with optimal [ theorem [ thm : two - user ] ] and suboptimal labeling ) under bp detection with iterations.,width=3 ]we gave a code distance analysis and signature optimization for overloaded scdma systems .good scdma codes that work well under both bp and ml detections with low detection complexities are constructed .the constructed codes can support very diverse high - rate services . as an initial work, we only analyzed the code distance of uncoded scdma systems , i.e. , without fec code , and scdma with qam and equal power for each user .one possible extension is to do distance analysis for coded scdma systems , which leads to a joint fec and scdma code design .the new system can be treated as a concatenated code .some works related to concatenated code are given in .another possible extension is to consider a more general modulation and unequal - power user transmissions .although we focused on scdma systems , our design also apply to several similar well - documented system proposals , such as tcma and superposition modulation ._ proof : _ we first prove that there exists an optimal signature matrix in , \theta\in[0 , \pi/4) ] .moreover , signature matrices , \theta\in[0,\pi/4) ] give the same distance enumerator function since for any \in\triangle\mathcal{x}^2 ] , where is the complex conjugate of . to continue prove theorem [ thm : two - user ] , we simplify the expression of minimum distance as due to the following facts : since for any ,the following holds we obtain the final expression of minimum distance as since for , increases as increases , and decreases as increases , the optimal should satisfy , which leads to , i.e. , $ ] with .the theorem is proved ._ proof : _ let be an index subset .let , where is the complementary set of . from lemma [lem : distance ] , where . 
Since each row of is a length- optimal signature vector for a single-resource SCDMA system, for a given , hold for any , where if holds, otherwise . Since , decreases as increases. Moreover, since , where is the degree of the -th code node in , we have for any . Since, by varying , can be any proper subset of , by denoting we obtain . The lemma is proved.
_Proof:_ We first prove that for each there exists with . We just need to show that for a given , there exists an which is a row rotation of . Assume that is given. We determine as follows. Since the zero elements in are predetermined by the factor graph, we only determine the nonzero elements in . The procedure is similar to that in the proof of Theorem [thm:tree], except for some modifications. Terminate the procedure; otherwise, repeat Step II.
+ Note that if there are still undetermined labelings for edges in at the end of the procedure, we simply use the same labeling as in . Moreover, the above procedure only applies to the case in which the remaining graph after deleting the edges in is connected. If it is not connected, i.e., it contains multiple trees, we can label each of them independently in a similar way.
K. Alishahi, S. Dashmiz, P. Pad, and F. Marvasti, "Design of signature sequences for overloaded CDMA and bounds on the sum capacity with arbitrary symbol alphabets," _IEEE Trans. Inf. Theory_, vol. 58, no. 3, pp. 1441-1469, Mar. 2012.
Sparsely spread code division multiple access (SCDMA) is a non-orthogonal superposition coding scheme that permits a base station to communicate simultaneously with multiple users over a common channel. The detection performance of an SCDMA system is mainly determined by its signature matrix, which should be sparse to guarantee a large Euclidean distance for the equivalent signal constellation after spreading and superposition. Good signature matrices that perform well under both belief propagation and maximum likelihood detections are designed. The proposed design applies to several similar well-documented schemes, including trellis code multiple access (TCMA), low density spreading, and superposition modulation systems. Keywords: sparsely spread CDMA, non-orthogonal multiple access, signature design, code distance.
in recent years , networks have gained popularity as a tool to represent , organize , and interpret phenomena arising in many fields of science , including physics , biology , social sciences , etc .questions as diverse as the structure of the world wide web , the robustness of a nation s banking system or its power grid , or the mechanism of functions inside a cell can be expressed in terms of networks .these applications have led to networks whose structure and complexity have gone far beyond the examples studied before in the classical computer science literature . driven partly by the emergence of these new applications ,research in network science has also undergone a revolutionary change in recent years .while traditional network science was basically a subject of graph theory and focused on networks with rather simple structure , recent studies often took the viewpoint of treating networks as complex systems , and used tools and concepts from statistical mechanics .while the structure and topology of networks has been under much investigation , the dynamics on the network is less well understood despite the fact that it leads to important and nontrivial questions .for example , any network with positive - weighted edges defines a markov jump process ( mjp ) ( and vice versa ) and in many applications , it is of interest to understand the interplay between the network structure and the dynamics of this mjp .our aim here is to address such questions within the framework of transition path theory ( tpt ) , originally introduced in ( see also for reviews ) and already used in in the context of networks and mjps the present work can be viewed as a continuation of this last paper . in a nutshell , the basic idea in tpt is to single out two specific sets of nodes and analyze the statistical properties of the reactive trajectories by which transitions between these sets occur if the sets are chosen appropriately , this permits to extract the most salient features of the dynamics on the network and relate them to its topology .this is like probing an electrical network by wiring it at different locations and analyzing how the current flow from the nodes wired positively to those wired negatively .tpt is also related to the potential - theoretic approach to metastability championed by bovier and collaborators , albeit the emphases of both approaches are different .the potential - theoretic approach has been introduced as a theoretical tool to obtain rigorous bounds on the low - lying eigenvalues that characterize the slowest relaxation phenomena in mjps displaying metastability .tpt on the other hand permits to characterize exactly the statistical properties of the transition pathways on complex networks that are not necessarily metastable , or such that the low - lying part of their spectrum is too complicated to be estimated analytically .importantly tpt can also be used as a computational tool in such situations . 
by being able to analyze the flow of transitions between specific parts of the network , for example by generating numerically reactive trajectories by which these transitions occur , or even no - detour transitions paths , and analyzing their statistical properties, tpt can provide invaluable information about the network and the dynamics it supports .( a ) the two lowest minima of the potential energy of the lj .( a ) : the face - centered cubic truncated octahedron with the point group is the lowest minimum .( b ) : the icosahedral structure with the point group is the second lowest minimum . throughout this paperwe refer to them as fcc and ico , respectively.,title="fig:",height=188 ] ( b ) the two lowest minima of the potential energy of the lj .( a ) : the face - centered cubic truncated octahedron with the point group is the lowest minimum .( b ) : the icosahedral structure with the point group is the second lowest minimum . throughout this paperwe refer to them as fcc and ico , respectively.,title="fig:",height=188 ] to make this last point and illustrate the usefulness of the tools developed in this paper , we will apply them to analyze the network developed by david wales and collaborators to model the dynamics of lennard - jones clusters with 38 atoms ( ) . is a prototypical example illustrating how the complexity of a system s energy landscape ( and its associated network ) affects its dynamical properties , a feature that is also observed in other complex phenomena such as protein folding or glassy dynamics . has a double - funnel landscape : its global minimum , a face - centered - cubic truncated octahedron , lies at the bottom of one funnel , whereas its second lowest minimum , an incomplete mackay icosahedron , lies at the bottom of the other ( see fig . [ fcc_ico ] ) .the deeper octahedral funnel is also narrower , and believed to be mostly inaccessible from the liquid state .thus , when self - assembles by crystallization , it does so by reaching the bottom of the shallow but broader isocahedral funnel , and an interesting question is how does manage to subsequently find its ground state structure by travelling from the shallow funnel to the deep one ?this question of rearrangement is the one that we will address below .it is made complicated by the ruggedness of the energy landscape of , which has an enormous number of local minima separated by a hierarchy of barriers of different heights .the remainder of this paper is organized as follows . in sec .[ sec : tpt ] we summarize the main outputs of tpt . in sec .[ sec : current ] we introduce sampling tools based on the theory . in sec .[ sec : metastable ] we discuss the case of metastable networks , and establish connections between tpt and the potential theoretic approach to metastability as well as large deviation theory that arise in these situations . in sec .[ sec : lj38 ] we apply the tools introduced earlier to analyze the rearrangement of the network . finally , some concluding remarks are given in sec .[ sec : lj38 ] .tpt for networks and markov jump processes ( mjps ) is discussed in detail in ( see also ) . 
herewe give a brief summary of the theory , then discuss algorithms based on it that can be used to characterize the flows on the network .we also comment on the connections between tpt and spectral approaches to network analysis , bovier s potential theoretic approach to metastability in mjps , and large deviation theory .we will consider mjps on a countable state - space with infinitesimal generator : where for denotes the probability that the process jumps from state to state in the infinitesimal time interval $ ] .any such mjp is equivalent to a network which we denote by : the set of states of the mjp is the set of nodes in the network , and is the set of edges , i.e. the set of ordered pairs with such that .conversely , any network with positive weighted edges is equivalent to an mjp by interpreting the weights on these edges as off - diagonal entries of the mjp generator .we assume that the generator is irreducible and that the mjp is ergodic with respect to the equilibrium probability distribution satisfying for simplicity , we also assume that the mjp is time - reversible , i.e. that the detailed balance property holds we denote by the instantaneous position of the mjp and following standard conventions we assume that the function is right - continuous with left limits ( _ cdlg _ ) .tpt is a framework to understand the mechanism by which transitions from any subset to any disjoint subset occur in the mjp .specifically , tpt analyzes the statistical properties of the _ reactive trajectories _ by which these transitions occur : if denotes an infinitely long equilibrium trajectory of the mjp , the reactive trajectories associated with it are the successive pieces of during which it has last left and is on its way to next .tpt gives explicit expressions for the probability distribution of the reactive trajectories , their probability current , their rate of occurrence , etc .besides the equilibrium probability distribution and the generator , the expressions for these quantities involve the committor , defined as the probability that the process starting at a state will first reach rather than : where denotes the first hitting time of set starting from : the committor is also known as equilibrium potential of the capacitor , and is denoted by in the collection of works of bovier _ et al . _ ( see e.g. ) .it satisfies and it can be used to estimate various statistical descriptors of the reactive trajectories .for example , the equilibrium probability to find the process in state and that it be reactive which is called the _ probability distribution of reactive trajectories _ is given by indeed , the equilibrium probability to find the trajectory in is , and the probability that it is reactive , is the product between , which gives the probability that it will reach rather than next , and , which by time - reversibility gives the probability that it came from rather than last .note that is only non - zero if .note also that this distribution is not normalized to one : the quantity gives the probability that the trajectory be reactive ( i.e. 
the proportion of time it spends traveling from to at equilibrium ) , and the probability to find the trajectory at state at equilibrium conditional on it being reactive is .similarly , we can calculate the average number of transitions per unit time that the reactive trajectories make from state to state : the additional factor beside the usual accounts for the requirement that , in order to be reactive , the trajectory must have reached coming from last and it must reach next after leaving . by antisymmetrizing obtain the _ _ probability current of reactive trajectories _ _ : this current is key to understand the mechanism of the reaction as it permits to locate the productive channels by which this reaction occurs in contrast , both and indicate where the reactive trajectories go , but these locations may include many dynamical traps and/or deadends that these trajectories visit but do not contribute to their current towards .we will elaborate on these points in sec .[ sec : current ] .the current also permits to calculate the average number of transitions per unit time as the total current out of or into : this quantity is referred to as the _ reaction rate _ and it can also be expressed as follows from the detailed balance condition and the conservation of the current ( theorem 2.13 in ) : for all .the reaction rate should not be confused with the rates and defined respectively as the inverse of the average time it takes the trajectory to go back to after hitting or back to after hitting .these rates are given by where are the proportions of time such that the trajectory last hit or , respectively .in this section we show how the outputs of tpt can be used to understand the mechanism of the transitions from to .if we want to know where these trajectories go , this can be done by analyzing and .some of the locations visited by the reactive trajectories may be deadends , however , in the sense that not much current goes through them . in order to determine the productive paths ( in term of probability current ) taken by the reactive trajectories , we need to analyze the current .some tools to perform this analysis were already introduced in .for example , it was shown how to identify a dominant representative path , in the sense that this path maximizes the current it carries .while such a path can be informative about the mechanism of the reaction , it can also be misleading in situations where the probability current of reactive trajectories is supported on many paths which carry little current individually in other words , in situations where the reaction channel is spread out . herewe introduce tools that are appropriate in these situations as well , since we expect them to be quite generic in complex networks .specifically , we provide ways to generate directly reactive trajectories that flow from to without even returning to , or even trajectories that only take productive steps towards .the statistical analysis of these trajectories then provides ways to analyze the flows in the network , which we also discuss .the following technical assumptions will be used below to simplify the discussion : * _ if and , i.e. the mjp can not jump directly from to with this condition , every reactive trajectory visits at least one state outside of ._ * _ and if ._ * _ and . 
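Before turning to the propositions below, the following self-contained sketch illustrates the quantities just defined on a toy reversible chain: the committor is obtained from a small linear solve, and from it the distribution of reactive trajectories, the reactive current, the reaction rate, and the rates k_{A->B}, k_{B->A}. The six-state chain, its equilibrium distribution, and the conductances are invented for illustration only.

```python
import numpy as np

n = 6
pi = np.array([0.30, 0.05, 0.10, 0.10, 0.05, 0.40])          # invented equilibrium distribution
C = np.zeros((n, n))                                          # symmetric "conductances"
for i in range(n - 1):
    C[i, i + 1] = C[i + 1, i] = 0.02
L = C / pi[:, None]                                           # detailed balance: pi_i L_ij = pi_j L_ji
np.fill_diagonal(L, -L.sum(axis=1))

A, B = [0], [n - 1]
inner = [i for i in range(n) if i not in A + B]

# committor: sum_j L_ij q_j = 0 on the inner nodes, with q = 0 on A and q = 1 on B
q = np.zeros(n); q[B] = 1.0
q[inner] = np.linalg.solve(L[np.ix_(inner, inner)], -L[np.ix_(inner, B)].sum(axis=1))

qm = 1.0 - q                                                  # backward committor (reversible case)
mR = pi * qm * q                                              # distribution of reactive trajectories
f = (pi * qm)[:, None] * L * q[None, :]                       # reactive flux between pairs of states
np.fill_diagonal(f, 0.0)
F = np.maximum(f - f.T, 0.0)                                  # probability current of reactive trajectories

nu_AB = F[A, :].sum()                                         # reaction rate = current out of A
rho_A, rho_B = (pi * qm).sum(), (pi * q).sum()
print("committor:", np.round(q, 3))
print("reaction rate nu_AB:", nu_AB, " (current into B:", F[:, B].sum(), ")")
print("k_A->B =", nu_AB / rho_A, "  k_B->A =", nu_AB / rho_B)
```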
_ it is straightforward to generalize the statements in propositions [ th : tpp ] and [ th : lftpp ] below to situations where these assumptions do not hold , as indicated in the proofs , but it makes them slightly more involved .our first result is a proposition that indicates how to generate reactive trajectories directly .the main idea is to lump onto an artificial state all the pieces of the trajectory in the original mjp during which it is not reactive .we call the process obtained this way the transition path process , following the terminology introduced in , where a similar construction was made in the context of diffusions : [ th : tpp ] suppose that assumptions ( a ) and ( b ) hold , let , and consider the process on the state - space defined by the generator with off - diagonal entries given by where is the probability that the trajectory is reactive ( see eq . ). then this process has the same law as the one obtained from the original mjp by mapping every non - reactive piece of its trajectory onto state .in particular , on the invariant probability distribution of the transition path process coincides with the probability distribution of the reactive trajectories given in , and the average number of transition per unit time that the transition path process makes between states in is given by and the associated current by .the proof of this proposition is given at the end of this section .note that we can supplement the transition path process with the information that when it jumps to from , it comes from state with probability and when it jumps to from , it reaches state with probability with this information added , the invariant probability current of the transition - path process is the same as the one in of the reactive trajectories even if we include edges that come out of or into . by construction , in the transition path process ( like in the reactive trajectories it represents ) , the trajectories go from to directly , without ever returning to in between in the transition path process , these returns arise through visits to state . in contrast , if we were to simply turn into a source and into a sink , the process one would obtain could take many steps to travel from to because it could revisit often before making an actual transition this problem is especially acute if and are metastable states since , by definition , is then revisited often before a transition to occurs ( more on metastability in sec .[ sec : metastable ] ) . in such situations ,the reactive trajectories are much shorter since by construction they only contain this last transitioning piece. it should be stressed , however , that the reactive trajectories could still take many steps to travel from to and be complicated themselves .for example if the transition mechanism involves dynamical traps or deadends along the way , the reactive trajectories will wander a long time in the region between and before finally making their way to . in such situations , it is convenient to construct a process that carries the same probability current as the reactive trajectories , but makes no detour to go from to . 
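As a concrete companion to the proposition above, the sketch below generates reactive trajectories directly: the first state outside A is drawn with probability proportional to the reactive flux leaving A, and subsequent jumps follow the committor-reweighted (Doob h-transform) jump chain, which never returns to A and stops upon reaching B. This is a convenient sampling recipe consistent with the transition path process; it is not a literal transcription of the generator in the proposition, and the 8-node toy network is invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
C = np.zeros((n, n))                                   # symmetric conductances -> reversible MJP
for i, j in [(0, 1), (1, 2), (2, 3), (3, 7), (0, 4), (4, 5), (5, 6), (6, 7), (2, 5)]:
    C[i, j] = C[j, i] = rng.uniform(0.1, 1.0)
pi = np.full(n, 1.0 / n)
L = C / pi[:, None]
np.fill_diagonal(L, -L.sum(axis=1))

A, B = [0], [7]
inner = [i for i in range(n) if i not in A + B]
q = np.zeros(n); q[B] = 1.0
q[inner] = np.linalg.solve(L[np.ix_(inner, inner)], -L[np.ix_(inner, B)].sum(axis=1))

# first state outside A of a reactive trajectory, weighted by the flux out of A
w0 = np.array([sum(pi[a] * L[a, j] for a in A) * q[j] if j not in A else 0.0 for j in range(n)])
w0 /= w0.sum()

def sample_reactive_path():
    path = [rng.choice(n, p=w0)]
    while path[-1] not in B:
        i = path[-1]
        w = np.array([L[i, j] * q[j] if j != i and j not in A else 0.0 for j in range(n)])
        path.append(rng.choice(n, p=w / w.sum()))
    return path

for p in [sample_reactive_path() for _ in range(5)]:
    print(" -> ".join(map(str, p)))
```

Unlike the no-detour process introduced next, these sampled paths may temporarily move towards states with lower committor values, reflecting the detours that actual reactive trajectories make.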
by thiswe mean the following : if we look at the way the committor function varies along a reactive trajectory , it will start at 0 in and go to 1 in , but it will not necessarily increase monotonically between these values along the way .let us call the pieces of the reactive trajectories along which the committor increases the productive pieces , in the sense that they are the ones that bring these trajectories closer to the product , whereas they make a detour along any other piece .imagine patching together these productive pieces in such a way that the resulting process is markov and carries the same probability current as the reactive trajectories .it turns out that there is a precise way to do so , and this defines what we call the no - detour transition path process : [ th : lftpp ] suppose that assumptions ( a ) , ( b ) , and ( c ) hold , let and consider the process on the state - space defined by the generator with off - diagonal entries where and . then this process has the same stationary current as the transition path process , but the committor function increases monotonically along each of its paths on . in particular , these paths have no loops .the proof of this proposition is given at the end of this section .processes similar to the one in this proposition were introduced in .note that the equivalent of the no - detour transition path process for diffusions is somewhat trivial since the ` no - detour ' trajectories in this context are simply the flowlines of the probability current of reactive trajectories , which are deterministic .note also that we can again supplement this process with the information that when it jumps to from , it comes from state with probability , and when it jumps to from , it reaches state with probability .propositions [ th : tpp ] and [ th : lftpp ] can be used to generate reactive trajectories and no - detour reactive trajectories , which can then be analyzed using a variety of statistical tools to characterize the mechanism of the reaction .how to do so in practice will be illustrated on the example of in sec .[ sec : lj38 ] .particularly useful is to quantify how these trajectories go through specific cuts in the network , as we explain in sec .[ sec : cutstubes ] .under assumption ( b ) , the generator is irreducible because is . to prove the assertions of the proposition, we will verify that the invariant distribution of the transition path process is given by so that the average number of transitions per unit time it makes between any pair of states , that is , , is to show that is the invariant distribution of the transition path process , we consider two cases : and .for we have using the detailed balance condition , , a few terms cancel out and we are left with where we used if and if , and the last equality follows from the definition of the committor . for we have which terminates the proof .note that if assumption ( a ) does not hold , then we also need to account for the direct jumps from to in the original mjp as additional visits into state .if assumption ( b ) does not hold , we can fatten the states and to include all the nodes such that and , respectively . with this modification, the proposition is valid .the fact that the process has no loops follows directly from the form of its generator in particular the network defined by , , has no loops except for the ones through . 
the proof of the rest of the statement is similar to that of proposition [ th : tpp ] : under assumptions ( b ) and ( c ) , the generator is irreducible because is and we will show that the invariant distribution in the network with the generator in is equal to so that the average number of transitions per unit time it makes between any pair of states , that is , , is this will imply that the transition path process and the no - detour transition path process have the same stationary current , as claimed in the proposition . to show that is the invariant distribution , we consider again two cases : and . if we have using the detailed balance condition , , and the fact that we obtain where we used if and if , and the last equality follows from the definition of the committor .for we have which ends the proof .to remove assumption ( a ) , we need to account for the direct jumps from to in the original mjp as additional visits into state . to remove assumption ( b ) , we can fatten the states and to include into them all the nodes such that and , respectively . and to remove assumption ( c ) , we can restrict the statement of the proposition to the unique ergodic component of the chain with generator composed of all the states in that can be reached starting from .recall that a cut in a network is a partition of the nodes in into two disjoint subsets that are joint by at least one edge in .the set of edges whose endpoints are in different subsets of the partition is referred to as the cut - set . herewe will focus on --cuts that are such that and are on different sides of the cut - set .any --cut leads to the decomposition such that and ( see fig .[ fig : cut ] ) .--cut between the sets and whose nodes are shown in blue and green respectively .the edges of the cut - set are shown with dashed lines.,scaledwidth=50.0% ] we can use cuts to characterize the width of the transition tube carrying the current of reactive trajectories .a specific set of cuts is convenient for this purpose , namely the family of isocommittor cuts which are such that their cut - set is given by the isocommittor cuts are the counterparts of the isocommittor surfaces in the continuous case .these cuts are special because if and , the reactive current between these nodes is nonnegative , , which also means that every no - detour transition path contains exactly one edge belonging to an isocommittor cut since the committor increases monotonically along these transition paths .therefore , we can sort the edges in the isocommittor cut according to the reactive current they carry , in descending order , and find the minimal number of edges carrying at least % of this current . by doing so for each value of the committor and for different values of the percentage , onecan then analyze the geometry of the transition channel - how broad is it , how many sub - channels there are , etc .the result of this procedure will also be illustrated on the example of in sec .[ sec : lj38 ] . finally note that the reaction rate can be expressed as the total current through any cut ( not necessarily an isocommittor cut ) as ( compare ) the proof of this statement is elementary and will be omitted .in this section , we briefly discuss the case of metastable networks .we start by giving a spectral definition of metastability , then discuss the connections of our results to the potential theoretic approach to metastability and to large deviation theory .metastable networks and mjps have been the subject of many studies ( e.g. ) . 
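Before discussing metastability, here is a short sketch of the isocommittor-cut analysis described above: for a few committor levels it collects the edges crossing the cut, checks that the total reactive current through the cut reproduces the reaction rate, and counts how many edges are needed to carry 90% of that current, which is the quantity used later to measure the width of the transition tube. The toy network and its conductances are invented.

```python
import numpy as np

n = 8
C = np.zeros((n, n))
for (i, j), c in zip([(0, 1), (1, 2), (2, 3), (3, 7), (0, 4), (4, 5), (5, 6), (6, 7), (2, 5)],
                     [0.9, 0.5, 0.4, 0.8, 0.7, 0.3, 0.6, 0.9, 0.2]):
    C[i, j] = C[j, i] = c
pi = np.full(n, 1.0 / n)
L = C / pi[:, None]; np.fill_diagonal(L, -L.sum(axis=1))

A, B = [0], [7]
inner = [i for i in range(n) if i not in A + B]
q = np.zeros(n); q[B] = 1.0
q[inner] = np.linalg.solve(L[np.ix_(inner, inner)], -L[np.ix_(inner, B)].sum(axis=1))

f = (pi * (1 - q))[:, None] * L * q[None, :]; np.fill_diagonal(f, 0.0)
F = np.maximum(f - f.T, 0.0)                       # reactive current
rate = F[A, :].sum()

for qstar in (0.25, 0.5, 0.75):
    cut = [(i, j) for i in range(n) for j in range(n) if q[i] <= qstar < q[j] and F[i, j] > 0]
    flux = sorted((F[i, j] for i, j in cut), reverse=True)
    total = sum(flux)
    k90 = next(k for k in range(1, len(flux) + 1) if sum(flux[:k]) >= 0.9 * total)
    print(f"q*={qstar}: {len(cut)} crossing edges carry current {total:.4f} "
          f"(rate {rate:.4f}); {k90} edge(s) account for 90% of it")
```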
by definition , they are such that the spectrum of their generator contains one or more groups of low - lying eigenvalues .let us assume without loss of generality that or and denote by the solutions of the eigenvalue equation then the detailed balance condition implies that the eigenvalues are real , non - negative , and can be ordered as there is a low - lying group of eigenvalues if there exists a and an such that to see that this condition implies metastability , notice that the spectral decomposition of the generator , leads to the following expression for the transition probability distribution to find the walker at state at time if it was at initially : if holds , it means that on time - scales such that in , we have , and up to errors that are exponentially small in , the sum in can effectively be truncated at : in other words , on these time scales the fast processes described by the eigenvalues of index and above have already relaxed to equilibrium and what remains are the slow processes associated with the eigenvalues of index and below .this also means that the dynamics on these time scales can effectively be reduced to a markov jump processes on a state space with states .note that the spectral decomposition in also leads to a spectral decomposition for the current : where the eigencurrent associated with the pair is the eigencurrent should be compared to : as can be seen , can be obtained from by substituting the eigenvector for the committor .this suggests that if and corresponds to an eigencurrent associated with a slow process in the low - lying group , then it should be possible to find sets and such that the current of reactive trajectories between these two sets approximates .this is indeed the case , and this observation is at the heart of the potential theoretic approach to metastability developed by bovier and collaborators . in a nutshell , this approach says that , up to shifting and scaling , any low lying eigenvector can be approximated by the committor function for the reaction between two suitably chosen sets and .this observation is useful for analysis because it permits to focus on a specific eigenfunction / eigenvalue pair by studying the variational problem that the committor satisfies , that is , by minimizing the dirichlet form associated with the generator : over all subject to the boundary conditions that if and if .the minimizer of is the committor function and , by , its minimum is also the reaction rate .the discussion above makes a ( brief ) connection between the potential theoretic approach to metastability and tpt .in fact , tpt gives a way to reinterpret the various objects used in the potential theoretic approach in terms of exact statistical descriptors of the reactive trajectories .this reinterpretation is interesting because tpt applies regardless on whether the system is metastable or not .in other words , all of the formulas given in secs . [ sec : react ] and [ sec : current ] are exact no matter what the sets and are .this has the advantage that we can use the tools of tpt to analyze reactions even in situations where does not necessarily hold .more generally , our emphasis is different : we are mainly interested in using tpt to compute numerically the pathways for a reaction of interest between sets that are known before hand , rather than estimating analytically the low lying part of the spectrum . 
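The connection just outlined can be checked numerically on a caricature of a metastable network: two tightly connected clusters joined by one weak edge. The sketch below computes the spectrum of the symmetrized generator, exhibits the single low-lying nonzero eigenvalue and the gap above it, and compares that eigenvalue with the sum of the TPT rates between representative states of the two clusters; for a strongly metastable toy of this kind the two are expected to be close. All numbers are invented.

```python
import numpy as np

n = 8
C = np.zeros((n, n))
for block in (range(0, 4), range(4, 8)):                    # two strongly connected clusters
    for i in block:
        for j in block:
            if i < j:
                C[i, j] = C[j, i] = 1.0
C[3, 4] = C[4, 3] = 1e-3                                    # weak bridge between the clusters
pi = np.full(n, 1.0 / n)
L = C / pi[:, None]; np.fill_diagonal(L, -L.sum(axis=1))

# symmetrization D^{1/2} L D^{-1/2} with D = diag(pi); same spectrum as the generator
S = np.sqrt(pi)[:, None] * L / np.sqrt(pi)[None, :]
lam = np.sort(-np.linalg.eigvalsh(S))                       # 0 = lam_1 <= lam_2 <= ...
print("lowest eigenvalues:", np.round(lam[:4], 5))

# TPT rates between representative states of the two clusters
A, B = [0], [7]
inner = [i for i in range(n) if i not in A + B]
q = np.zeros(n); q[B] = 1.0
q[inner] = np.linalg.solve(L[np.ix_(inner, inner)], -L[np.ix_(inner, B)].sum(axis=1))
f = (pi * (1 - q))[:, None] * L * q[None, :]
F = np.maximum(f - f.T, 0.0)
nu = F[A, :].sum()
kAB, kBA = nu / (pi * (1 - q)).sum(), nu / (pi * q).sum()
print("lam_2 =", lam[1], " vs  k_A->B + k_B->A =", kAB + kBA)
```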
indeed, while this second goal rapidly becomes out of reach in practice for complex systems ( and typically require to make specific assumptions about the network like e.g. the ones discussed in sec . [ sec : ldt ] below ) , the first one remains achievable in a much broader class of situations , as will be illustrated in sec .[ sec : lj38 ] on the specific example of .another question of interest is when does condition applies ?one such situation occurs when the state - space is finite , , and the pairwise rates are logarithmically equivalent to in the limit as .the asymptotic properties of the eigenvalues in such systems , not necessarily with detailed - balance , was first established by a. wentzell using the tools from large deviation theory ( ldt ) developed in and summarized in ( see also ) . herewe will focus on a sub - case of the one investigated by wentzell which is relevant in the context of , namely , when the generator of the mjp is of the form where , , and are parameters .the generator corresponds to a dynamics on the network where every node has an energy associated with it , and jumps between adjacent nodes on the network follow arrhenius law , with a rate depending exponentially on the energy barrier to hop from to : the information about the network topology is embedded in the energies by setting if and are not adjacent on the network , i.e if .the parameter plays the role of the temperature , and is a prefactor which we will assume temperature - independent .the generator satisfies the detailed balance condition with respect to be boltzmann - gibbs equilibrium probability distribution , , ... are the indices of these minima , there is an edge between any pair if there is a mep with a single saddle point along it connecting and . by using the energy of the saddle point as cost for the edge , one can find the minimal spanning tree of the network ( solid edges in the top right panel ) using e.g. kruskal algorithm , and thereby obtain its disconnectivity graph ( bottom right ) . on this disconnectivity graph ,the pairs of numbers at the branching points indicate which of the nodes in the corrsponding bottom parts of the tree connect at that level of energy . using the dijkstra - based algorithm proposed in we can also calculated the minmax path connecting two states , for example between states 1 and 7 ( solid red path in the bottom left panel ) .this minmax path is relevant in regimes where ldt applies ., scaledwidth=100.0% ] in the set - up above , we can use the temperature as control parameter , in such a way that holds when . in that limit , for reasons that will become clear below, in general there are as many low - lying groups of eigenvalues as there are states ( i.e. as for all ) , and wentzell s approach provides a way to estimate each of these eigenvalues .to see how , it is convenient to organize the states of the chain on a disconnectivity graph , that is , a downward facing tree in which each node lies at the end of a branch at a depth equal to its energy , and branches in the tree are connected at the lowest energy barrier that connect all the nodes on one side of the tree to those on the other side a cartoon example is shown in fig .[ fig:7well ] : because this will be relevant in our analysis of , in this example we start from a continuous energy landscape that we convert into a network whose disconnectivity graph is then obtained from its minimal spanning tree ( bottom right , all solid edges ) calculated using e.g. kruskal s algorithm ( see e.g. 
) .the eigenvalues can then be estimated recursively from the disconnectivity graph as follows : start by identifying the lowest barrier in the tree , i.e. the adjacent pair on the tree such that is minimum over all .the node identifies the well that the system can escape by crossing the barrier of minimum height , and the largest eigenvalue in the system corresponds to the inverse of the time scale of this escape , i.e. it can then be estimated as where the symbol means that the ratio of the logarithms of both sides of this equality tends to 1 as .now remove the node and its branch from the tree , and repeat the construction : that is , in the new tree find the pair such that is minimum over all , to obtain an estimate for the next largest eigenvalue , . by iterating upon this procedure , in stepswe can then estimate for as intuitively , this procedure corresponds to lumping together the states that can be be reached on timescales of order or below , and analyzing what happens on the next timescale to get .after steps in the procedure we end up with a degenerate tree made of a single node lying at the very bottom of the original tree ( and of course we already know that ) .note that in the discussion above , we assumed that the barriers identified along the way are all different ( that is , strictly increasing with ) , which is the generic case and leads to eigenvalues that are all well - separated : if some of these barriers are equal , it means that some of the eigenvalues are asymptotically equivalent , and this case can be treated as well by generalizing the construction above .note also that estimates more precise than and can be obtained using the potential theoretic / tpt approach : in the present situation , at any stage in the iteration procedure , the states and are those that should be set as and , respectively .another interesting construction provided by ldt is the decomposition of the stochastic network into freidlin s cycles . for systems satisfying the detailed balance condition and with a rate matrix as in, the decomposition into cycles simplifies , as was recently discussed in .here we summarize this discussion and refer the interested reader to the original paper for details . in a nutshell ,the decomposition into cycles focuses on which states are most likely to be reached from a given state : in the zero temperature limit , if the system is in state , with probability one it will reach next the state connected to by the smallest barrier , i.e. searching consecutively for the next most likely state defines a dynamics on the network that generically ends with cycles made of two states : each of these cycles contain a local minimum of energy on the disconnectivity graph ( that is , a state at the bottom of a group of branches on the tree ) , and the state connected to this minimum by the lowest barrier .these cycles are called 1-cycles by freidlin .once we have identified them , we can remove from the tree the state with highest energy in each of these 1-cycles , and repeat the construction iteratively .these gives 2-cycles , 3-cycles , etc .until we again end up with a tree with only 2 nodes on it . 
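In the zero-temperature regime just described, the single path carrying the reactive current is the minmax path, and it can be computed with a small variant of Dijkstra's algorithm in which the cost of a path is the highest barrier crossed along it, in the spirit of the Dijkstra-based construction referenced above. The sketch below runs such a search on an invented 7-node landscape graph; the barrier values are placeholders.

```python
import heapq

# undirected toy landscape graph: edges as (i, j, saddle/barrier energy), all values invented
edges = [(0, 1, 2.0), (1, 2, 3.5), (2, 3, 1.5), (3, 4, 4.0),
         (0, 5, 5.0), (5, 6, 2.5), (6, 4, 3.0), (1, 6, 4.5)]
adj = {}
for i, j, b in edges:
    adj.setdefault(i, []).append((j, b))
    adj.setdefault(j, []).append((i, b))

def minmax_path(src, dst):
    """Path from src to dst minimizing the maximum barrier crossed along the way."""
    best = {src: 0.0}
    prev = {src: None}
    heap = [(0.0, src)]
    while heap:
        cost, u = heapq.heappop(heap)
        if u == dst:
            break
        if cost > best.get(u, float("inf")):
            continue
        for v, b in adj[u]:
            new_cost = max(cost, b)                 # path cost = highest barrier seen so far
            if new_cost < best.get(v, float("inf")):
                best[v] = new_cost
                prev[v] = u
                heapq.heappush(heap, (new_cost, v))
    path, u = [], dst
    while u is not None:
        path.append(u); u = prev[u]
    return path[::-1], best[dst]

path, barrier = minmax_path(0, 4)
print("minmax path:", path, " highest barrier along it:", barrier)
```

At very low temperature the rate is then controlled, through the Arrhenius factor, by the returned highest barrier; at higher temperatures, as illustrated below for the LJ38 network, the current spreads over many paths and this single-path picture breaks down.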
in this construction , we can also keep the information about the state in the original network by which any -cycle is exited : with probability 1 as , this is the state whose barrier is the lowest to escape all the states contained in this -cycle .a corollary of the fact that cycles are exited in a predictable way is that , between any two nodes in the network taken as sets and , there exists a single path on the network that concentrates all the current of the reactive trajectories as .this path has a minmax property : the maximal barrier separating every pair of states and on the path is minimal among the maximal barriers along all paths in the network connecting and ( see fig . [ fig:7well ] for an illustration ) . in , the construction of the hierarchy of freidlin s cycles was performed via a sequence of conversions of rate matrices into jump matrices followed by taking limits .relying on the properties of the hierarchy of cycles specific for the systems with detailed balance , an efficient dijkstra - based algorithm was also proposed for computing the minmax path .importantly , this algorithm did not built the whole hierarchy of cycles , but only computed the sub - hierarchy relevant to the transition process of interest , and did not require any pre - processing of the stochastic network .we conclude this section on ldt with a remark .as explained above , the ldt picture applies in the limit when , in which case the hierarchy of different barriers in the disconnectivity graph corresponds to timescales that become infinitely far apart as .while this picture is indeed correct at extremely low temperature , we do not expect it to remain valid as the temperature is increased , even if the system does remain strongly metastable ( i.e. such that some low - lying groups of eigenvalue do persist ) .rather , we expect that the transition channel will rapidly broaden if the networks is large , and that the mechanism of the reaction wil depart from that predicted by ldt .our analysis of by tpt will indeed confirm this picture .a lennard - jones cluster is made of particles ( or atoms ) interacting via the lennard - jones pairwise potential given by .\ ] ] here denotes the positions of the particles in the cluster , is the distance between particles and , and and are parameters measuring respectively the strength and range of the interactions . at the most fundamental level , the finite - temperature dynamics of the cluster can be modeled as a continuous diffusion over the potential .this dynamics is extremely complicated owing to the multiscale nature of this potential which , when is large ( e.g. as we will consider below ) possesses an enormous number of local minima separated by a hierarchy of barriers of various height .a few thermodynamic properties of these clusters are known , however .first , it is known that the majority of global potential energy minima for lennard - jones clusters of various sizes involve an icosahedral packing . however , lennard - jones clusters with special numbers of atoms admit a high symmetry configuration based on a face - centered cubic packing , with a lower energy .the smallest cluster with this property contains atoms .the global potential energy minimum of the cluster is achieved by a truncated octahedron with the point group ( fig . 
[ fcc_ico ] ( a ) ) , which from now on we will simply refer to as fcc .the second lowest minimum is the icosahedral structure with the point group ( fig .[ fcc_ico ] ( b ) ) , which we will refer to as ico .it is also known that the basin around ico is much wider than that around fcc these two basins are usually referred to as funnels in the literature .this has thermodynamic consequences when the temperature of the system is non - zero .indeed , the fcc basin only remains the preferred basin for with ( here denotes boltzmann constant ) . at ,the system undergoes a solid - solid phase transition where the ico basin becomes more likely due to its with greater configurational entropy ( see e.g. fig . 4 in ) .next , at , the outer layer of the cluster melts , while the core remains solid .then the cluster completely melts at .the difference of widths of the two basins also has dynamical consequences . indeed , due to its larger width , the ico basin is the one that is most likely to be reached by the system after crystallization even if .the question then becomes how does the system reorganize itself to get out of the dynamical trap around ico and in its preferred state around fcc ?it is also of interest to understand how this process is influenced by the temperature , since the rearrangement pathway is likely to be influenced by it .these are the type of questions that we will address in this section , as an illustration of the tpt - based network analysis tools presented earlier .this study is complementary to those conducted by wales and collaborators in the same context using different tools .the problem of rearrangement of has been the object of much studies in the past 15 years ( see e.g. ) .an interesting approach to the problem has been proposed by david wales and collaborators , who undertook an ambitious program aiming at mapping the evolution of onto a network / mjp and reducing the analysis of the dynamics of to the study of this network . while this mapping is technically hard to perform in practice and required a lot of inventiveness , it is conceptually quite simple to understand .if the temperature of the system is small enough , it will spend a long time near the bottom of the energy well around the local minima it is currently in before a thermal fluctuation large enough will manage to push it above an energy barrier separating it from an adjacent well . the system will then fall near the bottom of this adjacent well and the process will repeat . in this regime , the dynamics can be reduced to a basin hoping : the local minima of the energy become the nodes on the network , two such nodes are connected by an edge if the system can transit from one minimum to the another by crossing a single barrier , and the rate / weight of the directed edge from one node to another involves ( via arhennius formula ) the height of the energy barrier(s ) that must be crossed to perform this transition this construction was illustrated on a toy example in fig .[ fig:7well ] .an additional simplification made in the case of is to lump together all the minima and saddle point that are equivalent by symmetry ( point group , permutation , etc . 
) .all together this construction led to a network for that contains a single connected component with 71887 nodes associated with the lowest local minima on the landscape ( which include fcc and ico ) , and 119853 edges this information is publicly available from the database wales s website .the database also contains the information about the generator , whose off - diagonal entries are in a form consistent with here is proportional to the inverse of the system s temperature , , , and are , respectively , the point group order , the value of the potential energy , and the geometric mean vibrational frequency for the local minimum associated with node , , and are the same numbers for the transition state connecting the local minima and ( there may be more than one of them for every pair adjacent on the network ) , and is the number of vibrational degrees of freedom .as in , if there is no minimum energy path connecting the minima with index and via a single saddle point , we set and .note that , by construction , the generator defined by satisfies detailed - balance with respect to the following boltzmann - gibbs equilibrium distribution : the network representation of via will be our starting point here .the majority of local minima / nodes listed in wales database do not have special names for example , fcc and ico are simply listed 1st and 7th , respectively . except for these two , in the sequel we will simply refer to the other minima by their indices in the database .we also work in reduced units in which the temperature is measured in units of . since we are interested in the mechanism of rearrangement between ico and fcc , we take the nodes of these two states as sets and , respectively .we also checked that our results do not change significantly if we fatten these states by including in them the nodes that are in the connected component around them where all the nodes have energy within of that of fcc and ico , respectively . a key preliminary step in the application of tpt to the calculation of the committor function .this calculation requires solving which , in the present case , is a system of linear equations with the same number of unknowns .the detailed balance property allows us to make the matrix in symmetric by multiplying each row by .the resulting system can then be solved using the conjugate gradient method with the incomplete cholesky preconditioning ( see e.g. ) .this works for . for lower values of the temperature, the scale separation between the possible values of for different becomes too large for the computer arithmetics . in order to overcome this difficulty we truncate the network by keeping only the nodes whose energy is below a given cap this is legitimate because , the lower the temperature , the least likely it is to observe a reactive trajectory venturing at energies much higher than above that of the overall barrier between ico and fcc .for each value of temperature we set this cap as high as possible while keeping the system nonsingular in the computer arithmetics . the energy caps and resulting network sizes for the different values of temperature that we considered are listed in table [ table1 ] .the values in parentheses are the difference between the caping energy and that of fcc , .all in all , we computed the committor for temperatures ranging from to using steps of .the disconnectivity graphs of the network we used at three different temperatures , , , and are shown in fig .[ fig : dgraph ] . 
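For a network of this size the committor solve has to be carried out in sparse arithmetic. The sketch below mimics the procedure just described: the linear system is symmetrized using detailed balance by scaling row i with pi_i, and the resulting positive-definite system is solved with preconditioned conjugate gradients. An incomplete-LU factorization (scipy's spilu) is used here as a stand-in for the incomplete Cholesky preconditioner mentioned in the text, the random sparse network with invented energies and saddle heights replaces the actual LJ38 database, and the energy-cap truncation is only indicated by a comment.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(2)
n, beta = 2000, 2.0
E = rng.uniform(0.0, 5.0, n)                                    # invented node energies

# undirected edge list: chain backbone (keeps the graph connected) plus random shortcuts
ei = np.concatenate([np.arange(n - 1), rng.integers(0, n, 3 * n)])
ej = np.concatenate([np.arange(1, n), rng.integers(0, n, 3 * n)])
keep = ei != ej
ei, ej = ei[keep], ej[keep]
Eb = np.maximum(E[ei], E[ej]) + rng.uniform(0.1, 1.0, ei.size)   # invented saddle energies

pi = np.exp(-beta * E); pi /= pi.sum()
data = np.concatenate([np.exp(-beta * (Eb - E[ei])), np.exp(-beta * (Eb - E[ej]))])
L = sp.coo_matrix((data, (np.concatenate([ei, ej]), np.concatenate([ej, ei]))), shape=(n, n)).tocsr()
L = L - sp.diags(np.asarray(L.sum(axis=1)).ravel())              # detailed balance holds by construction

order = np.argsort(E)
A_nodes, B_nodes = [order[0]], [order[1]]                        # two deepest minima play the roles of the end states
# (at low temperature one would additionally drop all nodes above an energy cap here, cf. table [table1])
U = np.array([i for i in range(n) if i not in set(A_nodes) | set(B_nodes)])

Amat = -sp.diags(pi[U]) @ L[U][:, U]                             # symmetric positive-definite system
rhs = np.asarray((sp.diags(pi[U]) @ L[U][:, B_nodes]).sum(axis=1)).ravel()
ilu = spla.spilu(Amat.tocsc(), drop_tol=1e-4, fill_factor=10)    # stands in for incomplete Cholesky
M_pre = spla.LinearOperator(Amat.shape, ilu.solve)
qU, info = spla.cg(Amat, rhs, M=M_pre)

q = np.zeros(n); q[B_nodes] = 1.0; q[U] = qU
print("cg info:", info, " committor range on interior nodes:", qU.min(), qU.max())
```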
on these figures, we only included the nodes through which at of the total current of reactive trajectories goes and we colored the branches of the graph according to the value of the committor of the nodes at the end of these branches .as can be seen , as the temperature increases , the committor function becomes less step - like , and a higher number of nodes gets values than are in between the extreme 0 and 1 ..energy caps and network sizes used for different value of the temperature . [ cols="^,^,^",options="header " , ] [ table1 ] ( left ) , ( center ) , ( right ) .each disconnectivity graph includes only those local minima through which at least 1% of the reaction pathways from ico to fcc pass.,scaledwidth=100.0% ] ico and fcc at different temperatures .these rates display an almost perfect arrhenius - like behavior in this temperature range , even though the mechanism of the rearrangement becomes increasingly complex as the temperature increases .the zoom shown in the inset shows that a cross - over between and occurs ar ( i.e. ) : this is the temperature above which ico becomes more favorable than fcc due to entropic effects related to the relative widths of the funnels around these two structures.,scaledwidth=100.0% ] .the edges shown carry at least 10% of the total reactive flux from ico to fcc the thickness of the arrow is proportional to the precentage of current the edge carries , and the actual percentage is also displayed next to it .the values of the committor at the nodes are show in greyscale , with the explicit values of given for some of them .the blue arrows show the minmax path from ldt : at this low temperature , most of the current goes along this path .the highest barrier crossed along the minmax path is between nodes and ( ) .also show in inset is the energy profile along the minmax path.,title="fig:",scaledwidth=100.0% ] at . at this higher temperature ,most of the edges carry less than of the total current : in particular , we can no longer go from ico to fcc by following edges that carry at least of the current .this also implies that the minmax path from ldt is no longer relevant to explain the mechanism of the rearrangement at this temperature the edges along this path that carry more than of the current are still shown in blue .the edges between nodes 8 and 3223 and nodes 3223 and 354 carry less than of the current : we show them because these edges belong to the dominant representative path introduced , i.e. the path maximizes the current it carries .this path is different from the minmax path but , as can be seen in this example , it is not relevant either in situations where the transition channel becomes spread out ., title="fig : " ] . as the temperature increases , the no - detour paths tend to cross higher barriers ., scaledwidth=100.0% ] is plotted against the index of these edges ordered by this magnitude .the inset shows the empirical cumulative distribution function of the current through the edges in the cut . the number of edges that must be included to account for a given percentage of the total current increases rapidly with temperature , indicative of the broadening of the reaction channel for the rearrangement.,scaledwidth=100.0% ] at .the way this representation was constructed is explained in text.,scaledwidth=100.0% ] at ., scaledwidth=100.0% ] at .,scaledwidth=100.0% ] at .,scaledwidth=100.0% ] once the committor function has been calculated , we can use tpt to calculate the rate of rearrangement of and characterize its mechanism . 
using formulae with ico and fcc , we obtain the rates at which the system rearranges itself between these two states . these rates are shown in fig . [ fig : rate ] as a function of the inverse temperature . as can be seen , both rates are almost perfectly straight on a log - linear scale , and can be fitted by . since the energy barriers between fcc and ico and ico and fcc are 4.219 and 3.543 , respectively , these fits are consistent with the arrhenius law . the fits in also compare well with the ones calculated in for the temperature range : and . note also that the rates cross at the value ( i.e. ) : this temperature is the one above which tpt predicts that ico becomes preferred over fcc , which is slightly higher than the value listed in sec . [ sec : thermo ] . this crossover is due to entropic effects related to the relative widths of the funnels around ico and fcc . the arrhenius - like nature of the rates may suggest that the mechanism of rearrangement of the cluster is quite simple , and dominated at all the temperatures that we considered by the hopping over the lowest saddle point separating ico and fcc . this impression , however , is deceptive . to see why , in figs . [ fig : cartoon1 ] and [ fig : cartoon2 ] let us compare cartoon representations of the current of reactive trajectories given in at two different temperatures , and . the way these representations were constructed is by plotting all the nodes in the network such that the current of reactive trajectories along the edges between them carries at least 10% of the total current , and connecting these nodes by an arrow whose thickness is proportional to the magnitude of the current . as can be seen in fig . [ fig : cartoon1 ] , at , most of the current concentrates on a single path : this path coincides with the minmax path between ico and fcc predicted by ldt . at the higher temperature of , however , we see that this minmax path becomes mostly irrelevant , and in fact we can no longer go from ico to fcc following edges that carry at least 10% of the current . the reason is that the current becomes very spread out among the edges of the network , indicating that the tube carrying most of the current of reactive trajectories also becomes quite wide . to quantify this observation further , we used proposition [ th : lftpp ] to generate samples of the no - detour transition path process at every temperature . ( in the present example , it turns out that the network is so complex that the reactive trajectories themselves , which we can in principle generate via proposition [ th : tpp ] , are too long to be sampled efficiently . this arises because these trajectories wander too often into quasi - dead ends or in between intermediate structures , and this is why we focused on no - detour transition paths , which are much shorter and can be generated in great number . ) we used this sample of no - detour transition paths to first analyze the height of the highest energy barrier along these paths measured with respect to . the empirical cumulative distribution functions of these barrier heights are shown in fig . [ fig : barriers ] . as can be seen , at the low temperature of , this distribution is very peaked around the value , which is the height of the lowest saddle point separating ico and fcc .
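the cartoon representations described above can be generated from the reactive - current matrix computed earlier ; the short sketch below keeps only the edges whose effective ( net ) current exceeds a chosen fraction of the total current . the use of the net current , the 10% threshold as a default , and the variable names are assumptions made for illustration only .

```python
import numpy as np

def dominant_edges(f, nu_AB, fraction=0.10):
    """Edges whose net current of reactive trajectories carries at least
    `fraction` of the total current nu_AB.

    f is the sparse matrix of reactive currents f_ij; the effective current
    along an edge is taken as max(f_ij - f_ji, 0)."""
    fplus = (f - f.T).maximum(0.0).tocoo()
    keep = fplus.data >= fraction * nu_AB
    edges = zip(fplus.row[keep], fplus.col[keep], fplus.data[keep] / nu_AB)
    # return (i, j, share of the total current), strongest edges first
    return sorted(edges, key=lambda e: -e[2])
```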
at higher temperatures , however , this distribution broadens significantly , indicating that higher barriers are frequently crossed by the no - detour transition paths . this is an entropic effect : in essence , we can think of the height of the barrier in terms of ` bonds ' between the lennard - jones particles that need to be broken for the rearrangement to proceed . what our results show is that the number of no - detour paths increases very rapidly with the maximal number of bonds that are ever broken along them . at low temperature , the rearrangement proceeds mostly by no - detour paths along which no more than about 4 bonds are broken , because these paths are energetically favorable . at higher temperature , however , no - detour paths along which 5 , 6 or even 7 bonds break start to matter : even though they are less favorable energetically , their sheer number means that they eventually carry more current globally . a consequence of this effect is that the width of the reaction channel also broadens significantly with temperature . this is quantified in fig . [ fig : fluxincut ] , where we analyze the current along the edges in the isocommittor cut . by ordering these edges by the magnitude of the current they carry , and plotting this current magnitude as a function of the edge index , we arrive at the plots on the main panel of fig . [ fig : fluxincut ] . as can be seen , these plots widen as the temperature increases , and display a power law behavior for a range of edge indices . the inset of fig . [ fig : fluxincut ] shows the cumulative distribution of the current through the edges in the isocommittor cut , and shows that the higher the temperature , the more edges need to be included to get a significant percentage of the total current : for example , at , thousands of edges in the cut ( that is , most of them ) need to be included in order to account for of the current . the mechanism of rearrangement thus departs significantly from the one predicted by ldt , even though the rates remain arrhenius - like even at this high temperature . we tried to capture the complexity of the mechanism of rearrangement visually using the representation of the network of current of reactive trajectories shown in figs . [ fig : net06 ] [ fig : net15 ] . these figures were constructed as follows . we plotted every node of the network through which at least of the total current went . we ordered these nodes along the -axis according to the cumulative distribution function of their committor , using a coloring from blue to green to indicate their actual committor value . along the -axis , we ordered the nodes according to the inverse of the magnitude of current of reactive trajectories , , they carry ( the higher the node , the less current it carries ) and we connected the nodes by lines whose darkness is proportional to the magnitude of the current between them . we also faded the color as this magnitude decreased . finally , we used dots of different sizes to represent the nodes : the bigger the node , the larger the magnitude of the average number of transitions per unit time that the reactive trajectories make through this node , see . this is a way to try to capture dead ends and dynamical traps on the network , i.e. , nodes that the reactive trajectories visit often but through which little current of reactive trajectories goes .
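the cut analysis of fig . [ fig : fluxincut ] can be reproduced along the following lines ; the sketch assumes the isocommittor cut is taken at the level 1/2 and again uses the net current max(f_ij - f_ji , 0) for each edge , so the details are illustrative rather than a transcription of the exact procedure used in the paper .

```python
import numpy as np

def isocommittor_cut_currents(f, q, level=0.5):
    """Currents through the edges of the isocommittor cut.

    The cut collects every edge (i, j) with q_i < level <= q_j; sorting the
    net currents of these edges and accumulating them gives the curves in
    the main panel and the empirical CDF in the inset of the figure."""
    fplus = (f - f.T).maximum(0.0).tocoo()
    in_cut = (q[fplus.row] < level) & (q[fplus.col] >= level)
    currents = np.sort(fplus.data[in_cut])[::-1]   # strongest edge first
    cdf = np.cumsum(currents) / currents.sum()     # cumulative share of the total
    return currents, cdf

# number of cut edges needed to account for, e.g., 90% of the current:
# n90 = int(np.searchsorted(cdf, 0.90) + 1)
```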
in these figures , dead ends appear as nodes that are high and big . overall , what these figures confirm is that , as the temperature increases , the current of reactive trajectories spreads more and more on the network , and the reaction channel broadens . it also confirms that there exist many dead ends and dynamical traps on the network . this last aspect makes tpt particularly suitable for analyzing the mechanism of rearrangement : indeed , a spectral analysis of the network along the lines discussed in sec . [ sec : spectral ] is both hard to perform in the present situation and uninformative because it is too global . we have presented a set of analytical and computational tools based on tpt to analyze flows on complex networks / mjps . we expect these tools to be useful in a wide variety of contexts . the network representation of that we used here as illustration is just a specific example of markov state model ( msm ) used to map a complex dynamical system onto an mjp ( see e.g. ) . during the last decade , such msms have emerged as a way to analyze timeseries data generated e.g. by molecular dynamics simulations of macromolecules , general circulation models of the atmosphere / ocean system , etc . in these contexts , massively parallel simulations , special - purpose supercomputers , and high - performance graphics processing units ( gpus ) make it possible to generate time series data in amounts too large to be grasped by traditional `` look and see '' techniques . msms provide a way to analyze these data by partitioning the conformation space of the molecular system into discrete substates , and reducing the original kinetics of the system to markov jumps between these states ; in other words , by interpreting the timeseries as some dynamics on a network , with the states in the msms playing the role of the nodes on the network , and the transition rates between these states being the weights of the directed edges between these nodes . while msms typically provide an enormous simplification of the original timeseries data , the associated networks are typically quite complex themselves , with many nodes , a nontrivial topology of edges between them , and rates / weights on these edges that can span a wide range of scales . the tools that we derived from tpt can be used for the nontrivial task of analyzing these networks / msms . more generally , we expect the tools developed in this paper to be useful to analyze and interpret other networks that have emerged in many areas as a way to represent complex data sets . we thank prof . david wales for providing us with the data of the network and miranda holmes for interesting discussions . m. c. held a sloan research fellowship and was supported in part by darpa yfa grant n66001 - 12 - 1 - 4220 , and nsf grant 1217118 . e. v .- e . was supported in part by nsf grant dms07 - 08140 and onr grant n00014 - 11 - 1 - 0345 . bowman , g. r. , pande , v. s. , and noé , f. , eds . : an introduction to markov state models and their application to long timescale molecular simulation . advances in experimental medicine and biology , 797 . springer ( 2014 ) . den hollander , f. : three lectures on metastability under stochastic dynamics . in methods of contemporary mathematical statistical physics ( r. kotecky , ed . ) . lecture notes in math . springer , berlin ( 2009 ) . vanden - eijnden , e. : transition path theory . in _ computer simulations in condensed matter : from materials to chemical biology _ , eds . m. ferrario , g. ciccotti , k. binder , pages 439 - 78 . springer , berlin ( 2006 ) .
| a set of analytical and computational tools based on transition path theory ( tpt ) is proposed to analyze flows in complex networks . specifically , tpt is used to study the statistical properties of the reactive trajectories by which transitions occur between specific groups of nodes on the network . sampling tools are built upon the outputs of tpt that allow to generate these reactive trajectories directly , or even transition paths that travel from one group of nodes to the other without making any detour and carry the same probability current as the reactive trajectories . these objects permit to characterize the mechanism of the transitions , for example by quantifying the width of the tubes by which these transitions occur , the location and distribution of their dynamical bottlenecks , etc . these tools are applied to a network modeling the dynamics of the lennard - jones cluster with 38 atoms ( ) and used to understand the mechanism by which this cluster rearranges itself between its two most likely states at various temperatures . |
omitting o and b - type stars in consideration of their short lifespans and strong ionizing winds ( e.g. , * ? ? ?* ; * ? ? ?* ) , as well as the transitory case of a - type stars , f - type main - sequence stars represent the hot limit of stars with a significant potential for providing circumstellar habitable environments ( e.g. , * ? ?* ; * ? ? ?generally , the investigation of habitability around different types of stars , particularly main - sequence stars , is considered a theme of pivotal interest to the thriving field of astrobiology ; see , e.g. , , , , , and for previous studies and reviews . for planetary host stars like any other star the stellar mass determines their lifetime ( shorter with larger mass ) , luminosity and effective temperature ( both higher with larger mass ) .main - sequence stars moderately more massive than the sun with masses between 1.1 and 1.6 , i.e. , f - type stars , are of particular interest as hosts to exosolar planets and exomoons in orbit about those planets . compared to stars of later spectral types ,f - type stars are characterized by relatively large habitable zones , although from a general astrobiological point of view , they exhibit the adverse statistical property of being less frequent ( e.g. , * ? ? ?* ; * ? ? ?* ) . on the other hand , despite their reduced lifetimes compared to g - type stars ,their lifetimes still exceed several billion years ( e.g. , * ? ? ?* ) , allowing the principle possibility of exobiology , potentially including advanced life forms .while habitable zones and their evolution can , in general , be well characterized in terms of the total amount of stellar irradiation , the spectral energy distribution ( including its portion of energetic radiation ) may nevertheless be significant as well for the facilitation of habitability ( e.g. , * ? ? ?the emergent radiation of f - type stars consists of significantly larger amounts of uv compared to the sun thus entailing potentially unfavorable effects on planetary climates and possible organisms ( e.g. , * ? ? ?previous studies showed that increased levels of uv , as well as the even more energetic euv radiation , can trigger a variety of chemical planetary atmospheric processes , including exoplanetary atmospheric evaporation ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?hence , the radiation output provided by f - stars places an additional constraint on circumstellar habitability , which needs to be considered as part of more comprehensive assessments ; see , e.g. , for previous results .it is the aim of the present study to consider some of these processes in an approximate manner while also taking into account the evolutionary status of the f - type host stars . in sect . 2, we comment on the concept of habitability , which also includes a description of the climatological habitable zone .additionally , we discuss the governing equations of the present study involving both the dna action spectrum and planetary atmospheric attenuation . in sect . 
3 , we describe the spectral energy distribution of the host stars based on sophisticated photospheric models and their spectral energy output computed by the phoenix code for the range of effective temperatures relevant to f - type stars .the specific effective temperature and total luminosity of each star for a given mass and age are derived from well - tested stellar evolution models , which also convey the time scales of the circumstellar conditions during stellar main - sequence evolution as well as the extents of the climatological habitable zones .results and discussion are given in sect .4 , which focuses on habitability gauged by the damage inflicted upon dna .particular emphasis is placed on the relevance of the different types of uv ( i.e. , uv - a , uv - b , and uv - c ; see sect . 2.2 for definitions ) .our summary and conclusions are given in sect .a key aspect in the study of circumstellar habitability is the introduction of the climatological habitable zone , a concept , evaluated by .they utilized 1-d climate model to estimate the position and width of the habitable zone around solar - like stars as well as other types of main - sequence stars .the basic premise consists in assuming a earth - like planet with a co/h / n atmosphere and , furthermore , that habitability requires the presence of water on the planetary surface . in their workthey distinguished between the _ conservative _ habitable zone ( chz , with limits of 0.95 and 1.37 au ) and the _ general _ habitable zone ( ghz , with limits of 0.84 and 1.67 au ) ; subsequent work about the ghz has also been given by and others .the physical significance of the various kinds of hzs obtained by are given as follows : the ghz is defined as bordered by the runaway greenhouse effect ( inner limit ) and the maximum greenhouse effect ( outer limit ) . concerning the latter it is assumed that a cloud - free co atmosphere shall still be able to provide a surface temperature of 273 k. by contrast , the inner limit of the chz is defined by the onset of water loss .in this case , a wet stratosphere is assumed to exist where water is lost by photodissociation and subsequent hydrogen escape to space .furthermore , the outer limit of the chz is defined by the first co condensation attained by the onset of formation of co clouds at a temperature of 273 k ; see , e.g. , for additional details and applications . owing to the shape of the photospheric spectra and the total amount of the radiative energy fluxes ,the limits of the habitable zones are known to depend both on the stellar effective temperatures and the luminosities .the results for the chzs and ghzs as well as the appropriate earth - equivalent positions for main - sequence stars between spectral type f0 and g0 are given in fig . 
1 .for f0 v stars , the chz extends between 2.27 and 2.92 au and the ghz extends between 1.99 and 3.67 au .furthermore , for f8 v stars , the chz extends between 1.29 and 1.80 au and the ghz extends between 1.14 and 2.21 au .the corresponding stellar data are given in table 1 .they are those also adopted for the photospheric models computed with the phoenix code ( see sect .3.1 ) , which have subsequently been used for our astrobiological studies .another aspect of habitability concerns the limits of the climatological habitable zones for the various types of stars .they have been calculated based on the formalism by .it provides a suitable polynomial fit and , furthermore , implements the required correction for the solar effective temperature in consideration of that used for an unusually low value of 5700 k instead of 5777 k as currently accepted .next we comment on the relevance of the dna action spectrum .the most fundamental radiometric technique to quantify radiative damage on biomolecules and microorganisms is spectroradiometry .biological effectiveness spectra can be derived from spectral data by multiplication with an action spectrum of a relevant photobiological reaction with the action spectrum typically given in relative units normalized to unity for , e.g. , nm . the biological effectiveness for a distinct range of the electromagnetic spectrum such as uv radiation is determined by where denotes the stellar irradiance ( ergs s nm ) , the wavelength ( nm ) , and the planetary atmospheric attenuation function ; see . here and are the limits of integration which in our computations are set as 200 nm and 400 nm , respectively .although a significant amount of stellar radiation exists beyond 400 nm , this portion of the spectrum is disregarded in the following owing to the minuscule values for the action spectrum in this regime .planetary atmospheric attenuation , sometimes also called extinction , results in a loss of intensity of the incident stellar radiation . in eq . ( 1 ) = 1 indicates no loss and = 0 indicates a complete loss ( see sect .2.3 ) ; note that can be attained by various types of methods , which may also consider detailed atmospheric photochemical models .based on previous work , the uv region of the electromagnetic spectrum has been divided into three bands termed uv - a , uv - b , and uv - c .the subdivisions are somewhat arbitrary and differ slightly depending on the discipline involved . herewe will use : uv - a , 400 - 320 nm ; uv - b , 320 - 290 nm ; and uv - c , 290 - 200 nm . following , the division between uv - b and uv - c is chosen as 290 nm since uv at shorter wavelengths is unlikely to be present in terrestrial sunlight , except at high altitudes .the choice of 320 nm as the division between uv - b and uv - a is suggested by the level of photobiological activity although subdivisions at 330 or 340 nm have previously also been advocated . in order to compute the irradiance toward targets in circumstellar environments , typically positioned in stellar habitable zones ,a further equation is needed , which is where is the stellar radiative flux , is the stellar radius and is the distance between the target and the star . in the framework of this work , we will focus on planets at different positions in stellar hzs . for stellar uv radiation we will consider photospheric radiation only ( see sect .3 ) because the chromospheric uv radiation from f - type stars is of minor importance . 
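the two relations just introduced , the biologically weighted integral of eq . ( 1 ) and the distance scaling of eq . ( 2 ) , can be evaluated numerically with a short sketch like the one below ; the symbols used here ( wavelength grid `lam` in nm , surface flux `F_lam` , dna action spectrum `S_dna` normalized at 300 nm , attenuation `A_att` , stellar radius `R_star` and distance `d` in consistent units ) are assumed names , and in the actual study `F_lam` comes from the phoenix models described in sect . 3 .

```python
import numpy as np

def biological_effectiveness(lam, F_lam, S_dna, A_att, lam_min=200.0, lam_max=400.0):
    """Eq. (1): integrate irradiance x DNA action spectrum x attenuation over
    the 200-400 nm window with a simple trapezoidal rule."""
    m = (lam >= lam_min) & (lam <= lam_max)
    y = F_lam[m] * S_dna[m] * A_att[m]
    x = lam[m]
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def irradiance_at_distance(F_surface, R_star, d):
    """Eq. (2): dilute the flux emerging at the stellar surface with the
    inverse square of the distance, F(d) = F_surface * (R_star / d)**2."""
    return F_surface * (R_star / d) ** 2
```

setting `A_att` to unity everywhere recovers the unattenuated case used as the reference in sect . 4 .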
action spectra for dna , also to be viewed as weighting functions , have previously been utilized to quantify and assess damage due to uv radiation .besides dna , action spectra have also been derived for other biomolecules , for biostructures such as cellular components as well as for distinct species , especially extremophiles ( e.g. , * ? ? ?* ; * ? ? ?* and references therein ) . provides information on the dna action spectrum for the range from 285 nm ( uv - c ) to 400 nm ( uv - a ) .she found that between 400 and 300 nm , the action spectrum increases by almost four orders of magnitude ( see fig . 2 ) .the reason for this behavior is the wavelength - dependency of the absorption and ionization potential of uv radiation in this particular regime .a further significant increase in the dna action spectrum occurs between 300 and 200 nm ( see below ) .fortunately , however , the earth s ozone layer is very sufficient to filter out this type of lethal radiation ( e.g. , * ? ? ?* ; * ? ? ?* ) , which in our models is mathematically dealt with by considering an appropriate attenuation function ( see eq . 1 ) . the behavior of the dna action spectrum in the wavelengths regime between 200 and 290 nm has been given by .he points out that a significant reason for the susceptibility to uv - induced damage are -electron systems , notably because of their to energy transitions .it is found that this type of interaction also accounts for protein damage and damage to enzymes of photosynthesis , which indicates that these types of biochemical reactions are of general importance .the increase in the relative biological damage is about a factor of 35 relative to the damage at the reference wavelengths of 300 nm ( see fig . 2 ) .a relevant ingredient to our study is the consideration of planetary atmospheric attenuation , which typically results in a notable reduction of the received stellar radiation .appropriate values for can be obtained through the analysis of theoretical exoplanetary models ( e.g. , * ? ? ?* ; * ? ? ?* ) , inspired by recent results from the _ kepler _ mission , or the usage of historic earth - based data ( e.g. , * ? ? ?within the scope of the present work that is mostly focused on the impact of photospheric radiation from stars of different spectral types , as well as on the role of active and inactive stellar chromospheres , we assume that is given by a parameterized attenuation function att defined as \ \ ; \ ] ] see fig . 3 for a depiction of different examples .here denotes the start - of - slope parameter , ( in nm ) the center parameter , and the maximum ( limited to unity ) of the distribution .for example , provided information about the ultraviolet irradiance reaching the surface of archean earth for various assumptions about earth s atmospheric composition ; the latter allow us to constrain the wavelength - dependent attenuation coefficients for earth 3.5 gyr ago .furthermore , there is a large array of recent studies about exosolar planetary atmospheres , including those for rocky planets .they encompass models regarding the detailed treatment of atmospheric photochemistry , including the build - up and destruction of ozone , as discussed by , e.g. 
, , and .these models provide information on different exoplanetary structures due to outside forcings , including variable stellar radiation , which in principle allow the derivation of detailed planetary atmospheric attenuation functions .in accordance to previous studies of our group , as given by and , which mostly focused on super - earth planets , we rely again on selected stellar evolution models that have been computed with the well - tested eggleton code ; see , e.g. , for a description of an adequate solar evolutionary model .the eggleton code allows us to take into account the changing properties of the host star during its evolution on the main - sequence and , subsequently , as a red giant .we use an advanced version of the eggleton code , including updated opacities and an improved equation of state as described by .besides other desirable characteristics , the adopted evolution code utilizes a self - adapting mesh and also permits treating overshooting " a concept of extra mixing , which has been thoroughly tested considering observational constraints .in particular , the two convection parameters , the `` mixing length '' and the `` overshoot length '' , have been calibrated by matching accurately the physical properties of various types of stars , including giants and supergiants of well - known masses , found in well - studied , eclipsing binary systems . for the abundance of heavy elements , which decisively affect the opacities , we use the near - solar value of .this choice is an appropriate representation of present - day samples of stars in the thin galactic disk , noting that they exhibit a relatively narrow distribution ( to 0.03 ) about this value .the adopted evolution code also considers a detailed and well - tested description of the stellar mass loss , which becomes important for the final stages of giant star evolution ; see and .regarding our models of planetary host stars , the principal input parameters are the total luminosity and effective temperature , which are found to change with time . obviously , the resulting total lifetime of the star is a quantity of high significance as well . while the effective temperature and luminosity of the host star already allow a good representation of circumstellar habitability , as done through the stellar climatological habitable zone ( see sect .2.1 ) , the irradiation especially in the uv regime is of pivotal importance as well for arriving at a realistic evaluation of habitability .the main constraint arises from having sustainable conditions for biological organisms and biochemical processes , which provide the basis of life . here, damaging ultraviolet radiation must be of particular concern .its share regarding the total stellar luminosity critically depends on the stellar effective temperature and increases significantly from late ( f9 ) to early ( f0 ) f - type stars .we employ the necessary and accurate account of stellar radiation , including its spectral energy distribution , by utilizing a number of photospheric models computed by the phoenix code following ; see fig .the adopted range of models for the f - type stars are in response to effective temperatures of 7200 k for spectral type f0 , 7000 k for f1 , 6890 k for f2 , 6700 k for f3 , 6440 k for f5 , 6200 k for f8 , and 6050 k for g0 .the phoenix code iterates the principal physics and structure of a stellar atmosphere until a final model is obtained , which is in radiative and hydrostatic equilibrium ; see . 
as part of this procedure , energy transport by convection is also considered .the phoenix code solves the equation of state , including a very large number of atoms , ions , and molecules . with respect to radiation , the one - dimensional spherically symmetrical radiative transfer equation for expanding atmospheresis solved , including a treatment of special relativity ; see .opacities are sampled dynamically over about 80 million lines from atomic transitions and billions of molecular lines , in addition to background ( i.e. , continuous ) opacities , to provide an emerging spectral flux as reliable and realistic as possible . as part of our study , all spectra emergent from a stellar model atmosphere have first been calculated with a high resolution of , for a highly complete inclusion of the lines .the spectra were then binned down to a much lower resolution , which is more practical for our subsequent astrobiological analyses .main - sequence stars , including the sun , represent the slowest phase of stellar evolution , since at that stage the largest energy reservoir of the star is consumed : its central hydrogen is converted to helium ; see fig . 5 for examples of f - star evolutionary tracks .all later phases , where the star becomes a red giant twice , before and after the ignition of the central helium burning , are much faster and present much larger changes in stellar luminosity ; see for a detailed study of the implications for habitable super - earths .hence , as previously found , main - sequence stars are , in general , most promising in the context of astrobiology .though there are limits , since an increasing mass accelerates stellar evolution considerably .nonetheless , f - type stars corresponding to the mass - range of about 1.1 to 1.6 , provide stable lifetimes of 2 to 4 billion years , expected to be sufficient for the origin and evolution of life .however , regarding the evolution of effective temperatures and , in consequence , of the spectral distribution of the emergent radiation , f - type stars differ somewhat from the sun , especially the most massive ones : their effective temperatures rise to much higher values while still staying on the main - sequence , and there is a quick fall afterwards , and a rise again toward the end of this phase .the reason for this evolutionary behavior resides in the cores of f - stars : while the solar core is not convective but relies purely on radiative energy transport , f - stars employ the more efficient process of convection for transporting their high amounts of produced energy to the outer layers . inside the most massive f - stars , where core convection has gained full power ,rising bubbles even overshoot the boundary set by the schwarzschild criterion . 
hence , as a by - product , the convective cores of f - stars benefit from an enhanced chemical mixing by gaining access to a larger hydrogen reservoir around them , within reach of the `` overshoot length '' .therefore , f - type stars still spend a relatively long time on the main - sequence , making good in part for their faster evolution in general .this phenomenon is most notable between 1.4 and 1.5 , where overshooting sets in .however , the evolutionary behavior of f - stars after the end of central hydrogen burning also differs a bit from that of the sun , which does neither experience overshooting nor any core convection .these particulars of stellar evolution determine both the stellar effective temperatures and luminosities ( see table 1 ) ; they also drive the changes of the spectral characteristics of the irradiation and , finally , the inner and outer limit of the respective habitable zones . compared to g - type and k - type stars , f - type stars move relatively fast through the final stages of evolution beyond the main - sequence . in those stages ,very dramatic changes in luminosity ( increasing by over a factor 10 ) , stellar radius ( increasing by over a factor 100 ) and effective temperature ( decreasing by over a factor 2 ) occur .the corresponding time scales are limited to a few hundred million years compared to the sun , which will take about 2 billion years for completing these steps .for these reasons , we will restrict the study of circumstellar habitability of f - stars to their phases on and near the main - sequence .next we focus on the evolution of f - star climatological habitable zones , and the subsequent identification of the inner and outer limits of the chzs and ghzs . as pointed out in sect .3.2 , f - type stars , alike the sun , encounter a slow but consistent growth of luminosity during their main - sequence phase as well as characteristic changes of the stellar effective temperatures .this is relevant for all types of f - stars , i.e. , for the entire range of masses between about 1.2 and 1.5 as addressed in the following .these masses are identified as such at the zero - age main - sequence ( zams ) , and they experience little change prior to the departure of the star toward the red giant domain ; the latter is outside the scope of this study .an example is given as figure 6 .it shows the evolution of the climatological habitable zone for a star with an initial mass of 1.3 .it also depicts the development of the earth - equivalent position given as ( with as stellar luminosity in solar units ) , which is progressively moving outward .note that by the end of central hydrogen burning , the total stellar radiation output has almost doubled , entailing a considerable increase of both the inner and outer limits of the climatological habitable zone , identified either as chz or ghz .furthermore , the stellar spectral types change due to alterations of their effective temperatures during the course of stellar evolution , including main - sequence evolution . in the latter case ,stars of initial masses between 1.2 and 1.5 stay within the f - type range ( see table 2 ) .although it is appropriate to explore uv - based habitability at any distance from a planetary host star , we rather select the inner and outer limits of the chz ( labelled as ic and oc , respectively ) , and the inner and outer limits of the ghz ( labelled as ig and og , respectively ) as venues of our study . 
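a zeroth - order sketch of how these limits and the earth - equivalent position track the evolving luminosity is given below ; it uses the solar - calibrated boundary distances quoted in sect . 2.1 together with a simple scaling by the square root of the luminosity , and it deliberately omits the effective - temperature dependent correction applied in the paper , so the numbers it returns are only approximate .

```python
import numpy as np

# solar-calibrated boundary distances in au (cf. sect. 2.1)
CHZ_IN, CHZ_OUT = 0.95, 1.37   # conservative habitable zone
GHZ_IN, GHZ_OUT = 0.84, 1.67   # general habitable zone

def habitable_zone(L):
    """Approximate habitable-zone limits (au) for a luminosity L in solar
    units: every boundary distance is scaled by sqrt(L), and the
    Earth-equivalent position is taken as d_eq = sqrt(L) au."""
    s = np.sqrt(L)
    return {"ic": CHZ_IN * s, "oc": CHZ_OUT * s,
            "ig": GHZ_IN * s, "og": GHZ_OUT * s,
            "earth_equivalent": s}

# example: habitable_zone(3.5) for a 1.3 solar-mass star near the end of the
# main sequence (the luminosity value is assumed here purely for illustration)
```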
results are given in table 3 and figure 7 . we especially consider the extreme positions of the ig , ic , oc , and og attained as a consequence of the stellar main - sequence evolution , i.e. , as identified at the zams stage and when the stars depart from the main - sequence . our study indicates once again that when stars age the climatological habitable zones become broader and migrate outward . a special aspect of significance to astrobiology concerns the continuous domains of both the chzs and ghzs , also referred to as continuously habitable zones ( see fig . 7 ) , where the criteria for the chzs and ghzs ( see sect . 2.1 ) are fulfilled for a distinct period of time ( here : the stellar main - sequence stages ) . the continuously habitable zones have evidently much smaller widths than the habitable zones defined by the extremes for the same amounts of time . the continuous domains of habitability for our set of stars are particularly small for the chzs . for a 1.2 this domain extends from 1.366 to 1.688 au , whereas for a 1.5 it extends only from 2.535 to 2.566 au ; this aspect should be taken into account for any comprehensive habitability assessment . the aim of our study is the investigation of biological damage inflicted upon dna due to stellar uv radiation for objects around main - sequence f - type stars . our results will be given in terms of , a parameter defined as the ratio of damage for a given distance from the star for an object with or without atmospheric attenuation relative to the damage for an object at 1 au from a solar - like star without atmospheric attenuation present . we will focus on the following aspects : ( 1 ) influence of spectral types of the host stars , especially for f0 v , f2 v , f5 v , and f8 v , ( 2 ) relevance of planetary positions in the stellar habitable zones , including the general and conservative inner limits , earth - equivalent positions , and the general and conservative outer limits , ( 3 ) effects of planetary atmospheric attenuation approximated by analytic functions , and ( 4 ) the relative importance of uv - a , uv - b , and uv - c regarding the dna damage . the dna damage in the habitable zones around f0 v , f2 v , f5 v , and f8 v stars in cases without planetary atmospheric attenuation is found to be almost always more significant than for the reference object at earth distance from a solar - like star ( see figure 8 ) . the sole exception is for objects at the outer limit of the ghz around an f8 v star , which in the solar system would correspond to an orbit beyond mars ; here is given as 0.95 . planetary atmospheric attenuation effectively reduces the damage to dna , as expected ( see figure 9 ) . in this work , as a standard example adopted for tutorial reasons , the planetary attenuation parameters are chosen as , , and , unless otherwise noted . in this case of attenuation , the damage on dna , for positions at the inner limit of the ghzs , is drastically reduced compared to the reference object without any attenuation . for other positions in the stellar habitable zones , the reductions are relatively high as well ( see figure 9 ) . it is particularly intriguing to compare the respective values of for f0 v , f2 v , f5 v , and f8 v stars for different positions in the stellar habitable zones as they indicate the role of the significantly higher uv fluxes of f - type stars compared to the photospheric radiation of a solar - like star .
for the inner limits of the ghzs ( i.e., the closest positions to the stars considered ) , for f0 v , f2 v , f5 v , and f8 v stars are identified as 9.8 , 7.3 , 4.2 , and 3.6 without atmospheric attenuation ; they decrease to 0.23 , 0.19 , 0.14 , and 0.14 , respectively , when our default choice of atmospheric attenuation is applied . at earth - equivalent positions ( which depend on the stellar spectral typemuch like the above - mentioned limits of habitability ) , for each star are given as 7.1 , 5.2 , 3.0 , and 2.5 , respectively , without atmospheric attenuation , and as 0.16 , 0.14 , 0.10 , 0.10 with attenuation .at the outer limits of ghzs , for each star is given as 2.9 , 2.1 , 1.1 , and 0.95 without atmospheric attenuation , and as 0.067 , 0.055 , 0.038 , and 0.037 with attenuation .intermediate values are obtained for the inner and outer limits of the chzs . in all cases dna damage due touv - a , compared to uv - b and uv - c , is minuscule at best . due to uv - a at the inner limit of the ghz for an f0 v star is without atmospheric attenuation , which is the maximum value of all cases considered .this value is further reduced to in case of default atmospheric attenuation . of the three regimes of uv , the one most affected by our above choice of planetary attenuation is uv - c . for f0v , f2 v , f5 v , and f8 v , the dna damage due to uv - c is 96% , 95% , 93% , and 91% , respectively , of the total for the uv regimes combined . if atmospheric attenuation is included , the dna damage due to uv - c is reduced to 1.7% , 1.9% , 2.3% , and 2.6% of the damage due to uv - c without planetary attenuation .it comprises 70% , 68% , 65% , and 61% of the total for each case of attenuation .the relative damage attributable to uv - c decreases with stellar types from f0 v to f8 v. the damage due to uv - a and uv - b also decreases with stellar types from f0 v to f5 v , but it slightly increases between f5 v and f8 v because of the shape of the stellar photospheric spectrum ( see figure 5 ) .since atmospheric attenuation is expected to impact preferably the uv regime , which in turn is most relevant to the dna damage , we also explored the effects of parameter choices for the attenuation function att ( see eq .3 ) . for tutorial reasons ,we focused on the f2 v star , and varied one of the parameters , i.e. , , , or , at a time . figure 10 depicts two cases out of several combinations of fixed parameters that were investigated . in the top panels of figure 10 ,the parameter is varied from 0.01 to 1.0 , whereas the other parameters remain fixed at = 300 and = 0.5 . in the bottom panels ,the parameter is varied from 250 to 350 and the two other parameters remain fixed at = 0.05 and = 0.5 .the two panels on the left show the relative impact of uv - a , uv - b , and uv - c on dna at an earth - equivalent position .note that the impact of uv - a is , however , completely unrecognizable in all panels .the two panels to the right show the dna damage at five selected positions in the habitable zone of the f2 v star , which are : the inner limits of the ghzs and chzs , the earth - equivalent position , and the outer limits of the chzs and ghzs . 
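the band - resolved damage quoted above , and the attenuation study that follows , can be organized with a sketch along these lines ; the logistic form chosen here for the attenuation function of eq . ( 3 ) is only one possible realization consistent with the qualitative description of sect . 2.3 ( a start - of - slope parameter , a center wavelength in nm , and a maximum limited to unity ) , since the exact expression is not reproduced in this excerpt , and the parameter values in the final comment are likewise assumptions for illustration .

```python
import numpy as np

UV_BANDS = {"UV-C": (200.0, 290.0), "UV-B": (290.0, 320.0), "UV-A": (320.0, 400.0)}

def attenuation(lam, s, lam_c, m):
    """A possible parameterized attenuation function: a logistic curve with
    start-of-slope parameter s, center lam_c (nm) and maximum m <= 1
    (1 = no loss, 0 = complete loss)."""
    return m / (1.0 + np.exp(-s * (lam - lam_c)))

def damage_by_band(lam, F_lam, S_dna, A_att):
    """Split the weighted integral of Eq. (1) into UV-A, UV-B and UV-C parts."""
    out = {}
    for band, (lo, hi) in UV_BANDS.items():
        msk = (lam >= lo) & (lam < hi)
        y = F_lam[msk] * S_dna[msk] * A_att[msk]
        out[band] = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(lam[msk]))
    return out

# a standard attenuation case could then be set up as, e.g.:
# A_att = attenuation(lam, s=0.05, lam_c=300.0, m=0.5)
```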
it is found that damage inflicted on dna is considerably larger for objects closer to the star .thus , in fig .10 , the top two lines represent the inner limits of the chz and ghz , and the bottom two lines represent the outer limits of the chz and ghz .the parameter depicts the rate of change in att with increasing wavelength ( see fig .thus , a smaller value of corresponds to a gentler slope of att , but also to a considerably higher value of att at relatively short wavelength ( i.e. , near 200 nm ) , which implies a higher effect of attenuation where damage inflicted on dna is most severe ( see fig . 2 ) .the parameter describes the center wavelength of the attenuation function ; it also changes the response of each regime to varied values of .thus , the appearance of as function of the attenuation parameter heavily depends on the fixed value of parameter .for example , let us consider the case . here , the dna damage due to uv - a increases , while the damages due to uv - b and uv - c decrease as the parameter increases ( see fig . 10 ) . here decreases from 0.73 to 0.0074 for an earth - equivalent position with increasing .the largest changes occur between and 0.08 , where due to uv - c declines drastically . in these cases , especially for our standard choice of attenuation ( see sect .4.1 ) , most of the dna damage is attributable to uv - c . but for , to uv - b and due to uv - c are about the same , and for larger values of , due to uv - b becomes greater than due to uv - c .we also varied the parameter while keeping the values for and fixed . is identified as a decreasing function for variable values of .since parameter determines the center of the attenuation function , the effect of uv diminishes from the lowest wavelengths as is increased . in other words , uv - cis affected the most by varying .if and are used , decreases from 1.32 at to 0.14 at and to 0.0012 at for objects at earth - equivalent positions .this behavior is shown in figure 10 .if and are used , the appearance of each functional dependence is similar to the case of ; however , of uv - b exceeds that of uv - c at . the functional dependence for uv - c is steeper for than for .the total amount of decreases from 1.36 at to 0.048 at and to at for objects at the earth - equivalent position .dna damage is inversely proportional to the square of distance between a planet and the host star , and thus , the ratio of dna damage ( with or without the consideration of atmospheric attenuation ) at one position to another does not change with the selected combination of parameters .because of that , if , the ratio of dna damage at a position to the reference value , at different positions are compared , the differences are smaller for more effective combinations of parameters . for example , the dna damage at the inner limits of the ghzs and the chzs , and at the outer limits of the chzs and ghzs are 139% , 108% , 62% , and 40% of dna damage as measured by , respectively , as found for the f2 v star .if attenuation is considered , and if the attenuation parameters are chosen as , is found as 1.01 , 0.78 , 0.73 , 0.45 , and 0.29 at the inner limit of the ghz and chz , at the earth - equivalent position , and at the outer limit of the chz and ghz , respectively . 
by contrast , if the same parameters are set as , our standard choice ) , is identified as 0.0119 , 0.0092 , 0.0086 , 0.0053 and 0.0034 , respectively .we also studied the influence of stellar main - sequence evolution , resulting in pronounced changes in the stellar effective temperatures and luminosities regarding the damage inflicted on dna .again , we consider cases without and with planetary atmospheric attenuation ( see sect . 4.1 ) .specifically , we explore the change in at specific positions in the habitable zones for stars with masses of 1.2 , 1.3 , 1.4 , and 1.5 .the selected positions are the general outer limit at zams ( bottom dotted line in figures 11 and 12 ) , the conservative outer limit at zams ( bottom dashed line ) , the general inner limit at the end of main - sequence ( top dotted line ) , the conservative inner limit at the end of main - sequence ( top dashed line ) , the earth - equivalent positions at zams ( i.e. , minimum distance ; top solid line ) and the end of main - sequence ( i.e. , maximum distance value ; bottom solid line ) ; see sect . 3.3 . the distances for various positions are given in table 4 and figure 7 .furthermore , we also consider average ( i.e. , time - independent ) earth - equivalent positions , which are derived by interpolating the conservative outer limit at the zams and the conservative inner limit at the end of main - sequence evolution noting that these limits correspond to the continuous domains of the chzs .generally , the earth - equivalent positions are computed as weighted averages between the inner and outer limits of the chzs , which can be approximated as and ( with in solar units ) , respectively , with the earth - equivalent positions given as . for stars with masses of 1.2 , 1.3 , 1.4 , and 1.5 , the average earth - equivalent positions are obtained as 1.56 , 1.82 , 2.10 , and 2.54 au , respectively , which in the solar system would correspond to an approximate distance nearly between mars and ceres .moreover , we also consider evolving earth - equivalent positions , noting that the inner and outer limits of chzs change on evolutionary time scales ( see fig . 7 ) ; see discussion below . in principle , the damage inflicted on dna at any position within the stellar habitable zones at the zams increases as a function of the stellar mass ( see figures 11 and 12 ) . regarding the inner and outer limits of both the chzs and ghzs , the damage on dna first increases with evolutionary time , but then starts decreasing with time , while the stellar luminosity keeps increasing and both the chzs and ghzs continue to migrate outward . if no planetary attenuation is considered , the zams values at average earth - equivalent positions for stars of 1.2 , 1.3 , 1.4 , and 1.5 are found as 0.91 , 2.46 , 3.41 , and 5.02 , respectively .the maximum values for the dna damage , expressed as , at average earth - equivalent positions are given as 1.96 , 2.63 , 3.48 , and 5.07 , respectively .these values are obtained at stellar evolutionary times of 1.95 , 0.93 , 0.27 , and 0.14 gyr , respectively ; it means that the maximum amounts of damage inflicted on dna are attained much earlier for stars of higher mass ( and , by implication , of earlier spectral type and higher initial effective temperature ) . 
at the end of main - sequence evolution , the values at average earth - equivalent positions are then reduced to 0.96 , 1.04 , 1.07 , and 0.32 , respectively ; they are comparable to or smaller than those for a reference object orbiting a solar - like star . if default planetary atmospheric attenuation is assumed , corresponding to , the damage is drastically reduced , i.e. , by up to 96% , 97% , 97% , and 98% for stars of 1.2 , 1.3 , 1.4 , and 1.5 , respectively . the precise amount of reduction is a function of the stellar evolutionary status . the amount of reduction moreover depends on the type of star , which determines the shape of the emergent spectrum , therefore entailing different amounts of damage . as previously discussed , the damage is dominated by the uv - c regime ( see eq . 1 and fig . ) . if standard attenuation is considered , the zams values for at average earth - equivalent positions for stars of 1.2 , 1.3 , 1.4 , and 1.5 are reduced to 0.038 , 0.082 , 0.098 , and 0.121 , respectively . the maximum values at those positions are given as 0.075 , 0.087 , 0.099 , and 0.122 , whereas at the end of the main - sequence stages , they are further reduced to 0.040 , 0.041 , 0.041 , and 0.014 , respectively . for specific age intervals , changes in different ways in the vicinities of stars of different masses . the following data refer to objects at average earth - equivalent positions , but they can also be converted for other star - planet distances in a highly straightforward manner . we are particularly interested in the change of between 0.5 gyr and 2.5 gyr , i.e. , during the early stages of the systems , contemporaneous with the time when life originated on earth ( e.g. , and references therein ) . for a star with a mass of 1.2 , increases from 1.75 to 1.96 until 1.95 gyr , and thereafter decreases to 1.92 at the average earth - equivalent position if planetary atmospheric attenuation is absent . for a star with a mass of 1.3 , increases from 2.61 to 2.63 until 0.93 gyr , and then decreases to 1.86 . for stars with masses of 1.4 and 1.5 , decreases from 3.46 to 1.10 and from 4.88 to 0.52 , respectively . thus , a planet in the habitable zone of a 1.2 mass star will experience the least change in the amount of dna damage , whereas that change will be greatest for a 1.5 mass star . adequate models should also take into account effects due to planetary atmospheric attenuation . in this case , even the climatological habitable zone of a 1.5 mass star may be able to offer a relatively well uv - protected environment throughout the early 2 gyr period . the same statement is expected to apply for f - type stars of lower mass as well . if our default model of atmospheric attenuation is adopted , for a 1.2 mass star increases from 0.069 to 0.075 until 1.95 gyr , and then decreases to 0.074 ; note that these data refer to average earth - equivalent positions . for a 1.3 mass star , for the same setting increases from 0.086 to 0.087 until 0.93 gyr , and then decreases to 0.067 . in comparison , for stars with masses of 1.4 and 1.5 , decreases from the maximum value of 0.099 to 0.042 and from 0.118 to 0.021 , respectively . for other planetary positions , the respective values can be obtained through appropriate scaling . moreover , we also computed the dna damage at evolving ( i.e. , time - dependent ) earth - equivalent positions , depicted as dashed lines in fig . 7 .
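as a minimal sketch of the evolutionary calculation discussed here , and under the assumption that the damage a given epoch's spectrum would inflict at 1 au is already available ( e.g. from the integral of eq . ( 1 ) evaluated on the appropriate phoenix model ) , the time dependence at a fixed average position or at the evolving earth - equivalent position follows from the inverse - square dilution alone ; the sqrt(L) form of the evolving position is an assumption of this sketch .

```python
import numpy as np

def damage_track(L_track, D_1au_track, r_fixed=None):
    """Relative DNA damage along an evolutionary track.

    L_track: luminosity in solar units as a function of age;
    D_1au_track: damage the evolving spectrum would inflict at 1 au;
    r_fixed: if given, a fixed (e.g. time-averaged Earth-equivalent) distance
    in au; otherwise the evolving position r(t) ~ sqrt(L(t)) is assumed.
    The damage at distance r is D_1au / r**2."""
    L_track = np.asarray(L_track, dtype=float)
    D_1au_track = np.asarray(D_1au_track, dtype=float)
    r = np.sqrt(L_track) if r_fixed is None else np.full_like(L_track, r_fixed)
    return D_1au_track / r ** 2
```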
for all cases from 1.2 to 1.5 , the earth - equivalent positions at zams are located very close to the stars . in fact , they are situated interior to the continuous domains of both the chzs and ghzs . as the stars age , the earth - equivalent positions migrate outward , crossing or even passing the continuous domains of the chzs , especially for relatively massive f - type stars . since the damage inflicted on dna is inversely proportional to the square of the distance from the star , the attained results reflect the behavior of the planetary positions . the maximum dna damage at evolving earth - equivalent positions without atmospheric attenuation for stars of 1.2 , 1.3 , 1.4 , and 1.5 is given as 2.55 , 3.56 , 4.74 , and 7.74 , respectively , occurring at very early stages of the stellar lifetimes , whereas the minimum dna damage is given as 0.92 , 0.99 , 1.01 , and 0.29 , respectively , obtained at the end of main - sequence evolution ( see fig . ) . damage on dna is reduced by planetary atmospheric attenuation , as expected . for default attenuation , the maximum damage is found as 0.099 , 0.119 , 0.136 , and 0.186 , respectively , whereas the minimum damage is found as 0.038 , 0.040 , 0.039 , and 0.012 , respectively ( see fig . ) . in this study , we investigated the general astrobiological significance of f - type main - sequence stars . dna has been taken as a proxy for carbon - based macromolecules following the paradigm that extraterrestrial biology is most likely to be associated with hydrocarbon - based biochemistry . consequently , the dna action spectrum was utilized to describe the impact of the stellar uv radiation . we considered an array of important aspects , including ( 1 ) the role of stellar main - sequence evolution , ( 2 ) the situation for planets at different positions within the stellar habitable zones , and ( 3 ) the general influence of planetary atmospheric attenuation , which has been described based on a parameterized attenuation function . the damage on dna was described by the output parameter , defined as the ratio of damage for a given distance from the star for a general object relative to the damage for an object at 1 au from a solar - like star with attenuation absent . ( 1 ) for average earth - equivalent planetary positions , located inside the continuous domains of the chzs , for stars of spectral type f0 v , f2 v , f5 v , and f8 v are obtained as 7.1 , 5.2 , 3.0 , and 2.5 , respectively . earth - equivalent planetary positions depend on the stellar spectral type , and are at greater distance for stars of higher effective temperature or , by implication , larger mass . these results are consistent with the earlier work by . ( 2 ) for the inner and outer limits of the chzs as well as ghzs , the results for can be obtained by scaling . specifically , the damage inflicted on dna is considerably increased at the inner limits of the chzs and ghzs and considerably decreased at the outer limits of the chzs and ghzs relative to average earth - equivalent positions . for f2 v stars , the scaling factors for the inner limits are given as 139% and 108% and for the outer limits as 62% and 40% relative to that at average earth - equivalent positions . ( 3 ) owing to the form of the dna action spectrum ( see fig . 2 ) , in the absence of significant planetary atmospheric attenuation , most of the damage on dna is because of uv - c . damage due to uv - b is significantly lower , and damage due to uv - a is virtually nonexistent .
regarding the latter , the largest value in the context of this study was attained at the inner limit of the ghz for an f0 v star in the absence of planetary atmospheric attenuation , which is . ( 4 ) planetary atmospheric attenuation , especially that associated with ozone layers ( e.g. ) , is able to reduce the damage inflicted on dna drastically . in consideration of realistic atmospheric attenuation functions , this aspect entails a drastic reduction of damage associated with uv - c . on a relative scale , it thus tends to increase the importance of uv - b . ( 5 ) it is particularly intriguing to assess the behavior of during stellar main - sequence evolution , which has been evaluated in detail for stars of masses of 1.2 , 1.3 , 1.4 , and 1.5 . if no planetary attenuation is taken into account , the zams values at average earth - equivalent positions are identified as 0.91 , 2.46 , 3.41 , and 5.02 , respectively . the values at other positions in the stellar habitable zones can be obtained by scaling , noting that the incident stellar radiation is diluted following the inverse square law . ( 6 ) taking average earth - equivalent positions as reference , the values are found to change with time in response to changes in the stellar parameters owing to the stellar main - sequence evolution . increases with time and reaches maximum values of 1.96 , 2.63 , 3.48 , and 5.07 for stars with masses of 1.2 , 1.3 , 1.4 , and 1.5 , respectively ; they are attained at evolutionary times of 1.95 , 0.93 , 0.27 , and 0.14 gyr , respectively . thereafter , the values decline . they reach 0.96 , 1.04 , 1.07 , and 0.32 for the selected set of stars . these values are comparable to or smaller than that at an earth - equivalent position for a solar - like star . our study is a further contribution toward the exploration of the exobiological suitability of stars hotter and , by implication , more massive than the sun . although these stars are relatively rare compared to solar - type g stars , they possess significantly augmented habitable zones . on the other hand , their emergent photospheric uv fluxes are much larger ; fortunately , however , they can be diminished through planetary atmospheric attenuation . thus , at least in the outer portions of f - star habitable zones , uv radiation should not be viewed as an insurmountable hindrance to the existence and evolution of life . future studies for f - type stars should encompass ( 1 ) detailed chemical models of planetary atmospheres , aimed at constraining the attenuation parameters , ( 2 ) examples of specific star - planet systems with information attained from observational constraints , and ( 3 ) cases of f - type stars that are members of binary ( or higher order ) systems . studies of the circumstellar habitability for those systems , also encompassing analyses of planetary orbital stability , have been given by , e.g. , and others . * acknowledgements . * this work has been supported by the department of physics , university of texas at arlington ( s. s. , m. c. ) , by a ugto / promep - funded project ( d. j. ) , and by a conacyt master student stipend ( c. m. g. o. ) . additionally , k. p. s. is grateful for conacyt support of his sabbatical year projects under application no . 207662 . guinan , e.f . & ribas , i. ( 2002 ) . in _ the evolving sun and its influence on planetary environments _ , proc . , eds . montesinos , b. , gimenez , a. , and guinan , e.f . , pp . , astr . soc . , san francisco . lammer , h. , bredehöft , j.h . , coustenis , a. , khodachenko , m.l .
,kaltenegger , l. , grasset , o. , prieur , d. , raulin , f. , ehrenfreund , p. , yamauchi , m. _ et al . __ * 17 * , 181249 .lcccccccccccc & & & & + 0.5 & 6178 & 1.770 & f8 k & 6444 & 2.644 & f5 & 6720 & 3.780 & f3 k & 7084 & 5.228 & f1 k + 1.0 & 6195 & 1.878 & f8 & 6447 & 2.839 & f5 & 6682 & 4.077 & f3 k & 6983 & 5.728 & f1 k + 1.5 & 6205 & 1.996 & f8 & 6432 & 3.036 & f5 & 6589 & 4.361 & f4 k & 6775 & 6.247 & f3 k + 2.0 & 6211 & 2.121 & f8 k & 6386 & 3.224 & f6 k & 6428 & 4.591 & f5 k & 6482 & 6.753 & f5 k + 2.5 & 6204 & 2.247 & f8 & 6303 & 3.383 & f7 k & 6218 & 4.755 & f8 k & 6117 & 7.047 & f9 + 3.0 & 6178 & 2.366 & f8 k & 6174 & 3.493 & f8 k & ... & ... & ... & ... & ... & ... + 3.5 & 6130 & 2.468 & f9 & ... & ... & ... & ... & ... & ... & ... & ... & ... + 4.0 & 6058 & 2.555 & g0 & ... & ... & ... & ... & ... & ... & ... & ... & ... + | we explore the general astrobiological significance of f - type main - sequence stars with masses between 1.2 and 1.5 . special consideration is given to stellar evolutionary aspects due to nuclear main - sequence evolution . dna is taken as a proxy for carbon - based macromolecules following the paradigm that extraterrestrial biology may be most likely based on hydrocarbons . consequently , the dna action spectrum is utilized to represent the impact of the stellar uv radiation . planetary atmospheric attenuation is taken into account based on parameterized attenuation functions . we found that the damage inflicted on dna for planets at earth - equivalent positions is between a factor of 2.5 and 7.1 higher than for solar - like stars , and there are intricate relations for the time - dependence of damage during stellar main - sequence evolution . if attenuation is considered , smaller factors of damage are obtained in alignment to the attenuation parameters . this work is motivated by earlier studies indicating that the uv environment of solar - type stars is one of the most decisive factors in determining the suitability of exosolar planets and exomoons for biological evolution and sustainability . |
the field of computational astrophysics is entering an exciting and challenging era . the large amount of observational data involving general relativistic phenomena requires the integration of numerical relativity with the traditional tools of astrophysics , such as hydrodynamics , magneto - hydrodynamics , nuclear astrophysics , and radiation transport . general relativistic astrophysics , i.e. , astrophysics involving gravitational fields so strong and dynamical that the full einstein field equations are required for its accurate description , is quickly becoming a promising area of research . ( correspondence should be addressed to mark miller . ) as a first step in our study of `` computational general relativistic astrophysics '' , our collaboration ( the ncsa / potsdam / wash u numerical relativity collaboration ) is building a code called `` cactus '' for solving the full set of einstein field equations coupled to a perfect fluid source . such a code will have many applications for astrophysical processes involving neutron stars and black holes . in this paper we present the formulation and methods of the 3d general relativistic hydrodynamic part of the code , and its coupling to the spacetime part of the code . we also present various tests for the validation of the code . the complementary presentation on the ( vacuum ) spacetime evolution part of the code has been given in . in the following we begin by discussing the background of our code development effort . two of the major directions of astronomy in the next century are high energy astrophysics ( x - ray and gamma - ray astronomy ) and gravitational wave astronomy . the former is driven by advanced x - ray and gamma - ray satellite observations , e.g. , cgro , axaf , glast , xmm , integral , that are either current or planned in the next few years . high energy radiation is often emitted by highly relativistic events in regions of strong gravitational fields , e.g. , near black holes ( bhs ) and neutron stars ( nss ) . one of the biggest mysteries of modern astronomy , gamma - ray bursts , is likely related to processes involving interactions of compact binaries ( bh / ns or ns / ns ) or highly explosive collapse to a black hole ( `` hypernova '' ) ( see , e.g. , and references therein ) . such high energy astrophysical events often involve highly dynamical gravitational fields , strong gravitational wave emissions , and ejecta moving at ultrarelativistic speeds with relativistic lorentz factors up to . the modeling of such events can only be achieved by means of hydrodynamical simulations in the full theory of general relativity . the second major direction , gravitational wave astronomy , involves the dynamical nature of spacetime in einstein s theory of gravity . the tremendous recent interest in this frontier is driven by the gravitational wave observatories presently being built or planned in the us , europe , and outer space , e.g.
, ligo , virgo , geo600 , lisa , lagos , and the lunar outpost astrophysics program .the american ligo and its european counterparts virgo and geo600 are scheduled to be on line in a few years , making gravitational wave astronomy a reality .the space detector lisa has been selected as one of the three `` cornerstone missions '' of the european space agency .these observatories provide a completely new window on the universe : existing observations are mainly provided by the electromagnetic spectrum , emitted by individual electrons , atoms , or molecules , and are easily absorbed , scattered , and dispersed .gravitational waves are produced by the coherent bulk motion of matter and travel nearly unscathed through space , coming to us carrying the information of the strong field regions where they were originally generated .this new window will provide very different information about our universe that is either difficult or impossible to obtain by traditional means .the numerical ( theoretical ) determination of gravitational waveforms is crucial for gravitational wave astronomy .physical information in the data is to be extracted through template matching techniques , which _ presupposes _ that reliable waveforms are known .accurate waveform detections are important both as probes of the fundamental nature of gravity and for the unique physical and astronomical information they carry , ranging from nuclear physics ( the equation of state of nss ) to cosmology ( direct determination of the hubble constant without going through the `` cosmic distance ladder '' ) . in most situations ,the waveform can not be calculated without a numerical simulation based on the full theory of general relativity .this need for waveform templates is an important motivation of our effort . in short , both of these frontiers of astronomy call for computational general relativistic astrophysics , i.e., the integration of numerical relativity with traditional tools of computational astrophysics , e.g. , computational hydrodynamics , radiation transport , nuclear astrophysics , and magneto - hydrodynamics . if we are to fully understand the observational data generated by the non - linear and dynamical gravitational fields , detailed modeling taking dynamic general relativity into full account must be carried out .we begin by briefly reviewing some of the significant existing investigations in the field of numerical general relativistic hydrodynamics ( gr - hydro in the following ) to set the stage for the description of our own work .while there has been much effort in the study of relativistic hydrodynamics in _ pre - determined _ ( fixed , or with its time evolution specified ) background spacetimes , we focus on studies that are most relevant to _ dynamical _ spacetimes with the matter flows acting as sources to the einstein equations . the pioneering work dates back to the one - dimensional supernova core - collapse code by may and white .it was based on a lagrangian ( i.e. , coordinates co - moving with the fluid ) finite difference scheme with artificial viscosity terms included in the equations to damp the spurious numerical oscillations caused by the presence of shock waves in the flow solution .numerous astrophysical simulations were based on this approach .one drawback is that the lagrangian character of the code makes it difficult to be extended to the multidimensional case .the pioneering eulerian ( i.e. 
, coordinates not co - moving with the fluid ) finite difference gr - hydro code was developed by wilson in the early 70 s .it used a combination of artificial viscosity ( av ) and upwind techniques .it became the kernel of a large number of codes developed in the 80 s .many different astrophysical scenarios were investigated with these codes , ranging from axisymmetric stellar core - collapse , to accretion onto compact objects , and to numerical cosmology . in the following , we give a short overview of this large body of work , paying more attention to the numerical methods used than to the physical results obtained . while there are a large number of numerical investigations in pre - determined background spacetimes based on the av approach ( e.g. , ) , we focus on those using a fully self - consistent treatment evolving the spacetime dynamically with the einstein equations coupled to a hydrodynamic source .although there is much recent interest in this direction , only the spherically symmetric case ( 1d ) can be considered essentially solved . in axisymmetry ,i.e. 2d , only a few attempts have been made , with most of them devoted to the study of the gravitational collapse and bounce of rotating stellar cores and the subsequent emission of gravitational radiation . was the first to calculate a general relativistic stellar core collapse .the computation succeeded in tracking the evolution of matter and the formation of a black hole but the numerical scheme was not accurate enough to compute the emitted gravitational radiation .the code in used a radial gauge and a mixture of polar and maximal slicing .the gr - hydro equations were solved with standard finite difference methods with av terms . in numerical scheme for the matter fields was more sophisticated , using monotonic upwind reconstruction procedures and flux limiters , with discontinuous solutions handled by adding av terms in the equations . in , a numerical study of the stability of star clusters in axisymmetry was performed . in this investigation ,the source of the gravitational field was assumed to be a configuration of collisionless ( dust ) particles , which reduces the hydrodynamic computation to a straightforward integration of the geodesic equations .three - dimensional extensions of these av based gr - hydro treatments have been attempted over the last few years .wilson s original scheme has been applied to the study of ns binary coalescence in under the assumption of a conformally flat spacetime , which leads to a considerable simplification of the gravitational field equations .a code employing the _ full _ set of einstein equations and self - gravitating matter fields is currently being developed . in this workthe complete set of the equations , spacetime and hydrodynamics , are finite differenced in a uniform cartesian grid using van leer s scheme with total variation diminishing ( tvd ) flux limiters ( see , e.g. , for definitions ) .shock waves are spread out using a tensor av algorithm . with this codethey have studied the gravitational collapse of a rotating polytrope to a black hole ( comparing to the original axisymmetric computation of ref . ) and the coalescence of a binary ns system .further work to achieve longer term stability is under way .the success of the artificial viscosity approach is well - known .however , it has inherent difficulties in handling the ultrarelativistic regime . 
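the artificial viscosity approach discussed above is easy to caricature on a scalar model problem . the sketch below is ours ( it is not the may - white or wilson scheme ) : it advances burgers ' equation with a two - step lax - wendroff method and optionally adds a von neumann - richtmyer - type viscous flux that acts only in compression . without the extra term the captured shock is polluted by spurious oscillations ; with it the oscillations are damped , at the price of smearing the front over several zones and of an order - unity coefficient that has to be tuned by hand .

```python
import numpy as np

def burgers_lax_wendroff(nx=400, cfl=0.4, t_end=0.3, c_visc=0.0):
    """Two-step Lax-Wendroff for Burgers' equation u_t + (u^2/2)_x = 0 on a periodic
    grid, with an optional von Neumann-Richtmyer-style artificial viscosity that acts
    like an extra pressure and is switched on only where the flow compresses."""
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    u = np.where(x < 0.5, 1.0, 0.0)               # right-moving shock as initial data
    dx = x[1] - x[0]
    f = lambda v: 0.5 * v * v
    t = 0.0
    while t < t_end:
        dt = min(cfl * dx / max(np.max(np.abs(u)), 1e-12), t_end - t)
        up = np.roll(u, -1)                        # u_{i+1}
        # predictor: interface states at the half time step
        u_half = 0.5 * (u + up) - 0.5 * (dt / dx) * (f(up) - f(u))
        flux = f(u_half)
        if c_visc > 0.0:
            du = u - up                            # positive where the flow compresses
            flux += c_visc * np.where(du > 0.0, du * du, 0.0)
        # corrector: conservative update with the interface fluxes
        u = u - (dt / dx) * (flux - np.roll(flux, 1))
        t += dt
    return x, u

x, u_plain = burgers_lax_wendroff(c_visc=0.0)      # spurious oscillations behind the shock
x, u_av = burgers_lax_wendroff(c_visc=1.0)         # oscillations damped, front smeared
print(float(u_plain.max()), float(u_av.max()))     # large overshoot only in the run without viscosity
```

the need for such a tunable coefficient , and the attendant loss of resolution , is part of what motivates the shock - capturing schemes discussed below .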
in wilson s formulation of the gr - hydro equations ,there are explicit spacetime derivatives of the pressure in the source terms .this breaks the conservative character of the system and introduces complications into the numerical treatment .this motivated , in recent years , the effort of extending to relativistic hydrodynamics high - resolution shock - capturing schemes ( hrsc ) originally developed in classical ( newtonian ) computational fluid dynamics .such schemes are based on the solution of local riemann problems , exploiting the hyperbolicity of the hydrodynamic equations . to use such numerical treatments ,the hydrodynamic equations are first cast into a first order ( hyperbolic ) system of conservation ( or balance ) laws .the characteristic fields of the system are then determined which allows the construction of numerical schemes which propagate the information along the fluid characteristics .we refer the reader to for a review of these methods for general hyperbolic systems of conservation laws .hrsc schemes were first introduced into gr - hydro in , and applied in ( spherical ) dynamical spacetimes in and .the latter investigation focussed on , among other problems , the study of supernova core collapse ( including the infall epoch , bounce , and shock propagation ) .the numerical code was based on the radial - gauge and polar - slicing coordinate conditions . in gr - hydro equations were analyzed in the `` 3 + 1 '' formalism and the theoretical building blocks to construct a hrsc scheme in multidimensions were presented .axisymmetric studies using hrsc schemes are currently being carried out in .this investigation focussed on the study of accretion phenomena onto ( dynamic ) rotating black holes and the associated emission of gravitational radiation induced by the presence of the matter fields .axisymmetric studies will also provide useful test beds " in forthcoming investigations with the present 3d code discussed in this paper . as will be discussed in later sections of this paper , our present codeis based on the same hrsc algorithmic machinery as in the aforementioned works .we extend the treatment to 3d , and develop a code that makes no assumptions on the nature of the spacetime , the form of the metric , or the slicing and spatial coordinates .we refer the interested reader to the above references for a first understanding of the numerical schemes used in our work .we also want to mention a completely different approach for gr - hydro based on pseudospectral methods .these methods are well known for having extraordinary accuracy in smooth regions of the solution .the numerical error is evanescent , i.e. , it decreases as with being the number of coefficients in the spectral expansion .the main drawback of pseudospectral methods has been , traditionally , the inaccurate modeling of discontinuous solutions due to the appearance of the so - called gibbs phenomenon . in the presence of discontinuities , the numerical approximation of the solution does not converge at the discontinuity and spurious oscillations appear . recently , however , an innovative pseudospectral method based on a multidomain decomposition has been developed which circumvents the gibbs phenomenon .this new approach has already been shown to work remarkably well in the 3d numerical construction of mclaurin and roche equilibrium models . 
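to make the structure of a high - resolution shock - capturing update concrete , the following sketch ( a scalar toy problem of ours , not the relativistic system solved by the code ) combines the two ingredients described above : a slope - limited piecewise - linear reconstruction of the cell data and an approximate riemann flux at each cell interface , here the simple local lax - friedrichs ( rusanov ) choice . in contrast to the artificial viscosity run above , no tunable coefficient is needed and the captured shock remains monotone and confined to a few cells .

```python
import numpy as np

def minmod(a, b):
    """Slope limiter: zero at extrema, otherwise the smaller-magnitude slope."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_llf_step(u, dx, dt):
    """One SSP-RK2 step of a MUSCL scheme for Burgers' equation: limited linear
    reconstruction plus a local Lax-Friedrichs (Rusanov) interface flux."""
    def rhs(u):
        f = lambda v: 0.5 * v * v
        du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited cell slopes
        uL = u + 0.5 * du                                    # left state at interface i+1/2
        uR = np.roll(u - 0.5 * du, -1)                       # right state at interface i+1/2
        a = np.maximum(np.abs(uL), np.abs(uR))               # local maximum characteristic speed
        flux = 0.5 * (f(uL) + f(uR)) - 0.5 * a * (uR - uL)   # approximate Riemann (LLF) flux
        return -(flux - np.roll(flux, 1)) / dx
    u1 = u + dt * rhs(u)
    return 0.5 * (u + u1 + dt * rhs(u1))

# shock-tube-like initial data; the shock stays a couple of cells wide, no overshoot
nx, t_end, cfl = 400, 0.3, 0.4
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.where(x < 0.5, 1.0, 0.0)
dx, t = x[1] - x[0], 0.0
while t < t_end:
    dt = min(cfl * dx / max(np.max(np.abs(u)), 1e-12), t_end - t)
    u = muscl_llf_step(u, dx, dt)
    t += dt
print("max value (no overshoot expected):", float(u.max()))
```

the solvers used in the code ( flux split , roe , marquina ) differ in how the interface flux is assembled from the characteristic fields , but they share this reconstruct - solve - average structure .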
in this subsectionwe discuss the main issues we considered in choosing our approach , building on the existing work discussed above .the main aim of our program is to study violent and highly - energetic astrophysical processes like ns / ns coalescence within the framework of general relativity .these scenarios involve strong gravitational fields , matter motion with ( ultra ) relativistic speeds and/or strong shock waves .these features make the numerical integration of the hydrodynamic equations a very demanding task .the difficulty is exacerbated by the intrinsic multidimensional character of these astrophysical systems , and by the inherent complexities in einstein theory of gravity , e.g. , coordinate degrees of freedom and the possible formation of curvature singularities ( e.g. , collapse of matter configurations to black holes ) .these complications call for the use of advanced numerical methodology , a flexible code construction which allows for the use of different treatments , and a large amount of careful testbed studies . in the following we discuss these issues in more detail .two major issues in gr - hydro which are purely hydrodynamical in origin are the numerical modeling of flows with large lorentz factors and strong shock waves . in it was shown that the av based schemes have difficulties in handling ultrarelativistic velocity flows with lorentz factors . as a result , proposed using implicit finite difference schemes to handle the gr - hydro equations in the ultrarelativistic regime .however , investigations during the last decade have provided increasing evidence that the most appropriate schemes to deal with ultra - relativistic flow with strong shocks are those based on ( approximate or exact ) riemann solvers , i.e. , hrsc schemes .these methods have high accuracy ( second order or more ) in regions where the flow solution is smooth , and at the same time are able to resolve discontinuities in the solution ( e.g. , shock waves ) with little smearing .they have been extensively tested and found to be applicable in the ultra - relativistic regime ( see , e.g. , for a recent review ) . while we believe hrsc schemes may be capable of providing the technology for treating the hydrodynamic part of the evolution ,the field of computational gr - hydro still contains many issues that are as yet unexplored , especially for cases where the relativistic fluid is coupled to a dynamical spacetime . for a fully dynamical spacetime ,one major issue is the handling of the gauge degrees of freedom .this problem is exacerbated in 3d simulations without symmetry assumptions . in a general 3d problem , there is no preferred choice of gauge to restrict the metric functions as in lower dimension simulations ( e.g. , radial gauge and polar slicing in spherically symmetric simulations ) .lagrangian coordinate systems are inappropriate for complicated 3d flows .the inevitable lower resolution in 3d simulations also makes the problem more acute . even in vacuum spacetime studies ,the choice and implementation of appropriate gauge conditions for a general dynamical evolution is a largely unexplored territory .how will the gauge choices be affected by the presence of relativistic fluid flows or by the existence of strong shocks which create sharp features in the sources of the metric evolution ?for example , what will be a useful gauge condition for a process like the inspiral of a ns / ns binary ? these are completely open issues . 
in order to provide the capability to investigate these problems ,the code we construct here is designed to allow arbitrary gauge conditions , making no assumptions on the lapse function or the shift vector .another class of problems involves the connection of the numerical integration of the hydrodynamic equations to that of the spacetime equations .what is the best set of variables to use , locally measured quantities , coordinate variables , densitized quantities or some combination ? with the spacetime metric an evolved variable ,there are many choices .what is the best way to connect the hydrodynamics and the spacetime finite differencing steps to achieve not only a second order accurate scheme in both space and time , but also in a way that is suitable for long term evolutions ?even in newtonian strong field evolutions , coupling the hydrodynamic integration to the gravitational potential calculation in different ways can yield different long term behavior . as a consequence of the different character of the equations governing the geometry of the spacetime and the evolution of the matter fields , the numerical methods to handle them are drastically different .what are the effects of combining different methods , and is there a best combination for a particular class of problems ?with the recent development of hyperbolic formulations in gr , an interesting possibility would be to consider all of the dynamical variables , both spacetime and matter fields , to be members of one master state vector .the entire system of equations could then be written as a single ( vector ) conservation ( or balance ) equation .one could then apply the same hrsc schemes to the entire system .what advantages would this bring ?these are some of the issues that we have in mind in choosing our approach in developing the code as will be discussed next .our overall goal is to develop an efficient , flexible , computational tool for general relativistic astrophysics . specifically for this paper ,the aims are : ( 1 ) to establish the formulation , including the spectral decomposition of the gr - hydro equations , on which our code is based , ( 2 ) to validate the numerical code we constructed for solving the gr - hydro equations , and ( 3 ) to compare the different numerical schemes we used .the set of differential equations we are attempting to solve consists of very complicated , coupled partial differential equations involving thousands of terms .considering the complexity and generality of the code , along with the fact that the solution space of the differential equations is largely unexplored , it is essential that any physical result produced by a 3d gr - hydro code be preceded by a series of tests such as the ones we report here , in order to insure the fidelity of the discretization to the original differential equations .in fact , we consider the tests presented here to be a minimal set : _ any _ 3d gr - hydro code should be able to reproduce these results .further tests , especially those related to the long term stability of the code and detailed comparisons of 3d and 1d results will be presented in a forthcoming paper . 
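the kind of test referred to here can be sketched on a model problem . the snippet below is ours and uses linear advection as a stand - in for the evolution system : a second - order scheme is run at three resolutions differing by factors of two , and the error against the exact solution is checked to drop by roughly a factor of four per refinement , which is the acceptance criterion used for the convergence studies reported below .

```python
import numpy as np

def advect_lax_wendroff(nx, t_end=1.0, cfl=0.5):
    """Second-order Lax-Wendroff solver for u_t + u_x = 0 on a periodic domain.
    Stands in for the evolution code; returns the max-norm error at t_end."""
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    dx = 1.0 / nx
    u = np.sin(2.0 * np.pi * x)
    dt = cfl * dx
    steps = int(round(t_end / dt))
    dt = t_end / steps                          # land exactly on t_end
    c = dt / dx
    for _ in range(steps):
        up, um = np.roll(u, -1), np.roll(u, 1)
        u = u - 0.5 * c * (up - um) + 0.5 * c * c * (up - 2.0 * u + um)
    exact = np.sin(2.0 * np.pi * (x - t_end))
    return np.max(np.abs(u - exact))

# three resolutions differing by factors of two, as in the convergence tests
errs = [advect_lax_wendroff(nx) for nx in (50, 100, 200)]
for e_coarse, e_fine in zip(errs, errs[1:]):
    print("error ratio:", e_coarse / e_fine)    # ~4 for a second-order-accurate scheme
```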
in exploring the very complex system of the gr - hydro equations ,it is also essential to have the capability to compare results based on different mathematical formulations and different numerical schemes .our code is currently set up to allow two different formulations of the einstein equations : the standard arnowitt - deser - misner ( adm ) formulation and the bona - mass ( bm ) hyperbolic formulation ( other hyperbolic formulations will be included and reported later ) .the code allows for two different choices for finite differencing the adm equations : a standard leapfrog scheme and an iterative crank - nicholson scheme .the bm equations are finite differenced using a strang split to separate the source and flux updates .the latter are performed using a maccormack method . as for the numerical treatment of the hydrodynamic equations , the code has the capability of using three different hrsc schemes : the first one is the flux split method , mainly chosen for its simplicity .the second is roe s method ( note that contrary to , we do not use roe s averaging but instead employ arithmetic averaging ( see section [ sec : discrete ] below for details ) ) .the third scheme we use is the recently developed marquina s method .all three schemes are coupled to the spacetime evolution solver in a way which is second order accurate in both space and time . in this codewe also allow for arbitrary spacetime coordinate conditions .as mentioned previously , this enables the investigation of gauge choices in gr - hydro and allows the use of different coordinate systems for different astrophysical simulations .this capability is built into our development , and we have carried out tests with non - trivial lapses and shifts in this paper . however , more investigation is needed in this direction . as the aim of our program is to study_ realistic _astrophysical systems , which often require full 3d simulations and involve many different time and length scales , it is important that the computer code we develop be capable of carrying out large scale simulations .this requires the use of massively parallel supercomputers .the `` cactus '' code was built with this in mind .here we give a brief overview of the computational infrastructure of the code and its performance . 
for a more extensive review ,see .the cactus code achieves parallelism through the mpi message passing interface .this allows high performance portable parallelism using a distributed memory model .all major high performance parallel architectures , including the sgi origin 2000 , cray t3e , hp / convex exemplar , and ibm sp-2 support this programming model .the mpi layer of cactus also allows computing on clusters of networked workstations and pc s .parallelism in cactus is based on a generic domain decomposition , distributing uniform grid functions across multiple processors and providing ghost - zone based communications for a variety of stencil widths and grid staggerings .the code can also compile without mpi , allowing the same source code to be run on a single processor workstation and on massively parallel supercomputers .the platforms currently supported and tested include : the sgi origin 2000 ( up to 256 nodes ) , the cray t3e ( up to 1024 nodes ) , sgi o2 clusters , nt clusters , dec alphas , and sgi workstations .we have recently benchmarked a version of the code ( the `` gr3d '' version , constructed for the nasa neutron star grand challenge project , see http://wugrav.wustl.edu/relativ/nsgc.html ) on a 1024 node t3e-1200 , achieving over 140 gflop / sec and a scaling efficiency of over 95% ( for details of the benchmark , see http://wugrav.wustl.edu/codes/gr3d/ ) . besides the floating point and scaling efficiency, it is also noteworthy that a relatively large grid size ( 644 x 644 x 1284 grid points for 32 bit accuracy , and 500 x 500 x 996 grid points for 64 bit accuracy ) were used for the benchmarked run on the t3e-1200 .this is made possible by the efficient memory usage of the code . with the full set of the einstein equations coupled to the relativistic hydrodynamics equations ,a large number of 3d arrays are required to evolve the system . in order to have reasonable resolutions for realistic simulations ,it is essential that the code make efficient use of available memory .it is also essential that the code be highly optimized in order for these large simulations to be carried out in a reasonable time . during the code development , special attentionwas also given to software engineering problems , such as collaborative code development , maintenance , and management .the code was developed to be shared by the entire community for the investigation of general relativistic astrophysics .to minimize barriers associated with collaborative development , the code was constructed to have : 1 . a modular code structure that allows new code modules to easily plug in as `` thorns '' to the core part of the code ( the `` flesh '' ) .the `` flesh '' contains the parallel domain decomposition software , i / o , and various utilities .2 . a consistency test suite library to make sure that new thorns will not conflict with other parts of the code .3 . various code development tools , such as : documentation , elliptic solvers , and visualization tools , which provide a complete environment for code development , and testing . 
for detailed discussions of these and other features of collaborative infrastructure of the code ,see these computational features of the code significantly enhance our effort in constructing a multi - purpose code for general relativistic astrophysics .the organization of the paper is as follows : the formulation of the differential equations are given in section [ sec : formulation ] .a spectral decomposition of the gr - hydro equations suitable for a general non - diagonal spatial metric is presented .the details of the discretization of the equations and of the coupling of the spacetime and hydrodynamics are given in section [ sec : discrete ] .shock tube tests are performed in section [ sec : shocktube ] for shocks along the coordinate axes and along the diagonal .these test the hydrodynamic part of the code , with the background geometry held flat .we then go on to test the coupling of the hydrodynamics to curved and dynamical spacetimes .section [ sec : frw ] is on tests using friedmann - robertson - walker cosmologies with dust .section [ sec : tov ] contains tests using static spherical star solutions with a polytropic equation of state .we present a practical procedure which gives stable evolution of the surface region of the star .section [ sec : boost ] contains tests using the spherical star solutions described in section [ sec : tov ] but now relativistically boosted along the diagonal .this is a strong test of the fully coupled spacetime and hydrodynamics system , with all possible terms in the equations activated and with a non - trivial lapse and shift .finally , section viii contains a brief summary .all tests presented in sections [ sec : frw]-[sec : boost ] contain convergence studies performed in the following way : errors are obtained by subtracting the exact solution at a specific time from the computed solution for a number of dynamical variables .these errors are produced at three different resolutions , , , and . to demonstrate they have the correct convergence properties for a second order accurate discretization we check that each error function decreases by a factor of four for each factor of two increase in resolution .this is demonstrated by plotting the various error functions along 1-d lines .these convergence tests are an essential part in validating the code .in this subsection we present the hydrodynamic equations for a general curved spacetime in a form suitable for advanced numerical treatment .the equations for the evolution of the spacetime , including the hydrodynamic source , will be presented in a later subsection . the general relativistic hydrodynamic equations , written in the standard covariant form , consist of the local conservation laws of the stress - energy , , and the matter current density , where , is the rest mass density and the 4-velocity of the fluid . stands for the covariant derivative with respect to the 4-metric of the underlying spacetime . throughout this paperwe are using , unless otherwise stated , natural units ( ) .greek ( latin ) indices run from 0 to 3 ( 1 to 3 ) . in what followswe will neglect viscous effects , assuming the stress - energy tensor to be that of a perfect fluid where is the fluid pressure and is the 4-metric describing the spacetime .in addition , the relativistic specific enthalpy , , is given by where is the rest frame specific internal energy density of the fluid .the equations written in this covariant form are _ not _ suitable for the use of advanced numerical schemes . 
in order to carry out numerical hydrodynamic evolutions , and in particular to take advantage of the benefits of hrsc methods ,the hydrodynamic equations after the 3 + 1 split must be written as a hyperbolic system of first order flux conservative equations .we introduce coordinates and write eqs .( [ eq : stressenergycons ] ) and ( [ eq : masscons ] ) in terms of coordinate derivatives .we project eq.([eq : stressenergycons ] ) and eq.([eq : masscons ] ) onto the basis , with being a timelike vector normal to a given hypersurface .a straightforward calculation yields the set of equations in the desired form where denotes a partial derivative with respect to time and indicates a partial derivative with respect to the spatial coordinate .the evolved state vector is given , in terms of the primitive variables , as = \left [ \begin{array}{c } \sqrt{\gamma } w \rho \\ \sqrt{\gamma } \rho h w^2 v_j \\ \sqrt{\gamma } ( \rho h w^2 - p - w \rho ) \\ \end{array } \right ] , \label{eq : evolvedvar}\ ] ] where is the determinant of the 3-metric , is the fluid 3-velocity , and is the lorentz factor , . notice that the spatial components of the 4-velocity are related to the 3-velocity by the following formula : where and are , respectively , the lapse function and the shift vector of the spacetime . also notice that we are using a slightly different set of variables as those used in .we are now densitizing " the evolved quantities , , and , with the factor .the three flux vectors are given by .\ ] ] finally , the source vector is given by ,\ ] ] where is the 4-christoffel symbol a technical point must be included here . while the numerical code updates the state vector forward in time it makes use , internally , of the set of primitive variables defined above , .those are used throughout , e.g. , in the computation of the characteristic fields ( see below ) . these variables can not be obtained from the evolved ones in a closed functional form . instead , they must be recovered through some appropriate root - finding procedure ( an example of this can be found in ) .the use of hrsc schemes , as will be presented in detail in the next section , depends crucially on the knowledge of the spectral decomposition of the jacobian matrix of the system the characteristic speeds ( eigenvalues ) and fields ( eigenvectors ) are the key ingredients of any hrsc scheme .the spectral decomposition of the jacobian matrices of the general relativistic hydrodynamic equations with general equation of state was first reported in ( for polytropic eos see ) .however , we have found that the eigenvectors reported in are correct only in the case of a diagonal spatial metric . in this sectionwe display the full spectral decomposition valid for a generic spatial metric .we focus on the -direction , hence presenting the spectral decomposition of , as the other two directions can be found by simple permutation of indices .we start by considering an equation of state in which the pressure is a function of and , .the relativistic speed of sound in the fluid is given by ( see , e.g. , ) where , , is the entropy per particle , and is the total rest energy density which in our case is .we require a complete set of eigenvectors ] , then the scheme is upwind if then else endif * otherwise , the scheme is switched to the more viscous , entropy - satisfying , local - lax - friedrichs scheme is a curve in phase space connecting and .in addition , can be determined as where , , , are the right ( normalized ) eigenvectors of the system . 
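the root - finding recovery of the primitive variables mentioned above can be illustrated in the simplest setting : flat space ( so the densitizing factor equals one ) , a single velocity component , and a gamma - law equation of state . the sketch below uses our own notation rather than the code 's , and omits the bracketing and safeguards a production solver would need ; it performs a newton iteration on the pressure and recovers the primitives from the conserved variables ( d , s , tau ) defined above .

```python
import numpy as np

GAMMA = 5.0 / 3.0   # gamma-law equation of state  p = (GAMMA - 1) * rho * eps

def prim_to_cons(rho, v, eps):
    """Conserved variables (D, S, tau) from primitives, flat metric, one velocity component."""
    p = (GAMMA - 1.0) * rho * eps
    W = 1.0 / np.sqrt(1.0 - v * v)
    h = 1.0 + eps + p / rho
    return rho * W, rho * h * W * W * v, rho * h * W * W - p - rho * W

def cons_to_prim(D, S, tau, p_guess=1.0, tol=1e-12, max_iter=50):
    """Recover (rho, v, eps, p) by a Newton iteration on the pressure; the residual is
    f(p) = p_eos(rho(p), eps(p)) - p.  Only the bare root-finding step is shown."""
    def trial(p):
        v = S / (tau + D + p)
        W = 1.0 / np.sqrt(1.0 - v * v)
        rho = D / W
        eps = (tau + D * (1.0 - W) + p * (1.0 - W * W)) / (D * W)
        return (GAMMA - 1.0) * rho * eps - p, rho, v, eps
    p = p_guess
    for _ in range(max_iter):
        f, rho, v, eps = trial(p)
        if abs(f) < tol:
            break
        dp = 1e-8 * max(p, 1.0)
        dfdp = (trial(p + dp)[0] - f) / dp       # numerical derivative of the residual
        p = max(p - f / dfdp, 1e-12)             # Newton step, keep the pressure positive
    _, rho, v, eps = trial(p)                    # primitives consistent with the final pressure
    return rho, v, eps, p

# round trip: primitives -> conserved -> recovered primitives
D, S, tau = prim_to_cons(1.0, 0.5, 1.0)
print(cons_to_prim(D, S, tau))                   # ~ (1.0, 0.5, 1.0, 2/3)
```

the interface fluxes are then built from the recovered primitives and the characteristic information by , e.g. , the marquina solver just described .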
for further technical information about this solverwe refer the reader to .the suitability of this scheme for the accurate integration of the hydrodynamic equations and many of its desirable properties can be found in ( newtonian hydrodynamics ) and ( relativistic hydrodynamics ) . in this sectionwe outline the discretization techniques used in the vacuum spacetime part of the code . for a more detailed discussionwe refer the reader to . herewe give the essential formulae for completeness and discuss in detail only the issues relevant to its coupling to hydrodynamics described in the next subsection .the bm system uses the so - called strang splitting to separate eq .( [ balance ] ) into two evolution steps . in the first step ,only the source terms are used to update the variables while in the second step , only the flux terms are used for the update to ensure second order accuracy in both space and time , this is done by first evolving the source terms forward in time half a time step , then evolving with only the flux terms a full time step , and finally evolving with only the source terms another half time step .the source terms are evolved forward using a second order accurate predictor - corrector method , while the flux terms are evolved using a second order accurate maccormack scheme .specific details of these methods are discussed in .the adm system supports the use of several different numerical schemes .currently , a leapfrog ( non - staggered in time ) and iterative crank - nicholson scheme have been coupled to the hydrodynamic solver .the leapfrog method assumes that all variables exist on both the current time step and the previous time step .variables are updated from to ( future time ) evaluating all terms in the evolution equations on the current time step .the iterative crank - nicholson solver first evolves the data from the current time step to the future time step using a forward in time , centered in space ( ftcs ) first order method .the solution at steps and are then averaged to obtain the solution on the half time step .this solution at the half time step is then used in a leapfrog step to re - update the solution at the final time step .this process is then iterated .the error is defined as the difference between the current and previous solutions on the half time step .this error is summed over all grid points and all evolved variables .this process is repeated until some desired tolerance is reached .care is taken to make sure that at least two iterations are taken to make the process second order accurate .our code evolves the spacetime geometry and the matter fields separately .this allows different methods to be used for each system ( spacetime and hydrodynamics ) .the coupling of those different evolution algorithms in a way that is second order accurate in both space and time is highly method dependent .we will therefore discuss the coupling of each system , adm or bm , with hydrodynamics , separately .a summary of the different combined schemes appears in table [ table : discrete_names ] .[ tb ] .this table summarizes the abbreviations used for the various methods used for the spacetime and hydrodynamical evolutions . 
[cols="^,^,^",options="header " , ]in this paper we present a new three - dimensional , eulerian , general relativistic hydrodynamical code constructed for general relativistic astrophysics .this code is capable of evolving the coupled system of the einstein and hydrodynamic equations .the code is constructed for a completely general spacetime metric based on a cartesian coordinate system , with arbitrarily specifiable lapse and shift conditions .this paper discussed the general relativistic hydrodynamics part of the code , and its coupling to the spacetime code , in parallel to the presentation of the spacetime ( vacuum ) part of the code in .we have derived a spectral decomposition for the gr - hydro equations valid for general spatial metrics , generalizing the results of which were only valid for the case of a diagonal metric . based on this spectral decomposition ,three different approximate ( linearized ) riemann solvers , flux - split , roe and marquina , were used to integrate the relativistic hydrodynamic equations .we tested these methods individually and compared the results against one another .while we found all methods converging to second order in the discretization parameter , we also compared the absolute values of errors of the different methods .which method produced the smallest absolute error , and whether the spacetime or hydrodynamical evolution was the dominant source of error , depends on the initial data being evolved .for the shocktube problem , only the hydrodynamical evolution was relevant since the evolution took place on a flat background metric . for an evolution along a coordinate axis , the roe and marquina methods were superior to the flux split method .for an evolution where the shockfront is along the diagonal , the flux split method was slightly more accurate than both the roe and marquina method .for the frw evolutions , the spacetime evolution is the main source of error .the bm system tends to be more accurate than the adm system . for the tov tests , we find that the roe and marquina methods are more accurate than the flux split method , and the bm system is more accurate than the adm system . 
for the boosted tov test , the roe and marquina methods are again superior to flux split . we caution that these statements could depend on the resolution used and the duration of evolution . the hydrodynamic evolution is coupled to the spacetime evolution in a manner which is second order accurate in _ both _ space and time . the coupled code was subjected to a series of convergence tests , with different combinations of the spacetime and hydrodynamics finite differencing schemes , demonstrating the consistency of the discrete equations with the differential equations . the extensive convergence tests performed are important not only for the validation of the code , but have also been important debugging tools during the code development process . we consider the tests presented to be a minimal set that any 3d gr - hydro code should pass before actual applications . the test - beds that we report on in this paper include : special relativistic shock tubes , friedmann - robertson - walker cosmology tests , evolution of equilibrium configurations of compact stars ( solutions to the tolman - oppenheimer - volkoff equations ) , and the evolution of relativistically boosted tov stars traversing diagonally across the computational domain . the degree of complexity presented in these tests increases from purely special relativistic flows in flat backgrounds to fully general relativistic flows in dynamical spacetimes . in particular , the last test - bed ( the boosted star ) involves _ all _ possible terms in the coupled set of gr - hydro evolution equations and was carried out with a non - trivial lapse and shift vector . we found a simple , yet effective treatment for handling the surface region of a general relativistic self - gravitating compact object . the key idea is to replace the energy equation update by the condition of adiabatic flow in regions of low density . while the surface region is not changing the overall dynamics of the star , numerical instabilities there could halt the numerical evolution if uncontrolled . the capability to handle the surface region in a stable fashion is important for the application of the code to the study of neutron star astrophysics . we have demonstrated this capability in the equilibrium and boosted star test - beds . refinement of this treatment for long term stability is presently being investigated . additional code calibrations that are underway include long - term stability analysis of single neutron stars , comparisons of waveforms from perturbed neutron stars , and comparisons with one - dimensional and axisymmetric ( 2d ) independent gr - hydro codes that we ( together with our collaborators ) constructed . those will be reported in later papers in this series . the formulation of the coupled set of equations and the numerical code reported in this paper were used for the construction of the milestone code `` gr3d '' for the nasa neutron star grand challenge project ( for a description of the project , see http://wugrav.wustl.edu/relativ/nsgc.html ) . the goal of this project is to develop a code for general relativistic astrophysics , and in particular , one that is capable of simulating the inspiral coalescence of a neutron star binary system . the coalescences of neutron star binaries are expected to be important sources of gravitational waves for interferometric detectors . the strongest signal will come from the highly dynamic `` plunge '' during the final phase of the inspiral ; a fully general relativistic code provides the only way to calculate this
portion of the waveform . a version of the code which passed the milestone requirement of the nasa grand challenge project ,has recently been released to the community .this code has been benchmarked at over 140 gflop / sec on a node cray t3e with a scaling efficiency of over 95% , showing the potential for large scale 3d simulations of realistic astrophysical systems .further development of our general relativistic code , and its application to the specific study of the neutron star coalescence scenario , will be described in later papers in this series . to summarize, this paper presents the first ( and necessary ) steps towards constructing an accurate and reliable tool for the numerical study of astrophysical phenomena involving matter at relativistic speeds and strong gravitational fields .the general relativistic hydrodynamical module mahc " presented and studied in this paper is coupled to the cactus " code for the spacetime evolution .the cactus code is being developed by an international collaboration , with a major contribution coming from the albert einstein institute in potsdam ( germany ) , and with significant contribution from the relativity group ( wugrav ) at washington university in st .louis , missouri , and from colleagues at the national center for supercomputing applications in urbana , illinois .further development has been carried out at the university of the balearic islands in mallorca ( spain ) , the university of valencia ( spain ) , and elsewhere .the hydrodynamical module was developed mainly at wugrav , with significant contributions from the potsdam group , and has benefited from interactions with the hydro group at the university of valencia .we would like to thank miguel alcubierre , gabrielle allen , pete anninos , toni arbona , carles bona , steve brandt , bernd brgmann , dan bullok , tom clune , teepanis chachiyo , ming c. chu , greg comer , thomas dramlitsch , comer duncan , ed evans , ian foster , tom goodale , carsten gundlach , philip gressman , philip hughes , jos mara ibez , sai iyer , gerd lanferman , joan mass , peter miller , philippos papadopoulos , manish parashar , bo qin , k.v .rao , paul saylor , bernard schutz , edward seidel , john shalf , hisa - aki shinkai , joan stela , doug swesty , ryoji takahashi , robert young , paul walker , ed wang , and william wu for useful discussions and various help with the code development . this work is supported by nsf grants phy 96 - 00507 and 96 - 00049 , nsf nrac allocation grant no .mca93s025 , nasa grant nasa - nccs5 - 153 , the albert einstein institute , and the institutes of mathematical sciences , chinese university of hong kong .one of us ( j.a.f ) acknowledges financial support from the tmr program of the european union ( contract number erbfmbict971902 ) . c. bona, j. i. nez , j. mart , and j. mass , in _ gravitation and general relativity : rotating bodies and other topics _ , vol .423 of _ lecture notes in physics _ , edited by f. chinea ( springer - verlag , new york , 1993 ) , chap .shock capturing methods in 1d numerical relativity .the source and documentation of the released code can be downloaded at http://wugrav.wustl.edu /codes / gr3d . for credit of the code development , see the document http://wugrav.wustl.edu /codes/ gr3d / nasa_ms2.ps . | this is the first in a series of papers on the construction and validation of a three - dimensional code for general relativistic hydrodynamics , and its application to general relativistic astrophysics . 
this paper studies the consistency and convergence of our general relativistic hydrodynamic treatment and its coupling to the spacetime evolutions described by the full set of einstein equations with a perfect fluid source , complimenting a similar study of the ( vacuum ) spacetime part of the code . the numerical treatment of the general relativistic hydrodynamic equations is based on high resolution shock capturing schemes , specifically designed to solve non - linear hyperbolic systems of conservation laws . these schemes rely on the characteristic information of the system . a spectral decomposition for general relativistic hydrodynamics suitable for a general spacetime metric is presented . evolutions based on different approximate riemann solvers ( flux - splitting , roe , and marquina ) are studied and compared . the coupling between the hydrodynamics and the spacetime ( the right and left hand side of the einstein equations ) is carried out in a treatment which is second order accurate in _ both _ space and time . the spacetime evolution allows for a choice of different formulations of the einstein equations , and different numerical methods for each formulation . together with the different hydrodynamical methods , there are twelve different combinations of spacetime and hydrodynamical evolutions . convergence tests for all twelve combinations with a variety of test beds are studied , showing consistency with the differential equations and correct convergence properties . the test - beds examined include shocktubes , friedmann - robertson - walker cosmology tests , evolutions of self - gravitating compact ( tov ) stars , and evolutions of relativistically boosted tov stars . special attention is paid to the numerical evolution of strongly gravitating objects , e.g. , neutron stars , in the full theory of general relativity , including a simple , yet effective treatment for the surface region of the star ( where the rest mass density is abruptly dropping to zero ) . the code has been optimized for massively parallel computation , and has demonstrated linear scaling up to 1024 nodes on a cray t3e . |
when one analyzes data that arrive sequentially over time , it is important to detect changes in the underlying model which can then be adjusted accordingly .such problems arise in many engineering ( signal processing , speech recognition , communication systems ) , econometric and biomedical applications and can be found in an extensive literature widely scattered in these fields .inference on time - varying parameters in stochastic systems is therefore of fundamental interest in sequential analysis .consider an -valued time series , , , , such that at time moment the first observation and subsequently at each time moment a new observation arrives according to the model , where .suppose we are interested in certain characteristics of the conditional distribution of given the past : .here is an operator mapping conditional distributions into measurable -valued functions , , is a compact subset of .the goal is to estimate ( or to track ) at time instant , based on the data ( and prior information ) available by that time moment .the traditional parametric formulation is the most simple particular case of the above setting : the observations are independent and the parameter is a constant vector .the simplest nonparametric formulation deals again with independent observations and time - varying parameter , ( cf . .modeling observations by a markov chain with a time varying parameter of the transition law would add a next level of complexity ( cf .for the autoregressive model in ) .the proposed time series formulation admits an arbitrary dependence structure between the observations . another important and peculiar feature of our approach is that the multidimensional parameter , , besides being time - varying , is also allowed to depend on the past of the time series .it is thus a predictable process with respect to the natural filtration : .an example of such characteristics is the conditional expectation ] . combining this with ( [ g_bounded ] ) yields , . on the other hand , from ( [ c_theta ] ) , ( a1 ) and ( a2 )it follows that this relation and lemma [ lemma_bound ] below ( thus the conditions of lemma [ lemma_bound ] must hold ) will in turn imply the uniform bound ( [ g_bounded ] ) . generally , there is no universal way to find gain vectors which satisfy conditions ( a1 ) and ( a2 ) . in many practical situations ,the model is typically specified and it is an art to find gain vectors which satisfy ( a1 ) and ( a2 ) ; we discuss this issue in more detail in section [ sec : gains ] .the assumptions above look somewhat unnatural and cumbersome because they are assumed to hold for all , whereas functions involved in the conditions depend in general on whose dimension increases unlimitedly as increases .however , the assumptions become reasonable in the important case of markov chain observations of order , say , . in this case , for any we can use vector of bounded dimension instead of ( of growing dimension ) in all the quantities from conditions ( a1 ) and ( a2 ) .independent observations is a next simplification , also important in many practical applications . in this casethere is no past involved in the function , , it will only be a function of time .[ rem : filtration ] suppose that conditions ( a1 ) and ( a2 ) hold for the filtration and for some measurable gain functions , , but the parameter sequence is predictable with respect to a coarser filtration , i.e. , , , where for some measurable s .for example , the vector consist of two subvectors and ( i.e. 
, ) and with ( think of as unobservable part of and as observable ) .then , by the tower property of the conditional expectation , conditions ( a1 ) and ( a2 ) hold for the filtration as well if we take the new gain function ] and . in view of the lemma below , if ( a1 ) holds , then ( 1 ) will also hold ( and vice versa ) ; the values of the constants and appearing in the assumptions are different , though .the proof of this lemma is deferred to section [ sec : proofs ] .[ lemma1 ] let . if there exists a symmetric positive definite matrix such that and for some , then and for some ( depending only on ) such that and .conversely , if and for some such that and , then there exists a symmetric positive definite matrix such that and for some constants depending only on and .we start with a lemma which we will need in the proof of the main result .heuristically , since the gain vector moves , on average , towards and the sequence is bounded ( since is compact ) , the resulting estimating sequence should also be well - behaved .the following lemma states that the second moment of is uniformly bounded in for sufficiently small s . [ lemma_bound ]let assumptions ( a1 ) and ( a2 ) hold .then for sufficiently small there exists a constant such that the proof of this lemma is given in section [ sec : proofs ] .in fact , it is enough to assume that is sufficiently small for all for some fixed .this is the case if as , which is typically assumed .this lemma will be used in the proof of the main theorem below . from now on we assume that the sequence is such that lemma [ lemma_bound ] holds .the following theorem is our main result , it provides a non - asymptotic upper bound on the quality of the tracking algorithm ( [ eq : algorithm_main ] ) in terms of of the algorithm step sequence and oscillation of the process to track between arbitrary time moments , .[ theo : bound ] let assumptions ( a1 ) and ( a2 ) hold , the tracking sequence be defined by ( [ eq : algorithm_main ] ) and , . then for any and sequence ( satisfying the conditions of lemma [ lemma_bound ] ) such that and for all , the following relation holds : ^{1/2 } \!\!+ c_3\max_{k_0\le i\le k } { { \mathbb{e}}}\|\theta_{i+1}-\theta_{k_0}\|,\ ] ] where , , , constants are from assumptions ( a1 ) and ( a2 ) , is defined by ( [ c_theta ] ) and is from lemma [ lemma_bound ] .[ rem : norms ] by using ( [ eq : bounde ] ) , one can derive a bound for , with .indeed , as for any and , we obtain that for and for . for the sake of brevity , denote , , and , .we have = { { \mathbb{e}}}[(g_k - g_k)|\mathcal{f}_{k-1}]=g_k - g_k=0 , \quad k \in \mathbb{n}_0.\ ] ] it follows that , is a ( vector ) martingale difference sequence with respect to the filtration .rewrite the algorithm equation ( [ eq : algorithm_main ] ) as in view of ( a1 ) , the decomposition holds almost surely , with an -measurable symmetric positive definite matrix , so that by iterating the above relation , we obtain that for any \delta_{k_0 } + \sum_{i = k_0}^k \big[\prod_{j = i+1}^k(\bm{i}- \gamma_j \bm{m}_j)\big ] ( \delta\theta_i+\gamma_i d_i).\end{aligned}\ ] ] denote , and . 
applying the abel transformation ( lemma [ lemma : abel ] ) to the second term of the right hand side of ( [ eq : recursed ] ) yields ( \delta\theta_i+\gamma_i d_i ) = h_k - \sum_{i = k_0}^{k-1}\ !\gamma_{i+1}\bm{m}_{i+1 } \big[\!\prod_{j = i+2}^k(\bm{i}-\gamma_j \bm{m}_j)\big]h_i.\ ] ] in particular , note that if we take , and for , and for , we derive that ( since for ) which we will use later .using ( [ abel_transform ] ) , we can rewrite our expansion of in ( [ eq : recursed ] ) as follows : \delta_{k_0 } + h_k - \sum_{i = k_0}^{k-1}\gamma_{i+1}\bm{m}_{i+1 } \big[\prod_{j = i+2}^k(\bm{i}-\gamma_j\bm{m}_j)\big]h_i.\end{aligned}\ ] ] the previous display , the minkowski inequality and the sub - multiplicative property of the operator norm ( ) imply that in view of ( a1 ) and the condition for , , , almost surely .hence , , almost surely .this , lemma [ lemma : eig ] and the fact ( see ( [ eq : lambda ] ) from ( a1 ) ) that ] , , and then , beginning with the relation ( [ eq : recursed ] ) , work with the representation instead of just , using the relation ( [ different_bound ] ) for and the fact that , is a ( matrix ) martingale difference sequence with respect to the filtration .we will not pursue this here .imposing somewhat stronger versions of conditions ( a1 ) and ( a2 ) enables us to derive a similar non - asymptotic bound for the expectation of for all . of course, the bigger , the bigger the constants involved in the bound .the next theorem is a strengthened version of the previous result .[ theo : bound2 ] suppose that the conditions of theorem [ theo : bound ] are fulfilled .if , in addition ( to assumption ( a1 ) ) , and ( instead of ( a2 ) ) almost surely for all , then for any ^{p/2 } + c'_3\max_{k_0\le i\le k}{{\mathbb{e}}}\|\theta_{i+1}-\theta_{k_0}\|_p^p,\end{aligned}\ ] ] where , , and is the constant from lemma [ lemma : eig ] .now we have stronger versions of assumptions ( a1 ) and ( a2 ) : hold almost surely . along the same lines as for ( [ eq : after_triangle_lp ] ) , by using lemma [ lemma : eig ] , ( [ eq : a1_stronger ] ) , ( [ eq : bound_on_coefs ] ) and the elementary inequality , we obtain that \\ & \le k_p \|\delta_{k_0}\|_p \exp\big\{-\lambda_1\sum_{i = k_0}^k\gamma_i\big\ } + \big[1+\frac{k_p^2\lambda_2}{\lambda_1}\big ] \big(\max_{k_0\le i\le k } \|a_i\|_p+\max_{k_0\le i\le k } \|b_i\|_p\big)\end{aligned}\ ] ] almost surely , where constant is from lemma [ lemma : eig ] .take now the -th power of both sides of the inequality and apply the hlder inequality for to get recall that the sequence is a martingale with respect to the filtration and that the coordinates of verify almost surely , , . applying the maximal burkholder inequlity for and the davis inequality for ( cf . ) yields ^{p/2 } \le d b_p 2^p \bar{g}^p \bigg[\sum_{j = k_0}^k \gamma_j^2 \bigg]^{p/2},\end{aligned}\ ] ] for some constant .one can take for , cf .the second inequality of the theorem now follows by taking expectations on both sides of the bound on above and by using the last inequality .one can derive a similar result for the , by simply taking the -th power of the inequality ( [ eq : after_triangle_lp ] ) and then proceeding in the same way as in the proof of theorem [ theo : bound2 ] , with minor modifications in the argument for the martingale .once a bound on is established , one can use it for proving theorem [ theo : bound2 ] in another way .namely , since for any and , , with if and if . thus , a bound for will immediately follow from the obtained bound for . 
the bound will be of the same form as in theorem [ theo : bound2 ] , but with different constants .[ rem : close_parameter ] consider the following situation , which we will call case i. suppose we are not interested in tracking the , say , _ natural _ parameter of the model , but rather some other time - varying parameter , which is also assumed to be predicable with respect to the filtration .denote , .the difference , , can be seen as an approximation error .similar to ( [ eq : recursed ] ) , the following expansion can be derived for the quantity : \delta_{k_0}^ * + \sum_{i = k_0}^k \big[\prod_{j = i+1}^k(\bm{i}- \gamma_j \bm{m}_j)\big ] ( \delta\theta^*_i + \gamma_i\bm{m}_i \varepsilon_i+\gamma_i d_i).\end{aligned}\ ] ] now consider case ii : we want to track the natural parameter but the average gain makes an error , i.e. , , .the error term may be random but must be measurable with respect to . again , similar to ( [ eq : recursed ] ) , we can derive \delta_{k_0 } + \sum_{i = k_0}^k \big[\prod_{j = i+1}^k(\bm{i}- \gamma_j \bm{m}_j)\big ] ( \delta\theta_i + \gamma_i\eta_i + \gamma_i d_i).\ ] ] now notice that case i can actually be reduced to case ii by putting in the last relation and ( where ) , .therefore , consider only case ii from now on . under the conditions of theorem [ theo : bound ] , in the same way as for ( [ eq : bounde ] ) , we can derive the following bound : ^{1/2}\\ & + c_3{{\mathbb{e}}}\max_{k_0\le i\le k } \|\theta_{i+1}-\theta_{k_0}\| + c_3 { { \mathbb{e}}}\sum_{i = k_0}^k \gamma_i\|\eta_i\| . \end{aligned}\ ] ] similarly , under the conditions of theorem [ theo : bound2 ] , ^{p/2}\\ & + c'_3 { { \mathbb{e}}}\big[\max_{k_0\le i\le k } \|\theta_{i+1}-\theta_{k_0}\|_p + \sum_{i = k_0}^k\gamma_i\|\eta_i\|_p\big]^p . \end{aligned}\ ] ] clearly , ( [ eq : bounde2 ] ) and ( [ eq : boundas2 ] ) generalize the bounds of theorems [ theo : bound ] and [ theo : bound2 ] , where we had , . in case i , we have and with , , in relations ( [ eq : bounde2 ] ) and ( [ eq : boundas2 ] ) . noting that for all and , we can rewrite bounds ( [ eq : bounde2 ] ) and ( [ eq : boundas2 ] ) in terms of instead of with appropriate adjustments of corresponding constants .any gain function for which conditions ( a1 ) and ( a2 ) hold may be used with our algorithm , and whether a particular gain function is suitable or not depends on the model under study and the quantity that we wish to track . for certain types of models and quantities to track ,there are natural choices for the gain function .many different settings are investigated in the literature . in this sectionwe consider the construction of appropriate gain functions to be used in the algorithm ( [ eq : algorithm_main ] ) in several traditional settings .in particular , we relate our general approach to well known classical procedures such as robbins - monro and kiefer - wolfowitz algorithms and outline possible extensions .the traditional ` signal+noise ' situation can be represented by the following observation model : where , is a predictable process ( ) we are interested in tracking , is a martingale difference noise , with respect to the filtration .we use the algorithm ( [ eq : algorithm_main ] ) for tracking , and in this case we can simply take the following gain function since = - ( \hat{\theta}_k - \theta_k ) , \quad k \in \mathbb{n}_0,\ ] ] i.e. , .clearly , condition ( a1 ) holds and condition ( a2 ) follows as well if we assume , . 
indeed , according to ( [ g_bounded ] ) , it is enough to show the boundedness of the second moment of : \le c , \quad k \in \mathbb{n}_0,\ ] ] by virtue of the hlder inequality , lemma [ lemma_bound ] and ( [ c_theta ] ) .the classical nonparametric regression model fits into this framework so that our results can be applied .for example , the simplest nonparametric regression model with an equidistant design on ] for some smooth function . in this case, one should consider , , so that , , which should be comparable to .autoregressive models , for example , fall into this category ( cf. section [ sec : examples : ard ] ) .let us turn to more dynamical situations where the observations themselves depend on our tracking sequence . in their seminal paper , studied the problem of finding the unique -root of a monotone function , i.e. , the equation has a unique solution at .the function can be observed at any point but with noise : so that .a stochastic approximation algorithm of design points converging to is known as classical robbins - monro procedure .we now illustrate how this also fits into our general tracking algorithm scheme .in fact , the following model essentially extends the original setup of .suppose there is a time series ( with taking values in ) running at the background , which is not ( fully ) observable . instead, some other -dimensional ( related ) time series is observed , which we introduce below . as usual , let , .further , for a sequence of functions , let a -dimensional measurable function be the unique solution of the equation , where ] , where is a subvector of , independent of .another classical example is the algorithm of for successive estimating the maximum of a function which can be observed at any point , but gets corrupted with a martingale difference noise ( similarly , one can formulate the problem of tracking minima of a sequence of functions ) .the algorithm is based on a gradient - like method , the gradient of being approximated by using finite differences .there are many modifications of the procedure , including multivariate extensions , and they are all based on estimates of the gradient of .the following scheme essentially contains many such procedures considered in the literature and even extends them to a time - varying predictable maxima process . as in the previous subsection , suppose there is a time series , with taking values in , running in the background , which is not ( fully ) observable . instead , some other related time series is observed , which we introduce below . let , .suppose we are given a sequence of measurable functions , , , such that the function ] , .clearly , ( a2 ) holds in view of moment conditions on the quantities in ( [ kiefer_wolfowitz ] ) , however ( a1 ) is not satisfied in general since there is an approximation ( possibly nonzero ) term involved .yet , we are in the position of remark [ rem : close_parameter ] and thus the bound ( [ eq : bounde2 ] ) for the tracking error holds in this case .this bound is however useful only if the approximation errors s get sufficiently small as gets bigger .the most desirable situation is when , . 
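purely for illustration, the two classical procedures recalled above can be written in a few lines. in the python sketch below, a robbins-monro recursion locates the root of an increasing function observed in noise, and a kiefer-wolfowitz recursion (developed further below) maximizes a concave function from noisy evaluations through a central finite-difference estimate of its gradient; the test functions, the noise level, the step sequences 2/k and 1/k and the difference span k**(-1/3) are arbitrary choices of ours, not the ones analysed in the paper.

import numpy as np

rng = np.random.default_rng(2)

# robbins-monro: find the root x* of an increasing function m observed in noise
m = lambda x: np.tanh(x - 1.0)              # root at x* = 1 (illustrative choice)
x = 5.0
for k in range(1, 20001):
    y = m(x) + rng.normal(scale=0.3)        # noisy observation at the current design point
    x = x - (2.0 / k) * y                   # step gamma_k = 2 / k
print("robbins-monro estimate of the root:", x)

# kiefer-wolfowitz: maximize a concave f observed in noise, using a central
# finite-difference gradient estimate with span c_k -> 0
f = lambda th: -(th - 0.5) ** 2             # maximum at theta* = 0.5 (illustrative choice)
theta = 3.0
for k in range(1, 20001):
    c = k ** (-1.0 / 3.0)
    grad = ((f(theta + c) + rng.normal(scale=0.3))
            - (f(theta - c) + rng.normal(scale=0.3))) / (2.0 * c)
    theta = theta + (1.0 / k) * grad        # ascent step gamma_k = 1 / k
print("kiefer-wolfowitz estimate of the maximizer:", theta)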
for each particular model of form ( [ kiefer_wolfowitz ] ), one needs to determine conditions that should be imposed on the approximate gradients s in order to be able to claim a reasonable quality of the tracking algorithm by using our general result .conditions on approximate gradients s from ( [ kiefer_wolfowitz ] ) which provide control on the magnitude of the approximation errors s are comparable to the ones proposed in many papers .examples can be found in ; see further references therein .commonly , a finite difference form of the gradient estimate is used as noisy approximate gradient .below we outline two settings .first consider the following situation which is very close to the classical kiefer - wolfowitz setting : for some subvector of , independent of defined below and we wish to maximize the function . for simplicity ,let and all s are identically distributed ( although the generalization to the time - varying case is straightforward ) so that is to be maximized : . let be a positive sequence , be the standard orthonormal basis vectors in , en have the same distribution as , .denote , , likewise for and .the observations are the noisy finite difference estimates of the gradient : here is a martingale difference noise sequence with respect to the filtration , denotes the estimate of the maximum point according to the algorithm ( [ eq : algorithm_main ] ) with the gain . then , under some regularity conditions , where the magnitude of is controlled by . usually as in an appropriate way . to ensure that ( possibly with a small approximation error ) for some positive definite matrix satisfying ( [ eq : lambda ] ) , concavity of is typically required , either global or over a compact set which is known to include the maximum location . for example , if function is sufficiently smooth and strongly concave , then by taylor s expansion , the hessian matrix of at some point between and , the relations ( [ eq : lambda ] ) are fulfilled and the approximation error is small if is small .another approach ( due to ) is based on random direction instead of the unit basis vectors .we use the same notations as in the previous setting with one simplification : assume now that there are no vectors s involved in the model so that .let denote a sequence of independent ( is also assumed to be independent of ) random unit vectors in . at time moment we observe where the tracking sequence is defined by the algorithm ( [ eq : algorithm_main ] ) with the gain function .notice that one step in the previous ( classical kiefer - wolfowitz ) observation scheme requires in essence observations in design points , , whereas only two measurements must be made in the case of the above random direction observation scheme .this property was the main motivation for the random direction method introduced by .then , under some regularity conditions , \\ & = { { \mathbb{e}}}\big [ d_kd_k^t \big]\nabla f(\hat{\theta}_k)+ \eta_k = -\bm{m}_k(\hat{\theta}_k-\theta_k ) + \eta_k,\end{aligned}\ ] ] where (f)(\theta_k^*) ] are positive definite matrices and the hessian is negative definite .a particular choice of function is , , where s is a sequence of observations with values on a measurable space and is a loss function .then is the prediction risk of the predictor given by .classical examples are least squares and logistic regression ( cf . 
): or ] almost surely , for some and sufficiently large , by using the fact that \ge\lambda_1 ] , where ] , we conclude that , for a sufficiently large , we can rewrite ( [ eq : bounde ] ) and ( [ eq : boundas ] ) as respectively , ^p \le c , \quad p\ge1.\ ] ] if we let , this is almost ( up to a log factor ) parametric convergence rate , the -factor in the rate can not be avoided and is in some sense a price for the recursiveness of the algorithm . if we are in the situation of theorem [ theo : bound2 ] , then by taking ( where is some small fixed number ) and by using markov s inequality and the second bound in the previous display , we derive that in view of the borel - cantelli lemma , it follows that as with probability 1 at a rate . the particular setup presented in this section , where the parameter is fixed , might seem out of place since we are mainly concerned with tracking time - changing parameters .we would like to point out that recursive algorithms in parametric situation can also be useful ; for example , the classical robbins - monro and kiefer - wolfowitz algorithms deal with the parametric case .recursive procedures often produce estimates in a fast , straightforward fashion .this is an advantage especially over offline " estimators obtained , say , as solutions to a certain system , which require iterative likelihood or least squares optimization or are obtained via other indirect methods , a situation which is common when dealing with markov models ( cf . section [ sec : examples : ard ] . )suppose now that the parameter we want to track is stabilizing .this situation might arise if the expectation of the sequence of values that the parameter takes is converging to some limiting value .it could also be the case that the data is being sampled with increasing frequency from an underlying , continuous time process which depends on a parameter varying continuously ; in this case , the parameter varies less because it has less time to change .regardless , we assume that verifies for and some positive sequence .assume that for some and .consider first the case . in this case, the variation of the parameter vanishes so quickly that we are essentially in the setup of the previous section , i.e. , as if the parameter is constant .indeed , take and as in the previous section .the first and second terms in both ( [ eq : bounde ] ) and ( [ eq : boundas ] ) can be bounded in the same way as in the previous section . using the relations between norms from remark [ rem : norms ] , we upperbound the third term in ( [ eq : bounde ] ) by a multiple of using the hlder inequality , we upper bound the third term in ( [ eq : boundas ] ) by a multiple of ^p \le c n^{-(\beta-1)p } \le c n^{-p/2}.\end{aligned}\ ] ] clearly , in both ( [ eq : bounde ] ) and ( [ eq : boundas ] ) the third term is of a smaller order than the second term .thus , the relations ( [ constant_parameter ] ) remain valid for the case .consider now the case .let , . by using the elementary inequality for and , we obtain that for any there is a sufficiently large constant such that \\ & \ge\frac{c_\gamma ( \log n_0)^{1/3}}{1 - 2\beta/3 } \big[n^{1 - 2\beta/3 } -n^{1 - 2\beta/3 } \big(1-n^{2\beta/3 - 1 } ( \log n)^{2/3}(1 - 2\beta/3)\big)\big]\\ & = c_\gamma ( \log n_0)^{1/3 } ( \log n)^{2/3 } \ge c \log n \ ] ] for sufficiently large , i.e. 
, .this yields the same upper bound for the first term in ( [ eq : bounde ] ) and ( [ eq : boundas ] ) as for the static parameter , namely , for any by taking sufficiently large .let us bound now the second term in ( [ eq : bounde ] ) and ( [ eq : boundas ] ) : for . for sufficiently large ( i.e. , ) the third terms in ( [ eq : bounde ] ) and ( [ eq : boundas ] ) are bounded similarly to ( [ eq : bound00 ] ) and ( [ eq : bound01 ] ) by , respectively , and finally we obtain that for and sufficiently large constant in the algorithm step , ( [ eq : bounde ] ) and ( [ eq : boundas ] ) can be rewritten as respectively ^p \le c,\ ] ] where is the burn - in period of the algorithm . if we choose and , , , in case , then we get the following bound of the convergence rate : for sufficiently large and sufficiently large constant thus , the choices , , are optimal in the sense of the minimum of the right - hand side of the above inequality .much in the same way as for ( [ eq : almost_sure ] ) , we can establish that for any , with probability 1 .finally , consider the case , i.e. , we assume the following weak requirement : , , for some uniform constant .take , for some , .then theorem [ theo : bound ] implies that we thus have that the algorithm will track down the parameter in the proximity of size , which we can try to minimize by choosing appropriate constants and .we consider now a different setup where we assume that the parameter is changing , on average , like a lipschitz function . in this setupwe let the time series ( [ model ] ) be sampled from a continuous time process , ] for some and , a space of vector valued lipschitz functions .let ( constant in ) for , and for .note that for as for any .we have so that once again the first term in ( [ eq : bounde ] ) and ( [ eq : boundas ] ) can be upper bounded by for any by taking sufficiently large . 
as to the second term , we evaluate from our assumption on the variation of the parameter , we have ^p.\end{aligned}\ ] ] combining these three bounds , we get that ( [ eq : bounde ] ) ( we also need the relations between norms from remark [ rem : norms ] ) and ( [ eq : boundas ] ) imply ^p.\end{aligned}\ ] ] if we consider step sizes of the form , the above proposed choices of and are optimal in the sense of tracking error minimum .note that the obtained convergence rate ( the asymptotic regime : the observation frequency ) coincides , up to a log factor , with the minimax rate of convergence in the problem of estimating nonparametric regression function over lipschitz functional class .in this section we present some examples of particular models to which our algorithm may be applied .we start with two toy examples and present thereafter some more involved examples .the toy examples illustrate the type of results that can be obtained from our main result and its extensions , how a gain function can be picked and modified , and how conditions ( a1 ) and ( a2 ) are checked .suppose we are monitoring independent poisson processes on ] .assume that is continuous , then for large enough .consider now the gain function of the type ( [ eq : gain_canonical_2 ] ) for the algorithm ( [ eq : algorithm_main ] ) so that = - ( \hat{\theta}_k-\theta_k ) , \notag\end{aligned}\ ] ] it follows that since }\lambda(t)\le l ] .note that and the average number of events per time unit will stabilize in time if , for example , as .the algorithm will then track the mean number of events per time unit .we can also assume that the intensity function belongs to for some and .let , .it follows that the tracking sequence based on the gain ( [ eq : gain_poisson ] ) will then track the sequence , ( as well as ) with the asymptotics seen in section [ sec : variational_setups ] ( cf .remark [ rem : close_parameter ] ) .assume that we observe , with fixed frequency , a process , , taking values on , .the observations available up to time moment is a random vector , with .we again skip the dependence on , although all the quantities below do depend on .the increments are assumed to be conditionally gaussian in the sense that given the past of the process , each increment has a multivariate normal distribution : the dependence on the past in the model comes from the fact that both the mean and the covariance processes of the above conditional distributions are predictable , i.e. , and , , with respect to the filtration .if the covariance structure of the process is known , we can use the gain ( [ eq : gain_canonical_1 ] ) which verifies for this gain , we assume that almost surely for some positive . we then obtain that and assumptions ( a1 ) and ( a2 ) are thus met for the gain from ( [ eq : gain_gaussian1 ] ) .now suppose that the covariance matrix of the process is unknown or difficult to invert .then we can use the gain ( [ eq : gain_canonical_2 ] ) , so that clearly , assumptions ( a1 ) and ( a2 ) are again met for the gain from ( [ eq : gain_gaussian2 ] ) if for some , , almost surely .the results of section [ sec : variational_setups ] can be applied to the algorithm based on the gain functions presented above for all three considered asymptotic regimes : constant parameter process , stabilizing ( on average ) process and lipschitz on average .although designed for different frameworks , it is interesting to compare the above resulting tracking algorithm with the famous _ kalman filter_. 
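as a toy version of the poisson monitoring example above, the python sketch below tracks a slowly stabilizing intensity from per-period event counts with the gain g(theta_hat, x) = x - theta_hat; reducing the observed point process to per-period counts, the particular intensity profile and the step sequence are simplifying assumptions made only for this illustration.

import numpy as np

rng = np.random.default_rng(3)

n = 4000
lam = 5.0 + 2.0 * np.exp(-np.arange(n) / 1000.0)   # stabilizing intensity (illustrative)
counts = rng.poisson(lam)                          # observed number of events per time unit

theta_hat = np.empty(n + 1)
theta_hat[0] = counts[0]
for k in range(n):
    gamma = 0.5 / (1.0 + 0.01 * k)                 # decreasing step size (illustrative)
    theta_hat[k + 1] = theta_hat[k] + gamma * (counts[k] - theta_hat[k])

print("final intensity estimate:", theta_hat[-1], " true terminal intensity:", lam[-1])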
for simplicity , consider the one dimensional situation .suppose we observe where the parameter of interest , evolves according to with .at each step , the initial state and the noises are assumed to be mutually independent .one can show ( by combining both prediction and update steps ) that the kalman filter in this case reduces to we also derive the exact expression for the mean squared error of the algorithm : coming back to our framework , suppose we have observations ( [ model_kalman ] ) with predictable process such that , ; cf .section [ sec : variational_setups : stabilizing ] .then the kalman filter ( [ kalman_algorithm ] ) coincides with our tracking algorithm with the gain ( [ eq : gain_gaussian1 ] ) and a particular choice of the step sequence given by ( [ kalman_step ] ) .one should keep in mind that the two frameworks are different , but it would still be interesting to compare the convergence rates for some particular settings for stabilizing the parameter .for example , one can consider , , as in section [ sec : variational_setups : stabilizing ] .the above kalman filter setting has more structure and we expect therefore that the rate in this case ( which is of order , with defined by ( [ kalman_step ] ) ) should be faster than the rate obtained in section [ sec : variational_setups : stabilizing ] for our general framework .we were however unable to solve the recursive rational difference equation ( [ kalman_step ] ) for .note that the trivial case leads to the situation of a constant parameter and the sample mean as an estimator for that parameter .consider the following arch( ) model with drifting parameter where almost surely , is predictable and is a martingale difference noise with respect to the filtration , =\sigma_k^2 ] , , for some .consider the gain function for some truncating constant . since and = 1+\theta_k x_{k-1}^2 ] , . for example , we can take .we conclude that ( a1 ) holds for the gain ( [ eq : gain_arch_1 ] ) . to ensure ( a2 ) ,we evaluate \\ & \le 3{{\mathbb{e}}}\big[\big(\frac{\min(x_{k-1}^2 , t)}{x_{k-1}^2}\big)^2 \big ( ( 2 + 2\theta_k^2x_{k-1}^4 ) \epsilon_k^4 + 1+\theta_k^2 x_{k-1}^4\big ) \big ] \\ & \le 9 + 3 { { \mathbb{e}}}\big[\big(\frac{\min(x_{k-1}^2 , t)}{x_{k-1}^2}\big)^2 ( 2c_\theta \rho + c_\theta ) x_{k-1}^4\big ] \\ & \le 9 + 3c_\theta t^2(2\rho+1 ) . \ ] ] in this section we use the notation for the vector of the consecutive observations ending with . consider an autoregressive model with time varying parameters : where is -measurable , is a martingale difference noise with respect to the filtration such that , , starting random vector is given and such that , for some .for , associate with the ar(d ) model its polynomial it is well know that an ar(d ) model with autoregressive parameters is stationary if , and only if , the ( complex ) zeros of the polynomial are outside the unit circle .this motivates the definition of the parameter sets for some : cf . who also showed that the following embeddings hold : where is a uniform ball around zero in with radius .this gives some feeling about the size of the parameter set and implies in particular that the set is non - empty and bounded for all .the ar(d ) model can also be described by the following inhomogeneous difference equation where and , for any , is the square matrix of order this matrix is usually called the _ companion matrix _ to the autoregressive polynomial ; it is also sometimes called the _ state transition matrix_. 
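for the one-dimensional kalman comparison discussed above, the filter can be written in its standard predict/update form; the python sketch below does this for a random-walk parameter observed in gaussian noise. the two noise variances are arbitrary values chosen for the illustration, and the gain sequence k_gain produced by the recursion is the natural candidate for the step sequence ( [ kalman_step ] ) referred to in the text.

import numpy as np

rng = np.random.default_rng(4)

n, sigma2, q = 2000, 1.0, 0.01        # observation- and state-noise variances (assumed values)
theta = np.cumsum(rng.normal(scale=np.sqrt(q), size=n))    # random-walk parameter
x = theta + rng.normal(scale=np.sqrt(sigma2), size=n)      # noisy observations

theta_hat, p = 0.0, 1.0               # initial estimate and its variance
estimates = []
for obs in x:
    p_pred = p + q                                         # prediction step
    k_gain = p_pred / (p_pred + sigma2)                    # kalman gain, playing the role of gamma_k
    theta_hat = theta_hat + k_gain * (obs - theta_hat)     # update step
    p = (1.0 - k_gain) * p_pred
    estimates.append(theta_hat)

print("empirical mse of the filter:", np.mean((np.array(estimates) - theta) ** 2))
print("(approximately) steady-state gain:", k_gain)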
one can show that the eigenvalues of are exactly the reciprocals of the zeros of .this means that the absolute values of the eigenvalues of for are all at most .this in turn implies that for any sequence of vectors , the pair of sequences forms a so called _ exponentially stable _ pair ( cf . ) . among other things ,this gives us that so long as the -th moments of both the initial and the noise terms are bounded , then the -th moments of all , , will be bounded as well ( cf . proposition 10 of ) .in the model ( [ eq : ar_p_model ] ) is considered with nonrandom but time varying for some smooth function , ] , =\sigma^2>0 ] , , for some constant .the proposed gain and the corresponding average gain are as follows : with some .note that this is a rescaled gain function of type from section [ sec : gains ] .clearly , and , according to lemma [ lemma : truncated_conditional ] , = \mathbb{e}\big[\min\{x_{k-1}^2,t\}|\bm{x}_{k-2}\big ] \ge \frac{(5-c)\sigma^2}{4},\ ] ] so that ( a1 ) holds . assumption ( a2 ) also holds since \le \max\{t^2,1\ } \sigma^2.\end{aligned}\ ] ] finally consider a version of general ar(d ) model .we will only outline the main steps , leaving out the details .assume that the noise terms in ( [ eq : ar_p_model ] ) form a gaussian white noise sequence with mean zero and variance and that the parameter process is constant within the batch of consecutive observations .for a -dimensional vector , introduce the toeplitz matrix associated with that vector whose entries are , , so that this matrix has constant ( from left to right ) diagonals .thus , is the column vector formed by starting at the top right element of , going backwards along the top row of and then down the left column of .denote and introduce and , the toeplitz matrices created from the vectors and respectively . under the imposed assumptions , we can rewrite the model ( [ eq : ar_p_model ] ) as follows : the matrix is upper triangular with a diagonal consisting of ones , whence invertible . from this point on ,we regard vector , , as an observation at time moment so that we can specify our observation model in terms of conditional distribution of given : where is a predictable process with respect to the filtration .notice that the observation process is of a markov structure . even if the normality of the noise is assumed in the model ( [ eq : ar_p_model ] ) , the models ( [ eq : ar_p_model ] ) and ( [ eq : ar_d_kernel ] ) still differ since in general the parameter process varies also within the batches of observations in the model ( [ eq : ar_p_model ] ) . however , this is not an issue . indeed , even though the parameter is allowed to vary within each batch of observation , we still can use the gain function ( which we derive below ) as if the parameter process is constant within the batches and establish an upper bound of type ( [ theo : bound ] ) for the quality of such a procedure .the error that is made by pretending that the parameter is constant within the batches can be absorbed into the third term of the right hand side of ( [ theo : bound ] ) . in this casewe propose a gain of the type ( [ eq : gain_canonical_1 ] ) : where is the conditional density of ( [ eq : ar_d_kernel ] ) .thus , the tracking sequence is updated with batches of observations from the autoregressive process .below , to ease the notation , we will often write and instead of and , respectively . 
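the stationarity criterion recalled above is easy to check numerically: the python snippet below builds the companion (state transition) matrix of an ar(d) model, compares its eigenvalues with the reciprocals of the zeros of the autoregressive polynomial, and tests whether all eigenvalues lie strictly inside the unit circle; the ar(3) coefficient vector is an arbitrary example.

import numpy as np

def companion(theta):
    # companion matrix of the ar(d) model with coefficient vector theta
    d = len(theta)
    m = np.zeros((d, d))
    m[0, :] = theta                 # first row carries the autoregressive coefficients
    m[1:, :-1] = np.eye(d - 1)      # shifted identity below the first row
    return m

theta = np.array([0.5, -0.3, 0.1])                      # example ar(3) coefficients
eigvals = np.linalg.eigvals(companion(theta))
# zeros of the autoregressive polynomial 1 - theta_1 z - ... - theta_d z^d
roots = np.roots(np.concatenate(([1.0], -theta))[::-1])
print("companion eigenvalues      :", np.sort_complex(eigvals))
print("reciprocal polynomial zeros:", np.sort_complex(1.0 / roots))
print("stationary:", bool(np.all(np.abs(eigvals) < 1.0)))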
as explained in section [ sec : gains ] , the corresponding average gain can be found as minus the gradient of the kullback - leibler divergence between the two conditional distributions with two different parameters .this observation is particularly useful if we are able to write this kullback - leibler divergence as an appropriate quadratic form .the kullback - leibler divergence between two -dimensional multivariate normal distributions and is given by let , i.e. , ( not to be confused with the vectors , ) and . according to ( [ eq : ar_d_kernel ] ) , and .now we compute .let be the toeplitz matrix associated with the vector where is in the -th position .matrix has ones above the main diagonal and zeros elsewhere and it is sometimes called _ upper shift matrix_. for , the powers are the toeplitz matrices associated with the vectors where occupies the -th position , , the zero matrix of order , and should be read as , the identity matrix of order .it follows that , so that and for all , the matrices have all eigenvalues equal to one ( so do their inverses ) , hence and we conclude that the logarithm in ( [ eq : kl_normals ] ) is zero . also , using basic properties properties of the trace and the representation for derived above , - d = \operatorname{tr}\big[\big(\bm{a}^{-1}(\vartheta)\bm{a}^{-t}(\vartheta)\big)^{-1}\big(\bm{a}^{-1}(\theta)\bm{a}^{-t}(\theta)\big ) \big]- d\\ & = \operatorname{tr}\big[\bm{a}^t(\vartheta)\bm{a}(\vartheta)\bm{a}^{-1}(\theta)\bm{a}^{-t}(\theta)\big]- d = \operatorname{tr}\big[\big(\bm{a}(\vartheta)\bm{a}^{-1}(\theta)\big)^t\bm{a}(\vartheta)\bm{a}^{-1}(\theta)\big]-d\\ & = 2\sum_{i=1}^d \operatorname{tr}\big[\bm{s}^i\bm{a}^{-1}(\theta)\big](\theta_i-\vartheta_i)+ \sum_{i=1}^d\sum_{j=1}^d \operatorname{tr}\big[\bm{a}^{-t}(\theta)(\bm{s}^i)^t\bm{s}^j\bm{a}^{-1}(\theta ) \big ] ( \theta_i-\vartheta_i)(\theta_j-\vartheta_j).\end{aligned}\ ] ] since the inverse of an upper - triangular matrix is upper - triangular , =0 ] , .we conclude that the previous display can be written as - d = ( \vartheta-\theta)^t\big[v_1(\theta ) v_2(\theta ) \ldots v_d(\theta)\big]^t \big[v_1(\theta ) v_2(\theta ) \ldots v_d(\theta)\big ] ( \vartheta-\theta),\ ] ] where the matrices on the right are comprised by columns .consider now the quadratic form in the kullback - leibler divergence ( [ eq : kl_normals ] ) .for any , since , we have which , together with the representation for derived above , imply where , ; notice also that . then (\vartheta-\theta).\ ] ] summarizing , we obtained that with ^t \big[v_1(\theta ) v_2(\theta ) \ldots v_d(\theta)\big ] \notag\\ \label{matrix_m } & \quad + \sigma^{-2 } \big[\bm{c}_1(\theta)y \ldots\bm{c}_d(\theta)y \big]^t \big[\bm{c}_1(\theta)y \ldots \bm{c}_d(\theta)y\big].\end{aligned}\ ] ] according to ( [ kl - gain ] ) , we can derive the expression for the average gain : where is given by ( [ matrix_m ] ) .note that the matrix does not depend on and is clearly positive semidefinite .we evaluate now its eigenvalues . in the representation ( [ matrix_m ] ) ,the first matrix in the sum is positive semi - definite but has at least one zero eigenvalue .it is also clear that the entries of this matrix are polynomials of the coordinates of , so that , if for a bounded set , then the largest eigenvalue of this matrix is upper bounded , uniformly over , by some constant , say , . 
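for reference, the python snippet below implements the standard closed-form kullback-leibler divergence between two multivariate normal distributions, which is the quantity whose gradient yields the average gain in the computation above; the formula is the usual textbook expression and the numerical values are arbitrary.

import numpy as np

def kl_gauss(mu0, cov0, mu1, cov1):
    # kl( n(mu0, cov0) || n(mu1, cov1) ), standard closed form
    d = len(mu0)
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(cov1_inv @ cov0)
                  + diff @ cov1_inv @ diff
                  - d
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

# sanity check: with equal identity covariances the divergence reduces to
# half the squared euclidean distance between the means
mu0, mu1 = np.array([0.0, 0.0]), np.array([1.0, -2.0])
eye = np.eye(2)
assert np.isclose(kl_gauss(mu0, eye, mu1, eye), 0.5 * np.sum((mu1 - mu0) ** 2))
print("example divergence with unequal covariances:", kl_gauss(mu0, eye, mu1, 2.0 * eye))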
as to the second (also positive semidefinite ) matrix in the sum of matrices from ( [ matrix_m ] ) , note that the entries of the matrices , , are polynomials in which are bounded uniformly over a bounded set .recall also that the trace of a matrix is equal to the sum of its eigenvalues .we conclude that uniformly in for any bounded . to derive a lower bound on the smallest eigenvalue of the matrix , note that this matrix can be rewritten in the form + \left [ \begin{array}{ccc|c } c_{1,1}(\theta ) & \cdots & c_{1,d-1}(\theta ) & c_{1,d}(\theta)\\ \vdots & \ddots & \vdots & \vdots\\ c_{d-1,1}(\theta ) & \cdots & c_{d-1,d-1}(\theta ) & c_{d-1,d}(\theta)\\ \hlinec_{d,1}(\theta ) & \cdots & c_{d , d-1}(\theta ) & 0 \end{array } \right]\ ] ] for and , where we swapped the -th entries of the matrices in the sum from ( [ matrix_m ] ) .we used also that and .note that the top left matrices in the block matrices above are gram matrices and therefore positive semidefinite .the matrix {i , j=1,\dots , d-1} ] .one can then proceed as in ( [ m_bound_1 ] ) ( and lemma [ lemma : truncated_conditional ] ) to show that for an appropriately large , > c ] , =\sigma^2>0 ] , , for some constant .then , for any such that , \ge \frac{(5-c)\sigma^2}{4},\quad k \in \mathbb{n}.\ ] ] we compute & = x_{k-1}^2 \theta_k^2 + 2x_{k-1}\theta_k \mathbb{e}[\xi_k|\bm{x}_{k-1 } ] + \mathbb{e}[\xi_k^2|\bm{x}_{k-1 } ] = x_{k-1}^2 \theta_k^2+\sigma^2,\\ \mathbb{e}[x_k^4 |\bm{x}_{k-1 } ] & = x_{k-1}^4\theta_k^4 - 4x_{k-1}^3\theta_k^3 \mathbb{e}[\xi_k|\bm{x}_{k-1 } ] + 6 x_{k-1}^2\theta_k^2 \mathbb{e}[\xi_k^2|\bm{x}_{k-1 } ] \\ & \;\;\ ; - 4x_{k-1}\theta_k \mathbb{e}[\xi_k^3|\bm{x}_{k-1}]+\mathbb{e}[\xi_k^4|\bm{x}_{k-1 } ] = x_{k-1}^4\theta_k^4 + 6x_{k-1}^2\theta_k^2\sigma^2+c\,\sigma^4.\end{aligned}\ ] ] for we have . using this relation , conditional version of jensen s inequality and the last display ,we derive : & = \frac{1}{2}\mathbb{e}\big[x_k^2 + \rho\sigma^2-|x_k^2-\rho\sigma^2| \big|\bm{x}_{k-1}\big]\\ & \ge \frac{1}{2 } \big[x_{k-1}^2\theta_k^2 + ( \rho+1)\sigma^2 -\big(\mathbb{e}\big[\big(x_k^2-\rho\sigma^2\big)^2|\bm{x}_{k-1}\big]\big)^{1/2}\big],\end{aligned}\ ] ] for .we now have , by plugging in the expressions derived above and simplifying , = \mathbb{e}\big[x_k^4|\bm{x}_{k-1}\big ] - 2\rho\sigma^2\mathbb{e}\big[x_k^2|\bm{x}_{k-1}\big ] + \rho^2\sigma^4\\ & = x_{k-1}^4\theta_k^4 + 2(3-\rho)x_{k-1}^2\theta_k^2\sigma^2 + ( c-2\rho+\rho^2)\sigma^4 = \big(x_{k-1}^2\theta_k^2+\frac{c+3}4\sigma^2\big)^2,\end{aligned}\ ] ] if we pick . combining the previous two displays ,we conclude that \ge \frac{(5-c)\sigma^2}{4 } , \quad k \in \mathbb{n},\ ] ] and the statement of the lemma follows . | we propose an online algorithm for tracking a multidimensional time - varying parameter of a time series , which is also allowed to be a predictable process with respect to the underlying time series . the algorithm is driven by a gain function . under assumptions on the gain , we derive uniform non - asymptotic error bounds on the tracking algorithm in terms of chosen step size for the algorithm and the variation of the parameter of interest . we also outline how appropriate gain functions can be constructed . we give several examples of different variational setups for the parameter process where our result can be applied . 
the proposed approach covers many frameworks and models ( including the classical robbins - monro and kiefer - wolfowitz procedures ) where stochastic approximation algorithms comprise the main inference tool for data analysis . we treat a couple of specific models in some detail . * keywords : * on - line tracking ; predictable drifting parameter ; recursive algorithm ; stochastic approximation procedure ; time series ; time - varying parameter . |
[ sec : intro ] in the studies of necessary conditions for singular minimizers containing surfaces of gradient discontinuity various local jump conditions have been proposed .a partial list of such conditions include weierstrass - erdmann relations ( traction continuity and maxwell condition ) , quasi - convexity on phase boundary , grinfeld instability condition and roughening instability condition .while some of these conditions have been known for a long time , a systematic study of their _ interdependence _ have not been conducted , and a full understanding of which conditions are primary and which are derivative is still missing .the absence of hierarchy is mostly due to the fact that strong and weak local minima have to be treated differently and that variations leading to some of the known necessary conditions represent an intricate _ combination _ of strong and weak perturbations . in particular , if the goal is to find local necessary conditions of a _ strong local minimum _ , the use of weak variations gives rise to redundant information .for instance , euler - lagrange equations in the weak form should not be a part of the minimal ( essential ) local description of strong local minima . in this paper , we study strong local minimizers and our goal is to derive an _irreducible _ set of necessary conditions at a point of discontinuity by using only `` purely '' strong variations of the interface that are complementary to the known strong variations at nonsingular points .more specifically , our main theorem states that all known local conditions associated with gradient discontinuities follow from quasi - convexity on both sides of the discontinuity plus a single interface inequality which we call the _ interchange stability _ condition . while this condition is fully explicit and deceptively simple , to the knowledge of the authors , it has not been specifically singled out , except for a cursory mention by r. hill of the corresponding _ equality _ which we call _ elastic normality condition_. to emphasize a relation between this condition and strong variations we show that it is responsible for gteaux differentiability of the energy functional along special multiscale `` directions '' .we call them _ material interchange _ variations and show that they are devoid of any weak components .we also explain why the elastic normality condition , which r. hill associated exclusively with weak variations , plays such an important role in the study of strong local minima .the paper is organized as follows . in section [ sec : prelim ] we introduce the interchange stability condition and formulate our main result . in section [ sec : norm ]we link interchange stability with a strong variation which we interpret as material exchange .we then interpolate between this strong variation and a special weak variation which independently produces the normality condition .our main theorem is proved in section [ sec : proof ] , where we also establish the inter - dependencies between the known local necessary conditions of strong local minimum .an illustrative example of locally stable interfaces associated with simple laminates is discussed in detail in section [ sec : ex ] . 
in section [ sec : normality ] we build a link between the notions of elastic and plastic normality and show in which sense the elastic normality condition can be interpreted as the actual orthogonality with respect to an appropriately defined `` yield '' surface .we then illustrate the general construction by studying the case of an anti - plane shear in isotropic material with a double - well energy .[ sec : prelim ] consider the variational functional most readily associated with continuum elasticity theory here is an open subset of , and is the neumann part of the boundary .we can absorb the boundary integral into the volume integral by finding a divergence - free matrix field , such that on which suggests that the variational functional can be used in place of ( [ energy ] ) .we assume that is a continuous and bounded from below function on , where is the set of all matrices .we use the following definition of strong local minimum : [ def : slm ] the lipschitz function satisfying boundary conditions is a strong local minimizer . in the study of local necessary conditions in the interiorthis difference is irrelevant . ] if there exists so that for every for which , we have . in this paperwe focus on special singular local minimizers containing jump discontinuity of across a surface .then for every point there exist matrices and such that for any where is the unit normal '' . ] to .we further assume that which imposes kinematic compatibility constraint on the jump of the deformation gradient : where and is a called a shear vector .material stability of the deformation at point is understood as stability with respect to local variations of the form where is the unit ball can be supported in any bounded domain of , see ] . the corresponding energy variation defined by the condition of material stability can be written in different forms for points where is continuous and for points on jump discontinuity where satisfies ( [ fpmdef ] ) .the condition of material stability in the regular point is obtained by changing variables in ( [ strvar ] ) , to be closer to standard notations we redefine and write the necessary condition of material stability in the form of quasi - convexity condition where denotes the average over .we say that is _ strongly locally stable _ if for any .when the point lies at the jump discontinuity we can again change variables and write where is defined in ( [ fpmdef ] ) .the associated necessary condition can be then written in the form of quasiconvexity on the surface of jump discontinuity condition where .we say that the pair satisfying ( [ kincomp ] ) determines a _ strongly locally stable interface _ if for any .it is clear that the strong local stability of the interface implies strong local stability ( [ qcx ] ) of and . it will be convenient to reformulate conditions of strong local stability in terms of the properties of global minimizers of localized variational problems .thus , according to ( [ qcx ] ) , is strongly locally stable if and only if is a minimizer in the localized variational problem the value of the infimum in ( [ homloc ] ) coincides with the quasiconvex envelope of , i.e. 
the largest quasiconvex function that does not exceed .it is then clear that is strongly locally stable if and only if .similarly we say that the pair satisfying ( [ kincomp ] ) determines a strongly locally stable interface if solves the localized variational problem where we defined a lipschitz continuous function we are now in a position to formulate our main claim that strong local stability ( [ qcx ] ) of together with a single additional condition , which we call interchange stability , implies strong local stability of the interface : [ th : main ] let be a continuous , bounded from below function that is of class in a neighborhood of .assume that the pair satisfies the kinematic compatibility condition for some and .then the surface of jump discontinuity is strongly locally stable if and only if the following conditions are satisfied : * material stability in the bulk : , * interchange stability : . before we turn to the proof of theorem [ th : main ] it is instructive to look closely at the meaning of the scalar quantity entering the algebraic condition ( )[ sec : norm ] while it is natural that condition ( [ jdqcx ] ) of strong local stability of the interface implies strong local stability of each individual deformation gradient and , a less obvious claim of theorem [ th : main ] is that the only _ joint _ stability constraint on the kinematically compatible pair is provided by condition ( ) .a natural challenge is then to identify the variation producing this condition .we observe that conventional variations , linking both sides of a jump discontinuity and leading to maxwell condition or roughening instability condition , represent combinations of weak and strong variations .this creates unnecessary coupling and obscures the strong character of the minimizer under consideration .physically it is clear that if materials on both sides of the interface are stable and if we can interchange one material by another without increasing the energy , then the whole configuration should be stable . between the original and perturbed normals is plotted as a function of length in the tangential direction . ]the idea of material interchange is illustrated schematically in fig .[ fig : interchange ] where the two adjacent rectangular domains are flipped and then translated . at construction can be viewed as a interface generalization of the weierstrass `` needle variation '' since neither the fields are modified , except on a set of zero surface area . as we show in fig .[ fig : normalnuc ] this variation can be also interpreted as a strong variation of the interface normal .notice that if taken literally , the schematics of perturbed field shown in fig . [ fig : interchange ] is incompatible with a gradient of any admissible variation . 
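the kinematic compatibility condition (k) of theorem [ th : main ] requires the jump in the deformation gradient across the interface to be a rank-one matrix. as a small numerical aside, the python snippet below constructs such a compatible pair and recovers the shear vector and interface normal (up to a common sign) from the jump by a singular value decomposition; the particular matrices and vectors are arbitrary examples.

import numpy as np

rng = np.random.default_rng(6)

f_minus = np.eye(3) + 0.1 * rng.normal(size=(3, 3))   # deformation gradient on one side
a = np.array([0.2, -0.1, 0.3])                        # shear (jump) vector
n = np.array([0.0, 0.0, 1.0])                         # unit normal to the interface
f_plus = f_minus + np.outer(a, n)                     # compatible gradient on the other side

u, s, vt = np.linalg.svd(f_plus - f_minus)
print("singular values of the jump:", np.round(s, 6))  # exactly one is nonzero: rank one
a_rec = s[0] * u[:, 0]                                 # recovered shear vector (up to sign)
n_rec = vt[0]                                          # recovered normal (up to the same sign)
print("recovered shear vector:", np.round(a_rec, 6))
print("recovered normal      :", np.round(n_rec, 6))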
to fix this technical problem we present below an explicit construction of the variationwhose gradient differs significantly from the one shown in fig .[ fig : interchange ] only on a set of an infinitesimal measure .we define a family of lipschitz cut - off functions on such that , when , while , when .let be another lipschitz cut - off function with , when and , when .suppose that is a unit vector in , such that .we then define the test function , to be used in ( [ jdstab ] ) , as follows where 3 in .,title="fig : " ] 3 in .,title="fig : " ] observe that , where the graph of is given in figure [ fig : interchange2 ] .we remark that the variation ( [ wdipole ] ) belongs to the class of multiscale variations proposed in : it uses a small scale and another small scale from ( [ strong ] ) .the interpretation of the function as an `` interchange driving force '' is immediately clear from the following theorem : [ th : wdipole0 ] suppose satisfy ( [ kincomp ] ) .let be given by ( [ wdipole ] ) .then where is given by ( [ ybardef ] ) and is the dimensional volume of the unit ball in . in order to compute the energy increment we use the weierstrass function we can then rewrite the energy increment as we easily compute observe that is non - zero only on , while is non - zero only on .therefore , in order to estimate the right - hand side , we identify 3 regions where ( see figure [ fig : regions ] ) : and in order to estimate we write , where it is easy to see that and in regions , in the region .thus thus , in the region and in region .we also see that , , while .thus , we estimate while we conclude that combining ( [ farfield ] ) and ( [ locfield ] ) we obtain ( [ wdipolevar ] ) . while the theorem [ th : wdipole0 ] associates the interchange stability ( ) with strong variations , the function is also known to be linked with stability with respect to weak variations . indeed ,after being projected onto the shear vector , the traction continuity condition , which can be viewed as a weak form of euler - lagrange equations , gives the following _ normality condition _ to understand the origin of ( [ norm ] ) consider the energy increment corresponding to classical weak variations we obtain the formula ( [ tracvar ] ) shows that if then the vanishing of the first variation implies the normality condition ( [ norm ] ) .the crucial observation is that our strong variation given by ( [ wdipole ] ) is also a scalar multiple of .this suggests the idea that our both weak and strong variations can be regarded as two limits of a single continuum of variations \} ] . to ensure the symmetry of the two limits, we consider the special case . ) and the interchange ( ) variation.,title="fig : " ] ) and the interchange ( ) variation.,title="fig : " ] it is clear that the fine structure of the energy landscape along such a path is not universal and depends sensitively on the function . 
for the purpose of illustration ,let us consider the energy density assuming kinematic compatibility ( [ kincomp ] ) and normality ( [ norm ] ) we obtain where with satisfying one can see that if the function has a `` double - well '' structure ( see figure [ fig : ws - interch](a ) ) the graph of looks like , see figure [ fig : ws - interch](b ) .the presence of `` energy barrier '' indicates that any `` combination '' of the interchange variation and the weak variation ( [ weak ] ) produces a cruder test of stability than either of the pure variations , incapable of detecting the existing instability .this result confirms our intuition that the realms of weak and strong variations are well separated and that the energy landscapes in the strong and weak topologies can be regarded as unrelated ( unless all non - trivial features are removed by assuming uniform convexity or quasiconvexity ) .we conclude this section by proving an important property of the interchange driving force .more specifically , we show that if the deformation gradients are strongly locally stable and if they are linked only by the kinematic compatibility condition ( [ kincomp ] ) , then the interchange driving force is non - negative .we first recall the definition of the maxwell driving force where .[ th : pospres ] assume that both and are strongly locally stable and satisfy the kinematic compatibility condition ( [ kincomp ] ) .then in particular , .the theorem is an immediate consequence of lemma [ lem : qcxnorm ] below , that shows that the algebraic inequality ( [ normest ] ) is a consequence of the `` weierstrass condition '' stated in the next lemma .[ lem : weier ] suppose is strongly locally stable. then the weierstrass condition holds the proof of the lemma can be found in .[ lem : qcxnorm ] suppose that both and satisfy the weierstrass condition ( [ arest ] ) and the kinematic compatibility condition ( [ kincomp ] ) . then the inequality ( [ normest ] ) holds . setting and in ( [ arest ] ) , we obtain writing , where , we obtain the inequality ( [ normest ] ) follows .theorem [ th : pospres ] , whose proof is now straightforward , quantifies to what extend conditions of stability of surfaces of jump discontinuity are stronger than conditions of strong local stability of each individual phase .[ sec : proof ] we are now in a position to prove theorem [ th : main ] . the necessity of ( s ) was already observed in section [ sec : prelim ] , and the necessity of ( i ) , even with equality , was shown in section [ sec : norm ] .the proof of sufficiency will be split into a sequence of lemmas .our first step will be to recover the known interface jump conditions . in order to prove these algebraic relationsonly the weierstrass condition ( [ arest ] ) will be needed .[ lem : jc ] assume that the pair satisfies the following three conditions * kinematic compatibility : for some and , * interchange stability of the interface : , * weierstrass condition : for all and .then the following interface conditions must hold : * the maxwell jump condition * traction continuity * interface roughening condition combining lemma [ lem : qcxnorm ] with ( i ) we conclude that , and hence , by ( [ normest ] ) , the maxwell condition ( [ maxwell ] ) holds . in order to prove the remaining equalities we set and in the weierstrass condition ( w ) , where and are as in ( [ kincomp ] ) and and are small parameters .then we obtain a pair of inequalities that hold for all and all . 
under our smoothness assumptions on the functions are of class in the neighborhood of in the -space . the taylor expansion up to first order in gives then the inequalities ( [ feps ] ) , together with and ( [ maxwell ] ) imply ( [ phbequil ] ) and ( [ pta ] ) .next we prove a differentiability lemma that guarantees the existence of rank-1 directional derivatives of quasiconvex and rank-1 convex envelopes at `` marginally stable '' deformation gradients .this result does not require any additional growth conditions , as in the envelope regularity theorems from .[ lem : r1cx ] let be a rank - one convex function such that .let where is an open subset of on which is of class .then for every and every , in particular , by our assumption is convex on .recall from the theory of convex functions that is monotone increasing on each of the intervals , .therefore , the limits exist .moreover , the convexity of implies that .let . by assumption .we also have , since .when therefore , . similarly , when we obtain .thus , we conclude that the limit on the left - hand side of ( [ deriv ] ) exists and is equal to thus , the convex function is differentiable at and is a tangent line to its graph at .convexity of then implies that for all . in general, one does not expect explicit formulas for the values of the quasiconvex envelope in terms of . in that respect lemma [ lem : rwformula ] below provides a nice exception to the rule .[ lem : rwformula ] assume that the pair satisfies all conditions of theorem [ th : main ] .then for all ] .let be -periodic function such that this function is lipschitz continuous , since and .the function is quasiconvex and the function is -periodic. therefore ( see ) , by lemma [ lem : rwformula ] hence the inequality ( [ jdqcx ] ) is proved , since is supported on .we remark that theorem [ th : main ] answers the question studied in by giving a complete characterization of all possible pairs of deformation gradient values that can occur on a stable phase boundary .global minimality of , given by ( [ ybardef ] ) also implies that any other interface conditions , like , for example , local grinfeld condition or roughening stability inequality ( * ? ? ?* remark 4.2 ) , must be consequences of ( k ) , ( i ) and ( s ) .[ sec : ex ] in this section we establish a relation between _ particular _ solutions of the variational problems ( [ homloc ] ) and ( [ jdloc ] ) which elucidates the role played in the theory by the normality condition .consider the set of all that are not strongly locally stable ; we called this set the `` elastic binodal '' in . for such the infimum in the variational problem ( [ homloc ] )may be reachable only by minimizing sequences characterized by their young measures .suppose that for some the young measure solution of ( [ homloc ] ) has the form of a simple laminate : the set of all such will be called the _ simple laminate region_. there is a direct connection between the simple laminate region and locally stable interfaces .[ th : link ] a strongly locally stable interface determined by corresponds to a straight line segment , so that the laminate young measure ( [ ymsol ] ) solves ( [ homloc ] ) with .conversely , every point , corresponding to a laminate young measure ( [ ymsol ] ) determines a strongly locally stable interface. if is a strongly locally stable interface determined by and then the pair satisfies conditions ( k ) , ( i ) and ( s ) . 
by lemma [ lem : rwformula ]the gradient young measure ( [ ymsol ] ) attains the minimum in ( [ homloc ] ) for , , and thus .if has a non - empty interior then formula ( [ rwformula ] ) says that the graph of the quasiconvex envelope over is formed by straight line segments joining and . in other wordsthe graph of is a ruled surface .conversely , if the gradient young measure ( [ ymsol ] ) attains the minimum in ( [ homloc ] ) , then satisfy the kinematic compatibility condition ( [ kincomp ] ) and the difference between ( [ single ] ) and ( [ rwformula ] ) is that ( [ single ] ) is assumed to hold for a single fixed value of .therefore , both material stability ( s ) at and interchange stability ( i ) need to be established .[ lem : cxty ] assume that the pair satisfies ( [ kincomp ] ) and that ( [ single ] ) holds for some . then and . the proof is based on the following general property of convex functions .[ lem : cx ] let be a convex function on ] .by convexity .\ ] ] if , then therefore , by the assumption of the lemma and convexity of it follows that this inequality in combination with ( [ cxdef ] ) establishes ( [ aff ] ) for .the proof of ( [ aff ] ) for is similar . to prove lemma [ lem : cxty ] we recall that for all . by ( [ single ] ) and rank-1 convexity of we have which is possible if and only if .then , defining and applying lemma [ lem : cx ] we obtain ( [ rwformula ] ) .we can also apply lemma [ lem : r1cx ] with , .the formula ( [ deriv ] ) allows us to differentiate ( [ rwformula ] ) at and : subtracting the two equalities we obtain .thus , we have shown that every in the simple laminate region gives rise to the pair satisfying all conditions of theorem [ th : main ] .theorem [ th : main ] then implies that the interface determined by is strongly locally stable .theorem [ th : link ] is now proved .[ rem : marg ] the system of algebraic equations ( [ kincomp ] ) , ( [ maxwell ] ) , ( [ phbequil ] ) and ( [ pta ] ) defines a co - dimension 1 surface called the `` jump set '' . we have shown in that under some non - degeneracy assumptions the jump set must lie in the closure of the binodal region .in fact all points on the jump set are `` marginally stable '' and detectable through the nucleation of an infinite layer in an infinite space .it follows that the existence of a strongly locally stable interface has significant consequences for the geometry of .the presence of stable interfaces implies that a part of the jump set must coincide with a part of the `` binodal '' , the boundary of .the rank-1 lines joining and , both of which lie on the binodal , cover the simple lamination region .[ sec : normality ] in this section we show that the algebraic equation interpreted above as condition of interchange equilibrium , is conceptually similar to the well known _ normality condition _ in plasticity theory . to build a link between the two frameworkswe now show that a microstructure in elasticity theory plays the role of a `` mechanism '' in plasticity theory .consider a loading program with affine dirichlet boundary conditions .suppose that for an interval of values of the loading parameter .then , for every the deformation gradient will be accommodated by a laminate ( [ ymsol ] ) , so that we now interpret the representation ( [ ldprog ] ) from the point of view of plasticity theory . 
while the deformations associated with the change of and in each layer of the laminate are elastic , the deformation associated with the change of parameter , affecting the microstructure and modifying the young measure , can be regarded as `` inelastic '' .in fact , it is similar to lattice invariant shear characterizing elementary slip in crystal plasticity theory . to be more specific, we can decompose the strain rate as follows where is the elastic strain rate and is the `` plastic '' strain rate .next we notice that in equilibrium the `` inelastic '' strain rate defines an _ affine _ direction along the quasiconvex envelope of the energy ( see lemma [ lem : rwformula ] ) .this suggests that there is a stress plateau with which one can associate a notion of the `` yield '' stress . to find an equation for the corresponding `` yield surface '' we choose a special loading path where the elastic fields in the layers do not change .then , differentiating ( [ rwformula ] ) in , we find that the total stress field lies on the hyperplane which we interpret as the `` yield surface '' associated with `` plastic '' mechanism .if we now rewrite our elastic normality condition in the form it becomes apparent that the `` plastic '' strain rate is orthogonal to the yield surface . to strengthen the analogy we observe that in plasticity theory the yield surface marks the set of minimally stable elastic states . in elastic frameworkthe states adjacent to the jump discontinuity are also only marginally stable , see remark [ rem : marg ] .the fact that in elasticity setting the normality condition appears as a part of energy _minimization _ while in plasticity theory it is usually derived by maximizing plastic _ dissipation _ , is secondary in view of the implied rate independent nature of plastic dissipation .the analogy between elastic and plastic normality conditions becomes more transparent if we consider a simple example .suppose that our material is isotropic and the deformation is anti - plane shear .take the energy density in the form where the shear moduli of the `` phases '' are positive . in this scalar examplethe quasiconvex and convex envelopes of the energy density coincide , and hence we can write ( see fig .[ fig : aps ] ) \dfrac{\mu_{-}}{2}|\bf|^{2}+w_{-},&\text { if } |\bf|\ge{\varepsilon}_{-}\\[3ex ] \text { if } { \varepsilon}_{+}\le|\bf|\le{\varepsilon}_{-}. \end{array}\right.,\ ] ] where observe also that the binodal region coincides with the simple laminate region since for the gradient young measures attain the infimum in ( [ homloc ] ) . 
by fixing on the circle we then obtain the unique which furnishes the `` plastic '' mechanism . the associated `` yield plane '' can be written explicitly . we observe that as is varied over the circle , the yield lines form an envelope of the circle in stress space ( see fig . [ fig : env ] ) , which is the image of the binodal under the map . since the stress in each phase of the laminate is always the same we can write . thus , in an arbitrary loading program the total stress will be confined to the yield surface envelope , provided . two cautionary notes are in order . first , in contrast with conventional plasticity theory , the regions of stress space both inside and outside of the `` yield '' surface are elastic . this distinguishes our `` transformational plasticity '' , where hysteresis is infinitely narrow , from the classical plasticity where hysteresis is essential . such a geometric picture continues to hold as long as , in particular , it holds for all scalar problems ( ) . the second observation is that for , our `` hardening free '' plastic analogy breaks down because the total stress in an arbitrary loading program is no longer confined to any surface . in this case the `` plastic '' mechanism operates on a set of full measure and the proposed analogy requires a generalization . this material is based upon work supported by the national science foundation under grant no . 1008092 and the french anr grant evocrit ( 2008 - 2012 ) . j. d. eshelby . energy relations and the energy - momentum tensor in continuum mechanics . in m. kanninen , w. adler , a. rosenfeld , and r. jaffee , editors , _ inelastic behavior of solids _ , pages 77 - 114 . mcgraw - hill , new york , 1970 . l. tartar . étude des oscillations dans les équations aux dérivées partielles non linéaires . in _ trends and applications of pure mathematics to mechanics ( palaiseau , 1983 ) _ , pages 384 - 412 . springer - verlag , berlin , 1984 . | strong local minimizers with surfaces of gradient discontinuity appear in variational problems when the energy density function is not rank - one convex . in this paper we show that stability of such surfaces is related to stability outside the surface via a single jump relation that can be regarded as an interchange stability condition . although this relation appears in the setting of equilibrium elasticity theory , it is remarkably similar to the well known _ normality _ condition which plays a central role in the classical plasticity theory . |
the concept of _ entanglement _ has played a crucial role in the development of quantum physics . in the early days entanglementwas mainly perceived as the qualitative feature of quantum theory that most strikingly distinguishes it from our classical intuition .the subsequent development of bell s inequalities has made this distinction quantitative , and therefore rendered the non - local features of quantum theory accessible to experimental verification .bell s inequalities may indeed be viewed as an early attempt to quantify the quantum correlations that are responsible for the counterintuitive features of quantum mechanically entangled states . at the time it was almost unimaginable that such quantum correlations could be created in well controlled environments between distinct quantum systems .however , the technological progress of the last few decades means that we are now able to coherently prepare , manipulate , and measure individual quantum systems , as well as create controllable quantum correlations . in parallel with these developments ,quantum correlations have come to be recognized as a novel resource that may be used to perform tasks that are either impossible or very inefficient in the classical realm .these developments have provided the seed for the development of modern quantum information science .given the new found status of entanglement as a resource it is quite natural and important to discover the mathematical structures underlying its theoretical description .we will see that such a description aims to provide answers to three questions about entanglement , namely ( 1 ) its characterisation , ( 2 ) its manipulation and , ( 3 ) its quantification . in the following we aim to provide a tutorial overview summarizing results that have been obtained in connection with these three questions .we will place particular emphasis on developments concerning the _ quantification _ of entanglement , which is essentially the theory of _ entanglement measures_. we will discuss the motivation for studying entanglement measures , and present their implications for the study of quantum information science .we present the basic principles underlying the theory and main results including many useful entanglement monotones and measures as well as explicit useful formulae .we do not , however , present detailed technical derivations .the majority of our review will be concerned with entanglement in bipartite systems with finite and infinite dimensional constituents , for which the most complete understanding has been obtained so far .the multi - party setting will be discussed in less detail as our understanding of this area is still far from satisfactory .it is our hope that this work will give the reader a good first impression of the subject , and will enable them to tackle the extensive literature on this topic .we have endeavoured to be as comprehensive as possible in both covering known results and also in providing extensive references .of course , as in any such work , it is inevitable that we will have made several oversights in this process , and so we encourage the interested reader to study various other interesting review articles ( e.g. ) and of course the original literature .* _ what is entanglement ? _ * any study of entanglement measures must begin with a discussion of what entanglement _ is _ , and how we actually _ use _ it . 
in the followingwe will adopt a highly operational point of view .then the usefulness of entanglement emerges because it allows us to overcome a particular constraint that we will call the _ locc constraint _ - a term that we will shortly explain .this restriction has both technological and fundamental motivations , and arises naturally in many explicit physical settings involving quantum communication across a distance .we will consider these motivations in some detail , starting with the technological ones . in any quantum communication experimentwe would like to be able to distribute quantum particles across distantly separated laboratories .perfect quantum communication is essentially equivalent to perfect entanglement distribution .if we can transport a qubit without any decoherence , then any entanglement shared by that qubit will also be distributed perfectly .conversely , if we can distribute entangled states perfectly then with a small amount of classical communication we may use teleportation to perfectly transmit quantum states . however , in any forseeable experiment involving these processes , the effects of noise will inevitably impair our ability to send quantum states over long distances .one way of trying to overcome this problem is to distribute quantum states by using the noisy quantum channels that are available , but then to try and combat the effects of this noise using higher quality local quantum processes in the distantly separated labs .such local quantum operations ( ` lo ' ) will be much closer to ideal , as they can be performed in well - controlled environments without the decoherence induced by communication over long - distances .however , there is no reason to make the operations of separated labs totally independent .classical communication ( ` cc ' ) can essentially be performed perfectly using standard telecom technologies , and so we may also use such communication to coordinate the quantum actions of the different labs ( see fig .[ fig2 ] ) .it turns out that the ability to perform classical communication is vital for many quantum information protocols - a prominent example being teleportation .these considerations are the technological reasons for the key status of the _ local operations and classical communication _ ` locc ' paradigm , and are a major motivation for their study . 
in a standard quantum communication setting two partiesalice and bob may perform any generalized measurement that is localized to their laboratory and communicate classically .the brick wall indicates that no quantum particles may be exchanged coherently between alice and bob .this set of operations is generally referred to as locc.,width=302 ] however , for the purposes of this article , the fundamental motivations of the locc paradigm are perhaps more important than these technological considerations .we have loosely described entanglement as the _ quantum correlations _ that can occur in many - party quantum states .this leads to the question - how do we define quantum correlations , and what differentiates them from _ classical correlations _ ?the distinction between ` quantum ' effects and ` classical ' effects is frequently a cause of heated debate .however , in the context of quantum information a precise way to define classical correlations is via locc operations .classical correlations can be defined as those that can be generated by locc operations .if we observe a quantum system and find correlations that can not be simulated classically , then we usually attribute them to quantum effects , and hence label them _ quantum correlations _so suppose that we have a noisy quantum state , and we process it using locc operations . if in this process we obtain a state that can be used for some task that can not be simulated by classical correlations , such as violating a bell inequality , then we must not attribute these effects to the locc processing that we have performed , but to quantum correlations that were _ already present _ in the initial state , even if the initial state was quite noisy .this is an extremely important point that is at the heart of the study of entanglement .it is the constraint to locc - operations that elevates entanglement to the status of a resource . using locc - operations as the only other tool, the inherent quantum correlations of entanglement are required to implement general , and therefore nonlocal , quantum operations on two or more parties . as locc - operations alone are insufficient to achieve these transformations , we conclude that entanglement may be defined as the sort of correlations that may not be created by locc alone .allowing classical communication in the set of locc operations means that they are not completely local , and can actually have quite a complicated structure . in order to understand this structure more fully , we must first take a closer look at the notion of general quantum operations and their formal description . * _ quantum operations _ * in quantum information science much use is made of so - called ` generalised measurements ' ( see for a more detailed account of the following basic principles ) .it should be emphasized that such generalised measurements do not go beyond standard quantum mechanics . 
in the usual approach to quantum evolution ,a system is evolved according to unitary operators , or through collapse caused by projective measurements .however , one may consider a more general setting where a system evolves through interactions with other quantum particles in a sequence of three steps : ( 1 ) first we first add ancilla particles , ( 2 ) then we perform joint unitaries and measurements on both the system and ancillae , and finally ( 3 ) we discard some particles on the basis of the measurement outcomes .if the ancillae used in this process are originally uncorrelated with the system , then the evolution can be described by so - called _kraus operators_. if one retains total knowledge of the outcomes obtained during any measurements , then the state corresponding to measurement outcomes occurs with probability and is given by where is the initial state and the are matrices known as _operators ( see part ( a ) of fig . [ fig1 ] for illustration ) .the normalisation of probabilities implies that kraus operators must satisfy . in some situations , for example when a system is interacting with an environment , all or part of the measurement outcomes might not be accessible . in the most extreme case this corresponds to the situation where the ancilla particles are being traced out .then the map is given by which is illustrated in part ( b ) of fig .( [ fig2 ] ) . schematic picture of the action of quantum operations with and without sub - selection ( eqs .( [ eq1 ] ) and ( [ eq2 ] ) respectively ) shown in part ( a ) and part ( b ) respectively . , width=283 ] such a map is often referred to as a _ trace preserving _ quantum operation , whereas operations in which measurement outcomes are retained are sometimes referred to as _ measuring_ quantum operations ( or sometimes also _ selective _ quantum operations , or _ stochastic _ quantum operations , depending upon the context ) .conversely , it can be shown ( see e.g. ) that for _ any _ set of linear operators satisfying we can find a process , composed of the addition of ancillae , joint unitary evolution , and von - neumann measurements , that leads to eq .( [ eq1 ] ) . in tracepreserving operations the should strictly all be matrices of the same dimensions , however , if knowledge of outcomes is retained , then different may have different dimensions .having summarized the basic ingredients of generalised quantum operations , we are in a position to consider approaches that may be taken to determine which operations are implementable by locc . the locc constraint is illustrated in figure [ fig2 ] . in general this set of operations is quite complicated .alice and bob may communicate classically before or after any given round of local actions , and hence in any given round their actions may depend upon the outcomes of previous measuring operations . as a consequence of this complexity ,there is no known simple characterisation of the locc operations .this has motivated the development of larger classes of operations that can be more easily characterised , while still retaining a considerable element of locc - ality .one of the most important such classes is the set of _separable operations_. 
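before discussing separable operations further , the generalised - measurement formalism of eqs . ( [ eq1 ] ) and ( [ eq2 ] ) can be made concrete with a small numerical sketch . the kraus operators used below ( an amplitude - damping - like pair with an arbitrarily chosen parameter ) are our own toy example and are not taken from the text .

```python
# numerical illustration (our own toy example) of a measuring quantum operation,
# eq. ([eq1]), and of the trace-preserving map obtained by ignoring outcomes, eq. ([eq2]).
import numpy as np

g = 0.3                                                  # arbitrary parameter (assumption)
A0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - g)]])     # Kraus operator, outcome 0
A1 = np.array([[0.0, np.sqrt(g)], [0.0, 0.0]])           # Kraus operator, outcome 1
kraus = [A0, A1]

# completeness: sum_i A_i^dagger A_i = identity
completeness = sum(A.conj().T @ A for A in kraus)
assert np.allclose(completeness, np.eye(2))

# an arbitrary input state |+><+|
plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho = np.outer(plus, plus.conj())

# measuring operation: outcome probabilities p_i and post-measurement states rho_i
probs, posts = [], []
for A in kraus:
    unnorm = A @ rho @ A.conj().T
    p = np.real(np.trace(unnorm))
    probs.append(p)
    posts.append(unnorm / p)

# trace-preserving operation: sum of the unnormalised branches
rho_out = sum(A @ rho @ A.conj().T for A in kraus)

print("outcome probabilities:", np.round(probs, 4), " (sum = %.4f)" % sum(probs))
print("average of branches equals trace-preserving output:",
      np.allclose(sum(p * s for p, s in zip(probs, posts)), rho_out))
```

with this formalism in hand we return to the separable operations introduced above .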
these are the operations that can be written in terms of kraus operators with a _ product _decomposition : such that .clearly , any locc operation can be cast in the form of separable operation , as the local kraus operators corresponding to the individual actions of alice and bob can be joined into product kraus operators .however , it is remarkable that the converse is _ not _ true .this was first demonstrated in , where an example task of a separable operation is presented that can not be implemented using locc actions - the example presented there requires a finite amount of quantum communication to implement it , even though the operation is itself separable .it is nevertheless convenient from a mathematical point of view to work with separable operations , as optimising a given task using separable operations provides strong bounds on what may be achieved using locc .sometimes this process can even lead to tight results - one may try to show whether the optimal separable operation may in fact be also implemented using locc , and this can often , but not always , be guaranteed in the presence of symmetries ( see e.g. and refs . therein ) .even more general classes of operations such as positive partial transpose preserving operations ( ppt ) such that is also completely positive , where corresponds to transposition of _ all _ of bob s particles , including ancillas .one can also consider transposition only of those particles belonging to bob that undergo the operation .however , we believe that this does not affect the definition .it is also irrelevant whether the transposition is taken over alice or bob , and so one may simply assert that must be completely positive , where is the transposition of one party .it can be shown that the ppt operations are precisely those operations that preserve the set of ppt states .hence the set of non - ppt operations includes any operation that creates a free ( non - bound ) entangled state out of one that is ppt .hence ppt operations correspond to some notion of locality , and in contrast to separable operations it is relatively easy to check whether a quantum operation is ppt . ]may also be used in the study of entanglement as they have the advantage of a very compact mathematical characterization .after this initial discussion of quantum operations and the locc constraint we are now in a position to consider in more detail the basic properties of entanglement .* _ basic properties of entanglement _ * following our discussion of quantum operations and their natural constraint to local operations and classical communication , we are now in a position to establish some basic facts and definitions regarding entangled states . 
given the wide range of tasks that exploit entanglement one might try to define entanglement as ` that property which is exploited in such protocols ' .however , there is a whole range of such tasks , with a whole range of possible measures of success .this means that situations will almost certainly arise where a state is better than another state for achieving one task , but for achieving a different task is better than .consequently using a task - based approach to quantifying entanglement will certainly not lead to a single unified perspective .however , despite this problem , it is possible to assert some general statements which are valid regardless of what your favourite use of entanglement is , as long as the key set of ` allowed ' operations is the locc class .this will serve us a guide as to how to approach the quantification of entanglement , and so we will discuss some of these statements in detail : _ separable states contain no entanglement . _a state of many parties is said to be _ separable _ , if it can be written in the form where is a probability distribution. these states can trivially be created by locc - alice samples from the distribution , informs all other parties of the outcome , and then each party locally creates and discards the information about the outcome .as these states can be created from scratch by locc they trivially satisfy a local hidden variables model and all their correlations can be described classically .hence , it is quite reasonable to state that separable states contain no entanglement . _ all non - separable states allow some tasks to be achieved better than by locc alone , hence all non - separable states are entangled . _for a long time the quantum information community has used a ` negative ' characterization of the term entanglement essentially defining entangled states as those that can not be created by locc alone . on the other hand, it can be shown that a quantum state may be generated perfectly using locc if and only if it is separable .of course this is a task that becomes trivially possible by locc when the state has been provided as a non - local resource in the first place .more interestingly , it has been shown recently that for any non - separable state , one can find another state whose teleportation fidelity may be enhanced if is also present .this is interesting as it allows us to positively characterize non - separable states as those possessing a useful resource that is not present in separable states .this hence justifies the synonymous use of the terms _ non - separable _ and _entangled_. _ the entanglement of states does not increase under locc transformations ._ given that by locc we can only create separable , ie non - entangled states , this immediately implies the statement that locc can not create entanglement from an unentangled state .indeed , we even have the following stronger fact .suppose that we know that a quantum state can be transformed with certainty to another quantum state using locc operations .then anything that we can do with and locc operations we can also achieve with and locc operations . hence the utility of quantum states can not increase under locc operations , and one can rightfully state that is at least as entangled as . 
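the statement that locc can not increase entanglement is easy to check in small examples . the sketch below is our own construction with arbitrary numbers : alice performs a local two - outcome measurement on one half of a partly entangled pure state , and the average entanglement of the outcomes is compared with that of the input . as a convenient quantifier we use the von neumann entropy of alice s reduced state , which is discussed later in the text as the entropy of entanglement .

```python
# toy check (our own example) that a local measurement followed by classical
# communication cannot increase the average entanglement of a pure state.
import numpy as np

def schmidt_coeffs(psi, dA, dB):
    """Schmidt coefficients (squared) of a bipartite pure state via the SVD."""
    s = np.linalg.svd(psi.reshape(dA, dB), compute_uv=False)
    lam = s**2
    return lam / lam.sum()

def entropy(lam):
    lam = lam[lam > 1e-12]
    return -np.sum(lam * np.log2(lam))

# partly entangled two-qubit state  sqrt(0.7)|00> + sqrt(0.3)|11>
psi = np.zeros(4)
psi[0], psi[3] = np.sqrt(0.7), np.sqrt(0.3)

# an arbitrary local two-outcome measurement on Alice's qubit
K0 = np.diag([np.sqrt(0.9), np.sqrt(0.4)])
K1 = np.diag([np.sqrt(0.1), np.sqrt(0.6)])
assert np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, np.eye(2))

E_in = entropy(schmidt_coeffs(psi, 2, 2))
E_avg = 0.0
for K in (K0, K1):
    phi = np.kron(K, np.eye(2)) @ psi          # unnormalised branch
    p = np.vdot(phi, phi).real
    E_avg += p * entropy(schmidt_coeffs(phi / np.sqrt(p), 2, 2))

print("entanglement before: %.4f ebits" % E_in)
print("average entanglement after the local measurement: %.4f ebits" % E_avg)
assert E_avg <= E_in + 1e-9
```

the same inequality holds for any choice of local kraus operators , which is precisely the monotonicity property discussed above .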
_ entanglement does not change under local unitary operations ._ this property follows from the previous one because local unitaries can be inverted by local unitaries .hence , by the non - increase of entanglement under locc , two states related by local unitaries have an equal amount of entanglement . _ there are maximally entangled states . _ now we have a notion of which states are entangled and are also able , in some cases , to assert that one state is more entangled than another .this naturally raises the question whether there is a _ maximally entangled state _, i.e. one that is more entangled than all others . indeed , at least in two - party systems consisting of two fixed -dimensional sub - systems ( sometimes called qudits ) ,such states exist .it turns out that any pure state that is local unitarily equivalent to is maximally entangled .this is well justified , because as we shall see in the next subsection , any pure or mixed state of two -dimensional systems can be prepared from such states with certainty using only locc operations .we shall later also see that the non - existence of an equivalent statement in multi - particle systems is one of the reasons for the difficulty in establishing a theory of multi - particle entanglement .+ the above considerations have given us the extremes of entanglement - as long as we consider locc as our set of available operations , separable states contain zero entanglement , and we can identify certain states that have maximal entanglement .they also suggest that we can impose some form of ordering - we may say that state is more entangled than a state if we can perform the transformation using locc operations .a key question is whether this method of ordering gives a partial or total order ? to answer this question we must try and find out when one quantum state may be transformed to another using locc operations .before we move on to the discussion of entanglement measures we will consider this question in more detail in the next part .note that the notion that ` _ entanglement does not increase under locc _ ' is implicitly related to our restriction of quantum operations to locc operations - if other restrictions apply , weaker or stronger , then our notion of ` more entangled ' is likely to also change ._ * manipulation of single bi - partite states * _ in the previous section we indicated that for bi - partite systems there is a notion of maximally entangled states that is independent of the specific quantification of entanglement .this is so because there are so - called _ maximally entangled states _ from which all others can be created by locc only ( at least for bipartite systems of fixed maximal dimension ) .we we will show this explicitly here for the case of two qubits and leave the generalization as an exercise to the reader . in the case of two qubits , we will see that the maximally entangled states are those that are local - unitarily equivalent to the state our aim is now to justify this statement by showing that for any bipartite pure state written in a schmidt decomposed form ( see discussion around equation ( [ schmidt ] ) for an explanation of the schmidt decomposition ) : we can find a locc map that takes to with certainty . 
to this end we simply need to write down the kraus operators ( see eq .( [ eq1 ] ) of a valid quantum operation .it is easy to show that the kraus operators defined by satisfy and , so that .it is instructive to see how one can construct this operation physically using only locc transformations .let us first add an ancilla in state to alice which results in the state if we then perform the local unitary operation on alice s two particles , we arrive at finally , a local measurement on alice s ancilla particle now yields two outcomes . if alice finds then bob is informed and does not need to carry out any further operation; if alice finds then bob needs to apply a operation to his particle . in both casesthis results in the desired state .given that we can obtain with certainty any arbitrary pure state starting from , we can also obtain any mixed state .this is because any mixed state can always be written in terms of its eigenvectors as for some set of unitaries and ( this in turn is simply a consequence of the schmidt decomposition ) .it is an easy exercise , left to the reader , to construct the operation that takes to .a natural generalisation of this observation would be to consider locc transformations between general pure states of two parties .although this question is a little more difficult , a complete solution has been developed using the mathematical framework of the theory of _majorization_. the results that have been obtained not only provide necessary and sufficient conditions for the possibility of the locc interconversion between two pure states , they are also constructive as they lead to explicit protocols that achieve the task .these conditions may be expressed most naturally in terms of the _ schmidt coefficients _ of the states involved .it is a useful fact that any bi - partite pure quantum state may be written in the form where the positive real numbers are the _ schmidt - coefficients _ of the state .. the amplitudes can be considered as the matrix elements of a matrix .this matrix hence completely represents the state ( as long as a local basis is specified ) .if we perform the local unitary transformation then the matrix gets transformed as .it is a well established result of matrix analysis - the _ singular value decomposition _ - that any matrix can be _ diagonalised _ into the form by a suitable choice of ( ) , even if is not square .the coefficients are the so - called _ singular values _ of , and correspond to the schmidt coefficients . ]the local unitaries do not affect the entanglement properties , which is why we now write the initial state vector and final state vector in their schmidt - bases , where denotes the dimension of each of the quantum systems .we can take the schmidt coefficients to be given in decreasing order , i.e. , and .the question of the interconvertibility between the states can then be decided from the knowledge of the real schmidt coefficients only , as any two pure states with the same schmidt coefficients may be interconverted straightforwardly by local unitary operations .in it has been shown that a locc transformation converting to with unit probability exists if and only if the are _ majorized _ by , i.e. 
if for all we have that and , where denotes the number of nonzero schmidt - coefficients .various refinements of this result have been found that provide the largest success probabilities for the interconversion between two states by locc , together with the optimal protocol ( according to certain figures of merit ) where such a deterministic interconversion is not possible .these results allow us in principle to decide any question concerning the locc interconversion of pure states by employing techniques from linear programming .it is a direct consequence of the above structures that there are _ incomparable _ states , i.e. pairs of states such that neither can be converted into the other with certainty .these states are called incomparable as neither can be viewed as more entangled than the other .note that borrowed entanglement can make some pairs of incomparable states comparable again .indeed , there are known examples where the locc transformation of is not possible with probability one , but where given a suitable entangled state the locc transformation of is possible with certainty .this phenomenon is now called _ entanglement catalysis _ , as the state is returned unchanged after the transformation , and acts much like a catalyst .the majorization condition also reveals another disadvantageous feature of the single copy setting - there can be _discontinuities_. for instance , it can be shown that the maximal probability of success for the locc transformation from to is unity , while the probability for the transformation to , i.e. even if the target states in the two examples are arbitrarily close . that the probability of success for the later transformation is zerocan also be concluded easily from the fact that the schmidt - number , i.e. the number of non - vanishing schmidt - coefficients , can not be increased in an locc protocol , even probabilistically .the key problem is that we are being too restrictive in asking for _ exact _ state transformations .physically , we should be perfectly happy if we can come very close to a desired state .indeed , admitting a small but finite value of there will be a finite probability to achieve the transformation .this removes the above discontinuity , but the success probability will now depend on the size of the imprecision that we allow .the following subsection will serve to overcome this problem for pure states by presenting a natural definition of interconvertibility in the presence of vanishing imprecisions , a definition that will constitute our first entanglement measure ._ * state manipulation in the asymptotic limit * _ the study of the locc transformation of pure states has so far enabled us to justify the concept of maximally entangled states and has also permitted us , in some cases , to assert that one state is more entangled than another .however , we know that exact locc transformations can only induce a partial order on the set of quantum states .the situation is even more complex for _ mixed _ states , where even the question of when it is possible to locc transform one state into another is a difficult problem with no transparent solution at the time of writing .all this means that if we want to give a definite answer as to whether one state is more entangled than another for any pair of states , it will be necessary to consider a more general setting . in this contexta very natural way to compare and quantify entanglement is to study locc transformations of states in the so called _asymptotic regime_. 
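before moving to the asymptotic regime it is worth noting that the single copy majorization criterion is easy to evaluate numerically . the following sketch uses arbitrarily chosen example states ( our own , not taken from the text ) : it extracts the schmidt coefficients via a singular value decomposition , checks the majorization condition in both directions , and exhibits an incomparable pair .

```python
# illustration (arbitrary example states) of the single-copy majorization criterion
# for deterministic LOCC conversion between bipartite pure states.
import numpy as np

def schmidt(psi, dA, dB):
    """decreasingly ordered Schmidt coefficients (squared) of a pure state."""
    s = np.linalg.svd(psi.reshape(dA, dB), compute_uv=False)
    lam = np.sort(s**2)[::-1]
    return lam / lam.sum()

def majorizes(lam_tgt, lam_src):
    """True if the target distribution majorizes the source one,
    i.e. the LOCC conversion source -> target is possible with certainty."""
    n = max(len(lam_tgt), len(lam_src))
    a = np.zeros(n); a[:len(lam_src)] = lam_src
    b = np.zeros(n); b[:len(lam_tgt)] = lam_tgt
    return np.all(np.cumsum(b) >= np.cumsum(a) - 1e-12)

d = 3
def ket(coeffs):                       # state sum_i sqrt(c_i) |ii> on two qutrits
    psi = np.zeros(d * d)
    for i, c in enumerate(coeffs):
        psi[i * d + i] = np.sqrt(c)
    return psi

psi = ket([0.5, 0.3, 0.2])             # source
phi = ket([0.6, 0.3, 0.1])             # target: psi -> phi is possible
chi = ket([0.55, 0.40, 0.05])          # incomparable with phi

lp, lf, lc = (schmidt(s, d, d) for s in (psi, phi, chi))
print("psi -> phi possible:", majorizes(lf, lp))   # True
print("phi -> psi possible:", majorizes(lp, lf))   # False
print("phi -> chi possible:", majorizes(lc, lf),
      " chi -> phi possible:", majorizes(lf, lc))  # both False: incomparable
```

the last pair can be converted in neither direction with certainty , which is the incomparability phenomenon described above .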
instead of asking whether for a single pair of particles the initial state may be transformed to a final state by locc operations , we may ask whether for some large integers we can implement the ` wholesale ' transformation .the largest ratio for which one may achieve this would then indicate the relative entanglement content of these two states . considering the many - copy settingallows each party to perform collective operations on ( their shares of ) many copies of the states in question .such a many copy regime provides many more degrees of freedom , and in fact paves part of the way to a full classification of pure entangled states . to pave the rest of the route we will also need to discusswhat kind of approximations we might admit for the output of the transformations .there are two basic approaches to this problem - we can consider either _ exact _ or _ asymptotically exact _ transformations .the distinction between these two approaches is important , as they lead to different scenarios that yield different answers . in the _ exact _ regimewe allow no errors at all - we must determine whether the transformation can be achieved perfectly and with success probability for a given value of and .the supremum of all achievable rates is denoted by , and carries significance as a measure of the exact locc ` exchange rate ' between states .this quantity may be explored and gives some interesting results . however , from a physical point of view one may feel that the restriction to exact transformations is too stringent .after all , it should be quite acceptable to consider approximate transformations that become arbitrarily precise when going to the asymptotic limit .asymptotically vanishing imperfections , as quantified by the trace norm ( i.e. tr ) , will lead to vanishingly small changes in measurements of bounded observables on the output .this leads to the second approach to approximate state transformations , namely that of _ asymptotically exact _ state transformations , and this is the setting that we will consider for the remainder of this work . in thissetting we consider imperfect transformations between large blocks of states , such that in the limit of large block sizes the imperfections vanish .for example , for a large number of copies of , one transforms to an output state that approximates very well for some large .if , in the limit of and for fixed , the approximation of by becomes arbitrarily good , then the rate is said to be _achievable_. one can use the optimal ( supremal ) achievable rate as a measure of the relative entanglement content of in the asymptotic setting .this situation is reminiscent of shannon compression in classical information theory - where the compression process loses all imperfections in the limit of infinite block sizes as long as the compression rate is below a threshold .clearly the asymptotically exact regime is less strongly constrained than the exact regime , so that . given that we are considering two limiting processes it is not clear whether the two quantities are actually equal and it can be rigorously demonstrated that they are different in general , see e.g. . such an asymptotic approach will alleviate some of the problems that we encountered in the previous section .it turns out that the asymptotic setting yields a unique total order on bi - partite pure states , and as a consequence , leads to a very natural measure of entanglement that is essentially unique . 
to this endlet us start by defining our first entanglement measure , which happens also to be one of the most important measures - the _ entanglement cost _, . for a given state measure quantifies the maximal possible rate at which one can convert blocks of _ 2-qubit _ maximally entangled states into output states that approximate many copies of , such that the approximations become vanishingly small in the limit of large block sizes .if we denote a general trace preserving locc operation by , and write for the density operator corresponding to the maximally entangled state vector in dimensions , i.e. , then the entanglement cost may be defined as = 0 \right\}\ ] ] where d is a suitable measure of distance .a variety of possible distance measures may be considered .it has been shown that the definition of entanglement cost is independent of the choice of distance function , as long as these functions are equivalent to the trace norm in a way that is sufficiently independent of dimension ( see for further explanation ) .hence we will fix the trace norm distance , i.e. , as our canonical choice of distance function .it may trouble the reader that in the definition of we have not actually taken input states that are blocks of copies of 2-qubit maximally entangled states , but instead have chosen as inputs single maximally entangled states between subsystems of increasing dimensions .however , these two approaches are equivalent because ( for integral ) is local unitarily equivalent to .the entanglement cost is an important measure because it quantifies a wholesale ` exchange rate ' for converting maximally entangled states to by locc alone .maximally entangled states are in essence the ` gold standard currency ' with which one would like to compare all quantum states .although computing is extremely difficult , we will later discuss its important implications for the study of channel capacities , in particular via another important and closely related entanglement measure known as the _ entanglement of formation _, .just as measures how many maximally entangled states are required to create copies of by locc alone , we can ask about the reverse process : at what rate may we obtain maximally entangled states ( of two qubits ) from an input supply of states of the form .this process is known in the literature either as _ entanglement distillation _ , or as _ entanglement concentration _( usually reserved for the pure state case ) . the efficiency with which we can achieve this process defines another important basic asymptotic entanglement measure which is the _ distillable entanglement _ , . again we allow the output of the procedure to _ approximate _ many copies of a maximally entangled state , as the exact transformation from to even one pure maximally entangled state is in general impossible . in analogy to the definition of , we can then make the precise mathematical definition of as = 0 \right\}.\ ] ] is an important measure because if entanglement is used in a two party quantum information protocol , then it is usually required in the form of maximally entangled states .so tells us the rate at which noisy mixed states may be converted back into the ` gold standard ' singlet state by locc . 
in defining have ignored a couple of important issues .firstly , our locc protocols are always taken to be trace preserving .however , one could conceivably allow probabilistic protocols that have varying degrees of success depending upon various measurement outcomes .fortunately , a thorough analysis by rains shows that taking into account a wide diversity of possible success measures still leads to the same notion of distillable entanglement .secondly , we have always used 2-qubit maximally entangled states as our ` gold standard ' .if we use other entangled _ pure _ states , perhaps even on higher dimensional hilbert spaces , do we arrive at significantly altered definitions ?we will very shortly see that this is not the case so there is no loss of generality in taking singlet states as our target .given these two entanglement measures it is natural to ask whether , i.e. whether entanglement transformations become _ reversible _ in the asymptotic limit .this is indeed the case for pure state transformations where and are identical and equal to the _ entropy of entanglement _ .the entropy of entanglement for a pure state is defined as where denotes the von - neumann entropy ] and ( ) are states proportional to the projectors onto the anti - symmetric ( symmetric ) subspace .it can be shown that where .it is notable that while this expression is continuous in it is not differentiable for .these results can be extended to the more general class of states that is invariant under the action of , where is an orthogonal transformation ._ other distance based measures _ in eq .( [ distance ] ) one may consider replacing the quantum relative entropy by different distance measures to quantify how far a particular state is from a chosen set of disentangled states .many interesting examples of other functions that can be used for this purpose may be found in the literature ( see e.g. ) .it is also worth noting that the relative entropy functional is _ asymmetric _ , in that .this is connected with asymmetries that can occur in the discrimination of probability distributions .one can consider reversing the arguments and tentatively define an locc monotone .the resulting function has the advantage of being additive , but unfortunately it has the problem that it can be infinite on pure states .an additive measure that does not suffer from this deficiency will be presented later on in the form of the ` squashed ' entanglement . _ the distillable secret key _ the distillable secret key , , quantifies the asymptotic rate at which alice and bob may distill secret classical bits from many copies of a shared quantum state .alice and bob may use a shared quantum state to distribute a classical bit of information - for instance if they share a state , then they may measure it in the basis to obtain an identical classical bit , which could form the basis of a cryptographic protocol such as one - time pad ( see e.g. for a description of one - time pad ) . 
however ,if we think of a given bipartite mixed state as the reduction of a pure state held between alice , bob , and a malicious third party eve , then it is possible that eve could obtain information about the secret bit from measurements on her subsystem .in defining it is assumed that each copy of is purified _ independently _ of the other copies .if we reconsider the example of the state , we can easily see that it is not secure .for instance , it could actually be a reduction of a ghz state held between alice , bob and eve , in which case eve could also have complete information about the ` secret ' bit .the quantity is hence zero for this state , and is in fact zero for all separable states .one way of getting around the problem of eve is to use entanglement distillation .if alice and bob distill bipartite pure states , then because pure states must be uncorrelated with any environment , any measurements on those pure states will be uncorrelated with eve . moreover , if the distilled pure states are epr pairs , then because each local outcome occurs with equal probability , each epr pair may be used to distribute exactly 1 secret bit of information .this means that . however , entanglement distillation is not the only means by which a secret key can be distributed , it examples of ppt states are known where , even though for all ppt states .it has also been shown that the regularized relative entropy with respect to separable states is an upper bound to the distillable secret key , . _ logarithmic negativity _the partial transposition with respect to party of a bipartite state expanded in a given local orthonormal basis as is defined as the spectrum of the partial transposition of a density matrix is independent of the choice of local basis , and is independent of whether the partial transposition is taken over party or party .the positivity of the partial transpose of a state is a necessary condition for separability , and is sufficient to prove that for a given state .the quantity known as the _ negativity _ , , is an entanglement monotone that attempts to quantify the negativity in the spectrum of the partial transpose .we will define the negativity as where is the trace norm . while being a convex entanglement monotone , the negativity suffers the deficiency that it is not additive . a more suitable choice for an entanglement monotonemay therefore be the so called _ logarithmic negativity _ which is defined as the monotonicity of the negativity immediately implies that is an entanglement monotone that can not increase under the more restrictive class of deterministic locc operations , ie . while this is not sufficient to qualify as an entanglement monotone it can also be proven that it is a monotone under probabilistic locc transformations .it is additive by construction but fails to be convex .although is manifestly continuous , it is not asymptotically continuous , and hence does not reduce to the entropy of entanglement on all pure states .the major practical advantage of is that it can be calculated very easily .in addition it also has various operational interpretations as an upper bound to , a bound on teleportation capacity , and an asymptotic entanglement cost for exact preparation under the set of ppt operations . 
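because the negativity and the logarithmic negativity only require a partial transposition and an eigenvalue computation , they are straightforward to evaluate . the sketch below is our own minimal example , a bell state mixed with white noise , and uses the common normalisation n(ρ) = ( ||ρ^{t_b}||_1 - 1 ) / 2 and e_n = log_2 ||ρ^{t_b}||_1 , which may differ from the convention intended above by a constant factor .

```python
# minimal example: negativity and logarithmic negativity of two-qubit states,
# here for the family  rho(p) = p |phi+><phi+| + (1-p) I/4  (our own illustration).
import numpy as np

def partial_transpose_B(rho, dA, dB):
    r = rho.reshape(dA, dB, dA, dB)            # indices: a, b, a', b'
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)   # transpose b <-> b'

def negativity(rho, dA, dB):
    evals = np.linalg.eigvalsh(partial_transpose_B(rho, dA, dB))
    trace_norm = np.sum(np.abs(evals))
    return (trace_norm - 1.0) / 2.0, np.log2(trace_norm)

phi = np.zeros(4); phi[0] = phi[3] = 1.0 / np.sqrt(2.0)   # |phi+> = (|00>+|11>)/sqrt(2)
proj = np.outer(phi, phi)

for p in (1.0, 0.5, 1.0/3.0, 0.2):
    rho = p * proj + (1.0 - p) * np.eye(4) / 4.0
    N, EN = negativity(rho, 2, 2)
    print("p = %.3f   negativity = %.4f   log-negativity = %.4f" % (p, N, EN))
```

for this family the negativity vanishes exactly when the state becomes ppt ( at p = 1/3 ) , and the maximally entangled case p = 1 gives e_n = 1 .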
_ the rains bound _ the logarithmic negativity , , can also been combined with a relative entropy concept to give another monotone known as the _ rains bound _ , which is defined as \ , .\ ] ] the function that is to be minimized is not convex which suggests the existence of local minima making the numerical minimization infeasible .nevertheless , this quantity is of considerable interest as one can observe immediately that is a lower bound to as vanishes for states that have a positive partial transpose .it can also be shown that is an upper bound to the distillable entanglement .it is interesting to observe that for werner states happens to be equal to , a connection that has been explored in more detail in . _ squashed entanglement _another interesting entanglement measure is the squashed entanglement ( see also ) which is defined as \nonumber\\ & ~ & { \rm where : } \nonumber \\ & ~ & i(\rho_{abe } ) : = s(\rho_{ae})+s(\rho_{be})-s(\rho_{abe})-s(\rho_{e})\ , .\nonumber\end{aligned}\ ] ] in this definition is the _ quantum conditional mutual information _, which is often also denoted as .the motivation behind comes from related quantities in classical cryptography that determine correlations between two communicating parties and an eavesdropper .the squashed entanglement is a convex entanglement monotone that is a lower bound to and an upper bound to , and is hence automatically equal to on pure states .it is also additive on tensor products , and is hence a useful non - trivial lower bound to .it has furthermore been proven that the squashed entanglement is continuous , which is a non - trivial statement because in principle the minimization must be carried out over _ all _ possible extensions , including infinite dimensional ones . note that despite the complexity of the minimization task one may find upper bounds on the squashed entanglement from explicit guesses which can be surprisingly sharp . for the totally anti - symmetric state for two qutrits one obtains immediately ( see example 9 in ) that which is very close to the sharpest known upper bound on the distillable entanglement for this state which is .the squashed entanglement is also known to be lockable , and is an upper bound to the secret distillable key . _ robustness quantities and norm based monotones _this paragraph discusses various other approaches to entanglement measures and then moves on to demonstrate that they and some of the measures discussed previously can actually be placed on the same footing . _robustness of entanglement _ another approach to quantifying entanglement is to ask how much noise must be mixed in with a particular quantum state before it becomes separable . for example measures the minimal amount of _ global _ state that must be mixed in to make separable . 
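the noise - mixing quantity just defined can be illustrated with a small two - qubit experiment . for 2x2 systems the partial transposition test is an exact separability criterion , so a ppt check is a valid certificate here . the sketch below ( our own toy example , not a computation of the quantity itself ) shows that a bell state needs a white - noise weight of about 2/3 to become separable , whereas mixing in the classically correlated state ( |01><01| + |10><10| ) / 2 already suffices at weight 1/2 , which is why the minimisation over the admixed state matters .

```python
# toy illustration of noise robustness: smallest mixing weight that makes a Bell
# state separable, for two different admixed states. for 2x2 systems PPT is
# equivalent to separability, so a PPT test suffices as a certificate.
import numpy as np

def partial_transpose_B(rho, dA=2, dB=2):
    r = rho.reshape(dA, dB, dA, dB)
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

def is_separable_2x2(rho):
    return np.min(np.linalg.eigvalsh(partial_transpose_B(rho))) > -1e-12

def mixing_threshold(rho, omega, tol=1e-8):
    """smallest lam with (1-lam)*rho + lam*omega separable (assumes monotonicity)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_separable_2x2((1.0 - mid) * rho + mid * omega):
            hi = mid
        else:
            lo = mid
    return hi

bell = np.zeros(4); bell[0] = bell[3] = 1.0 / np.sqrt(2.0)
P_bell = np.outer(bell, bell)

white_noise = np.eye(4) / 4.0
corr_noise = np.diag([0.0, 0.5, 0.5, 0.0])     # (|01><01| + |10><10|) / 2

print("white noise threshold      : %.4f" % mixing_threshold(P_bell, white_noise))  # ~ 2/3
print("correlated noise threshold : %.4f" % mixing_threshold(P_bell, corr_noise))   # ~ 1/2
```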
despite the intuitive significance of equation ( [ lam ] ) , for mathematical reasonsit is more convenient to parameterize this noise in a different way : this quantity , , is known as the _ global robustness _ of entanglement , and is monotonically related to by the identity .however , the advantage of using rather than is that the first quantity has very natural mathematical properties that we shall shortly discuss .the global robustness mixes in arbitrary noise to reach a separable state , however , one can also consider noise of different forms , leading to other forms of robustness quantity .for instance the earliest such quantity to be defined , which is simply called the _ robustness _ , , is defined exactly as except that the noise must be drawn from the set of separable states .one can also replace the set of separable states in the above definitions with the set of ppt states , or the set of non - distillable states .the robustness monotones can often be calculated or at least bounded non - trivially , and have found applications in areas such as bounding fault tolerance ._ best separable approximation _ rather than mixing in quantum states to destroy entanglement one may also consider the question of how much of a separable state is contained in an entangled state .the ensuing monotone is known as the _ best separable approximation _ , which we define as this measure is not easy to compute analytically or numerically .note however , that replacing the set sep by the set ppt allows us to write this problem as a semidefinite programme for which efficient algorithms are known ._ one shape fits all _ it turns out that the robustness quantities , the best separable approximation as well as the negativity are all part of a general family of entanglement monotones .such connections were first observed in , where it was noted that the negativity and robustness are part of a general family of monotones that can be constructed via a concept known as a _ base norm _ .we will explain this connection in the following .however , our discussion will deviate a little from the arguments presented in , as this will allow us to include a wider family of entanglement monotones such as and . to construct this family of monotoneswe require two sets of operators satisfying the following conditions : ( a ) are closed under locc operations ( even measuring ones ) , ( b ) are convex cones ( i.e. also closed under multiplication by non - negative scalars ) , ( c ) each member of can be written in the form - semidefinite operator , where are fixed real constants , and ( d ) any hermitian operator may be expanded as : where are normalised to have trace respectively , and .given two such sets and any state we may define an entanglement monotone as follows : note that if are also constrained to be quantum states ( i.e. ) , then we may rewrite this equation : hence equation ( [ rdef ] ) defines a whole family of quantities that have a similar structure to robustness quantities . in the more general case where , the quantities will not be robustness measures , but they will still be entanglement monotones . 
this can be shown as follows , where we will suppress the subscripts for clarity .suppose that a locc operation acts on to give output with probability .suppose also that the optimum expansion of the initial state is : then the output ensemble can be written as : where now because of the structure of each operator in , we have that , and hence for each outcome the expansion in ( [ expansion ] ) is a valid decomposition .this means that the average output entanglement satisfies : and hence the give entanglement monotones .it is also not difficult to show that the are convex functions . in the casethat the two sets and are identical , then the quantity can be shown to be a norm , and in fact it is a norm of the so - called _ base norm _ kind .as can be written as a simple function of the corresponding , this gives the robustness quantities a further interesting mathematical structure .all the monotones mentioned at the beginning of this subsection fit into this family - the ` _ robustness _ ' arises when both are the set of separable operators ; the ` _ best separable approximation _ 'arises when is the set of separable operators , is the set semi - definite operators ; the global robustness arises when is the set of separable operators , is the set of all positive semidefinite operators ; the negativity arises when where both are the set of normalised hermitian matrices with positive partial transposition .note that the ` _ random robustness _ ' is not a monotone and so does not fit into this scheme , for definition and proof of non - monotonicity see . _ the greatest cross norm monotone _another form of norm based entanglement monotone is the _ cross norm _ monotone proposed in .the _ greatest cross norm _ of an operator is defined as : \label{rud}\ ] ] where is the trace norm , and the infimum is taken over all decompositions of into finite sums of product operators . for finite dimensionsit can be shown that a density matrix is separable iff =1 , and that the quantity : is an entanglement monotone . as it is expressed as a complicated variational expression , can be difficult to calculate . however , for pure states and cases of high symmetry it may often be computed exactly .although does not fit precisely into the family of base norm monotones discussed above , there is a relationship .if the sum in ( [ rud ] ) is restricted to _ hermitian _ and ( which is of course only allowed if is hermitian ) , then we recover precisely the base norm , where are taken as the set of separable states .hence is an upper bound to the robustness . _ entanglement witness monotones _ entanglement witnesses are tools used to try to determine whether a state is separable or not . a hermitian operator is defined as an entanglement witness if : hence acts as a linear hyperplane separating some entangled states from the convex set of separable ones .an entanglement witness is a hermitean operator defining a hyperplane in the space of positive operators such that for all separable states we have and there is a for which . ]many entanglement witnesses are known , and in fact the chsh inequalities are well known examples .one can take a suitable entanglement witness ( ew ) and use the amount of ` violation ' as a measure of the non - separability of a given state .many entanglement monotones can be constructed by choosing ( bounded ) sets of of ews and defining monotones as the minimal violation over all witnesses taken from the chosen set - see e.g. 
.it turns out that this approach also offers another unified way of understanding the robustness and negativity measures discussed in the previous item .this concludes our short survey of basic entanglement measures .our review has mostly been formulated for two - party systems with finite dimensional constituents . in the remaining two subsectionswe will briefly summarize the problems that we are faced with in more general settings - where we are faced with more parties and infinite dimensional systems .we will present some of the results that have been obtained so far , and highlight some unanswered questions .in the preceding sections we have explicitly considered only finite dimensional systems . however , one may also develop a theory of entanglement for the infinite dimensional setting .this setting is often also referred to as the _ continuous variable _ regime , as infinite dimensional pure states are usually considered as wavefunctions in continuous position or momentum variables .the quantum harmonic oscillator is an important example of a physical system that needs to be described in an infinite dimensional hilbert space , as it is realized in many experimental settings , e.g. as modes of quantized light . _general states _a naive approach to infinite dimensional systems encounters several complications , in particular with regards to continuity .firstly , we will need to make some minimal requirements on the hilbert space , namely that the system has the property that tr to avoid pathological behaviour due to limit points in the spectrum .the harmonic oscillator is an example of a system satisfying this constraint .even so , without further constraints , entanglement measures can not be continuous because by direct construction one may demonstrate that in any arbitrarily small neighborhood of a pure product state , there exist pure states with _ arbitrarily strong _ entanglement as measured by the entropy of entanglement .the following example makes this explicit . chose where defined by where and are orthonormal bases .then converges to in trace - norm , i.e. , while .obviously , is not continuous around the state .however , this perhaps surprising feature can only occur if the mean energy of the states grows unlimited in . if one imposes additional constraints such as restricting attention to states withbounded mean energy then one finds that the continuity of entanglement measures can be recovered .more precisely , given the hamiltonian and the set \le m\} ] , where we define the _ symplectic _ matrix as follows : .\ ] ] states may now also be characterized by functions that are defined on phase space . given a vector , the weyl or glauber operator is defined as : these operators generate displacements in phase space , and are used to define the _ characteristic function _ of : .\ ] ] this can be inverted by the transformation : and hence the characteristic function uniquely specifies the state .gaussian states are now defined as those states whose characteristic function is a gaussian , i.e. , where is a -matrix and is a vector . in defining gaussian states in this wayit is easy to see that the reduced density matrix of any gaussian state is also gaussian - to compute the characteristic function of a reduced density matrix we simply set to zero any components of corresponding to the modes being traced out . 
as a consequence of the above definition, a gaussian characteristic function can be characterized via its first and second moments only , such that a gaussian state of modes requires only real parameters for its full description , which is polynomial rather than exponential in .the first moments form the displacement vector ] which is exactly the case if satisfies this condition is satisfied by the real matrices that form the so - called real symplectic group .its elements are called symplectic or canonical transformations .it is useful to know that any orthogonal transformation is symplectic . to any symplectic transformation also are symplectic .the inverse of is given by and the determinant of every symplectic matrix is = 1 ] is only 2-entangled .however , given two copies of this state the ` classical flag ' particle can enable alice to obtain ( with some probability ) one epr pair with bob , and one with charlie .she can then use these epr pairs and teleportation to distribute any three party entangled state she chooses .states of three qubits displaying a similar phenomenon can also be constructed .hence we are faced with a subtle dilemma - either this notion of ` k - entanglement ' is not closed under locc , or it is not closed under taking many copies of states .note however that these states may still have relevance for example in the study of fault - tolerant quantum computation ._ quantifying multi - partite entanglement _ already in the bi - partite setting it was realized that there are many non - equivalent ways to quantify entanglement . this concerned mainly the mixed state case , while in the pure state case the entropy of entanglement is a distinguished measure of entanglement . in the multipartitesetting this situation changes .as was discussed above it appears difficult to establish a common currency of multipartite entanglement even for pure states due to the lack of asymptotically reversible interconversion of quantum states .the possibility to define k - entangled states and the ensuing ambiguities lead to additional difficulties in the definition of entanglement measures in multi - partite systems .owing to this there are many ways to go about quantifying multipartite entanglement. some of these measures will be natural generalizations from the bi - partite setting while others will be specific to the multi - partite setting .these measures and their known properties will be the subject of the remainder of this section ._ entanglement cost and distillable entanglement _ in the bi - partite setting it was possible to define unambiguously the entanglement of pure states establishing a common currency for entanglement .this then formed the basis for unique definitions of the entanglement cost and the distillable entanglement .the distillable entanglement determined the largest rate , in the asymptotic limit , at which one may obtain pure maximally entangled states from an initial supply of mixed entangled states using locc only .however , in the multi - particle setting there is no unique target state that one may aim for .one may of course provide a target state specific definition of distillable entanglement , for example the largest rate at which one may prepare ghz states , cluster states or any other class that one is interested in . 
as these individual resources are not asymptotically equivalent , each of these measures will capture different properties of the state in question . one encounters similar problems when attempting to define the entanglement cost . again , one may use singlet states as the resource from which to construct the state by locc , but one may also consider other resources such as ghz or w states . for each of these settings one may then ask for the best rate at which one can create a target state using locc in the asymptotic limit . therefore we obtain a variety of possible definitions of entanglement costs . while the interpretation of each of these measures is clear , it is equally evident that it is not possible to arrive at a unique picture from abstract considerations alone . the operational point of view becomes much more important , as different resources may be readily available in different experimental settings , thereby motivating different definitions of the entanglement cost and the distillable entanglement . _ relative entropic measures . distance measures _ in the bipartite setting we have discussed various distance based measures in which one minimizes the distance of a state with respect to a set of states that does not increase in size under locc . one such set was that of separable states , and a particularly important distance measure is the relative entropy ; this led to the relative entropy of entanglement . as we discussed in the first part of this section , the most natural extension of the definition of separable states in the multipartite setting is given by convex mixtures of product states of all the parties , where the factors in each product are labelled by the different parties . in analogy with the bipartite definition one can hence define a multipartite relative entropy measure , where the minimization is now taken over the set of multipartite separable states . as in the bipartite case the resulting quantity is an entanglement monotone . recall that the bipartite relative entropy of entanglement coincides , for pure states , with the entropy of entanglement ; therefore , on pure states , that measure is additive , while it is known to be sub - additive on mixed states . remarkably , the multipartite relative entropy of entanglement is _ not _ even additive for pure states - a counterexample is provided by the totally anti - symmetric state , whose coefficients are given by the totally anti - symmetric tensor . one can also compute the relative entropy of entanglement for some other tri - partite states . examples of particular importance in this respect are the w - state , for which an explicit value can be found , and certain related families of states ; more examples can be found quite easily . also , in our discussion of multi - partite entanglement we introduced the notion of k - entangled states . consider the set of k - entangled states of an n - partite system . if we explicitly consider the single copy setting , then it is clear that this set does not increase under locc .
as a consequence it can be used as the basis for generalizations of the relative entropy of entanglement , simply replacing the set of separable states above by the set of k - entangled states . we have learnt however that this set may grow when allowing for two or more copies of the state . this immediately implies that the so constructed measure will exhibit sub - additivity again . given that even the standard definition of the multi - partite relative entropy of entanglement is sub - additive , this should not be regarded as a deficiency . indeed , this subadditivity may be viewed as a strength , as it could lead to particularly strong bounds on the associated distillable entanglement . exactly the same principle may be used to extend any of the distance based entanglement quantifiers to multi - party systems - one simply picks a suitable definition of the ` unentangled ' set ( i.e. a set which is closed under locc operations , and complies with some notion of locality ) , and then defines the minimal distance from this set as the entanglement measure . as stated earlier , one may also replace the class of separable states with other classes of limited entanglement - e.g. states containing only bipartite entanglement . such classes are _ not _ in general closed under locc in the many copy setting , and so the resulting quantities may exhibit strong subadditivity and their entanglement monotonicity needs to be verified carefully . _ robustness measures . norm based measures . _ the robustness measures discussed in the bipartite case extend straightforwardly to the multiparty case . in the bipartite case we constructed the robustness monotones from two sets of operators that were closed under locc operations , and in addition satisfied certain convexity and ` basis ' properties . to define analogous monotones in the multiparty case we must choose sets of multiparty operators that have these properties . one could for example choose the sets to be the set of k - separable positive operators , for any integer k . _ entanglement of assistance . localizable entanglement . collaborative localizable entanglement . _ one way of characterizing the entanglement present in a multiparty state is to understand how local actions by the parties may generate entanglement between two distinguished parties . for example , in a ghz state of three parties , it is possible to generate an epr pair between any two parties using only locc operations - if one party measures her qubit in the x ( i.e. $|\pm\rangle$ ) basis , then there will be a residual epr pair between the remaining two parties . this is the case even though the reduced state of the two parties is by itself unentangled . the first attempt to quantify this phenomenon was the _ entanglement of assistance _ . the _ entanglement of assistance _ is a property of 3-party states , and quantifies the maximal bipartite entanglement that can be generated on average between two parties if party c measures her particle and communicates the result to the other two parties . a related measure known as the _ localizable entanglement _ was proposed and investigated for the general multiparty case - this is defined as the maximum entanglement that can be generated between two parties if all _ remaining _ parties act using locc on the particles that they possess . both these measures require an underlying measure of bipartite entanglement to quantify the entanglement between the two singled - out parties .
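the ghz example above is easy to verify numerically . the following sketch ( our own illustration , not part of the original text ) measures the third qubit of a ghz state in the x basis and computes the entropy of entanglement of the remaining pair , as well as the unmeasured two - qubit reduced state .

```python
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus  = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

def kron(*ops):
    out = np.array([1.0])
    for o in ops:
        out = np.kron(out, o)
    return out

ghz = (kron(ket0, ket0, ket0) + kron(ket1, ket1, ket1)) / np.sqrt(2)

def entropy_of_entanglement(psi_ab, dim_a, dim_b):
    s = np.linalg.svd(psi_ab.reshape(dim_a, dim_b), compute_uv=False)
    p = s**2
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

# reduced state of qubits 1 and 2 without any measurement: a classical mixture, hence unentangled
psi = ghz.reshape(2, 2, 2)
rho12 = np.einsum('ijk,lmk->ijlm', psi, psi.conj()).reshape(4, 4)
print("reduced state of qubits 1,2 (no measurement):\n", np.round(rho12, 3))

# measure qubit 3 in the x basis; either outcome leaves qubits 1,2 in a maximally entangled pure state
for outcome in (plus, minus):
    post = np.einsum('ijk,k->ij', psi, outcome).reshape(4)
    post = post / np.linalg.norm(post)
    print("entropy of entanglement of the remaining pair:",
          round(entropy_of_entanglement(post, 2, 2), 6))
```

both outcomes yield one ebit between the remaining pair , so the localizable entanglement of the ghz state equals one ebit even though the unmeasured pair is in a separable mixture .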
in the original articles the pure state entropy of entanglement was used ; however , one can envisage the use of other entanglement measures . the localizable entanglement has been shown to have interesting relations to correlation functions in condensed matter systems . as multiparty entanglement quantifiers , both the entanglement of assistance and the localizable entanglement have the drawback that they can deterministically _ increase _ under locc operations between all parties . this phenomenon occurs because these measures are defined under the restriction that alice and bob can not be involved in classical communication with any other parties - it turns out that in some situations allowing this communication can increase the entanglement that can be obtained between alice and bob . this observation led to the definition of the _ collaborative _ localizable entanglement as the maximal bipartite entanglement ( according to some chosen measure ) that may be obtained ( on average ) between alice and bob using locc operations involving _ all _ parties . it is clear that by definition these collaborative entanglement measures are entanglement monotones . it is interesting to note that although the bare localizable entanglement is not a monotone , its regularised version _ is _ a monotone for multiparty pure states . it has been shown that the regularised version of the localizable entanglement reduces to the minimal entropy of entanglement across any bipartite cut that divides alice and bob , which is clearly a locc monotone quantity by the previous discussion of bipartite entanglement measures . _ geometric measure . _ in the case of pure multiparty states one could try to quantify the ` distance ' from the set of separable states by considering various functions of the maximal overlap with a product state . one interesting choice of function is the logarithm . this was used to define the following entanglement quantifier : minus the logarithm of the maximal overlap , where the supremum is taken over all pure product states . this quantity is non - negative , equals zero iff the state is separable , and is manifestly invariant under local unitaries . one can extend this quantity to mixed states using a convex roof construction . however , it is not an entanglement monotone , and it is _ not _ additive for multiparty pure states . nevertheless , it is worthy of investigation as it has useful connections to other entanglement measures , and also has an interesting relationship with the question of channel capacity additivity . we could also have described this quantity as a norm based measure , as the maximal overlap is given by a norm ( of vectors ) known to mathematicians as the _ injective tensor norm _ . _ ` tangles ' and related quantities . entanglement quantification by local invariants . _ an interesting property of bipartite entanglement is that it tends to be _ monogamous _ , in the sense that if three parties a , b and c have the same dimensions , and if two of the parties , say a and b , are very entangled , then the third party c can only be weakly entangled with either a or b . if a and b are in a singlet state then they can not be entangled with c at all . this idea was put into the form of a rigorous inequality for three qubit states using an entanglement quantifier known as the _ tangle _ .
for a $2\otimes n$ dimensional system the tangle is defined as the infimum , over all pure state decompositions of the state , of the average of the square of the concurrence of the pure states in the decomposition . the concurrence can be used in this way as any pure state of a $2\otimes n$ system is equivalent to a two qubit pure state . it has been shown that the tangle satisfies a monogamy inequality of the form $\tau_{a(bc\ldots)} \ge \tau_{ab} + \tau_{ac} + \ldots$ , where the notation $\tau_{a(bc\ldots)}$ means that the tangle is computed across the bipartite splitting between party a and parties b , c , etc . considered collectively . this shows that the amount of bipartite entanglement between party a and several individual parties is bounded from above by the amount of bipartite entanglement between party a and those parties collectively . in the case of three qubit pure states the _ residual tangle _ - the difference between the one - to - rest tangle and the sum of the two pairwise tangles - is a local - unitary invariant that is independent of which qubit is selected as party a , and might be proposed as a ` quantifier ' of three party entanglement for pure states of 3-qubits . however , there are states with genuine three party entanglement for which the residual tangle can be zero ( the w - state serves as an example ) . on the other hand , the residual tangle can only be non - zero if there is genuine tripartite entanglement , and hence can be used as an indicator of three party entanglement . another way to construct multiparty entanglement measures for multi - qubit _ pure _ systems is simply to single out one qubit , compute the entanglement between that qubit and the rest of the system , and then average over all possible choices of the singled out qubit . as any _ pure _ state of a $2\otimes d$ bipartite system can be written in terms of two schmidt coefficients , one can apply all the formalism of two - qubit entanglement . this approach has been taken , for example , in the paper by meyer and wallach . that the quantity proposed there is essentially only a measure of the bipartite entanglement across various splittings was shown by brennen . extensions of this approach have also been presented . _ local unitary invariants : _ the residual tangle is only one of many _ local _ unitary invariants that have been developed for multiparty systems . such local invariants are very important for understanding the structure of entanglement , and have also been used to construct prototype entanglement measures . examples of local invariants that we have already mentioned are the schmidt coefficients and the geometric measure . in the multiparty case we may define the _ local _ invariants as those functions that are invariant under a _ local _ group transformation of fixed dimensions . if each particle is assumed for simplicity to have the same dimension , then these local groups are of the form $g_1\otimes g_2\otimes\ldots\otimes g_n$ , where the $g_i$ are taken from a particular $d$ - dimensional group representation such as the unitary group or the group of invertible matrices .
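the statements about the residual tangle are easy to check numerically for the ghz and w states . the sketch below ( our own illustration ) computes the pairwise tangles from the wootters concurrence of the two - qubit reduced states , the one - to - rest tangle $\tau_{a(bc)} = 4\det\rho_a$ valid for pure states , and the residual tangle as their difference ; the monogamy inequality corresponds to this difference being non - negative .

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])

def concurrence(rho):
    # wootters concurrence of a two-qubit density matrix
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def reduced(psi, keep):
    # partial trace of a three-qubit pure state onto the qubits listed in `keep`
    t = psi.reshape(2, 2, 2)
    traced = tuple(i for i in range(3) if i not in keep)
    rho = np.tensordot(t, t.conj(), axes=(traced, traced))
    d = 2 ** len(keep)
    return rho.reshape(d, d)

def residual_tangle(psi):
    tau_a_bc = 4 * np.linalg.det(reduced(psi, keep=(0,))).real   # tangle of A versus BC (pure state)
    tau_ab = concurrence(reduced(psi, keep=(0, 1))) ** 2
    tau_ac = concurrence(reduced(psi, keep=(0, 2))) ** 2
    return tau_a_bc - tau_ab - tau_ac

ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
w   = np.zeros(8); w[1] = w[2] = w[4] = 1 / np.sqrt(3)
print("residual tangle of ghz:", round(residual_tangle(ghz), 6))   # expected 1
print("residual tangle of w  :", round(residual_tangle(w), 6))     # expected 0
```

the w - state indeed has vanishing residual tangle despite being genuinely tripartite entangled , while the ghz state has the maximal value ; the residual tangle evaluated here is itself one of the local - unitary invariants discussed next .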
in the case of local unitary groups one typically need only consider invariants that are _ polynomial _ functions of the density matrix elements - this is because it can be shown that two states are related by a local unitary iff they have the same values on the set of polynomial invariants . for more general groups a complete set of polynomial invariants can not always be constructed , and one must also consider local invariants that are not polynomial functions of states - one example is a local invariant called the ` schmidt rank ' , which is the minimal number of product state - vector terms in which a given multiparty pure state may be coherently expanded . it can be shown that one can construct an entanglement monotone ( the ` schmidt measure ' ) as the convex - roof of the logarithm of this quantity . finding non - trivial local invariants is quite challenging in general and can require some sophisticated mathematics . however , for pure states of some dimensions it is possible to use such invariants to construct a variety of entanglement quantifiers in a similar fashion to the tangle . these quantifiers are useful for identifying different types of multiparty entanglement . we refer the reader to the relevant articles and references therein for further details . quantum entanglement is a rich field of research . in recent years considerable effort has been expended on the characterization , manipulation and quantification of entanglement . the results and techniques that have been obtained in this research are now being applied not only to the quantification of entanglement in experiments but also , for example , to the assessment of the role of entanglement in quantum many body systems and lattice field theories . in this article we have surveyed many results from entanglement theory with an emphasis on the quantification of entanglement and basic theoretical tools and concepts . proofs have been omitted but useful results and formulae have been provided in the hope that they will be of use to researchers in the quantum information community and beyond . it is our hope that this article will be useful for future research in quantum information processing , entanglement theory and its implications for other areas such as statistical physics . _ multiparty entanglement : _ the general characterisation of multiparty entanglement is a major open problem , and yet it is particularly significant for the study of quantum computation and the links between quantum information and many - body physics . particular unresolved questions include : * _ finiteness of mregs for three qubit states _ in an attempt to achieve a notion of reversibility in the multi - partite setting , the concept of mregs was introduced . this was a set of n - partite states for fixed local dimension from which all other such states may be obtained asymptotically reversibly . it was hoped that such a set might contain only a finite number of states . however , there are suggestions that this may not be the case . * _ distillation results for specific target states _ in the bi - partite setting the uniqueness of maximally entangled states led to clear definitions for the distillable entanglement . as outlined above this is not so in the multi - party setting . given a specific interesting multiparty target state ( e.g. ghz states , cluster states etc .
) , or set of multiparty target states , what are the best possible distillation protocols that we can construct ?are there good bounds that can be derived using multiparty entanglement measures ?some specific examples have been considered but more general results are still missing ._ additivity questions : _ of all additivity problems , deciding whether the entanglement of formation is additive is perhaps the most important unresolved question .if is additive this would greatly simplify the evaluation of the entanglement cost .it would furthermore imply the additivity of the classical capacity of a quantum channel .related to the additivity question is the question of the monotonicity of the entanglement cost under general locc .this may be proven reasonably straightforwardly if the entanglement cost itself is fully additive . however , without this assumption no proof is known to the authors , and in fact a recent argument seems to show that full additivity of the entanglement cost is equivalent to its monotonicity .in addition to , there are many other measures for which additivity is unknown .examples include the distillable entanglement and the distillable key . _ distillable entanglement _ distillable entanglement is a well motivated entanglement measure of significant importance .its computation is however supremely difficult in general and even the determination of the distillability of a state is difficult . indeed ,good techniques or algorithms for deciding whether a bipartite state is distillable or not , and for bounding the distillable entanglement , are still largely missing .* _ are there npt bound entangled states ? _ in the bi - partite setting there are currently three known distinct classes of states in terms of their entanglement properties under locc .these are the separable states , the non - separable states with positive partial transpose ( which are also non - distillable ) , and finally the distillable states .some evidence exists that there is another class of states that do not possess a positive partial transpose but are nevertheless non - distillable . *_ bounds on the distillable entanglement . _any entanglement measure provides an upper bound on the distillable entanglement .various bounds have been provided such as the squashed entanglement , the rains bound and asymptotic relative entropy of entanglement .the last two of these coincide for werner states and it is an open question whether they always coincide , and whether they are larger or smaller than the squashed entanglement . _ entanglement measures _ the present article has presented a host of entanglement measures .many of their properties are known but crucial issues remain to be resolved . amongst these are the following . * _ operational interpretation of the relative entropy of entanglement _ while the entanglement cost and the distillable entanglement possess evident operational interpretations no such clear interpretation is known for the relative entropy of entanglement .a possible interpretation in terms of the distillation of local information has been conjectured and partially proven in . *_ calculation of various entanglement measures _ there are very few measures of entanglement that can be computed exactly and possess or are expected to possess an operational interpretation . a notable exception is the entanglement of formation for which a formula exists for the two qubit case . 
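as a small illustration of that exception ( our own sketch , not part of the original text ) , the following evaluates wootters ' closed form for two - qubit werner states , which are separable below the well - known threshold of the mixing parameter and increasingly entangled beyond it .

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])

def concurrence(rho):
    yy = np.kron(sy, sy)
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ (yy @ rho.conj() @ yy)))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def entanglement_of_formation(rho):
    # wootters' two-qubit formula: E_F = h((1 + sqrt(1 - C^2)) / 2), with h the binary entropy
    c = concurrence(rho)
    if c < 1e-12:
        return 0.0
    x = (1 + np.sqrt(1 - c**2)) / 2
    return float(-x * np.log2(x) - (1 - x) * np.log2(1 - x))

singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)
proj = np.outer(singlet, singlet.conj())

for p in [0.2, 1/3, 0.5, 0.8, 1.0]:
    rho = p * proj + (1 - p) * np.eye(4) / 4     # two-qubit werner state
    print(f"p = {p:.3f}  concurrence = {concurrence(rho):.3f}  "
          f"entanglement of formation = {entanglement_of_formation(rho):.3f}")
```
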
is it possible to compute , or at least to derive better bounds on , the other variational entanglement measures ? one interesting possibility is the 2-qubit case - in analogy to the two - qubit formula for the entanglement of formation , is there a closed form for the relative entropy of entanglement or the squashed entanglement ? * _ squashed entanglement _ as an additive , convex , and asymptotically continuous entanglement monotone the squashed entanglement is known to possess almost all potentially desirable properties as an entanglement measure . nevertheless , there are a number of interesting open questions - in particular : ( 1 ) is the squashed entanglement strictly non - zero on inseparable states , and ( 2 ) can the squashed entanglement be formulated as a finite dimensional optimisation problem ( with eve 's system of bounded dimension ) ? * _ asymptotic continuity and lockability questions _ it is unknown whether measures such as the distillable key , the distillable entanglement , and the entanglement cost are asymptotically continuous , and it is unknown whether the distillable entanglement or distillable key are lockable . this is important to know as lockability quantifies ` continuity under tensor products ' , and so is a physically important property - if a system is susceptible to loss of particles , then any characteristic quantified by a lockable measure will tend to be very fragile in the presence of such noise . _ entanglement manipulation _ entanglement can be manipulated under various sets of operations , including locc and ppt operations . while some understanding of what is possible and impossible has been obtained , a complete understanding has not been reached yet . * _ characterization of entanglement catalysis _ for a single copy of bi - partite pure state entanglement the locc transformations are fully characterized by the theory of majorization . it was discovered that there are pure state transformations whose success probability under locc is strictly less than one , but for which an entangled catalyst state exists such that the transformation , carried out jointly with the catalyst and returning it unchanged , can be achieved with certainty under locc . a complete characterization of the states admitting entanglement catalysis is currently not known . * _ other classes of non - global operation . reversibility under ppt operations _ it is well established that even in the asymptotic limit locc entanglement transformations of mixed states are irreversible . however , it was shown that the antisymmetric werner state may be reversibly interconverted into singlet states under ppt operations . it is an open question whether this result may be extended to all werner states or even to all possible states . in addition to questions concerning ppt operations , are there other classes of non - global operations that can be useful ? if reversibility under ppt operations does not hold , do any other classes of non - global operations exhibit reversibility ? more open problems in quantum information science can be found in the braunschweig webpage of open problems . we hope that this list will stimulate some of the readers of this article into attacking some of these open problems and perhaps reporting solutions , even partial ones . _ acknowledgments _ we have benefitted greatly from discussions on this topic with numerous researchers over several years . in relation to this specific article , we must thank k. audenaert , f. brando , m. horodecki , o. rudolph , and a. serafini for helping us to clarify a number of issues , as well as g.o . myhr , p. hyllus , and a. feito - boirac for careful reading and helpful suggestions . we would also like to thank m.
christandl , j. eisert , j. oppenheim , a. winter , and r.f .werner for sharing with us their thoughts on open problems .this work is part of the qip - irc ( www.qipirc.org ) supported by epsrc ( gr / s82176/0 ) , the eu integrated project qubit applications ( qap ) funded by the ist directorate as contract no .015848 , the eu thematic network quprodis ( ist-2001 - 38877 ) , the leverhulme trust , the royal commission for the exhibition of 1851 and the royal society .99 m. bell , k. gottfried and m. veltmann , _ john s. bell on the foundations of quantum mechanics _ , world scientific publishing , singapore .bell , physica * 1 * , 195 ( 1964 ) .l. hardy , contemp . phys . *39 * , 419 ( 1998 ) .plenio and v. vedral , contemp . phys . *39 * , 431 ( 1998 ) .b. schumacher and m. d. westmoreland , e - print arxiv quant - ph/0004045 .m. horodecki , quant .inf . comp .* 1 * , 3 ( 2001 ) .m. christandl , phd thesis , quant - ph/0604183 .p. horodecki and r. horodecki , quant .inf . comp . * 1 * , 45 ( 2001 ) .j. eisert and m. b. plenio , int .j. quant . inf .* 1 * , 479 ( 2003 ). m. nielsen and i. chuang , _ quantum information and computation _ c.u.p .2000 however , it is important to realise that strong correlations can be obtained by non - local theories that in other respects might be considered quite classical .hence it requires much more discussion than we will present here to decide whether certain correlations are really _ quantum _ , or _ classically non - local_. n. j. cerf , n. gisin , s. massar , and s. popescu , phys .lett . * 94 * , 220403 ( 2005 ) .j. eisert , k. jacobs , p. papadopoulos and m.b .plenio , phys . rev . a * 62 * , 052317 ( 2000 ) .d. collins , n. linden and s. popescu , phys .a * 64 * , 032302 ( 2001 ) .s. virmani and m.b .plenio , phys .a * 67 * , 062308 ( 2003 ) . c. h. bennett , d. p. divincenzo , c. a. fuchs , t. mor , e. rains , p. w. shor , j. smolin , and w. k. wootters , phys .a * 59 * , 1070 ( 1999 ) .rains , ieee trans .theory * 47 * , 2921 ( 2001 ) .t. eggeling , k. g. h. vollbrecht , r. f. werner , and m. m. wolf , phys .* 87 * , 257902 ( 2001 ) .k. audenaert , m. b. plenio , and j. eisert , phys .lett . * 90 * , 027901 ( 2003 ) .werner , phys .a * 40 * , 4277 ( 1989 ) .l. masanes , phys .* 96 * , 150501 ( 2006 ) .brando , e - print arxiv quant - ph/0510078 l. masanes , e - print arxiv quant - ph/0510188 c. h. bennett , h. bernstein , s. popescu and b. schumacher , phys .a * 53 * , 2046 ( 1996 ) .v. vedral , m.b .plenio , m.a .rippin , and p.l .knight , phys .lett . * 78 * ,2275 ( 1997 ) . v. vedral and m.b .plenio , phys .a * 57 * , 1619 ( 1998 ) .lo and s. popescu , phys .a * 63 * , 022301 ( 2001 ) .nielsen , phys .lett . * 83 * , 436 ( 1999 ) .g. vidal , phys .lett . * 83 * , 1046 ( 1999 ) .d. jonathan and m.b .plenio , phys .lett . * 83 * , 1455 ( 1999 ) .l. hardy , phys .a * 60 * , 1912 ( 1999 ) .r. bhatia , d. jonathan and m.b .plenio , phys .lett . * 83 * , 3566 ( 1999 ) . g. vidal , d. jonathan and m. a. nielsen , phys . rev .a * 62 * , 012304 ( 2000 ) .t.m . cover and j.a .thomas , _ elements of information theory _ ( wiley interscience , new york , 1991 ) common synonyms for two - qubit maximally entangled states include _ ` singlet states ' , ` bell pairs ' _ or _ ` epr pairs'_. even though these terms strictly mean different things , we will follow this widespread abuse of terminology .rains , e - print arxiv quant - ph/9707002 p. hayden , m. horodecki , and b.m .terhal , , 6891 ( 2001 ) .a. kent , phys .lett . * 81 * , 2839 ( 1998 ) .e. 
rains , phys .a * 60 * , 173 ( 1999 ) ; ibid * 60 * , 179 ( 1999 ) .g. vidal and j.i .cirac , phys .86 * , 5803 ( 2001 ) .m. horodecki , a. sen(de ) and u. sen , phys .a * 67 * , 062314 ( 2003 ) .bennett , d.p .divincenzo , j.a .smolin , and w.k .wootters , , 3824 ( 1996 ) .wootters , , 2245 ( 1998 ) .wotters , quant .inf . comp . * 1 * , 27 ( 2001 ) .v. vedral , m.b .plenio , k. jacobs and p.l .knight , phys .a * 56 * , 4452 ( 1997 ) .m. christandl and a. winter , j. math .phys * 45 * , 829 ( 2004 ) .shor , comm .phys . * 246 * , 453 ( 2004 ) .k.m.r . audenaert and s.l .braunstein , comm .phys . * 246 * , 443 ( 2003 ) .a. a. pomeransky , phys .a * 68 * , 032317 ( 2003 ) .j. smolin , f. verstraete and a. winter , phys .a * 72 * , 052317 ( 2005 ) f. verstraete , m .- a .martin - delgado and j.i .cirac , phys .92 * , 087201 ( 2004 ) f. verstraete , m. popp , and j. i. cirac , phys .. lett . * 92 * , 027901 ( 2004 ) m. popp , f. verstraete , m. a. martin - delgado and j. cirac , phys .a. * 71 * , 042306 ( 2005 ) .pachos and m.b .plenio , phys .lett . * 93 * , 056402 ( 2004 ) a. key , d.k.k .lee , j.k .pachos , m.b .plenio , m. e. reuter , and e. rico , optics and spectroscopy * 99 * , 339 ( 2005 ) .a. harrow and m. nielsen , phys .a * 68 * , 012308 ( 2003 ) s. virmani , s.f .huelga , and m.b .plenio , phys .a. * 71 * , 042328 ( 2005 ) m. murao , m.b .plenio , s. popescu , v. vedral and p.l .knight , phys .a * 57 * , 4075 ( 1998 ) .h. aschauer , w. dr and h.j .briegel , phys .a * 71 * , 012319 ( 2005 ) .k. goyal , a. mccauley and r. raussendorf , e - print arxiv quant - ph/0605228 k. chen and h - k lo , quant - ph/0404133 .donald , m. horodecki , and o. rudolph , j. math .phys . , 4252 ( 2002 ) .plenio and v. vitelli , contemp . phys . *42 * , 25 ( 2001 ) .plenio , phys .lett . * 95 * , 090503 ( 2005 ) .m. horodecki , open syst .* 12 * , 231 ( 2005 ) .shor , j.a .smolin and b.m .terhal , phys .lett . * 86 * , 2681 ( 2001 ) .d. divincenzo , m. horodecki , d. leung , j. smolin , and b. terhal phys .lett . * 92 * , 067902 ( 2004 ) .k. horodecki , m. horodecki , p. horodecki and j. oppenheim , phys .lett . * 94 * , 200501 ( 2005 ) .b. synak - radtke and m. horodecki , e - print arxiv quant - ph/0507126 m. horodecki , p. horodecki , and r. horodecki , phys .rev.lett . * 84 * , 2014 ( 2000 ) .s. virmani and m.b .plenio , phys .a * 288 * , 62 ( 2000 ) .j. eisert and m.b .plenio , j. mod .opt . * 46 * , 145 ( 1999 ) .k. zyczkowski and i. bengtsson , ann . phys . *295 * , 115 ( 2002 ) .a. miranowicz and a. grudka , j. opt .b : quantum semiclass .optics * 6 * , 542 ( 2004 ) .d. yang , m. horodecki , r. horodecki and b. synak - radtke , phys .lett . * 95 * , 190501 ( 2005 ) .m. horodecki , p. horodecki , r. horodecki , j. oppenheim , a. sen(de ) , u. sen and b. synak , phys .a * 71 * , 062307 ( 2005 ) .plenio , s. virmani , and p. papadopoulos , j. phys .a * 33 * , l193 ( 2000 ) .k.g.h . vollbrecht and f. verstraete , phys .a * 71 * , 062325 ( 2005 ) . i. devetak and a. winter , proc .a * 461 * , 207 ( 2005 ) .a. s. holevo , ieee trans.info.theor .* 44 * , 269 ( 1998 ) .k. audenaert , f. verstraete and b. de moor , phys .a * 64 * , 052304 ( 2001 ) .vollbrecht and r.f .werner , phys .a * 64 * , 062307 ( 2001 ) .j. eisert , t. felbinger , p. papadopoulos , m.b .plenio and m. wilkens , phys .* 84 * , 1611 ( 2000 ) .b. terhal and k.g.h .vollbrecht , phys .* 85 * , 2625 ( 2000 ) .g. vidal , j. mod . opt . * 47 * , 355 ( 2000 ) .plenio and v. vedral , j. phys .a * 34 * , 6997 ( 2001 ) .m.j . donald and m. 
horodecki , physics letters a * 264 * , 257 ( 1999 ). k. audenaert , j. eisert , e. jan , m.b .plenio , s. virmani , and b. demoor , phys .lett . * 87 * , 217902 ( 2001 ) .n. linden , s. popescu , b. schumacher and m. westmoreland , e - print arxiv quant - ph/9912039 e. galvo , m.b .plenio and s. virmani , j. phys . a * 33 * , 8809 ( 2000 ) .s. wu and y. zhang , phys .a * 63 * , 012308 ( 2001 ) .s. ishizaka , phys .lett . * 93 * , 190501 ( 2004 ) .s. ishizaka and m.b .plenio , phys .a * 71 * , 052303 ( 2005 ) s. ishizaka and m.b .plenio , phys .a * 72 * , 042325 ( 2005 ) .s. boyd and l. vandenberghe , _ convex optimization _ , cambridge university press 2004 .k. audenaert , b. demoor , k.g.h .vollbrecht and r.f.werner , phys .rev a * 66 * , 032310 ( 2002 ) .t. wei and p. goldbart , phys .a. * 68 * , 042307 ( 2003 ) .j. eisert , k. audenaert , and m.b .plenio , j. phys .a * 36 * , 5605 ( 2003 ) .k. horodecki , m. horodecki and p. horodecki , and j. oppenheim , phys .94 * , 160502 ( 2005 ) .a. peres , phys .lett . * 77 * , 1413 ( 1996 ) .horodecki , phys .a * 232 * , 333 ( 1997 ) .m. horodecki , p. horodecki and r. horodecki , phys .lett . * 80 * , 5239 ( 1998 ) . k. zyczkowski , p. horodecki , a. sanpera , and m. lewenstein , phys .a * 58 * , 883 ( 1998 ) .j. lee , m.s .kim , y.j .park , and s. lee , j. mod . opt . * 47 * , 2151 ( 2000 ) .j. eisert , g. vidal and r.f .werner , , 32314 ( 2002 ) .r. alicki and m. fannes , e - print arxiv quantum - ph/0312081 s. ishizaka , phys .rev . a * 69 * , 020301(r ) ( 2004 ) .r. r. tucci , quant - ph/9909041 ; r. r. tucci , quant - ph/0202144 .g. vidal and r. tarrach , phys .a. * 59 * , 141 ( 1999 ) .m. steiner , phys .a * 67 * , 054305 ( 2003 ) .m. lewenstein and a. sanpera * 80 * , 2261 ( 1998 ) .r. nagel ` order unit and base norm spaces ' , in _ foundations of quantum mechanics and ordered linear spaces _ , eds .a. hartkmper and h. neumann , springer - verlag ( 1974 ) .o. rudolph , j. phys .a : math gen * 33 * , 3951 ( 2000 ) .o. rudolph , j. math . phys . * 42 * , 2507 ( 2001 ) .o. rudolph , quant - ph/0202121 .brando , phys .a * 72 * , 040303(r ) ( 2005 ) .this excludes for example the hydrogen atom from our considerations as it has a limit point in the spectrum at the ionization level .j. eisert , ch .simon and m.b .plenio , j. phys .a * 35 * , 3911 ( 2002 ) s. parker , s. bose and m.b .plenio , phys .a * 61 * , 032305 ( 2000 ) .wolf , g. giedke and j.i .cirac , phys .lett . * 96 * , 080502 ( 2006 ) .k. cahill and r. glauber , phys . rev . * 177 * , 1857 ( 1969 ) .see e.g. h. p. robertson , phys .rev . * 46 * , 794 ( 1934 ) ; a. serafini , phys . rev .lett . * 96 * , 110402 ( 2006 ) .r. simon , n. mukunda , b. dutta , phys .a. * 49 * , 1567 ( 1994 ) r. simon , phys .lett . * 84 * , 2726 ( 2000 ) .duan , g. giedke , j.i . cirac and p. zoller ,lett . * 84 * , 2722 ( 2000 ) r.f .werner and m.m .wolf , phys .86 * , 3658 ( 2001 ) .g. giedke , b. kraus , m. lewenstein , and j.i .cirac , phys .lett . * 87 * , 167904 ( 2001 ) g. giedke , l .- m .duan , j.i .cirac and p. zoller , quantum .* 1 * , 79 ( 2001 ) .j. eisert and m.b .plenio , phys .lett . * 89 * , 097901 ( 2002 ) .j. eisert , s. scheel , and m.b .plenio , phys .lett . * 89 * , 137903 ( 2002 ) .g. giedke and j. i. 
cirac , phys .a * 66 * , 032316 ( 2002 ) .the no - go theorem for gaussian entanglement distillation is actually quite subtle , as even defining gaussian distillation is non - trivial .consider for example the version of the no - go theorem proven in - there entanglement distillation is defined as a process which allows you to reach a two - mode squeezed state with arbirarily high squeezing .although show that this is not possible , from an operational perspective one can use gaussian operations to improve the quality of the entanglement present . for instance , consider a bi - partite four - mode state which consists of a two - mode highly entangled state tensored with a product pure state . given two copies of , alice and bobcan merely throw away the product components and are left with a single four - mode state that possesses greater utility for quantum information processing .hence according to an operational definition of entanglement distillation ( say improvement of teleportation fidelity ) , it is possible to distill gaussian entanglement using gaussian operations from gaussian states of 2 2 modes .this apparent contradiction arises because there are different ways in which one can quantify the increase of entanglement of quantum states - in principle this subtle distinction may also be an issue in the finite dimensional regime .g. giedke , j. eisert , j.i .cirac and m.b .plenio , quantum .comput . * 3 * , 211 ( 2003 ) .j. fiurek , phys .lett . * 89 * , 137904 ( 2002 ) .r. simon , e.c.g .sudarshan , and n. mukunda , phys .a * 36 * , 3868 ( 1987 ) .arvind , b. dutta , n. mukunda , and r. simon , pramana * 45 * , 471 ( 1995 ) .j. williamson , am . j. math . * 58 * , 141 ( 1936 ) .g. giedke , m.m .wolf , o. krger , r.f .werner , and j.i .cirac , phys .lett . * 91 * , 107901 ( 2003 ) m.m .wolf , g. giedke , o. krger , r.f .werner and j.i .cirac , phys .a * 69 * , 052320 ( 2004 ) .shirokov , e - print quant - ph/0411091 .k. audenaert , j. eisert , m.b .plenio and r.f .werner , phys .a * 66 * , 042327 ( 2002 ) .plenio , j. eisert , j. dreiig , m. cramer , phys .lett . * 94 * , 060503 ( 2005 ) .m. cramer , j. eisert , m.b .plenio and j. dreissig , phys .a * 73 * , 012309 ( 2006 ) r. raussendorf and h .- j .briegel , phys .lett . * 86 * , 5188 ( 2001 ) .d. j.wineland , j. j. bollinger , w.m .itano , f. l. moore , and d. j. heinzen , phys .a * 46 * , r6797 ( 1992 ) .s.f huelga , c. macchiavello , t. pellizzari , a.k .ekert , m. b. plenio , j.i .cirac , phys .lett . * 79 * , 3865 ( 1997 ) .m. murao , d. jonathan , m.b .plenio and v. vedral , phys .a * 59 * , 156 ( 1999 ) .m. hillery , v. buzek , and a. berthiaume , phys .a * 59 * , 1829 ( 1999 ) .a. karlsson , m. koashi , and n. imoto , phys .a * 59 * , 162 ( 1999 ) m. murao , d. jonathan , m.b .plenio and v. vedral , phys .a * 61 * , 032311 ( 2000 ) .w. dr , g. vidal , and j. i. cirac phys .a * 62 * , 062314 ( 2000 ) .f. verstraete , j. dehaene , b. de moor , and h. verschelde , phys .a * 65 * , 052112 ( 2002 ) .m. horodecki , j. oppenheim , and a. winter , nature * 436 * , 673 ( 2005 ) .t. wei , m. ericsson , p. goldbart and w. munro , quant .inf . comp . * 4 * , 252 ( 2004 ) .r. werner and a. holevo , j. math .phys . * 43 * , 4353 ( 2002 ) .see e.g. a. defant and k. floret , ` _ tensor norms and operator ideals _ ' , north - holland ( 1993 ) .v. coffman , j. kundu , and w. wootters , phys .a. * 61 * , 052306 ( 2000 ) .t. osborne and f. 
verstraete , quant - ph/0502176 .the web site at the universitt braunschweig _ http://www.imaph.tu-bs.de/qi/problems/3.html _ contains an interesting review of this problem and further references .j. eisert and h. j. briegel , phys .a * 64 * , 022306 ( 2001 ) .a. miyake , phys .a. * 67 * , 012108 ( 2003 ) .p. levay , journal of physics a * 38 * , 9075 ( 2005 ) .m. horodecki , private communication .divincenzo , r. jozsa , p.w .shor , j.a .smolin , b.m .terhal , and a.v .thapliyal , phys .a * 61 * , 062312 ( 2000 ) .w. dr , j.i .cirac , m. lewenstein and d. bru , phys .a * 61 * , 062313 ( 2000 ) . | we review the theory of entanglement measures , concentrating mostly on the finite dimensional two - party case . topics covered include : single - copy and asymptotic entanglement manipulation ; the entanglement of formation ; the entanglement cost ; the distillable entanglement ; the relative entropic measures ; the squashed entanglement ; log - negativity ; the robustness monotones ; the greatest cross - norm ; uniqueness and extremality theorems . infinite dimensional systems and multi - party settings will be discussed briefly . |
synchronization phenomena have been intensely studied for decades , in part because of the roles such phenomena play in chemical systems , laser arrays , cellular biology models , and neural networks to name just a few ( see refs . for extensive reviews ) .one of the most extensively studied models is that proposed by kuramoto in 1975 , a model that has become paradigmatic for the description of many synchronization phenomena .originally the model was applied to an interacting population of oscillators with randomly distributed frequencies .when the interaction is sufficiently strong , most of the units in the array synchronize their dynamics to a single frequency which may differ from the natural frequency of any one of the synchronized oscillators , and also to equal phases .many variants of the original model have been introduced over the years to study different effects in different physical and biological systems , too many to list here ( for an extensive review , see ref .we specifically mention the inclusion of fluctuations , because of their central role in our studies .noise leads to disorder , so in the presence of noise the interactions that in its absence may be strong enough to lead to frequency or to phase synchronization must in general be stronger for synchronization to occur . in all of these models ,the form and range of the interactions has varied greatly in the literature . beyond the kuramoto model ,many different models for synchronization have been proposed , ranging from arrays of continuous oscillatory and excitable units to discrete models .for instance , over the past decade coupled maps have attracted a great deal of attention .recently , arrays of coupled stochastic units each with a discrete set of states but , in contrast with maps , with continuous time have increased in popularity as a simpler paradigm for synchronization .even though these discrete - state oscillator models may be motivated by discrete processes ( for example , protein degradation ) , it has been claimed that they can also be used to model a coarse - grained phase space of continuous noisy oscillators .for instance , prager et al . established a link between a globally coupled ensemble of excitable units described by the fitzhugh - nagumo equations with additive white noise , and a coupled array of 3-state non - markovian stochastic oscillators .our own work has focused on arrays of 2-state and of 3-state stochastic oscillators .the transitions between the states of individual units are governed by a rate process .this rate process might be markovian or might involve distributed delays ( such as , for instance , a refractory period ) .interactions among units in our model appear as a dependence of the transition rates of a particular unit on the states of the other units to which it is coupled .the goal of the work presented herein is to address the following two questions : ( 1 ) under what conditions can we describe the dynamics of kuramoto - like coupled noisy oscillators as periodic continuous - time markov chains ?in other words , when can we model continuous - phase stochastic dynamics as discrete - phase models in which the transitions between the discrete states are governed by memoryless rate processes ? 
( 2 ) is there a lower limit to the discretization of the continous noisy oscillators ?in other words , how many discrete states are necessary to capture the essential synchronization features of the continuous system ?the popularity of three - state models leads us to explore whether the synchronization properties of coupled three - state markovian units in any way capture those of the continuous oscillator system .to arrive at some answers to these questions , in sec .[ sec : review ] we present the continuous phase model that is the starting point of our analysis .it is an array of kuramato - like oscillators with additive noise and a generalized nonlinear interaction .we start with the full amplitude equations , but will always work in the limit where the phase equations alone provide a valid description of the important dynamics . forthe sake of simplicity we consider a globally coupled ensemble of identical oscillators ( all with the same natural frequency ) , and thus focus on the phase synchronization phenomenon . in sec .[ sec : coarsegraining ] we perform the coarse - graining of the phase space and discuss the conditions under which the dynamics can be modeled as a periodic markov chain .here we also discuss the questions associated with the three - state systems . finally , in sec .[ sec : summary ] we present our concluding remarks .we also include two appendices with technical details of our calculations .our starting point is an ensemble of identical noisy oscillators described by the complex time - dependent dimensionless amplitudes , with .these amplitudes are governed by the equations of motion the overdot indicates a derivative with respect to time . is a real positive parameter that governs the internal dynamics of each oscillator . for the function , which describes this internal dynamics ,we take the normal form of a supercritical hopf bifurcation , for simplicity , we take all parameters to be real and have scaled out irrelevant constants .the oscillators are identical , and we have removed the natural frequency of oscillation of each unit , that is , we are working in a moving framework . in the usual language of the kuramoto model ,the frequency distribution of the oscillators is , where the -function is appropriate for the continuous variable ( and below also for the continuous time ) .therefore , the internal dynamics of each oscillator tends to set , with an arbitrary phase . the second term on the right hand side of eq .( [ amplitudeosc ] ) accounts for the interaction between the oscillators .the coupling strength is quantified by .the interaction is assumed to be global ( all - to - all interaction ) , with the customary kuramoto order parameter given by the average amplitude as a function of time , the original kuramoto model is recovered if we set equal to , so that the global interaction is given by .the function accounts for a nonlinear interaction between the oscillators via .the advantage of including a general nonlinear function in the interaction will be clear when we subsequently perform the coarse - graining operations . 
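as a concrete , noise - free illustration of these amplitude equations , the following sketch ( our own , not from the paper ) integrates an assumed mean - field form of eq . ( [ amplitudeosc ] ) in which the internal dynamics is the supercritical hopf normal form given above and the interaction is taken , for simplicity , to be linear in the order parameter ; the coupling form , the parameter names and their values are our own assumptions rather than the general nonlinear interaction of the text , and the noise term introduced next is omitted .

```python
import numpy as np

rng = np.random.default_rng(0)

n, lam, k = 200, 10.0, 1.0      # number of oscillators, internal rate, coupling strength (assumed values)
dt, steps = 1e-3, 20000

# random initial amplitudes with random phases, close to the unit circle
q = np.exp(1j * rng.uniform(0, 2 * np.pi, n)) * rng.uniform(0.5, 1.0, n)

for step in range(steps):
    qbar = q.mean()                                          # kuramoto order parameter (average amplitude)
    dq = lam * q * (1 - np.abs(q) ** 2) + k * (qbar - q)     # assumed mean-field coupling, no noise
    q = q + dt * dq                                          # explicit euler step
    if step % 5000 == 0:
        print(f"t = {step * dt:6.2f}   |order parameter| = {abs(qbar):.3f}")
```

with identical oscillators and no noise , the modulus of the order parameter grows towards one for any positive coupling in this sketch ; it is the noise term , introduced next , that makes the transition to synchronization nontrivial .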
the third term on the right hand side of eq .( [ amplitudeosc ] ) is a complex additive noise of intensity .this term models the fluctuations .the noise is of the form where and are independent real gaussian white noises of zero mean and correlation functions here is the kronecker delta appropriate for the discrete variable .we note that the form of eq .( [ amplitudeosc ] ) respects the phase invariance , that is , the equation is invariant under the transformation with constant , but the equation is otherwise quite general .we consider the parameter range and , so that the time scale of the internal dynamics of each oscillator dominates over ( i.e. is shorter than ) that of the interactions between the oscillators .then , after a fast transient defined by the internal dynamics , we have that .after that , the phase of each oscillator varies as a function of time on a slower time scale defined by the interactions ( albeit with very rapid fluctuations ) . on this longer time scalewe can write .the dynamics specified by eq .( [ amplitudeosc ] ) can then be reduced to the phase equation here we have defined a kuramoto order parameter which follows directly from eq .( [ order1 ] ) , from which we extract the real phase variable , and where ] are also real .the noise is again gaussian and white , with zero mean and correlation function that follows directly from eqs .( [ zeta1 ] ) and ( [ zeta2 ] ) , the order parameter can be written as where we have introduced the density of oscillators with phase , .\ ] ] in the thermodynamic limit , where is the probability that the phase of an oscillator lies in the interval ] represents the contribution of order in and .note that , the conservation of probability implies that , for all , }\left|\psi_{0j } \right\rangle = 0 . \label{conserprob}\ ] ] evolves on the slow time scale because we are assuming that the fast contributions have already relaxed .that is , which is small near criticality .for the order parameter we assume the pitchfork bifurcation scaling and check the consistency of this assumption _ a posteriori_. assuming ( [ scalingpitch ] ) and ( [ w1 ] ) , the second order , , leads to }=v_{j}^{\left[2\right ] } = -\tilde{k}_c \sin\left(2\delta\phi\right)\psi_{2j}\left[r^2\right].\label{v2}\ ] ] for , eq . ( [ v2 ] ) does not have solvability problems , leading to } = \psi_{2j}\left[\frac{2\tan\left(\delta\phi/2\right)}{\tan\left(\delta\phi\right)}r^2\right].\ ] ] the third order , , has solvability problems , and the solvability condition ( [ sc1 ] ) leads to the equation \left|r\right|^2 r & = 0 \nonumber.\end{aligned}\ ] ] using eq .( [ tilde ] ) , the above solvability condition takes the form given in eq .( [ normalform-2 ] ) . for , eq .( [ v2 ] ) has no solution , since in this case , which implies = \psi_{1j}\left[(r^{*})^2\right].\ ] ] here the scaling assumption ( [ scalingpitch ] ) does not allow us to impose a suitable solvability condition at second order .hence , in order to ensure the consistency of the expansion ( [ nlexpansion ] ) , we must modify our scaling assumption and adopt the transcritical scaling this scaling allows us to write the solvability condition ( [ sc1 ] ) for eq .( [ o2 ] ) in the form using eq .( [ tilde ] ) , the above solvability condition leads to the normal form eq .( [ normalform-3 ] ) .critical point calculations and normal forms near the transition to synchronization for the continuos - phase kuramoto model and its variants have been extensively documented in the literature ( see ref . 
for an extensive review of a number of approaches ) . herewe simply point out that in the limit some of the results of appendix a reduce to those appropriate for the continuos - phase oscillators .in particular , in this limit , that is , we obtain the continuous critical point ( [ criticalk-1 ] ) . moreover , from eq .( [ forapb ] ) we find , \nonumber\end{aligned}\ ] ] which leads to the normal form given in eq .( [ normalform-1 ] ) . when comparing our results to those reported in the literature , in addition to the limit we stress that here we are working with identical oscillators [ and that in the literature on the kuramoto model the function .to obtain the analytic estimate ( [ nfapprox ] ) of the steady state distribution , we note that for small , then , retaining the lowest order of the expansion ( [ nlexpansion ] ) , }\right),\ ] ] next , we take the continuos limit or , which implies , with the solution ( [ w1 ] ) .the function $ ] is obtained from the definition ( [ fourier ] ) .therefore , at the steady state , \right),\ ] ] where the steady state value of the order parameter is estimated from the equilibrium value predicted by the normal form ( [ normalform-1 ] ) for . that is , where is an arbitrary phase constant . using the model function ( [ fmodel-1 ] ) for , we find that therefore we obtain eq .( [ nfapprox ] ) . | the theoretical description of synchronization phenomena often relies on coupled units of continuous time noisy markov chains with a small number of states in each unit . it is frequently assumed , either explicitly or implicitly , that coupled discrete - state noisy markov units can be used to model mathematically more complex coupled noisy continuous phase oscillators . in this work we explore conditions that justify this assumption by coarse - graining continuous phase units . in particular , we determine the minimum number of states necessary to justify this correspondence for kuramoto - like oscillators . |
after a meal or a drink , fidgety people sometimes play with the available props , such as empty glasses , bottles , and soda cans . these containers are all nearly axi - symmetric objects with round bottoms . a natural form of play is to roll these containers on their circular bottoms . when a slightly tipped container ( fig . [ fig : ovaltinetippedphoto ] ) is let go , sometimes one sees it fall upright , make a slight banging sound , and then tip up again at another angle , then fall back upright again , and so on for a few rocking oscillations . because these oscillations usually damp quickly , it is easy to miss the details of the motions . to aid the eye , instead of just letting go of the slightly tilted container , you can flick its top forward with the fingers . this provides an initial righting angular velocity along the axis about which the container was initially tipped ( too large an angular velocity will cause the container to lift off the table as it pivots over ) . the tipping container , as before , falls to a vertical configuration , at which time its bottom circular face bangs on the table . after this bang , the container tips up onto the other side and maybe falls over . when this experiment is performed with a container that is not too tall and not too thin , you can see that the container does not fall exactly onto the diametrically opposite side of the bottom rim . that is , point a on the bottom of the container that initially contacted the table and the new contact point b , which the container rocks up onto , are not exactly 180 degrees apart . this experiment ( video available online ) is shown schematically in fig . [ fig : fallsequence ] . fig . [ fig : scatterplotovaltine ] shows a histogram of one particular container 's orientations after we repeatedly flicked it . the distribution is strongly bimodal with no symmetric falls over many repeated trials . note the apparent symmetry breaking . is this deviation from symmetric rocking due to imperfect hand release ? here we show that the breaking of apparent symmetry is consistent with the simplest deterministic theories , namely smooth rigid body dynamics . we derive formulas for the `` angle of turn '' , for both the perfect - rolling and the frictionless - sliding cases , given infinitesimal symmetry breaking in the initial conditions . the results here generalize some of the nearly - falling - flat results of cushman and duistermaat ; they considered the special case of the pure - rolling of flat disks . if the table were perfectly planar and horizontal , the container 's bottom were perfectly circular , and the container were perfectly axisymmetric , then an initial condition with a purely righting angular velocity would result in a collision in which , just as the container becomes vertical , all points on the container 's bottom slap the table - top simultaneously . the consequence of such a rigid - container collision is not computable since algebraic rigid - body collision laws are not well - defined for simultaneous multi - point collisions ; for example , the order of the impulse locations is then ill - defined .
basing the rocking outcome on such a perfect flat collision would be basing the outcome on details of the deformation , that is , leaving the world of rigid - object mechanics . geometric perturbations to the circular rim can similarly lead to two - point collisions , where both points of contact are now on the rim . the second point of contact could be diametrically opposite to the first point , or anywhere else on the rim . we could then compute the consequences of , say , a plastic collision at the new point of contact . however , the consequences of the collision would depend on the location of the geometric imperfection . for a given collision point , such a theory could predict the energy dissipation at the collision . but such a theory can not be useful in predicting a systematic breaking of symmetry as seen in fig . [ fig : scatterplotovaltine ] . for any given imperfection , the motion is deterministic and depends on the location of the imperfection , and there is no reason to expect that the location of the imperfection would have a distribution similar to that in fig . [ fig : scatterplotovaltine ] . [ figure caption : the measured distribution of the angle of turn ( in degrees ) is strongly bimodal . whether the fall is to the left or to the right depends sensitively on initial conditions . the leftward falls and the rightward falls have distinct mean values . for this cylinder , theory predicts one value of the angle of turn for frictionless sliding and another for rolling with no slip . ] assuming a perfectly flat rigid bottom , the circle - slapping - ground simultaneous collision is essentially impossible . after all , the container is being launched by imperfect human hands that can not provide any exact initial conditions . accounting for various symmetries in the problem , the set of all motions of a container rolling without slip is three dimensional ; the space of solutions could be parameterized by , say , the minimum tip angle , the yaw rate at that position , and the rolling rate at that position . the set of solutions that leads to a face - down collision can be characterized with only two parameters , a set with co - dimension one . so , small generic perturbations of a `` collisional '' initial condition result , with probability 1 , in a non - collisional motion described by the smooth dynamics of the container rolling or sliding on its circular bottom rim . the rest of this paper is about the near - collisional motions . we will see that these near - collisional motions involve rapid rolling or sliding of the container on its bottom rim , so that the contact point appears to have switched by a finite angle that is greater than 180 degrees . for this analysis , we assume a geometrically perfect container ( axisymmetric ) and table ( flat ) . consider a container with mass , bottom radius , and the center of mass at a height from the bottom . the moment of inertia is about its symmetry axis and is about any axis passing through the center of mass and perpendicular to the symmetry axis . for the disk of , and . in our case , and . the center of mass of the container is at in an inertial frame -- at rest with respect to the table . the reference orientation of the container is vertical as shown in fig . [ fig : eulerangles]a . any other orientation of the container can be obtained from the reference orientation by a sequence of three rotations , defining corresponding euler angles as illustrated in fig . [ fig : eulerangles ] . the first rotation ( ` yaw ' or ` steer ' ) is about the vertical axis by an angle . this rotation also transforms the inertial coordinate axes into a first intermediate frame .
the second rotation ( ` pitch ' or ` tilt ' ) by an angle about the once - rotated horizontal axis results in a second intermediate frame and determines the orientation of the container up to a rotation about the body - fixed symmetry axis . the angular velocity of the container relative to this rotating intermediate frame is therefore entirely along the symmetry axis , and the magnitude of this relative angular velocity defines the spin rate . we will mostly consider two simple extremes for the frictional interaction between the table and the container 's bottom , namely , sliding without friction and rolling without slip . for pure rolling , the euler angles and their first and second time derivatives determine the center - of - mass position ( relative to the contact point ) , the center - of - mass velocity and acceleration , as well as the container 's angular velocity and angular acceleration . the three second order odes that determine the evolution of the three euler angles follow from angular momentum balance about the contact point . these equations are derived in appendix [ app : eqmotion ] . because the center - of - mass velocity is determined by the euler angles and their rates , once these are known the center of mass position can be found by integration . for the frictionless sliding of the cylinder on its circular bottom , the equations of motion are also given by eq . [ eq : generalcyl ] but with modified coefficients . when the table is frictionless , the horizontal velocity of the center of mass is a constant , so that the horizontal position is independent of the orientation . the vertical position of the center of mass is determined by the tilt angle alone . without loss of generality the center - of - mass can be taken as on a fixed vertical line . the phenomenon described here was initially discovered in numerical simulations of the equations above . we used an adaptive time - step , stiff integrator because the solutions of interest have vastly changing time - scales . the singularity of the euler - angle description in the near - vertical configuration also contributes to the stiffness of the equations . [ figure caption : simulation parameters matching the container used for the experiments in fig . [ fig : scatterplotovaltine ] , in consistent si units , together with the initial conditions ; all the angles are in radians ; the step - change in the yaw angle is the angle of turn , and is a little more than the corresponding value noted in the caption for fig . [ fig : scatterplotovaltine ] . ] for simplicity of presentation , we first describe in detail the results of integration when the container does not slip with respect to the table . first , we integrate the equations with initial conditions that lead exactly to a face - down collision : a purely righting tilt rate with , say , no initial yaw or spin rate . as would be expected , the container lands in such a manner that all of its bottom face simultaneously reaches the horizontal table - top . up to this time the motion of the container is identical to that of an inverted pendulum hinged at the contact point a . the integration becomes physically meaningless at that collision point . in the vicinity of these collisional motions , the solutions of the equations of motion here do not have smooth dependence on initial conditions . to obtain a near - collisional motion , we perturb this initial condition slightly , for instance by giving the yaw rate a small nonzero value .
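since the explicit coefficients of eq . [ eq : generalcyl ] appear only in the appendix , the following is an independent , minimal sketch of the frictionless - sliding case only , re - derived here from the lagrangian of an axisymmetric rigid body whose center of mass stays on a vertical line ; the mass , radii and moments of inertia are our own assumed values , not the paper 's , and the small - yaw - rate estimate printed at the end is derived for this sketch 's model rather than quoted from the paper .

```python
import numpy as np
from scipy.integrate import solve_ivp

# assumed container parameters (illustrative, not the paper's measured values), si units
m, g, r, h = 0.1, 9.81, 0.04, 0.06     # mass, gravity, bottom radius, height of the center of mass
A, C = 1.6e-4, 0.8e-4                  # transverse and axial moments of inertia about the center of mass

def rhs(t, y, p_psi, w3):
    # frictionless sliding only: the center of mass stays on a vertical line; the yaw and spin
    # angles are cyclic coordinates, so p_psi and the axial spin component w3 are conserved.
    phi, phidot, psi = y               # tilt from vertical, tilt rate, yaw
    f = r*np.cos(phi) - h*np.sin(phi)  # derivative of the center-of-mass height with respect to phi
    psidot = (p_psi - C*w3*np.cos(phi)) / (A*np.sin(phi)**2)
    phiddot = (m*f*(r*np.sin(phi) + h*np.cos(phi))*phidot**2
               + A*psidot**2*np.sin(phi)*np.cos(phi)
               - C*w3*psidot*np.sin(phi)
               - m*g*f) / (m*f**2 + A)
    return [phidot, phiddot, psidot]

phi0, eps = 0.4, 0.02                  # initial tilt (radians) and a small initial yaw rate
w3    = eps*np.cos(phi0)               # no initial spin relative to the tilted frame
p_psi = A*eps*np.sin(phi0)**2 + C*w3*np.cos(phi0)

def back_up(t, y, p_psi, w3):          # stop once the container has tipped most of the way back up
    return y[0] - 0.9*phi0
back_up.terminal, back_up.direction = True, 1

sol = solve_ivp(rhs, [0, 2.0], [phi0, 0.0, 0.0], args=(p_psi, w3),
                method='Radau', rtol=1e-10, atol=1e-12, events=back_up)

print("closest approach to vertical, phi_min =", sol.y[0].min())
print("angle of turn (deg)                   =", np.degrees(sol.y[2][-1] - sol.y[2][0]))
print("small-eps estimate for this model     =", np.degrees(np.pi*np.sqrt(1 + m*r**2/A)))
```

as the initial yaw rate is reduced , the minimum tilt shrinks and the computed jump in the yaw angle approaches the printed estimate , which for this model exceeds 180 degrees . we now return to the no - slip integration described above .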
the results of the integration are shown in fig .[ fig : purerollnumresults ] .the plot of suggests a motion in which the container s bottom face periodically comes close to touching the table ( ) , but then gets `` repelled '' by the floor as if by an elastic collision , so that the container rocks down and up periodically , ad infinitum , without losing any energy , as expected from this dissipation - free system .we notice that when , changes almost discontinuously . also , when is close to zero , both and blow up to very large values , resulting in almost discontinuous changes in the corresponding angles and .note that we are simply simulating the apparently smooth differential equations , and not applying an algebraic transition rule for a collision .as goes to zero , the angle rates and grow without bound , but the magnitude of the angular velocity vector is always bounded , as it must be since energy remains a constant throughout the motion . in particular , while the angle rates and are large , they are very close to being equal and opposite ( , fig . [fig : purerollnumresults]i ) .let us examine the consequences of a rapid finite change in when .the position of the contact point p on the ground relative to the center of mass is given by the following equations : when and is not particularly close to any multiple of , the contact point position is given by given that do not change much during the brief near - collisional phase ( because center of mass velocity is finite ) , we can see from eq .[ eq : contactptcircle ] that a rapid continuous change in corresponds to a rapid continuous change in the contact point in a circle with the center .thus the `` angle of turn '' aob defined earlier is simply the change in . at the singular limit of a near - collisional motion arbitrarily close to a collisional motion, this continuous but steep change in approaches a step change this is the `` limiting angle of turn '' and we denote this by .we consider the near - collisional motions of the no - slip container that can be characterized as being the pasting - together of two qualitatively distinct motions of vastly different time - scales : 1 .inverted pendulum - like motion about an essentially fixed contact point when the tilt angle is large .rapid rolling of the container which accomplishes in infinitesimal time , a finite change in the contact point , and a sign - change in the tilt rate .note that the near - collisional motion for a container rolling without slip requires very high friction forces ( for non - zero , see fig .[ fig : purerollnumresults]g ) .however , plotting the ratio of the required friction forces with the normal reaction , we find that only a finite coefficient of friction is required for preventing slip even in the collisional limit ( fig .[ fig : purerollnumresults]h ) . the other extreme of exactly zero friction is similar . here, the horizontal component of the center of mass velocity may be taken to be zero .the near collisional motions for a container sliding without friction are again characterized as consisting of two qualitatively different phases : 1 . 
a tipping phase when is not too small , involving the container moving nearly in a vertical plane , the center of mass moving only vertically , and the contact point slipping without friction .a rapid sliding phase in which the sign of is reversed almost discontinuously , and the contact point moves by a finite angle in infinitesimal time .the angle of turn for the frictionless case and the no - slip case are different in general ( when ) . in the next two sections , we derive the formulas for the angle of turn by taking into account the two - phase structure of the near - collisional motion .the rigid body dynamics of disks , containers , and similar objects with special symmetries , have been discussed at length by a number of authors , including distinguished mechanicians such as chaplygin , appell , and korteweg .their works include complete analytical characterizations of the solutions to the relevant equations of motion , typically involving non - elementary functions such as the hyper - geometric .reasonable reviews of such literature can be found , for instance , in and .we will not use these somewhat cumbersome general solutions but will analyze only the special near - collisional motion of interest to us .the calculation below may be called , variously , a boundary layer calculation , a matched asymptotics calculation ( but we are not interested in an explicit matching ) or a singular perturbation calculation . essentially , we take advantage of the presence of two dynamical regimes with vastly different time - scales , each regime simple to analyze by itself . the overall motion can be obtained approximately by suitably pasting together the small- and the large- solutions .but the angle of turn is entirely determined by the small- regime , as will be seen below .first , consider eq .[ eq : generalcyl ] with in the limit of small , so that we can use and , and generally neglect terms of .we obtain now , considering eq .[ eq : generalcyl ] with in the limit of small , we obtain eqs .[ eq : eq1atsmallphi ] and [ eq : eq3atsmallphi ] are linear and homogeneous in , , and their time derivatives .so , positing a linear relation between and , we find that the following two equations are equivalent to eqs .[ eq : eq1atsmallphi ] and [ eq : eq3atsmallphi ] : and note that eq .[ eq : psieqtheta ] agrees with the results of the numerical simulation , as in fig .[ fig : purerollnumresults]i .even though this equation was derived for small , this equation is approximately true at large as well if the initial conditions of the motion at large have and , as is the case for the near - collisional motions we consider . integrating eq .[ eq : noslip1 ] , we obtain thus , when , both and ( ) become very large .now consider eq .[ eq : generalcyl ] with and with . using eq .[ eq : psieqtheta ] in eq .[ eq : eq2atsmallphi ] and ignoring higher order terms in , we obtain using eq .[ eq : psidotandphi ] for in the above equation and neglecting the term ( as it is a constant and therefore , much smaller than 1/ for small ) , we get where the general solution for the differential equation in , eq .[ eq : rollphidotdoteqn ] , can be written as where is the lowest attained by the container before rising back up again , and is when this minimum is attained . substituting this equation in eq .[ eq : psidotandphi ] , we obtain a simple equation for the evolution of when near the surface : we can now compute the change in during a small time interval centered at . 
from conservation of energy , we can show that , and therefore , must be ( see appendix [ app : b2scaling ] ) .hence , as . keeping a small constant as we let , thus approaching the collisional limit , gives us the following expression for . this is the limiting angle of turn when the container rocks by rolling without slip .earlier numerical integrations agree quite well with this formula near the collisional limit , as they should .we now briefly outline the procedure for deriving the angle of turn for the frictionless case .the procedure closely parallels that described for the no - slip case , except for small differences below .the frictionless equations eq .[ eq : generalcyl ] and eq .[ eq : nofric_eqs ] corresponding to simplifies to here , because the initial conditions satisfy and by assumption , as before . using eq .[ eq : slipangmom ] in the frictionless equation eq .[ eq : generalcyl ] and eq .[ eq : nofric_eqs ] corresponding to , we have , after some simplifications : where is a constant of integration , whose order is estimated in appendix [ app : b2scaling ] . substituting this into the frictionless equation and simplifying by neglecting all higher order terms, we eventually obtain using arguments identical to the pure - rolling case , this results in the following expression for the angle of turn , for the frictionless limit .in this section , we derive the same angle of turn formulas ( eq . [ eq : noslipangleofturn ] and [ eq : nofricangleofturn ] ) without referring back to the complicated full dynamical equations .rather , using heuristic reasoning , we directly derive equations of motion that apply at the small angle limit ( ) .we represent the lean of the cylinder by the vector with magnitude equal to and direction along the axis : .these quantities and are related to each other exactly like and , respectively , in traditional polar coordinates . neglecting any angular velocity component along ,the angular velocity vector for the cylinder is given by .the rate of change of angular momentum about the center of mass g is given by : the angular momentum balance equation is then given by where is the moment of all the external forces about g. depends on whether or not there is friction ; so we treat the two cases in turn .the vertical position of the center of mass is .so .the vertical ground reaction is thus , neglecting gravity in comparison .the moment of this vertical force about the center of mass g is equal to in the direction . substituting this along with eq .[ eq : heuristic_angmom ] in the angular momentum balance equation eq .[ eq : angmombalance ] , we have this vector equation is identical to eqs .[ eq : nofric1 ] and [ eq : nofric2 ] and therefore , lead to the same angle of turn ( eq . [ eq : nofricangleofturn ] ) .the position of the center of mass g with respect to the point p on the cylinder in contact with ground is given by .the velocity of the center of mass g is given by .using the no - slip constraint , we obtain after some simplifications , the acceleration of the center of mass to first order to be the ground reaction force , which now includes a horizontal friction force as well , is simply , neglecting gravity .the moment of this ground reaction force about g is given by equating above to from eq .[ eq : heuristic_angmom ] gives eqs .[ eq : noslip1 ] and [ eq : ddphianddpsi ] from the previous version of the derivation , therefore resulting in the same formula for the angle of turn ( eq . 
[ eq : noslipangleofturn ] ) .firstly and most significantly , note that the limiting angle of turn does not depend on the initial conditions such as the initial tilt and tip velocity .this means that we do not have to control these accurately in an experiment .this also agrees with the relatively small variance in the histogram of fig .[ fig : scatterplotovaltine ] , in which we did not control the initial conditions . the histogram of fig .[ fig : scatterplotovaltine ] was obtained using the cylindrical container shown in fig .[ fig : ovaltinetippedphoto ] with cm , cm , m . using these numbers in the angle of turn formulas gives an angle of turn of about 220 degrees for frictionless sliding and an angle of turn of degrees for rolling without slip .these angles of turn would manifest as a deviation of either degrees or degrees from falling over to exactly the diametrically opposite side . in the toppling experiment of fig .[ fig : scatterplotovaltine ] , the container orientation was 33 degrees on average from 180 degrees in the leftward falls ( suggesting an angle of turn of 213 degrees ) and was about 37.9 degrees on average from 180 degrees in the rightward falls ( suggesting an angle of turn of 217.9 degrees ) .the standard deviations were 3.9 and 4.5 degrees respectively .the experimental angle of turn seems better predicted by the asymptotic formula for frictionless sliding in this case .although neither the frictionless limit nor the no - slip limit is just right , both limits capture many qualitative aspects of the motion quite well .for tall thin cylinders with and , both equations for the angle of turn , eq . [ eq : noslipangleofturn ] and eq .[ eq : nofricangleofturn ] , tend to radians .that is , very tall cylinders are predicted to have a smaller symmetry - breaking .this prediction agrees with the common experience that when we tip a tall - enough cylinder ( something that looks more like a tall thin beer bottle ) in a manner that its bottom surface nearly falls flat , the cylinder essentially rocks up on a contact point almost diametrically opposite to the initial contact point .thus , this apparently almost - symmetric rocking is accomplished by a rapid asymmetric rolling or sliding of the container over roughly one half of its bottom rim !this result is basically independent of the container - table frictional properties .we may define a disk as a container with zero height : , a good approximation is euler s disk . if the radius of gyration of a disk is , then and . substituting these into the angle of turn formulas for the no - slip ( eq .[ eq : noslipangleofturn ] ) or the frictionless ( eq . [ eq : nofricangleofturn ] ) cases , we obtain the same angle of turn : indeed , numerical exploration with eqs .[ eq : generalcyl]-[eq : nofric_eqs ] , suitably modified for frictional slip and specialized to disks , shows no dependence of the angle of turn on the form of the friction law or the magnitude of the friction .this lack of dependence on friction could be anticipated from our small angle calculations for pure rolling cylinders , specialized to disks .in particular , we find that the acceleration of the center of mass for a pure rolling cylinder ( eq . [ eq : rgdotdot ] ) with is vertical in the small - angle limit , enabling the no - slip condition to be satisfied even without friction .thus , the rolling solution is obtained with or without friction .for the special case of pure - rolling disks , eq .[ eq : diskangleofturn ] was found in .
*a homogeneous disk * has .the corresponding angle of turn is equal to , which is about 41 degrees more than a full rotation of the contact point .this prediction is easily confirmed in casual experimentation with metal caps of large - mouthed bottles or jars on sturdy tables for such caps , we observe that the new contact point is invariably quite close to the old contact point . *a ring * such as the rim of a bicycle wheel has .the corresponding angle of turn is equal to , which is about 48 degrees less than a full rotation of the contact point .thus the apparent near - collisional behavior of a homogeneous disk and a ring will be superficially similar , even though the actual angles of turn differ by about degrees . the angle of turn can be controlled by adjusting .for instance , between a ring and a disk is an object that appears to bounce straight back up the way it falls . andit is possible to increase the theoretical angle of turn without bound by choosing , a disk in which almost all the mass is concentrated at the center .here we analyzed what happens when a cylinder or a disk rocks to an almost flat collision on its bottom surface .we found that the smallest deviation from a perfect face - down collision of a container s bottom results in a rapid rolling and/or sliding motion in which the contact point moves through a finite angle in infinitesimal time .calculations of this finite angle explain certain apparent symmetry breaking in experiments involving rocking or toppled containers . in this system ,the consequences of such a degenerate ` collision ' are a discontinuous dependence on initial conditions ( rolling left or rolling right depend on the smallest deviations in the initial conditions ) . such discontinuous dependence on initial conditions or geometry is a generic feature of systems in the neighborhood of simultaneous collisions .other examples include a pool break or the rolling polygon of .thanks to arend schwab for comments on an early manuscript and mont hubbard for editorial comments .the alternate heuristic derivation of section [ sec : heuristic ] were informed by discussions with anindya chatterjee in the context of the euler s disk .ms was supported by cornell university in the early stages of this work ( 2000 ) .the work was also supported partly by nsf robotics ( cise-0413139 ) and fibr ( ef-0425878 ) grants .the moment of inertia tensor of the cylinder about its center of mass is , in dyadic form , the angular velocity of the cylinder is given by the angular momentum about the center of mass is the rate of change of angular momentum is given by the sum of two terms : 1 ) the rate of change relative to the rotating frame -- , obtained by simply differentiating the components in eq .[ eq : hg ] , and 2 ) the rate of change due to the rotation of the -- frame with an angular velocity . the point s on the cylinder in contact with the ground has zero velocity : , no slip . 
using ,the velocity of the center of mass is given by the acceleration of the center of mass is given by adding two terms : 1 ) acceleration relative to the rotating -- frame , obtained by differentiating the components in eq .[ eq : vg_noslip ] , and 2 ) an acceleration term due to the rotation of -- frame .we obtain the contact force is given by linear momentum balance : then , the moment of all the external forces about the center of mass is in which , upon simplification , we have equating and gives us the angular momentum balance equations of motion for no - slip rolling in eqs .[ eq : generalcyl],[eq : noslip_eqs ] .no friction implies that the horizontal velocity of the center of mass g is a constant and can be set to zero by appropriate reference frame choice , without loss of generality .further , noting that , we have , by differentiating twice : as before , we compute the contact force as and compute the net moment about the center of mass as again , equating with the before gives us the required equations of motion eqs . [ eq : generalcyl],[eq : nofric_eqs ] .we now establish that , used in obtaining eq .[ eq : noslipangleofturn ] .the total mechanical energy of the cylinder , a constant , is given by : in which p.e .is the potential energy .we consider this total energy at the time of lowest tip angle , , we have , , , and the potential energy p.e . .making use of the usual small approximations ( as used in the main text ) and eq .[ eq : psieqtheta ] , namely , we find that because is a constant , we have or as claimed earlier . the total mechanical energy is now given by again , using small approximations and considering the energy equation at , we obtain , using arguments identical to the no - slip rolling case . | a beer bottle or soda can on a table , when slightly tipped and released , falls to an upright position and then rocks up to a somewhat opposite tilt . superficially this rocking motion involves a collision when the flat circular base of the container slaps the table before rocking up to the opposite tilt . a keen eye notices that the after - slap rising tilt is not generally just diametrically opposite the initial tilt but is veered to one side or the other . cushman and duistermaat ( 2006 ) recently noticed such veering when a flat disk with rolling boundary conditions is dropped nearly flat . here , we generalize these rolling disk results to arbitrary axi - symmetric bodies and to frictionless sliding . more specifically , we study motions that almost but do not quite involve a face - down collision of the round container s bottom with the table - top . these motions involve a sudden rapid motion of the contact point around the circular base . surprisingly , like for the rolling disk , the net angle of motion of this contact point is nearly independent of initial conditions . this angle of turn depends simply on the geometry and mass distribution but not on the moment of inertia about the symmetry axis . we derive simple asymptotic formulas for this `` angle of turn '' of the contact point and check the result with numerics and with simple experiments . for tall containers ( height much bigger than radius ) the angle of turn is just over and the sudden rolling motion superficially appears as a nearly symmetric collision leading to leaning on an almost diametrically opposite point on the bottom rim . |
during messenger s third flyby of mercury , a 290-km - diameter peak - ring ( double - ring ) impact basin , centered at 27.6 n , 57.6 e , was discovered and subsequently named rachmaninoff . in terms of size and morphology ,the rachmaninoff basin closely resembles the 265-km - diameter raditladi peak - ring basin , located at 27 n , 119 e west of the caloris basin , that was discovered during messenger s first flyby .the image - mosaic of rachmaninoff and its ejecta has a spatial resolution of 500 m / pixel and it is derived from images obtained by messenger s mercury dual imaging system ( mdis ) narrow - angle camera , while raditladi basin and surrounding areas were imaged at 280 m / pixel .both basins and surrounding areas were also imaged with a set of 11 filters of the mdis wide - angle camera ( wac ) , whose wavelengths range from 430 to 1020 nm .these images were used to obtain color maps with a resolution of about 5 km / pixel and 2.4 km / pixel for rachmaninoff and raditladi , respectively .+ the two basins appeared to be remarkably young because of the small number of impact craters seen within their rims .for this reason it has been argued that they were likely formed well after the end of the late heavy bombardment of the inner solar system at about 3.8 ga . in particular , for raditladi it has been pointed out that the basin could be as young as 1 ga or less .+ interestingly , both basin floors are partially covered by smooth plains . in the case of rachmaninoff ,an inner floor filled with spectrally distinct smooth plains has been observed and this , combined with the small number of overimposed craters , implies a volcanic origin .the estimate of the temporal extent of the volcanic activity and , in particular , the timing of the most recent activity may represent a key element in our understanding of the global thermal evolution of mercury , and helps to constrain the duration of the geologic activity on the planet in light of the new data provided by messenger .moreover , raditladi may be the youngest impact basin discovered on mercury so far , and therefore it is important for understanding the recent impact history of the planet .+ for all these reasons , the age determination of rachmaninoff and raditladi basins and their geologically different terrains is of great interest . in this paper , we will present a revised mercury crater chronology , and show how to take into account the crustal properties of the target ( 3 ) .this chronology will then be applied to rachmaninoff and raditladi basins ( 4 ) .in this paper we date the raditladi and rachmaninoff basin units by means of the model production function ( mpf ) chronology of mercury .this chronology relies on the knowledge of the impactor flux on mercury and on the computed ratio of impactors between mercury and the moon .the absolute age calibration is provided by the apollo sample radiometric ages .the crater scaling law enables computation of the crater size - frequency distribution ( sfd ) using a combination of the impactor sfd and the inferred physical properties of the target .the computed crater sfd per unit surface and unit time is the so - called mpf .the present model involves several improvements with respect to the model presented in and , thus it will be described in detail in the next sections .+ in the following analysis , we use the present near - earth object ( neo ) population as the prime source of impactors .this assumption is justified by the presumably young ages ( i.e.
low crater density ) of the terrains studied in this paper . in particular, we use the neo sfd as modeled by .this neo sfd is in good agreement with the observed neo population , fireballs and bolide events ( see * ? ? ?* for further details ) .+ concerning the crater scaling law , we adopted the so - called pi - scaling law in the formulation by .unlike previous approaches , our methodology explicitly takes into account the crustal properties of the target .in fact , surfaces react differently to impact processes , depending on the bulk density , strength and bulk structure of the target material .these latter parameters are taken into account by the scaling law , and are tabulated for several materials like cohesive soils , hard - rock and porous materials ( e.g. * ? ? ?* ; * ? ? ?* ) . on a planetary body, terrain properties may vary from place to place according to the local geological history and as a function of the depth in the target crust .therefore , impacts of different sizes taking place on a particular terrain may require different estimates of the target properties .+ the pi - scaling law allows computation of the transient crater diameter ( ) as a function of impact conditions and target properties , and reads : ^{-\frac{\mu}{2+\mu } } \label{hh}\ ] ] where is the target gravitational acceleration , is the perpendicular component of the impactor velocity , is the projectile density , and are the density and tensile strength of the target , and depend on the cohesion of the target material and on its porosity and for cohesive soils , while and for rocks . has been set to 0.4 in all cases . ] .therefore , the nature of the terrain affects the crater efficiency and the functional dependence of the crater size with respect to the input parameters ( e.g. impactor size and velocity ) .equation [ hh ] accounts both for the strength and gravity regimes , allowing a smooth transition between the two regimes .the impactor size ( ) for which we have the transition between the two regimes is determined by equating the two additive terms in equ .[ hh ] , therefore : the transient crater diameter is converted into final crater diameter ( ) according to the following expressions : where is the observed simple - to - complex transition crater diameter , which for mercury is 11 km . the conversion between transient crater to final crater is rather uncertain and several estimates are available ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?here we have used the factor 1.3 from transient to final simple craters . for the complex craters , we use the expression proposed by , where the constant factor has been set to 1.40 in order to have continuity with the simple crater regime ( ) .we note that the effects of the material parameters on the transient crater size depend on whether a crater is formed in the strength or gravity regime . for the strength regime , or , while or according to the values for and given above . in the gravity regime , . in all cases ,the dependence of on both and is mitigated by the low exponents .the application of the crater scaling law is not straightforward , since the physical parameters of the terrains are poorly constrained for mercury and the crater scaling law has been derived for idealized uniform target properties .so far , no detailed and systematic study has been performed to develop a crater scaling law for a layered target ( e.g. * ? ? ?* ; * ? ? 
?* ) , although numerical modeling of terrestrial craters has shown that the target layering plays an important role in the cratering process ( e.g. * ? ? ?* and references therein ) .on the other hand , we think that it is worth attempting to simulate a more realistic situation instead of using the same average values for craters whose sizes can vary by orders of magnitude and consequently involve different layers of a planetary crust . in this context , geological analysis of the terrains can provide valuable information , at least to constrain the surface properties of the target .+ in this work , we assumed that the density and strength of mercury vary as a function of depth , in analogy to that inferred for the moon ( * ? ? ? * and references therein ) . in figure [ den_str ] ( left panels ) the assumed density and strength profiles are indicated .these profiles are consistent with the upper lunar structure , and were adopted also for mercury . in particular , we have considered a more or less fractured upper crust on top of a bulk silicic lower crust which in turn overlies a peridotitic mantle .however , it must be emphasized that the depths at which these layers occur may vary from place to place ( see 3 ) .+ for each impactor size , we have assigned average values for the target density and strength . over a wide range of parameters , the transient crater radius ( ) is about times larger than the impactor radius and the depth of the crater is typically between one - fourth and one - third of the crater size .thus the thickness of the excavated material is roughly between impactor radii .here we have adopted an intermediate value , namely averaging the density and strength up to a depth of ( see fig .[ den_str ] , right panels ) . given the limited variation in the density and strength profile , the choice of the actual depth to average the density and strength for a given impactor radius has a low influence ( ) on the scaling law . + in addition to the density and strength profiles , we also consider a transition of the crater scaling law ( from cohesive - soil to hard - rock ) according to the size of the impactor .in fact , the density and strength profiles shown in fig .[ den_str ] describe a material of increasing coherence for increasing depth .this is the result of the continuous bombardment of planetary surfaces that produces comminution and fracturing of the upper crustal layers , gradually decreasing with depth , as observed in seismic profiles of the lunar crust underneath the mare cognitum . in this respect , craters that affect only the upper fractured layers form in the cohesive - soil regime , while larger ones form in the hard - rock regime .therefore the depth of the transition ( ) from the superficial fractured layer to the unfractured lower crust is an important parameter .the depth ( and therefore the crater size ) at which the transition from one regime to the other occurs can vary from place to place .for instance , the thickness of the cohesive layer may be of only a few meters on recent lunar mare material , while it is expected to be of several kilometers on the highlands . in the examples of fig .[ den_str ] , it is assumed that km .
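to make the scaling procedure of the last two paragraphs concrete , the sketch below implements a pi - group scaling of the holsapple type together with the transient - to - final conversion described above . the functional form follows the standard formulation ; the numerical constants ( the prefactor and coupling exponent of the two regimes , the complex - crater exponent , and the example impact parameters ) are assumed values for illustration only and are not taken from the text .

import numpy as np

G_MERCURY = 3.7          # m s^-2
D_STAR = 11e3            # simple - to - complex transition diameter on mercury [m]
NU = 0.4                 # density exponent , as stated in the text

# ( prefactor k , coupling exponent mu ) for the two end - member regimes ( assumed values )
REGIMES = {"cohesive_soil": (1.03, 0.41), "hard_rock": (0.93, 0.55)}

def transient_diameter(d_imp, v_perp, delta, rho, Y, regime="hard_rock", g=G_MERCURY):
    # transient crater diameter [m] for impactor diameter d_imp [m] , perpendicular
    # speed v_perp [m/s] , impactor density delta , target density rho [kg/m^3] and
    # target strength Y [pa] ; gravity and strength terms are added as in eq . [ hh ]
    k, mu = REGIMES[regime]
    grav = (g * d_imp) / (2.0 * v_perp ** 2) * (rho / delta) ** (2.0 * NU / mu)
    stre = (Y / (rho * v_perp ** 2)) ** ((2.0 + mu) / 2.0) * (rho / delta) ** (NU * (2.0 + mu) / mu)
    return k * d_imp * (grav + stre) ** (-mu / (2.0 + mu))

def final_diameter(d_t):
    # transient -> final : factor 1.3 for simple craters , croft - type relation for
    # complex craters with the prefactor 1.40 chosen for continuity at D_STAR ;
    # the exponent 1.18 is an assumed ( croft 1985 ) value
    d_simple = 1.3 * d_t
    if d_simple <= D_STAR:
        return d_simple
    return 1.4 * d_t ** 1.18 * D_STAR ** -0.18

# illustrative example : a 15 km impactor at 42 km/s and 45 degrees on a rocky target
d_t = transient_diameter(15e3, 42e3 * np.sin(np.radians(45.0)), 2600.0, 2800.0, 2e7)
print("final crater diameter ~", round(final_diameter(d_t) / 1e3), "km ( illustrative )")

in the layered - target procedure described above , the regime and the depth - averaged density and strength would be chosen per impactor size rather than held fixed .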
+ the details of the transition are not easy to model .a simplified study of impact processes on a layered target was performed by .they simulated a two - layer structure , formed by a loose , granular layer on top a more competent material .it was observed that the craters had the usual shapes for diameters less than about 4 times of the top layer thickness .larger craters developed central mounds , flat floors and concentric rims indicating the presence of the underlying layer . according to these results , we simplify the problem by considering a sharp transition in the crater scaling law , at .this implies a transition as a function of impactor radius at .the effect of the transition in the scaling law is reported in fig .[ sl ] for two depths of transitions .note that the position of the sharp transition varies according to the depth assumed . in this paper, we use an intermediate value and set the transition at . in a more realistic situation , a gradual transitionshould be predicted given that the target gradually changes its properties as function of the depth .therefore , our simplified model is not expected to be accurate close to the transition region , nevertheless we believe it provides a reasonable way to approach the cratering scaling law for a layered target . + the neo population and the crater scaling described in the previous section are used to derive the mpf per unit time ( see fig .[ mpf ] ) .the main outcome of our model is that the adopted transition in the crater scaling law results into a `` s - shaped '' feature ( or flexure ) in the mpf .the position of such feature , which is determined by , is not known a - priori .however , as discussed in , in some cases can be constrained by the shape of the observed crater sfd .furthermore , the geological analysis of the terrains can help to derive the expected range of variation for .for instance , lava emplacements may partly strengthen or even completely replace the pre - existing fractured layer .hence in this latter cases the fractured horizon can be confined within a very thin regolith cover , negligible for our calculation ( ) . for young units with poor crater statistics, the choice of may affect the age determinations by up to a factor of 3 - 4 .thus , in order to derive a more accurate age estimate , it is of paramount importance to adapt the crater production function to the nature of the terrains investigated .+ the absolute age is given by the lunar chronology , which expresses the lunar crater cumulative number at 1 km ( ) as a function of time ( ) , using the following equation : where , , .the mpf function at a time is given by : the mpf( ) is used to derive the model cratering age by a best fit procedure that minimizes the reduced chi squared value , .data points are weighted according to their measurement errors .the formal errors on the best age correspond to a 50% increase of the around the minimum value .+ it must be realized that the formal statistical error on the model age only reflects the quality of the crater sfds . on top of that, other sources of uncertainties are present .they stem from the uncertainties involved in the physical parameters used in the model , although we want to stress here that the model ages are not very sensitive to details of the density and strength profile ( see 3.1 of * ? ? 
?a more important issue is the applicability of the present neo population in the past .the chronology function assumes a linear dependence with time in the last ga , corresponding to an impactor population in a steady state . on the other hand , dynamical studies of recent main belt asteroid family formation suggested that the present neo flux may be higher than the average steady flux by a factor of 2 .this result also agrees with what was found by cratering studies of young lunar terrains ( ga ; e.g. * ? ? ?* ) . concerning the layering structure , as described above , the choice of might affect the age estimate by a factor of 3 - 4 at most .the uncertainty due to the layering is , nevertheless , typically present only for young terrains where the crater sfd has a limited range of crater dimensions .the layering affects the specific shape of the crater sfd , and has to be evaluated case by case , as will be discussed in detail in the following sections .finally , it must be noted that wavy features in the crater sfd can also be due to processes other than layering , like for instance partial crater obliteration due to subsequent lava flows .therefore , the nature of the s - shaped feature must be constrained as much as possible by geological analysis in order to achieve a more reliable age determination .geological maps of the rachmaninoff and raditladi basins were constructed considering both floors and ejecta . for the floor terrains , the geological units were identified on the basis of their different surface morphologies and spectral characteristics ( i.e. , albedo ) , along with an analysis of their stratigraphic relationships .the ejecta units , surrounding the basins , were outlined considering exclusively the area of continuous ejecta blankets , which are easily detectable thanks to their characteristic hummocky surface .the geological maps also take into account tectonic features affecting the areas .+ crater age determination is based on the primary craters , i.e. those formed by impacts with objects in heliocentric orbits .hence , a crucial point in assessing age by crater counts is to identify and avoid secondary craters .most of the secondaries are recognizable because they are directly related to their primaries ( e.g. , the secondaries are arranged in radial patterns around the primary ) , or occur in loops , clusters and chains .the contribution of far - field secondaries , which are normally not distinguishable from primary craters ( e.g. * ? ? ? * ) , has been neglected .although this is a reasonable assumption for rachmaninoff and raditladi basins given their low crater densities and thus their presumably relatively young ages , mpf model ages may overestimate real ages .+ the rachmaninoff basin is surrounded by a continuous ejecta blanket and includes an interior peak ring structure , about 136 km in diameter , with extended smooth plains filling its floor .most of the basin walls are modified into terraces .several different geological units have been distinguished inside the floor on the basis of their different relative albedo and surface texture ( fig .[ rach_geo ] ) .the inner smooth plains are mostly within the peak ring except in the southern quadrant of rachmaninoff , where the smooth plains cover or embay the peak ring structure and some of the annular units within the rim .this observation suggests an origin by volcanic emplacement .
in the wac enhanced - color images these plains show a yellow to reddish tone which stands out from the darker and bluer color of the other units within the basin and surrounding regions . this clearly supports a different composition and origin of these inner smooth plains .several discontinuous and concentric troughs , possibly due to the uplift and extension of the basin floor , affect the area enclosed by the peak ring and have been interpreted to be graben .the annular region between the peak ring and the basin rim includes seven different units .the most prominent is made up of bright materials , apparently younger than all the other units and possibly related to explosive volcanism .peak ring and terrace material boundaries stand out for their relief , whereas hummocky , dark , irregular and annular smooth plains do not show unequivocal stratigraphic relationships with each other .this suggests an almost coeval origin of these units that may consist of impact melts and breccias .this is furthermore confirmed by the wac images , where annular units do not reveal any color variations and are characterized by uniform blue color similar to the surrounding terrain . to shed more light on the origin of the floor material we dated the rachmaninoff basin using crater statistics of annular units and inner plains separately .bright material was neglected in the crater counts due to its limited extent .craters in the ejecta blanket were counted as well ( fig . [ rach_geo ] ) .+ easily recognizable secondary craters ( either elliptical in shape or arranged in loops and chains ) have not been detected within the rachmaninoff inner plains but numerous pits up to 3.5 km in diameter have been found in close proximity to the concentric grabens .these features are very unlikely to be impact craters and in our interpretation are most probably of tectonic ( structural pits , fault bounded depressions , en - echelon structures ) and/or volcanic ( more or less irregular vents ) origin ( fig .[ rach_ex ] , panels a and b ) . for this reason their counts were neglected for the purposes of age determination .clusters and chains of secondaries with irregular and elliptical shapes were recognized in the western sector of the annular units and appear to be directly related to a nearby 60 km primary peak crater overlying the rachmaninoff ejecta ( fig .[ rach_ex ] , panels c and d ) .self secondaries are numerous within the ejecta blanket .+ the resulting crater count statistics are reported in table [ tab ] where for each terrain , all crater - like features , bonafide craters , secondary craters and endogenic ( namely volcanic or tectonic ) features are listed .it is interesting to compare the sfds of all the counts ( fig .[ rach_sfd ] ) . according to our best interpretation of the detected features , the inner plains contain more endogenic features than bonafide craters .hence , in our opinion the uneven distribution in r plots of crater - like features smaller than 4 km , which is generally attributed to the effect of far field secondaries on mercury , is in this case dominated by tectonic and volcanic features . for the annular units , both the secondary crater and endogenic sfds have steeper slopes with respect to the bonafide crater sfd ; moreover , they are limited to features smaller than 4 - 5 km .hence , for the annular units as well as the inner plains , the identification of endogenic features is clearly very important since it heavily affects the final bonafide crater sfd .
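the counts assembled in table [ tab ] are eventually turned into model ages by the best - fit procedure of section 2 : the model production function per unit time is scaled by the chronology to a trial age and compared with the measured cumulative densities through an error - weighted reduced chi - square , with the formal age error read off from a 50% increase of the minimum . the sketch below is schematic ; the power - law stand - in for the mpf and the `` observed '' densities are invented numbers , not the rachmaninoff or raditladi data .

import numpy as np

def mpf_per_gyr(D_km):
    # stand - in for the model production function ( cumulative craters km^-2 gyr^-1 )
    return 1e-4 * D_km ** -2.0

def model_density(D_km, age_gyr):
    # for young terrains the chronology is effectively linear in time ( see text )
    return age_gyr * mpf_per_gyr(D_km)

# invented " observed " cumulative densities and errors , for illustration only
D_obs = np.array([4.0, 6.0, 10.0, 16.0])                 # km
N_obs = np.array([2.9e-5, 0.8e-5, 4.2e-6, 1.1e-6])       # craters / km^2
N_err = 0.4 * N_obs

ages = np.linspace(0.1, 5.0, 2000)                       # gyr
chi2 = np.array([np.sum(((N_obs - model_density(D_obs, t)) / N_err) ** 2) for t in ages])
chi2 /= len(D_obs) - 1                                   # reduced chi - square

best = ages[np.argmin(chi2)]
band = ages[chi2 <= 1.5 * chi2.min()]                    # 50% increase around the minimum
print("model age = %.2f gyr ( +%.2f / -%.2f )" % (best, band.max() - best, best - band.min()))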
on the ejecta ,most of the crater - like features appear to be self secondaries mostly arranged into clusters and chains and/or with an elliptical shape .all the terrains were dated using their bonafide crater sfds .+ . statistics of all the features detected on rachmaninoff and raditladi basin floors and ejecta .`` all '' indicates all crater - like features , `` bon '' the bonafide craters , `` sec '' secondary craters ( which includes chain , cluster and elliptical craters ) , `` end '' endogenic ( volcanic and tectonic ) features .pl . '' and `` ann . un. '' stand for inner plain and annular unit , respectively .[ cols="<,^,^,^,^,^,^ " , ] the mpf fits of the observed crater sfds are shown in figure [ rach_mpf ] .the lower panel shows the distribution of bonafide primary impact craters detected on the ejecta blanket .a remarkable feature in the crater sfd of the ejecta is the presence of a flexure point at about km .the actual shape of the bonafide crater sfd is partially due to the feature selection .nevertheless , we think that at the large crater sizes relevant here , our selection is reliable and , consequently , the observed flexure point is likely a real feature possibly reflecting a layered target with an upper weak horizon .hence the mpf best fit is achieved with km and gives a model age of ga .[ rach_mpf ] ( upper panel ) the bonafide crater sfd on inner plains and annular units are shown .note that , unlike the ejecta , both cases do not show the presence of a flexure point .this may be due to a real absence of an upper weak horizon or to the lack of large craters that would have otherwise allowed to retrieve information on the geomechanical properties of the deep crustal layers .the annular units are composed of breccias more or less welded by impact melts which can have only partially strengthened the fractured material either pre - dating or originating from the rachmaninoff impact .hence , it is reasonable to assume at least the same of the crust beneath the ejecta .this leads to a model age of ga , which is consistent with the model age of the ejecta and likely dates the rachmaninoff impact event .+ the inner plains are characterized by much poorer statistics within a small range of diameters , therefore the crater sfd can not be used to infer .nevertheless , geological analysis suggest that the inner plains are younger volcanic flows on the basis of their different albedo , color and overlapping relationship with respect to the unit emplaced between the peak - ring and the basin rim .this would make possible also the scenario in which the former fractured horizon , either pre - dating or originating from the impact itself , was completely hardened by the rising magmas and emplacement of lava fields ( fig .[ rac_scenario]a ) . in this case, the mpf acceptably fits the bonafide crater size distribution giving a model age of ga ( fig . 
[ rach_mpf ] ) .by contrast the fit would be very poor if all the crater - like features were taken into account .this is not surprising considering the strong contribution that we infer tectonic and volcanic features have on the inner plains statistics .another possible scenario is that the magmatic activity within rachmaninoff was unable to totally strengthen the upper weak layer comprising fractured material originating from the impact itself or inherited from primordial events .this could be due to either weakly sustained volcanism , which emplaced a thin volcanic sequence on top of fractured material , or a magma influx concentrated along a few well - defined conduits within a still fractured crust underneath the basin ( fig .[ rac_scenario]b ) . in this case we also computed the model age using km as for the annular units and ejecta , obtaining a value of ga . in both cases ( upper weak crustal layer absent or preserved ) the inner plains turn out to be remarkably young , and demonstrate that recent volcanic activity occurred within the basin .+ raditladi contains an interior peak - ring structure 125 km in diameter and its walls appear to be degraded , with terraces most pronounced within the north and west sides of the rim .a continuous ejecta blanket with no visible system of rays surrounds the basin and extends up to 225 km from the basin rim ( fig .[ rad_geo ] ) .the floor is partially filled with smooth , bright reddish plains material that clearly embays the rim and the central peak ring .the northern and southern sectors of the basin floor consist of dark , relatively blue hummocky plains material confined between the rim and the peak ring .troughs are found close to the center of the basin arranged in a partially concentric pattern , km in diameter , and are interpreted either as graben resulting from post - impact uplift of the basin floor , or as circular dikes possibly representing fissural feeding vents .floor material was subdivided into two different units following : smooth and hummocky plains .smooth plains may have a volcanic origin , as appears to be the case for plains in the nearby caloris basin ; however , no clear stratigraphic relation with the hummocky plains has been found , suggesting that all the different terrains within raditladi basin may be coeval and directly related to the impact .
with respect to the ejecta area ,we have selected the hummocky continuous ejecta blanket surrounding the basin .+ we performed a crater count of the inner plains within the peak ring and the annular units enclosed between the basin rim and the peak ring .counts were also performed on the ejecta blankets .+ within the inner plains , numerous small graben - related pits up to 5 km in diameter are identified ; as for the case of rachmaninoff , they are most probably tectonically - originated features and/or volcanic vents ( fig .[ rad_ex ] , panels a and b ) .specifically , two peculiar pits in the northern inner plains were interpreted as volcanic vents for the dark material on the crater floor .secondary craters have been found associated to a 23-km crater within the inner plains ( fig .[ rad_ex ] , panels a and c ) .some secondaries are also present on the annular plains , whereas the ejecta blanket is characterized by numerous self secondary craters , occurring mainly in clusters and chains .figure [ rad_geo ] shows the bonafide craters .the statistics of all the identified features are reported in table [ tab ] , whereas the corresponding sfds are shown in fig .[ rad_sfd ] .+ the cumulative bonafide crater sfds for different terrains of the raditladi basin are shown in fig .[ rad_mpf ] , along with the mpf model ages .+ the measured crater sfd on the ejecta blanket shows a flexure at about km .the position of the flexure is well above the size of craters that can no longer be distinguished because of the image resolution , nevertheless the contribution of secondary craters at these crater sizes might be important . in the assumption that this feature is due to the layering , the best fit achieved for km gives a model age of ga .the best fit for the annular units is achieved using km and the resulting model age is ga .these values are consistent with both the layering and model age inferred for the ejecta blankets .for this reason the basin formation can be reliably fixed at around 1.1 - 1.3 ga in accordance with the age suggested by on the basis of a relative - chronology approach . as for the rachmaninoff basin ,the poor statistics and the limited range of diameters imply that the inner plains sfd can not be used to constrain .we derived the model age with both the same of the annular units ( km ) , and a negligible thickness ( ) , obtaining ga and ga , respectively . the former model age leads to a paradox given that the annular units are certainly coeval with the basin formation and , consequently , must only be older or of the same age as the smooth plains .hence , the most reliable result for the inner plains is to consider a solid material yielding a crater retention age of about 1.1 ga . 
the solid material could be due to an emplacement of lavas soon after the impact leading to a complete hardening of the fractured and brecciated material within the basin ( fig .[ rad_scenario]a ) .this interpretation is consistent with the presence of volcano - tectonic features within the basin but may conflict with both the absence of distinctive color variations of the inner plains with respect to the surrounding areas and their unclear stratigraphic relationship with the annular units .alternatively , a great amount of impact melts able to completely harden the impact breccias may explain the derived crater retention age ( fig .[ rad_scenario]b ) .mpf crater chronology has been applied to date the rachmaninoff and raditladi basins .age assessment has been performed taking into account target rheological layering and using the present neo population as the prime source of impactors .+ our results demonstrate that the volcanic activity within the rachmaninoff basin interior significantly post - dates the formation of the basin .the basin itself probably formed about 3.6 ga ago , whereas the volcanic inner plains may have formed less than 1 ga ago .therefore , mercury had prolonged volcanic activity , which possibly persisted even longer than on the moon , where the youngest detected nearside flows ( on oceanus procellarum ; * ? ? ?* ) are about 1.1 ga old . on the other hand , the raditladi basin has an estimated model age of about 1 ga and no firm indication that the inner plains formed more recently than the basin itself .hence , these plains may be due to either huge volumes of impact melts or lavas emplaced soon after the basin formation . in the latter case , which is not clearly supported by the stratigraphic observations , volcanism might have been triggered by the impact itself .+ this work also shows the role of target properties in deriving the age of a surface .where such properties are neglected , as in traditional chronologies ( e.g. * ? ? ?* ) , the crater production function may be unable to accurately reproduce the observed crater sfd and/or to provide a consistent age for nearby terrains .the following examples serve to illustrate this point : the rachmaninoff ejecta bonafide crater sfd shows an s - shaped feature that , according to our best knowledge , can not be ascribed to processes other than a layered target ; the raditladi inner plains have a higher density of craters than the annular units , implying a paradoxically older age for the interior plains if the inner and outer plains had the same material properties .+ the derived ranges of ages for raditladi basin imply that its formation occurred long after the late heavy bombardment ( ga ) , at a time when the primary source of impactors was a neo - like population .this conclusion also likely applies to rachmaninoff basin , even if it can not be excluded that it was formed during or prior to the late heavy bombardment .the neos average impact velocity on mercury is about 42 km / s . considering a most probable impact angle of , the projectiles responsible for rachmaninoff and raditladi formation should have had diameters in the range 14 - 16 km ( see fig .[ sl ] ) .
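as a rough consistency check , one can ask how likely two such impacts are over the inferred basin ages . the short estimate below treats 14 - 16 km impacts on mercury as a poisson process with a constant rate equal to the present - day value quoted in the next paragraph ( about one impact every 3.3 ga ) ; it ignores the higher impactor flux in the past , so it is only indicative .

import math

rate = 1.0 / 3.3        # impacts per gyr for 14 - 16 km neos ( present - day value , assumed constant )
T = 3.6                 # gyr , roughly the rachmaninoff model age
lam = rate * T
p_two_or_more = 1.0 - math.exp(-lam) * (1.0 + lam)
print("expected impacts = %.2f , p( >= 2 ) = %.2f" % (lam, p_two_or_more))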
in the present neo population , bodiesare quickly replenished -in time scales of tens of myrs- mainly from the main belt via slow orbital migration into major resonances .such a migration , due mainly to yarkowsky effect , is size dependent and is negligible for objects larger than km .therefore larger objects , such as those required for the formation of the raditladi and rachmaninoff basins , are mainly produced by dynamical chaos loss .those simulations show that the rate of large impactors decreased by a factor of 3 over the last 3 ga . another source of large impactors is the sporadic direct injection into strong resonances due to collisions .the present neos average impact probability with mercury in the size range of 14 - 16 km , is of about one impact every 3.3 ga , in agreement with the proposed timescales of the formation of rachmaninoff and raditladi .+ the authors wish to thank p. michel for helpful discussions on the cratering processes on a layered target .we also wish to thank a. morbidelli for discussions regarding the neo population .finally , we thank the referees ( c. chapman and an anonymous one ) for providing very interesting comments , that helped to improve our work .blewett d.t . ,robinson m.s . ,denevi b.w . ,gillis - davis j.j . , head j.w ., solomon s.c ., holsclaw g.m . , mcclintock w.e . , 2009 .multispectral images of mercury from the first messenger flyby : analysis of global and regional color trends .earth planet .lett . , 285(3 - 4 ) , 272 - 282 .bottke , w. f. , jedicke , r. , morbidelli , a. , petit , j .-m . , & gladman , b. 2000 . understanding the distribution of near - earth asteroids .science , 288 , 2190 - 2194 .bottke , w. f. , morbidelli , a. , jedicke , r. , petit , j .-m . , levison , h. f. , michel , p. , & metcalfe , t. s. 2002 . debiased orbital and absolute magnitude distribution of the near - earth objects .icarus , 156 , 399 - 433 .bottke , w. f. , vokrouhlick , d. , & nesvorn , d. 2007 .an asteroid breakup 160 myr ago as the probable source of the k / t impactor .nature , 449 , 48 - 53 .collins , g. s. , melosh , h. j. , & marcus , r. a. 2005 .earth impact effects program : a web - based computer program for calculating the regional environmental consequences of a meteoroid impact on earth ., 40 , 817 - 840 .collins , g. s. , kenkmann , t. , osinski , g. r. , wnnemann , k. 2008 .mid - sized complex crater formation in mixed crystalline - sedimentary targets : insight from modeling and observation ., 43 , 1955 - 1977 .croft , s. k. 1985 .the scaling of complex craters .lunar planet .conf . , 15 , 828 - 842 .hawkins , s.e ., boldt , j.d . ,darlington , e.h . ,espiritu , r. , gold , r.e . ,gotwols , b. , grey , m.p . , hash , c.d . ,hayes , j.r ., jaskulek , s.e . ,kardian , c.j . ,keller , m.r ., malaret , e.r . ,murchie , s.l . ,murphy , p.k . , peacock , k. , prockter ,reiter , r.a ., robinson , m.s . , schaefer , e.d . ,shelton , r.g . , sterner , r.e . , taylor , h.w . , watters , t.r . , williams , b.d ., 2007 . the mercury dual imaging system on the messenger spacecraft .space sci .rev . , 131 , 247 - 338 .head j.w . ,murchie s.l . ,prockter l.m . ,solomon s.c . ,strom r.g . ,chapman c.r . , watters t.r ., blewett d.t ., gillis - davis j.j . ,fassett c.i ., dickson j.l . , hurwitz d.m . ,ostrach l.r . , 2009 .evidence for intrusive activity on mercury from the first messenger flyby .earth planet ., 285(3 - 4 ) , 251 - 262 .hiesinger , h. , head , j. w. , iii , wolf , u. , & neukum , g. 
2001 .new age determinations of lunar mare basalts in mare cognitum , mare nubium , oceanus procellarum , and other nearside mare .lunar planet .abstracts , 32 , 1815 .holsapple , k. a. 1993 .the scaling of impact processes in planetary sciences .earth planet ., 21 , 333 - 373 . , k. a. , housen , k. r. 2007 . a crater and its ejecta : an interpretation of deep impact .icarus , 187 , 345 - 356 .hrz , f. , grieve , r. , heiken , g. , spudis , p. , and binder , a. 1991 . lunar surface processes . in lunar sourcebook : a user s guide to the moon ( g. h. heiken , d. t. vaniman , and b. m. french , eds . ) cambridge university press , cambridge .khan , a.k .mosegaard , k.l .rasmussen .2000 . a new seismic velocity model for the moon from a monte carlo inversion of the apollo lunar seismic data .j. geophys ., 27(11 ) , 1591 - 1594 . ,s. , morbidelli , a. , cremonese , g. 2005 .flux of meteoroid impacts on mercury ., 431 , 1123 - 1127 . ,s. , mottola , s. , cremonese , g. , massironi , m. , martellato , e. 2009 . a new chronology for the moon and mercuryj. , 137 , 4936 - 4948 .massironi , m. , cremonese , g. , marchi , s. , martellato , e. , mottola , s. , wagner , r.j .mercury chronology revisited through mpf application on mariner 10 data : new geological implications .j. geophys .lett . , 36 , 21204 .mcewen , a. s. , & bierhaus , e. b. 2006 .the importance of secondary cratering to age constraints on planetary surfaces .earth planet .sci . , 34 , 535 - 567 .mckinnon , w. b. , & schenk , p. m. 1985 .ejecta blanket scaling on the moon and - inferences for projectile populations .lunar planet .abstract , 16 , 544 .melosh , h. j. , 1989 .impact cratering : a geologic process .oxford university press , new york 1989 , pp .minton , d. a. , & malhotra , r. 2010 .dynamical erosion of the asteroid belt and implications for large impacts in the inner solar system .icarus , 207 , 744 - 757 .neukum , g. , & ivanov , b. a. 1994 , hazards due to comets and asteroids , 359 .pike , r. j. 1988 .geomorphology of impact craters on mercury .mercury ( a89 - 43751 19 - 91 ) .tucson , az , university of arizona press , 1988 , p. 165 - 273 .prockter , l.m . ,watters , t.r . ,chapman , c.r ., denevi , b.w . , head , j.w . ,solomon , s.c . ,murchie , s.l ., barnouin - jha , o.s . ,robinson , m.s . ,blewett , d.t . , gillis - davis , j. 2009 .the curious case of raditladi basin .lunar planet .abstract , 40 , 1758 .prockter , l.m . ,ernst , c.m . ,denevi , b.w . ,chapman , c.r ., head iii , j.w . ,fassett , c.i . ,merline , w.j . ,solomon , s.c . ,watters , t.r . , blewett , d.t . ,cremonese , g. , marchi , s. , massironi , m. , barnouin , o.s .evidence for young volcanism on mercury from the third messenger flyby .science , 329 , 668 - 671 .quaide , w. l. , & oberbeck , v. r. 1968 .thickness determinations of the lunar surface layer from lunar impact craters .. res . , 73 , 5247 .robinson m.s . ,murchie s.l ., blewett d.t . ,domingue d.l ., hawkins s.e ., head j.w . , holsclaw g.m . , mcclintock w.e . ,mccoy t.j . ,mcnutt r.l . ,prockter l.m . ,solomon s.c ., watters t.r .reflectance and color variations on mercury : regolith processes and compositional heterogeneity .science , 321 , 66 - 69 .simmons g. , todd t. , wang h. 1973 .the 25 km discontinuity : implications for lunar history .science , 182 , 158 - 161 .steffl , a. j. , cunningham , n. j. , durda , d. d. , & stern , s. a. 2009 , aas / division for planetary sciences meeting abstracts , 41 , # 43.01 stffler , d. , & ryder , g. 
2001 .stratigraphy and isotope ages of lunar geologic units : chronological standard for the inner solar system .space sci ., 96 , 9 - 54 .strom , r.g . ,chapman , c.r ., merline , w.j . ,solomon , s.c . ,head , j.w .mercury cratering record viewed from messenger s first flyby .science , 321 , 79 .tokosoz m.n ., press f. , dainty a. , a. , anderson k. , latham g. , ewing m. dorman j. , lammlein d. , sutton g. , duennebeir f. 1972 .structure composition and properties of lunar crust .lunar planet .conf . , 3 , 2527 - 2544 .wagner , r. , head , j. w. , wolf , u. , & neukum , g. 2002 .stratigraphic sequence and ages of volcanic units in the gruithuisen region of the moon .j. geophys ., 107 , 5104 zappala , v. , cellino , a. , di martino , m. , migliorini , f. , & paolicchi , p. 1997 .maria s family : physical structure and possible implications for the origin of giant neas .icarus , 129 , 1 - 20 . | in this paper we present a crater age determination of several terrains associated with the raditladi and rachmaninoff basins . these basins were discovered during the first and third messenger flybys of mercury , respectively . one of the most interesting features of both basins is their relatively fresh appearance . the young age of both basins is confirmed by our analysis on the basis of age determination via crater chronology . the derived rachmaninoff and raditladi basin model ages are about 3.6 ga and 1.1 ga , respectively . moreover , we also constrain the age of the smooth plains within the basins floors . this analysis shows that mercury had volcanic activity until recent time , possibly to about 1 ga or less . we find that some of the crater size - frequency distributions investigated suggest the presence of a layered target . therefore , within this work we address the importance of considering terrain parameters , as geo - mechanical properties and layering , into the process of age determination . we also comment on the likelihood of the availability of impactors able to form basins with the sizes of rachmaninoff and raditladi in relatively recent times . mercury , raditladi basin , rachmaninoff basin , craters , age determination |
consider the canonical non - parametric regression setup where is an unknown function in ] . given , the conditional density is a product of , where denotes a univariate gaussian density function with mean and unit variance .based on observing , we estimate by a predictive density , a non - negative function of that integrates to 1 with respect to .common approaches to constructing includes the `` plug - in '' rule that simply substitutes an estimate for in , and the bayes rule that integrates with respect to a prior to obtain we measure the discrepancy between and by the average kullback leibler ( kl ) divergence assuming that belongs to a function space , such as a sobolev space , we are interested in the minimax risk it is worth observing that in this framework , the densities of future observations are estimated simultaneously by .an alternative approach is to estimate the densities individually by with risk when the s are equally spaced and goes to infinity , the risk above converges to which can be interpreted as the integrated kl risk of prediction at a random location in ] , that is , then , , where is the coefficient with respect to the basis element . a function space corresponds to a constraint on the parameter space of . in this paper, we consider function spaces whose parameter spaces have ellipsoid constraints , that is , where and .we approximate by a finite summation .the bias incurred by estimating instead of can be expressed as ^ 2 = \frac{1}{2 m } \sum_{i = n+1}^{\infty } \theta_i^2.\end{aligned}\ ] ] this bias is often negligible compared to the prediction risk ( [ risk : simultaneous ] ) ; for example , it is of order for sobolov ellipsoids , as defined in ( [ sobolev_space ] ) .therefore , from now on , we set .let , be a matrix whose entry equals and be a matrix whose entry equals . then, and are two independent gaussian vectors with and , where denotes the identity matrix .note that since the s and s are equally spaced , we have and . defining is then easy to check that and are independent and that where and .we refer to the model above as a _ gaussian sequence model _ since its number of parameters is increasing at the same rate as the number of data points .consider the problem of predictive density estimation for the gaussian sequence model ( [ model : normal ] ) .let denote a predictive density function of given .the incurred kl risk is defined to be and the corresponding minimax risk is given by the following theorem states that the two minimax risks , the one associated with from a non - parametric regression model and the one associated with from a normal sequence model , are equivalent .[ teo2.1 ] , where is defined in and in .see the .the idea of reducing a non - parametric regression model to a gaussian sequence model via an orthonormal function basis has been widely used for non - parametric function estimation .early references include ibraginov and hasminskii , efromovich and pinsker and references therein. for recent developments , see brown and low , nussbaum and johnstone .our proof of theorem [ teo2.1 ] , given in the , implies that _ simultaneous _ estimation of predictive densities in these two models are equivalent .however , this equivalence does not hold for the _ individual _ estimation approach described in section [ sec1 ] because the product form of the density estimators , that is , , is not retained under the transformation .direct evaluation of the minimax risk ( [ def : r ] ) is difficult because the parameter space is constrained . 
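as a quick numerical check of this reduction, the sketch below builds the sequence model from simulated regression data, using an orthonormal cosine basis on [0,1] and equally spaced (midpoint) design points. it assumes the standard construction x_i = (1/n) sum_j phi_i(t_j) y_j, which may differ from the exact normalization adopted in ( [ eq : transform ] ); the point is only to verify that the transformed coefficients behave like independent gaussians with mean theta_i and variance 1/n.

import numpy as np

rng = np.random.default_rng(0)
n, n_rep = 200, 2000                              # sample size, monte carlo replications
t = (np.arange(1, n + 1) - 0.5) / n               # equally spaced midpoints in (0, 1)

def phi(i, t):
    # orthonormal cosine basis on [0, 1]: phi_1 = 1, phi_i = sqrt(2) cos((i - 1) pi t)
    return np.ones_like(t) if i == 1 else np.sqrt(2.0) * np.cos((i - 1) * np.pi * t)

theta = np.array([1.0, -0.5, 0.3, 0.0, 0.1])      # true coefficients of the regression function
f = sum(th * phi(i + 1, t) for i, th in enumerate(theta))

X = np.empty((n_rep, theta.size))
for r in range(n_rep):
    y = f + rng.standard_normal(n)                # y_j = f(t_j) + standard gaussian noise
    X[r] = [np.mean(phi(i + 1, t) * y) for i in range(theta.size)]

print("empirical means :", X.mean(axis=0).round(3))   # close to theta
print("empirical vars  :", X.var(axis=0).round(4))    # close to 1/n = 0.005

repeating the same transformation on the averaged future responses gives independent gaussians with variance 1/m, which is the other half of the sequence model.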
in this section ,we first consider a subclass of density estimators that have simple forms and investigate the minimax risk over this subclass . in next section , we then show that the minimax risk over this subclass is asymptotically equivalent to the overall minimax risk .such an approach was first used in pinsker to establish a minimax risk bound for the function estimation problem .it inspired a series of developments , including belitser and levit , tsybakov and goldenshluger and tsybakov .recall that in the problem of estimating the mean of a gaussian sequence model under loss , diagonal linear estimators of the form play an important role .indeed , pinsker showed that when the parameter space ( [ theta(c ) ] ) is an ellipsoid , the minimax risk among diagonal linear estimators is asymptotically minimax among all estimators .moreover , the results in diaconis and ylvisaker imply that if such a diagonal linear estimator is bayes , then the prior must be a gaussian prior with a diagonal covariance matrix .similarly , in investigating the minimax risk of predictive density estimation , we first restrict our attention to a special class of that are bayes rules under gaussian priors over the unconstrained parameter space . due to the above connection, we call these predictive densities _ linear _ predictive densities and call the minimax risk over this class the _ linear _ minimax risk , even though ` linear ' does not have any literal meaning in our setting . under a gaussian prior , where and for , the linear predictive density is given by note that is not a bayes estimator for the problem described in section [ sec2 ] because the prior distribution is supported on instead of on the ellipsoidal space .nonetheless , is a valid predictive density function .the following lemma provides an explicit form of the average kl risk of .[ lem3.1 ] the average kullback leibler risk ( [ risk : simultaneous ] ) of is given by ,\ ] ] where .let denote the posterior predictive density under the uniform prior , namely , then , by , lemma 2 , the average kl risk of is given by where and denotes the marginal distribution of under the normal prior .it is easy to check that and - \frac{1}{2 m } \sum_{i=1}^n \frac{\s2_{n+m } + \th_i^2}{\s2_{n+m } + s_i } , \label{lemma1:part2 } \\e \log m_s(x ; \s2_n ) & = & -\frac{n}{2 m } \sum_{i=1}^n \log [ 2 \curpi ( \s2_n + s_i ) ] - \frac{1}{2 m } \sum_{i=1}^n \frac{\s2_n + \th_i^2}{\s2_n + s_i}. \label{lemma1:part3}\end{aligned}\ ] ] the lemma then follows immediately by combining equations ( [ eq : marginal representation])([lemma1:part3 ] ) .we denote the linear minimax risk over all by , that is , this linear minimax risk is not directly tractable because the inside maximization is over a constrained space . in the following theorem, we first show that we can switch the order of and in equation ( [ r_l : def ] ) and then evaluate using the lagrange multiplier method. 
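the risk of a linear predictive density can also be checked by direct simulation, which provides a useful sanity check on closed - form expressions such as the one in lemma [ lem3.1 ] . working coordinate - wise, if x ~ N(theta, v_x) is observed and the target is the density of y ~ N(theta, v_y), the bayes predictive density under the prior theta ~ N(0, s) is gaussian with mean s x / (s + v_x) and variance v_y + s v_x / (s + v_x) (a standard normal - normal computation), and the kullback - leibler divergence between two gaussians has the usual closed form. the sketch below averages that divergence over x by monte carlo; the 1/(2m) normalization of the risk is omitted, so only relative comparisons are meaningful.

import numpy as np

def kl_gauss(mu1, v1, mu2, v2):
    # KL( N(mu1, v1) || N(mu2, v2) ) for univariate gaussians
    return 0.5 * (np.log(v2 / v1) + (v1 + (mu1 - mu2) ** 2) / v2 - 1.0)

def risk_linear(theta, s, v_x, v_y, n_mc=200000, seed=1):
    # E_x KL( p(. | theta) || q_s(. | x) ) for the linear predictive density q_s
    rng = np.random.default_rng(seed)
    x = theta + np.sqrt(v_x) * rng.standard_normal(n_mc)
    mu_q = s * x / (s + v_x)
    v_q = v_y + s * v_x / (s + v_x)
    return kl_gauss(theta, v_y, mu_q, v_q).mean()

v_x, v_y = 1.0 / 100, 1.0 / 100                   # e.g. n = m = 100, a single coordinate
for s in (1e-4, 1e-2, 1.0):
    print(f"s = {s:<6g}  estimated risk = {risk_linear(0.1, s, v_x, v_y):.5f}")

shrinking too aggressively (small s) is costly when theta is not small, while a diffuse prior essentially reproduces the uniform - prior predictive density; the minimax analysis that follows is about choosing the s_i optimally over the ellipsoid.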
the following notation will be useful throughout .let denote a solution of the equation + = 2 c,\ ] ] where {+ } = \sup(x , 0) ] and let for .it is easy to check that the sequence satisfies the condition ( [ cond : s_i ] ) .therefore , by theorem [ lem4.1 ] , \\[-8pt ] & = & r_l(\theta ) - \frac{1}{2 m } \sum_{i=1}^n \log \frac{(\s2_n + b_i^2)(\s2_{n+m } + \tilde{\theta}_i^2)}{(\s2_{n+m } + b_i^2)(\s2_n + \tilde{\theta}_i^2 ) } + \mathrm{o}(\s2_n^\alpha)\qquad \mbox{as } \s2_n \rightarrow 0.\nonumber\end{aligned}\ ] ] next , we will derive the convergence rate of and show that the other terms are of smaller order .using the fact that for ( see ( [ def : n ] ) ) , we can rewrite as when , we have and .therefore , by means of a taylor expansion , similarly , since for , the second term in ( [ ineq : lower_bound ] ) can be written as for every , we have ( \s2_{n+m } + \tilde{\theta}_i^2 ) } { [ ( 1 + \gamma ) \s2_{n+m } + \tilde{\theta}_i^2 ] ( \s2_n + \tilde{\theta}_i^2 ) } \biggr)\\ & = & \log \biggl ( 1 + \gamma \frac{(\s2_n - \s2_{n+m } ) \tilde{\theta}_i^2}{(\s2_n + \tilde{\theta}_i^2)(\s2_{n+m } + \tilde{\theta}_i^2 ) + \gamma \s2_n ( \s2_{n+m } + \tilde{\theta}_i^2 ) } \biggr ) \\ & \leq & \log \biggl ( 1 + \gamma \frac{(\s2_n - \s2_{n+m } ) \tilde{\theta}_i^2}{(\s2_n + \tilde{\theta}_i^2)\s2_{n+m } } \biggr).\end{aligned}\ ] ] again using a taylor expansion , as well as the condition that , we obtain finally , since , by choosing , the last term in ( [ ineq : lower_bound ] ) satisfies combining ( [ ineq : lower_bound])([eq : term_3 ] ) , the theorem then follows .in this section , we apply theorems [ teo3.2 ] and [ teo4.2 ] to establish asymptotic behaviors of minimax risks over some constrained parameter spaces . in particular , we consider the asymptotics over balls and sobolev ellipsoids .[ ex1 ] suppose that and is restricted in an ball , the ball can be considered as a variant of the ellipsoid ( [ theta(c ) ] ) with and .although the values of the s here depend on , the proofs of the above theorems are still valid .it is easy to see that defined in ( [ def : n ] ) is equal to and that . therefore , by theorem [ teo4.2 ] , the minimax risk among all predictive density estimators is asymptotically equivalent to the minimax risk among _ linear _ density estimators .furthermore , by theorem [ teo3.2 ] , note that this minimax risk is strictly smaller than the minimax risk over the class of plug - in estimators since , for any plug - in density , = \frac{1}{2 } e \|\hat\theta - \theta\|^2\end{aligned}\ ] ] and by pinsker s theorem , the minimax risk of estimating under squared error loss is , which is larger than , by the fact that for any .[ ex2 ] suppose that and is restricted in a sobolev ellipsoid where for then , by ( [ def : n ] ) , we have as . substituting this relation into equation ( [ def : tl ] ) yields using the taylor expression and the asymptotic relation obtain where ^{{1}/{(2\alpha+1)}}.\ ] ] note that , by ( [ def : tth ] ) , + \bigl(1+\mathrm{o}(1)\bigr).\end{aligned}\ ] ] therefore , by theorem [ teo4.2 ] , the minimax risk among all predictive density estimators is asymptotically equivalent to the minimax risk among the _ linear _ density estimators. furthermore , by theorem [ teo3.2 ] , it is difficult to calculate an explicit form of the optimal constant for the minimax risk due to the function , but we can get an accurate bound for it . 
by taylor expansion , there exists , such that moreover , where therefore , that is , the convergence rate is and the convergence constant is between and . as in example[ ex1 ] , we compare the asymptotics of this minimax risk with the one over the class of plug - in estimators , where the latter can be easily computed by ( [ plugin_risk ] ) and the results in .direct comparison reveals that the convergence rates of both minimax risks are and the convergence constants can both be written in the form , where is a function depending only on .although it is hard to obtain an explicit representation for the convergence constant for the overall minimax risk , our simulation result in figure [ fig1 ] shows that it is strictly smaller than that over the class of plug - in estimators . and .] [ append ]in this appendix , we provide the proofs of theorem [ teo2.1 ] and lemma [ lem4.1 ] .proof of theorem [ teo2.1 ] let be an matrix whose entry equals . since the s form an orthogonal basis for and the s are equally spaced , we have .consider the transformation .since the first columns of are , the first elements of the transformed vector are just , defined in ( [ eq : transform ] ) , and we denote the remaining elements by . it is easy to check that and are independent multivariate gaussian variables , and the target density function satisfies where is the jacobian for this transformation .similarly , any predictor density estimator can be rewritten as where is a transformation of defined in ( [ eq : transform ] ) .note that the two predictive density functions on the left and right sides of the above equation may have different functional forms ; however , to simplify the notation , we use the same symbol to represent them when the context is clear .now , the average kl risk can be represented as \\[-8pt ] & = & { e}_{x , \x , \z | \th } \log \frac{p(\x , \z { \vert } \th)}{\hp(\x , \z { \vert } x)},\nonumber\end{aligned}\ ] ] where the second equality follows from ( [ trans_1 ] ) and ( [ trans_2 ] ) . since and are independent , we can split as where has a known distribution moreover , to evaluate the minimax risk , it suffices to consider predictive density estimators in the form because any predictive density can be written as , and if is equal to , then this density estimator is dominated by , due to the non - negativity of kl divergence . combining ( [ teo2.1:eq1])([teo2.1:eq3 ] ) , we have consequently , the minimax risk in the non - parametric regression model is equal to the minimax risk in the gaussian sequence model .proof of lemma [ lem4.1 ] let be the collection of all ( generalized ) bayes predictive densities . 
then , by , theorem 5 , is a complete class for the problem of predictive density estimation under kl loss .therefore , the minimax risk among all possible density estimators is equivalent to the minimax risk among ( generalized ) bayes estimators , namely , consider a gaussian distribution , where and the s satisfy condition ( [ cond : s_i ] ) .then , the first term of ( [ ineq : bound_1 ] ) is the bayes risk under over the unconstrained parameter space .it is achieved by the linear predictive density ; see .therefore , \\[-8pt ] & = & \frac{n}{2 m } \log \frac{\s2_n}{\s2_{n+m } } + \frac{1}{2m}\sum_{i=1}^n \log \frac{\s2_{n+m } + s_i^2}{\s2_n + s_i^2}.\nonumber\end{aligned}\ ] ] to bound the second term of ( [ ineq : bound_1 ] ) , note that for any bayes predictive density , where ( [ ineq : risk : bayes:1 ] ) is due to jensen s inequality , ( [ ineq : risk : bayes:2 ] ) is due to and ( [ ineq : risk : bayes:3 ] ) is due to therefore , ,\ ] ] where . using the cauchy schwarz inequality , we can further bound the right - hand side of ( [ eq : bound : term2 ] ) as follows : \\ & & \quad \le \frac{1}{m v_m } \biggl [ \sum_{i=1}^n \biggl ( \int_{\th^c } \th_i^4 \pi_s(\th ) \,\mathrm{d } \th \biggr)^{1/2 } \sqrt{\pi_s(\th^c ) } + \frac{c}{a_1 ^ 2 } \pi_s(\th^c ) \biggr ] \\ & & \quad = \frac{1}{m v_m } \biggl [ \sqrt{3 } \sqrt{\pi_s(\th^c ) } \sum_{i=1}^n s_i^2 + \frac{c}{a_1 ^ 2 } \pi_s(\th^c ) \biggr ] \\ & & \quad \le \frac{1}{m v_m } \biggl [ \sqrt{3}\frac{c}{a_1 } \sqrt{\pi_s(\th^c ) } + \frac{c}{a_1 } \pi_s(\th^c ) \biggr].\end{aligned}\ ] ] then , by , proposition 2 , which states that if are independent gaussian random variables with and , then we have ^{1/2 } \le \s2_n^ { \alpha},\end{aligned}\ ] ] due to condition ( [ cond : s_i ] ) . combining ( [ ineq : bound_1 ] ) , ( [ ineq : term_1 ] ) , ( [ eq : bound : term2 ] ) and ( [ order ] ) , the theorem then follows immediately .the authors would like to thank edward i. george for helpful discussions and the associate editor for generous insights and suggestions .this work was supported in part by the national science foundation under award numbers dms-07 - 32276 and dms-09 - 07070 .any opinions , findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the national science foundation . | we consider the problem of estimating the predictive density of future observations from a non - parametric regression model . the density estimators are evaluated under kullback leibler divergence and our focus is on establishing the exact asymptotics of minimax risk in the case of gaussian errors . we derive the convergence rate and constant for minimax risk among bayesian predictive densities under gaussian priors and we show that this minimax risk is asymptotically equivalent to that among all density estimators . |
the nonlinear delay differential equation where are positive constants , proposed by mackey and glass , has been used as an appropriate model for the dynamics of hematopoiesis ( blood cells production ) . in medical terms , denotes the density of mature cells in blood circulation at time and is the time delay between the production of immature cells in the bone marrow and their maturation for release in circulating bloodstream .as we may know , the periodic or almost periodic phenomena are popular in various natural problems of real world applications . in comparing with periodicity ,almost periodicity is more frequent in nature and much more complicated in studying for such model . on the other hand , many dynamical systems describe the real phenomena depend on the history as well as undergo abrupt changes in their states .this kind of models are best described by impulsive delay differential equations . a great deal of effort from researchers has been devoted to study the existence and asymptotic behavior of almost periodic solutions of and its generalizations due to their extensively realistic significance .we refer the reader to and the references therein .particularly , in , wang and zhang investigated the existence , nonexistence and uniqueness of positive almost periodic solution of the following model by using a new fixed point theorem in cone . very recently , using a fixed point theorem for contraction mapping combining with the lyapunov functional method , zhang et al . obtained sufficient conditions for the existence and exponential stability of a positive almost periodic solution to a generalized model of by employing a novel argument , a delay - independent criteria was established in ensuring the existence , uniqueness , and global exponential stability of positive almost periodic solutions of a non - autonomous delayed model of hematopoiesis with almost periodic coefficients and delays . in , alzabut et al . considered the following model of hematopoiesis with impulses where represents the instant at which the density suffers an increment of unit and density of mature cells in blood circulation decreases at prescribed instant by some medication and it is proportional to the density at that time . by employing the contraction mapping principle and applying gronwall - bellman s inequality , sufficient conditions which guarantee the existence and exponential stability of a positive almost periodic solution of system were given in as follows .[ thm1 ] assume that * the function is almost periodic in the sense of bohr and there exists a positive constant such that .* the sequence is almost periodic and . *the sequences are uniformly almost periodic and there exists a positive constant such that , where , and . *the function is almost periodic in the sense of bohr , , and there exists a positive constant such that . 
* the sequence is almost periodic and there exists a constant such that .if , then equation has a unique positive almost periodic solution .unfortunately , the above theorem is incorrect .for this , let us consider the following example .[ exam1 ] consider the following equation note that is a special case of .moreover , we can easily see that equation satisfies conditions ( c1)-(c5 ) , where and .suppose that system has a positive almost periodic solution .it is obvious that for any positive integer , we have which yields a contradiction .this shows that has no positive almost periodic solution .thus , theorem [ thm1 ] is incorrect , and theorem 3.2 in is also incorrect . motivated by the aforementioned discussions , in this paper we consider a generalized model of hematopoiesis with delays , harvesting terms and impulses of the form ,\quad t\neq t_k , \end{aligned}\\ & \delta x(t_k)=x\left(t_k^+\right)-x\left(t^-_k\right ) = \gamma_kx\left(t_k^-\right)+\delta_k , \quad k\in \mathbb{z } , \end{split}\ ] ] where is given positive integer , ,are nonnegative functions ; are nonnegative functions represent harvesting terms ; , are time delays ; are positive numbers and is a constant ; are constants ; are nonnegative integrable functions on $ ] with ; , is an increasing sequence involving the fixed impulsive points with .the main goal of the present paper is to establish conditions for the existence of a unique positive almost periodic solution of model .it is also proved that , under the proposed conditions , the unique positive almost periodic solution of is globally exponentially attractive .the rest of this paper is organized as follows .section 2 introduces some notations , basic definitions and technical lemmas .main results on the existence and exponential attractivity of a unique positive almost periodic solution of are presented in section 3 .an illustrative example is given in section 4 .the paper ends with the conclusion and cited references .let be a fixed sequence of real numbers satisfying for all , .let be an interval of , denoted by the space of all piecewise left continuous functions with points of discontinuity of the first kind at , .the following notations will be used in this paper . for bounded functions , and a bounded sequence , we set the following definitions are borrowed from .[ def1 ] the set of sequences , where , is said to be uniformly almost periodic if for any positive number , there exists a relatively dense set of -almost periods common for all sequences .[ def2 ] a function is said to be almost periodic if the following conditions hold * the set of sequences is uniformly almost periodic . * for any , there exists such that , if belong to the same interval of continuity of , , then . * for any , there exists a relatively dense set of -almost periods such that , if then for all , satisfying . for equation ,we introduce the following assumptions * the function is almost periodic in the sense of bohr and . *the functions , are nonnegative and almost periodic in the sense of bohr . * the function ,are bounded nonnegative and almost periodic in the sense of bohr in uniformly in and there exist positive constants such that * the functions , are almost periodic in the sense of bohr , are bounded , , . *the sequence is almost periodic . *the sequence is almost periodic satisfying where . *the set of sequences is uniformly almost periodic , .[ rm1 ] it should be noted that model includes as a special case . 
for that model , assumptions ( a3 ) , ( a4 )obviously be removed .furthermore , we make assumption ( a6 ) in order to correct condition ( c2 ) in . the following lemmas will be used in the proof of our main results .[ lem1 ] let assumption ( a7 ) holds .assume that functions , are almost periodic in the sense of bohr , a function and sequences , are almost periodic .then for any , there exist , relatively dense sets , such that * ; * ; * [ lem2 ] let assumption ( a7 ) holds .assume that functions , are almost periodic in uniformly in in the sense of bohr , a function and sequences , are almost periodic . then for any compact set and positive number , there exist , relatively dense sets , such that * ; * ; * the proof of this lemma is similar to the proof of lemma 2.1 in so let us omit it here .[ lem3 ] for given , real number and integers such that , if for all and then .the proof is straight forward , so let us omit it here .[ lem4 ] let assumptions ( a6 ) and ( a7 ) hold . if satisfies for all , then using the facts that and , from ( a6 ) , we have the proof is completed .[ lem5 ] let assumption ( a7 ) holds . for any , , we have the proof follows by some direct estimates and , thus , is omitted here .now , let , then . from biomedical significance , we only consider the initial condition ,\mathbb{r}).\ ] ]it should be noted that problem and has a unique solution defined on which is piecewise continuous with points of discontinuity of the first kind , namely , at which it is left continuous and the following relations are satisfied related to , we consider the following linear equation [ lem6 ] let assumptions ( a1 ) , ( a6 ) and ( a7 ) hold .then where is the cauchy matrix of , and .the proof is straight forward from , so let us omit it here .similar to lemma 36 in and lemma 2.6 in we have the following lemma .[ lem7 ] let assumptions ( a1 ) , ( a6 ) and ( a7 ) hold . then , for given , relatively dense sets , , satisfying * ; * for any , satisfying , , where \right\}.\ ] ] we divide the proof into two possible cases as follows . _ case 1 : _ . by lemma [ lem3 ] , . since , , and , it follows from , that _ case 2 : _similarly , we have by lemma [ lem4 ] , from - we obtain the proof is completed .it is worth noting that , the proof of lemma [ lem7 ] is different from those in . by employing lemma [ lem4 ] , we obtain a new bound for constant given in .[ lem8 ] assume that there exist constants , and a function satisfying * for , where and ; * for , where and denotes the upper - right dini derivative ; * for all satisfies .then where us set and .we define an operator as follows it can be verified that is an almost periodic solution on of if and only if .we define the following constants [ lem9 ] let assumptions ( a1)-(a7 ) hold .if then let .by lemma [ lem5 ] and lemma [ lem6 ] , from we have for each , let be an integer such that .if then , by lemma [ lem6 ] , it follows from the fact that if then from ( a7 ) and lemma [ lem6 ] , we have from and we obtain the proof is completed .now we are in position to introduce our main results as follows .[ thm2 ] under the assumptions ( a1)-(a7 ) , if then is almost periodic .let . 
for given ,there exists such that , if belong to the same interval of continuity of then by lemma [ lem2 ] and lemma [ lem7 ] , there exist , relatively dense sets , such that , for all , we have , i\in\underline{m};\\ & |a(t+\omega)-a(t)|<\delta , \ t\in \mathbb{r};\\ & |b_i(t+\omega)-b_i(t)|<\delta,\ ; |c_i(t+\omega)-c_i(t)|<\delta , \t\in \mathbb{r } , i\in\underline{m};\\ & |\tau_i(t+\omega)-\tau_i(t)|<\delta , \ ; |\sigma_i(t+\omega)-\sigma_i(t)|<\delta , \t\in \mathbb{r } , i\in\underline{m};\\ & |\gamma _ { k+p}-\gamma _ k| < \delta,\ ; |\delta _ { k+p}-\delta _ k| < \delta,\ ; \end{aligned}\ ] ] let , .one can easily see that we define . for , , let us set then we have and .it can be seen from and that by lemma [ lem5 ] and lemma [ lem6 ] , from , and the fact that , we have and it should be noted that , by ( a4 ) , , , are strictly increasing functions , and thus , there exist the inverse functions of .for each , denote then let , , then , by ( a4 ) , therefore , and hence , from equations - , we have } \left[\tau^*_i ( t_k+\omega+\epsilon)-\tau^*_i(t_k+\omega-\epsilon)\right]\\ & \leq\frac{a\epsilon}{a_l}+4a\|\phi\|\overline{\lambda}_i\epsilon \sum_{t_k<\bar t}e^{-a_l{\underline \lambda}_i(\bar t - t_k ) } \leq \bigg(\frac{1}{a_l}+\frac{4\overline{\lambda}_i\|\phi\|}{1-e^{-a_l{\underline\lambda}_i\eta}}\bigg)a\epsilon . \end{aligned}\ ] ] by the same arguments used in deriving , we obtain combining - , and , we readily obtain \epsilon.\ ] ] next , let us set it follows from and that by lemmas [ lem5 ] and [ lem6 ] , from we have and where . from - ,we readily obtain \epsilon.\ ] ] now , we define then , from and , we have also using lemma [ lem5 ] and lemma [ lem6 ] , from and the fact that , we obtain let . similarly to and , we readily obtain inequalities - yield let us set then for each , there exists a unique integer such that . by lemma [ lem3 ]we have .thus where note that and , from and lemma [ lem5 ] , we have similarly , we obtain it follows from - that \epsilon.\ ] ] we also have therefore + \frac{a\epsilon}{1-e^{-\frac{1}{2}a_l\eta}}.\ ] ] we can see clearly from , , , and that there exists a positive constant such that , for all , .this shows that is almost periodic .the proof is completed .[ thm3 ] let assumptions ( a1)-(a7 ) hold .if , defined in , is positive and where , then equation has a unique positive almost periodic solution .we define .it is worth noting that , from lemma [ lem9 ] , theorem [ thm2 ] and the assumption , we have . for any ,applying lemma [ lem6 ] we obtain therefore which yields is a contraction mapping on by condition. then has a unique fixed point in , namely .it should be noted that , , and hence , also has a unique fixed point in .this shows that has a unique positive almost periodic solution .the proof is completed .[ thm4 ] let assumptions ( a1)-(a7 ) hold . if and then has a unique positive almost periodic solution . moreover , every solution of converges exponentially to as . by theorem [ thm3 ] , has a unique positive almost periodic solution .let be a solution of and .we define then \\ & \leq -a_lv(t)+\sum_{i=1}^m\left(b_{im}k^*_i+c_{im}g^*_i + l_i\right)\overline{v}(t ) , \ ; t\ne t_k,\ t\geq \alpha,\\ \delta v(t_k)&=\gamma_k v(t^-_k ) , \ ; t_k\geq\alpha , k\in \mathbb{z},\end{aligned}\ ] ] where . by lemma [ lem8 ] ,there exists a positive constant such that this shows that converges exponentially to as .the proof is completed . 
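before turning to the numerical example of the next section, it may be useful to sketch how an impulsive delay differential equation of this type can be integrated in practice. the sketch below is only an illustration: it uses a single mackey - glass type production term b(t) / (1 + x(t - tau)^gamma) and a harvesting term, a special case of the model rather than the exact system of section 4, a fixed - step euler scheme with a history buffer for the delayed state, and the jumps x(t_k^+) = (1 + gamma_k) x(t_k^-) + delta_k applied at the impulse instants; all parameter values are placeholders.

import numpy as np
from collections import deque

def simulate(T=100.0, dt=0.001, tau=2.0, gamma=4.0,
             a=lambda t: 0.6, b=lambda t: 0.5 + 0.1 * np.sin(t), c=lambda t: 0.05,
             impulse_times=None, gamma_k=-0.1, delta_k=0.05,
             phi=lambda t: 0.5 + 0.0 * t):
    # euler integration of  x'(t) = -a(t) x(t) + b(t) / (1 + x(t - tau)**gamma) - c(t)
    # with impulses x(t_k+) = (1 + gamma_k) x(t_k-) + delta_k
    if impulse_times is None:
        impulse_times = np.arange(5.0, T, 5.0)
    n_lag, n_steps = int(round(tau / dt)), int(round(T / dt))
    hist = phi(np.linspace(-tau, 0.0, n_lag + 1))          # initial history on [-tau, 0]
    buf = deque(hist, maxlen=n_lag + 1)                    # sliding window of past values
    x = np.empty(n_steps + 1)
    x[0] = hist[-1]
    k = 0
    for j in range(n_steps):
        t = j * dt
        x_lag = buf[0]                                     # approximates x(t - tau)
        x[j + 1] = x[j] + dt * (-a(t) * x[j] + b(t) / (1.0 + x_lag ** gamma) - c(t))
        if k < len(impulse_times) and t < impulse_times[k] <= t + dt:
            x[j + 1] = (1.0 + gamma_k) * x[j + 1] + delta_k
            k += 1
        buf.append(x[j + 1])
    return np.linspace(0.0, T, n_steps + 1), x

t_grid, x = simulate()
print(f"min / max of x over the run: {x.min():.4f} / {x.max():.4f}")

a production run would use a dde solver with event handling at the t_k, but the basic structure (a history buffer plus multiplicative and additive jumps) is the same.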
the existence and exponential stability of positive almost periodic solution ofis presented in the following corollary as an application of our obtained results with .[ cr1 ] under the assumptions ( a1 ) , ( a2 ) ( with ) and ( a5)-(a7 ) , if and where , then has a unique positive almost periodic solution which is exponentially stable .in this section we give a numerical example to illustrate the effectiveness of our conditions . for illustrating purpose ,let us consider the following equation where it should be noted that , the functions and are almost periodic in the sense of bohr , is almost periodic in uniformly in , .therefore , assumptions ( a1)-(a5 ) and ( a7 ) are satisfied . on the other hand , for any .thus , and assumption ( a6 ) is satisfied .taking some computations we obtain and . by theorem [ thm3 ], equation has a unique positive almost periodic solution .furthermore , it can be seen that , and hence , . by theorem [ thm4 ] , every solution of converges exponentially to as tends to infinity . as presented in figure 1 , state trajectories of with different initial conditionsconverge to the unique positive almost periodic solution of .this paper has dealt with the existence and exponential attractivity of a unique positive almost periodic solution for a generalized model of hematopoiesis with delays and impulses . using the contraction mapping principle and a novel type of impulsive delay inequality ,new sufficient conditions have been derived ensuring that all solutions of the model converge exponentially to the unique positive almost periodic solution .t. t. anh , existence and global asymptotic stability of positive periodic solutions of a lotka - volterra type competition systems with delays and feedback controls , _ electron . j. diff ._ , vol . 2013 ( 2013 ) , no .261 , 116 . | in this paper , a generalized model of hematopoiesis with delays and impulses is considered . by employing the contraction mapping principle and a novel type of impulsive delay inequality , we prove the existence of a unique positive almost periodic solution of the model . it is also proved that , under the proposed conditions in this paper , the unique positive almost periodic solution is globally exponentially attractive . a numerical example is given to illustrate the effectiveness of the obtained results . [ section ] [ theorem]corollary [ section ] [ section ] [ section ] [ section ] |
approaches to the late - time acceleration of the universe may be divided into three broad classes .first , it is possible that there is some as yet undiscovered property of our existing model of gravity and matter that leads to acceleration at the current epoch . into this categoryone might include the existence of a tiny cosmological constant and the possibility that the backreaction of cosmological perturbations might cause self - acceleration .second is the idea that there exists a new dynamical component to the cosmic energy budget .this possibility , with the new source of energy density modeled by a scalar field , is usually referred to as _ dark energy_. finally , it may be that curvatures and length scales in the observable universe are only now reaching values at which an infrared modification of gravity can make itself apparent by driving self - acceleration .it is this possibility that i will briefly describe in this article , submitted to the proceedings of the nasa workshop _ from quantum to cosmos : fundamental physics research in space_. while i will mention a number of different approaches to modified gravity , i will concentrate on laying out the central challenges to constructing a successful modified gravity model and on illustrating them with a particular simple example . detailed descriptions of some of the other possible ways to approach this problem can be found in the excellent contributions of sean carroll , cedric deffayet , gia dvali and john moffat .although , within the context of general relativity ( gr ) , one does nt think about it too often , the metric tensor contains , in principle , more degrees of freedom than the usual spin-2 _ graviton _ ( see sean carroll s talk in these proceedings for a detailed discussion of this ) .the reason why one does nt hear of these degrees of freedom in gr is that the einstein - hilbert action is a very special choice , resulting in second - order equations of motion , which constrain away the scalars and the vectors , so that they are non - propagating .however , this is not the case if one departs from the einstein - hilbert form for the action .when using any modified action ( and the usual variational principle ) one inevitably frees up some of the additional degrees of freedom .in fact , this can be a good thing , in that the dynamics of these new degrees of freedom may be precisely what one needs to drive the accelerated expansion of the universe .however , there is often a price to pay .the problems may be of several different kinds .first , there is the possibility that along with the desired deviations from gr on cosmological scales , one may also find similar deviations on solar system scales , at which gr is rather well - tested .second is the possibility that the newly - activated degrees of freedom may be badly behaved in one way or another ; either having the wrong sign kinetic terms ( ghosts ) , and hence being unstable , or leading to superluminal propagation , which may lead to other problems .these constraints are surprisingly restrictive when one tries to create viable modified gravity models yielding cosmic acceleration . 
in the next few sectionsi will describe several ways in which one might modify the action , and in each case provide an explicit , clean , and simple example of how cosmic acceleration emerges .however , i will also point out how the constraints i have mentioned rule out these simple examples , and mention how one must complicate the models to recover viable models .the simplest way one could think to modify gr is to replace the einstein - hilbert lagrangian density by a general function of the ricci scalar .s = d^4 x + d^4 x l_m[_i , g _ ] , [ jordanaction ] where is the ( reduced ) planck mass and is the lagrangian density for the matter fields . here, i have written the matter lagrangian as ] . the field equation for the metricis then [ cdtteqn ] ( 1+)r _ - ( 1-)rg _ + ^4r^-2 = . the constant - curvature vacuum solutions , for which , satisfy .thus , there exists a constant - curvature vacuum solution which is de sitter space .we will see that the de sitter solution is , in fact , unstable , albeit with a very long decay time .the time - time component of the field equations for this metric is [ newfriedmann ] 3h^2 - ( 2hh + 15h^2h+2h^2 + 6h^4 ) = .as i have discussed , one may now transform to the _ einstein frame _ , where the gravitational lagrangian takes the einstein - hilbert form and the additional degree of freedom appears as a fictitious scalar field , with potential shown in the figure below .[ potential ] , title="fig : " ] denoting with a tilde all quantities ( except ) in the einstein frame , the relevant einstein - frame cosmological equations of motion are \ , \\\ ] ] where a prime denotes , and where \ , \ ] ] with a constant , and finally , note that the matter - frame hubble parameter is related to that in the einstein frame by how about cosmological solutions in the einstein frame ? ordinarily , einstein gravity with a scalar field with a minimum at would yield a minkowski vacuum state . however , here this is no longer true .even though as , this corresponds to a curvature singularity and so is not a minkowski vacuum .the other minimum of the potential , at , does not represent a solution .focusing on vacuum solutions , i.e. , , the beginning of the universe corresponds to and .the initial conditions we must specify are the initial values of and , denoted as and .there are then three qualitatively distinct outcomes , depending on the value of ._ 1 . eternal de sitter ._ there is a critical value of for which just reaches the maximum of the potential and comes to rest . in this casethe universe asymptotically evolves to a de sitter solution ( ignoring spatial perturbations ) . as we have discovered before ( and is obvious in the einstein frame ) , this solution requires tuning and is unstable .power - law acceleration ._ for , the field overshoots the maximum of .soon thereafter , the potential is well - approximated by , and the solution corresponds to in the matter frame .thus , the universe evolves to late - time power - law inflation , with observational consequences similar to dark energy with equation - of - state parameter ._ 3 . future singularity . 
_ for , does not reach the maximum of its potential and rolls back down to .this yields a future curvature singularity .what about including matter ?as can be seen from ( [ confscalar ] ) , the major difference here is that the equation - of - motion for in the einstein frame has a new term .furthermore , since the matter density is much greater than for , this term is very large and greatly affects the evolution of .the exception is when the matter content is radiation alone ( ) , in which case it decouples from the equation due to conformal invariance . despite this complication ,it is possible to show that the three possible cosmic futures identified in the vacuum case remain in the presence of matter . thus far, the dimensionful parameter is unspecified . by choosing ,the corrections to the standard cosmology only become important at the present epoch , explaining the observed acceleration of the universe without recourse to dark energy .clearly the choice of correction to the gravitational action can be generalized .terms of the form , with , lead to similar late - time self acceleration , which can easily accommodate current observational bounds on the equation of state parameter . now ,as i mentioned in the introduction , any modification of the einstein - hilbert action must , of course , be consistent with the classic solar system tests of gravity theory , as well as numerous other astrophysical dynamical tests .we have chosen the coupling constant to be very small , but we have also introduced a new light degree of freedom .as shown by chiba , the simple model above is equivalent to a brans - dicke theory with in the approximation where the potential was neglected , and would therefore be inconsistent with experiment ( although see for suggestions that the conformally transformed theory may not be the correct way to analyze deviations from gr ) . to construct a realistic modelrequires a more complicated function , with more than one adjustable parameter in order to fit the cosmological data and satisfy solar system bounds .examples can be found in .it is natural to consider generalizing the action of to include other curvature invariants .there are , of course , any number of terms that one could consider , but for simplicity , focus on those invariants of lowest mass dimension that are also parity - conserving we consider actions of the form + \int d^4 x\ , \sqrt{-g}\ , { \cal l}_m \ , \label{genaction}\ ] ] where is a general function describing deviations from general relativity .it is convenient to define in terms of which the equations of motion are + \box(f_p\,r_{\mu\nu})\nonumber\\ & + & g_{\mu\nu}\,\nabla_\alpha\nabla_\beta(f_p\,r^{\alpha\beta } ) -4\nabla_\alpha\nabla_\beta[f_q\,r^\alpha{}_{(\mu\nu)}{}^\beta ] = 8\pi g\,t_{\mu\nu}\ .\label{equaz}\end{aligned}\ ] ] it is straightforward to show that actions of the form ( [ genaction ] ) generically admit a maximally - symmetric solution : a non - zero constant .however , an equally generic feature of such models is that this de sitter solution is unstable . 
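the constant - curvature vacua quoted above for the cdtt model are easy to verify symbolically. the sketch below assumes the standard trace of the f(r) field equations, which for constant curvature in vacuum reduces to f'(r) r - 2 f(r) = 0, and solves it for f(r) = r - mu^4 / r; the second block does the same for the 1/r^n generalization mentioned above, here normalized as mu^(2(n+1)) / r^n so that mu keeps mass dimension one. the analogous conditions for the actions involving the invariants p and q are more involved and follow from ( [ equaz ] ) instead.

import sympy as sp

R, mu = sp.symbols('R mu', positive=True)

f = R - mu**4 / R                                     # the cdtt choice
trace_eq = sp.Eq(sp.diff(f, R) * R - 2 * f, 0)        # f'(R) R - 2 f(R) = 0 at constant R in vacuum
print(sp.solve(trace_eq, R))                          # [sqrt(3)*mu**2] : the de sitter vacuum

n = 2                                                 # e.g. a 1/R^2 correction
f_n = R - mu**(2 * (n + 1)) / R**n
print(sp.solve(sp.Eq(sp.diff(f_n, R) * R - 2 * f_n, 0), R))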
in the cdtt modelthe instability is to an accelerating power - law attractor .this is a possibility that we will also see in many of the more general models under consideration here .since we are interested in adding terms to the action that explicitly forbid flat space as a solution , i will , in a similar way as in , consider inverse powers of the above invariants and , for simplicity , specialize to a class of actions with where is a positive integer ( taken to be unity ) , has dimensions of mass and , and are dimensionless constants .in fact , for general the qualitative features of the system are as for . for the purposes of this short talk , i will focus on a specific example - actions containing modifications involving only , with the prototype being , with a parameter with dimensions of mass .it is easy to see that there is a constant curvature vacuum solution to this action given by .however , we would like to investigate other cosmological solutions and analyze their stability . from ( [ equaz ] ) , with the flat cosmological ansatz , the analogue of the friedmann equation becomes 3h^2&- & = 0 .[ riccieqn ] asymptotic analysis of this equation ( substituting in a power - law ansatz and taking the late - time limit ) yields two late - time attractors with powers and . however , in order to obtain a late - time accelerating solution ( ) , it is necessary to give accelerating initial conditions ( ) , otherwise the system is in the basin of attraction of the non - accelerating attractor at ( this type of behavior is generic in some other modified gravity theories ).while i ve given a simple example here , cosmologically viable models are described in . what about the other constraints on these models ?it has been shown that solar system constraints , of the type i have described for models , can be evaded by these more general models whenever the constant is nonzero . roughly speaking ,this is because the schwarzschild solution , which governs the solar system , has vanishing and , but non - vanishing .more serious is the issue of ghosts and superluminal propagation .it has been shown that a necessary but not sufficient condition that the action be ghost - free is that , so that there are no fourth derivatives in the linearised field equations . what remained was the possibility that the second derivatives might have the wrong signs , and also might allow superluminal propagation at some time in a particular cosmological background .it has recently been shown that in a frw background with matter , the theories are ghost - free , but contain superluminally propagating scalar or tensor modes over a wide range of parameter space .it is certainly necessary to be ghost - free .whether the presence of superluminally propagating modes is a fatal blow to the theories remains to be seen .given the immense challenge posed by the accelerating universe , it is important to explore every option to explain the underlying physics. modifying gravity may be one of the more radical proposals , but it is not one without precedence as an explanation for unusual physics .however , it is an approach that is tightly constrained both by observation and theoretical obstacles . 
in the brief time and spaceallowed , i have tried to give a flavor of some attempts to modify gr to account for cosmic acceleration without dark energy .i have focused on two of the directions in which i have been involved and have chosen to present simple examples of the models , which clearly demonstrate not only the cosmological effects , but also how constraints from solar system tests and theoretical consistency apply .there are a number of other proposals for modified gravity and , while i have had neither time nor space to devote to them here , others have discussed some of them in detail at this meeting .there is much work ahead , with significant current effort , my own included , devoted to how one might distinguish between modified gravity , dark energy and a cosmological constant as competing explanations for cosmic acceleration .i would like to thank the organizers of the q2c conference , and in particular slava turyshev , for their hard work and dedication in running such a stimulating meeting .i would also like to thank my many coauthors on the work discussed here for such enjoyable and productive collaborations , and for allowing me to reproduce parts of our work in this article .this work was supported in part by the nsf under grant phy-0354990 , by research corporation , and by funds provided by syracuse university 0 g. r. dvali , g. gabadadze and m. porrati , phys .b * 485 * , 208 ( 2000 ) [ arxiv : hep - th/0005016 ] . c. deffayet , phys .b * 502 * , 199 ( 2001 ) [ arxiv : hep - th/0010186 ] . c. deffayet , g. r. dvali and g. gabadadze , phys .d * 65 * , 044023 ( 2002 ) [ arxiv : astro - ph/0105068 ] .k. freese and m. lewis , phys .b * 540 * , 1 ( 2002 ) [ arxiv : astro - ph/0201229 ] .g. dvali and m. s. turner , arxiv : astro - ph/0301510 . s. m. carroll , v. duvvuri , m. trodden and m. s. turner , phys . rev .d * 70 * , 043528 ( 2004 ) [ arxiv : astro - ph/0306438 ] .s. capozziello , s. carloni and a. troisi , arxiv : astro - ph/0303041 .d. n. vollick , phys .d * 68 * , 063510 ( 2003 ) [ arxiv : astro - ph/0306630 ] .e. e. flanagan , phys .lett . * 92 * , 071101 ( 2004 ) [ arxiv : astro - ph/0308111 ] .e. e. flanagan , class .* 21 * , 417 ( 2003 ) [ arxiv : gr - qc/0309015 ] .d. n. vollick , `` on the viability of the palatini form of 1/r gravity , '' class .* 21 * , 3813 ( 2004 ) [ arxiv : gr - qc/0312041 ] .m. e. soussa and r. p. woodard , gen .grav . * 36 * , 855 ( 2004 ) [ arxiv : astro - ph/0308114 ] .s. nojiri and s. d. odintsov , gen .grav . * 36 * , 1765 ( 2004 ) [ arxiv : hep - th/0308176 ] .s. m. carroll , a. de felice , v. duvvuri , d. a. easson , m. trodden and m. s. turner , phys .d * 71 * , 063513 ( 2005 ) [ arxiv : astro - ph/0410031 ] .n. arkani - hamed , h. c. cheng , m. a. luty and s. mukohyama , jhep * 0405 * , 074 ( 2004 ) [ arxiv : hep - th/0312099 ] . g. gabadadze and m. shifman , phys .d * 69 * , 124032 ( 2004 ) [ arxiv : hep - th/0312289 ] .j. w. moffat , arxiv : astro - ph/0403266 .t. clifton , d. f. mota and j. d. barrow , mon . not .soc . * 358 * , 601 ( 2005 ) [ arxiv : gr - qc/0406001 ] .s. m. carroll , i. sawicki , a. silvestri and m. trodden , arxiv : astro - ph/0607458 .j. d. barrow and a. c. ottewill , j. phys .a * 16 * , 2757 ( 1983 ) .j. d. barrow and s. cotsakis , phys .b * 214 * , 515 ( 1988 ) .j. d. barrow and s. cotsakis , phys .b * 258 * , 299 ( 1991 ) .g. magnano and l. m. sokolowski , phys .d * 50 * , 5039 ( 1994 ) [ arxiv : gr - qc/9312008 ] .a.dobado and a.l .maroto , phys .d * 52 * , 1895 ( 1995 ) .h. j. 
schmidt , astron .* 311 * , 165 ( 1990 ) [ arxiv : gr - qc/0109004 ] .g. magnano and l. m. sokolowski , phys .d * 50 * , 5039 ( 1994 ) [ arxiv : gr - qc/9312008 ] .t. chiba , phys .b * 575 * , 1 ( 2003 ) [ arxiv : astro - ph/0307338 ] . b. bertotti , l. iess and p. tortora , nature * 425 * , 374 ( 2003 ) .v. faraoni , arxiv : gr - qc/0607016 .s. capozziello and a. troisi , phys .d * 72 * , 044022 ( 2005 ) [ arxiv : astro - ph/0507545 ] .s. capozziello , a. stabile and a. troisi , higher order gravity compatible with experimental constraints on eddington arxiv : gr - qc/0603071 .p. zhang , phys .d * 73 * , 123504 ( 2006 ) [ arxiv : astro - ph/0511218 ] .d. a. easson , f. p. schuller , m. trodden and m. n. r. wohlfarth , phys .d * 72 * , 043504 ( 2005 ) [ arxiv : astro - ph/0506392 ] .o. mena , j. santiago and j. weller , phys .* 96 * , 041103 ( 2006 ) [ arxiv : astro - ph/0510453 ] .i. navarro and k. van acoleyen , phys .b * 622 * , 1 ( 2005 ) [ arxiv : gr - qc/0506096 ] .t. chiba , jcap * 0503 * , 008 ( 2005 ) [ arxiv : gr - qc/0502070 ] .i. navarro and k. van acoleyen , consistent long distance modification of gravity from inverse powers of the jcap * 0603 * , 008 ( 2006 ) [ arxiv : gr - qc/0511045 ] . a. de felice , m. hindmarsh and m. trodden , arxiv : astro - ph/0604154 .g. calcagni , b. de carlos and a. de felice , arxiv : hep - th/0604201 . | i briefly discuss some attempts to construct a consistent modification to general relativity ( gr ) that might explain the observed late - time acceleration of the universe and provide an alternative to dark energy . i mention the issues facing extensions to gr , illustrate these with two specific examples , and discuss the resulting observational and theoretical obstacles . this article comprises an invited talk at the nasa workshop _ from quantum to cosmos : fundamental physics research in space_. |
the possibility to predict future states of a system stands at the foundations of scientific knowledge with an obvious relevance both from a conceptual and applicative point of view .the perfect knowledge of the evolution law of a system may induce the conclusion that this aim could be attained .this classical deterministic point of view was claimed by laplace : once the evolution laws of the system are known , the state at a certain time completely determines the subsequent states for every time .however it is well established now that in some systems , full predictability can not be accomplished in practice because of the unavoidable uncertainty in the initial conditions .indeed , as already stated by poincar , long - time predictions are reliable only when the evolution law does not amplify the initial uncertainty too rapidly .therefore , from the point of view of predictability , we need to know how an error on the initial state of the system grows in time . in systems with great sensitive dependence on initial conditions ( deterministic chaotic systems ) errorsgrows exponentially fast in time , limiting the ability to predict the future states .a branch of the theory of dynamical systems has been developed with the aim of formalizing and characterizing the sensitivity to initial conditions .the lyapunov exponent and the kolmogorov - sinai entropy are the two main indicators for measuring the rate of error growth and information production during a deterministic system evolution .a complementary approach has been developed in the context of information theory , data compression and algorithmic complexity theory and it is rather clear that the latter point of view is closely related to the dynamical systems one .if a system is chaotic , then its predictability is limited up to a time which is related to the first lyapunov exponent , and the time sequence by which we encode one of its chaotic trajectories can not be compressed by an arbitrary factor , i.e. is algorithmically complex . on the contrary, the coding of a regular trajectory can be easily compressed ( e.g. , for a periodic trajectory it is sufficient to have the sequence for a period ) so it is `` simple '' . in this paperwe will discuss how unpredictability and algorithmic complexity are closely related and how information and chaos theory complete each other in giving a general understanding of complexity in dynamical processes .in particular , we shall consider the extension of this approach , nowadays well established in the context of low dimensional systems and for asymptotic regimes , to high dimensional systems with attention to situations far from asymptotic ( i.e. finite time and finite observational resolution ) .the characteristic lyapunov exponents are somehow an extension of the linear stability analysis to the case of aperiodic motions . roughly speaking , they measure the typical rate of exponential divergence of nearby trajectories and , thus , contain information on the growing rate of a very small error on the initial state of a system . consider a dynamical system with an evolution law given , e.g. , by the differential equation we assume that is smooth enough that the evolution is well - defined for time intervals of arbitrary extension , and that the motion occurs in a bounded region of the phase space .we intend to study the separation between two trajectories , and , starting from two close initial conditions , and , respectively . 
as long as the difference between the trajectories , , remains small ( infinitesimal , strictly speaking ), it can be regarded as a vector , , in the tangent space .the time evolution of is given by the linearized differential equations : under rather general hypothesis , oseledec proved that for almost all initial conditions , there exists an orthonormal basis in the tangent space such that , for large times , where the coefficients depend on .the exponents are called _ characteristic lyapunov exponents _ ( les ) .if the dynamical system has an ergodic invariant measure , the spectrum of les does not depend on the initial condition , except for a set of measure zero with respect to the natural invariant measure .equation ( [ eq:1 - 5 ] ) describes how a -dimensional spherical region of the phase space , with radius centered in , deforms , with time , into an ellipsoid of semi - axes , directed along the vectors .furthermore , for a generic small perturbation , the distance between the reference and the perturbed trajectory behaves as .\ ] ] if we have a rapid ( exponential ) amplification of an error on the initial condition . in such a case ,the system is chaotic and , _ de facto _ , unpredictable on the long times . indeed ,if the initial error amounts to , and we purpose to predict the states of the system with a certain tolerance ( not too large ) , then the prediction is reliable just up to a _ predictability time _ given by this equation shows that is basically determined by the largest lyapunov exponent , since its dependence on and is logarithmically weak .because of its preeminent role , is often referred as `` the lyapunov exponent '' , and denoted by . in experimental investigations of physical processes ,the access to a system occurs only through a measuring device which produces a time record of a certain observable , i.e. a sequence of data . in thisregard a system , whether or not chaotic , generates messages and may be regarded as a source of information whose properties can be analysed through the tools of information theory .the characterization of the information contained in a sequence can be approached in two very different frameworks .the first considers a specific message ( sequence ) as belonging to the ensemble of all the messages that can be emitted by a source , and defines an average information content by means of the average compressibility properties of the ensemble .the second considers the problem of characterizing the universal compressibility ( i.e. ensemble independent ) of a specific sequence and concerns the theory of algorithmic complexity and algorithmic information theory . 
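for a one - dimensional map the lyapunov exponent reduces to the time average of ln |f'(x_t)| along a trajectory, which makes the predictability - time estimate above easy to evaluate in practice. the sketch below uses the logistic map x_{t+1} = r x_t (1 - x_t) at r = 4, for which the exact value is lambda = ln 2.

import numpy as np

def lyapunov_logistic(r=4.0, n_iter=100000, x0=0.3, n_transient=1000):
    # lambda ~ (1/N) sum_t ln |f'(x_t)|, with f'(x) = r (1 - 2 x)
    x = x0
    for _ in range(n_transient):
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n_iter):
        x = r * x * (1.0 - x)
        acc += np.log(abs(r * (1.0 - 2.0 * x)))
    return acc / n_iter

lam = lyapunov_logistic()
print(f"estimated lambda = {lam:.4f}   (exact value: ln 2 = {np.log(2.0):.4f})")

# predictability time T_p ~ (1/lambda) ln(Delta / delta)
delta, Delta = 1e-10, 1e-2              # initial error and prediction tolerance
print(f"T_p ~ {np.log(Delta / delta) / lam:.1f} iterations")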
for the sake of self - consistency we briefly recall the concepts and ideas about the shannon entropy , that is the basis of whole information theoryconsider a source that can output different symbols ; denote with the symbol emitted by the source at time and with the probability that a given word , of length , is emitted .we assume that the source is stationary , so that , for the sequences , the time translation invariance holds : .we introduce the -block entropies for stationary sources the limit exists and defines the shannon entropy which quantifies the richness ( or `` complexity '' ) of the source emitting the sequence .this can be precisely expressed by the first theorem of shannon - mcmillan that applies to stationary ergodic sources : the ensemble of -long subsequences , when is large enough , can be partitioned in two classes , and such that all the words have the same probability and the meaning of this theorem is the following .an -states process admits , in principle , possible sequences of length .however the number of typical sequences , , effectively observable ( i.e. those belonging to ) is note that if .the entropy per symbol , , is a property of the source .however , because of the ergodicity can be obtained by analyzing just one single sequence in the ensemble of the typical ones , and it can also be viewed as a property of each typical sequence . in information theory , expression ( [ eq : wordstypical ] ) is somehow the equivalent of the boltzmann equation in statistical thermodynamics : , being the number of possible microscopic configurations and the thermodynamic entropy , this justifies the name `` entropy '' for .the relevance of the shannon entropy in information theory is given by the fact that sets the maximum compression rate of a sequence .indeed a theorem of shannon states that , if the length of a sequence is large enough , there exists no other sequence ( always using symbols ) , from which it is possible to reconstruct the original one , whose length is smaller than . in other words , represents the maximum allowed compression rate . the relation between shannon entropy and data compression problemsis well illustrated by considering the optimal coding ( shannon - fano ) to map objects ( e.g. the -words ) into sequences of binary digits . denoting with the binary length of the sequence specifying , we have i.e. , in a good coding , the mean length of a -word is equal to times the shannon entropy , apart from a multiplicative factor , since in the definition ( [ eq : shannon ] ) of we used the natural logarithm and here we want to work with a two symbol code . after the introduction of the shannon entropy we can easily define the kolmogorov - sinai entropy which is the analogous measure of complexity applied to dynamical systems .consider a trajectory , , generated by a deterministic system , sampled at the times , with .perform a finite partition of the phase space , with the finite number of symbols enumerating the cells of the partition . 
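before turning to dynamical systems , a small sketch of how the block entropies just introduced , and the entropy per symbol obtained from their increments , can be estimated from a finite symbolic record ; the two test sequences ( an i.i.d. fair coin and a periodic string ) are stand - ins chosen here , and for any real record the estimate is only trustworthy while the number of observed n - words stays well below the sample length .

```python
from collections import Counter
from math import log
import random

def block_entropy(seq, n):
    """H_n = -sum p(W) ln p(W) over all observed n-words of the sequence."""
    words = Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))
    total = sum(words.values())
    return -sum((c / total) * log(c / total) for c in words.values())

def entropy_rate_estimate(seq, n):
    """finite-n estimate h_n = H_{n+1} - H_n of the Shannon entropy per symbol."""
    return block_entropy(seq, n + 1) - block_entropy(seq, n)

if __name__ == "__main__":
    fair = [random.randint(0, 1) for _ in range(200000)]   # i.i.d. fair coin: h = ln 2
    periodic = [0, 1] * 100000                              # period-2 sequence: h = 0
    for name, s in (("fair coin", fair), ("periodic ", periodic)):
        print(name, [round(entropy_rate_estimate(s, n), 3) for n in range(1, 5)])
```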
the time - discretized trajectory determines a sequence , whose meaning is clear : at the time the trajectory is in the cell labeled by .to each subsequence of length one can associate a word of length : .if the system is ergodic , as we suppose , from the frequencies of the words one obtains the probabilities by which the block entropies are calculated : the probabilities , computed by the frequencies of along a trajectory , are essentially dependent on the stationary measure selected by the trajectory .the entropy per unit time of the trajectory with respect to the partition , , is defined as follows : notice that , for the deterministic systems we are considering , the entropy per unit time does not depend on the sampling time .the ks - entropy ( ) , by definition , is the supremum of over all possible finite partitions the extremal character of makes every computation based on the definition ( [ eq : ks ] ) , impossible in the majority of practical cases . in this respect, a useful tool would be the kolmogorov - sinai theorem , through which one is granted that if is a generating partition .a partition is said to be generating if every infinite sequence corresponds to a single initial point .however the difficulty now is that , with the exception of very simple cases , we do not know how to construct a generating partition .we only know that , according to the krieger theorem , there exists a generating partition with elements such that .then , a more tractable way to define is based upon considering the partition made up by a grid of cubic cells of edge , from which one has we expect that becomes independent of when is so fine to be `` contained '' in a generating partition . for discrete time mapswhat has been exposed above is still valid , with ( however , krieger s theorem only applies to invertible maps ) .the important point to note is that , for a truly stochastic ( i.e. non - deterministic ) system , with continuous states , is not bounded and .the shannon entropy establishes a limit on how efficiently the ensemble of messages emitted by a source can be coded .however , we may wonder about the compressibility properties of a single sequence with no reference to its belonging to an ensemble . that is to say, we are looking for an universal characterization of its compressibility or , it is the same , an universal definition of its information content .this problem can be addressed through the notion of _ algorithmic complexity _ , that concerns the difficulty in reproducing a given string of symbols .everybody agrees that the binary digits sequence is , in some sense , more random than the notion of algorithmic complexity , independently introduced by kolmogorov , chaitin and solomonov , is a way to formalize the intuitive idea of randomness of a sequence .consider , for instance , a binary digit sequence ( this does not constitute a limitation ) of length , , generated by a certain computer code on a given machine .the algorithmic complexity ( or algorithmic information content ) of is the bit - length of the shortest computer program able to give and to stop afterward .of course , such a length depends not only on the sequence but also on the machine . 
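the algorithmic complexity just defined is not computable , but any real compressor gives an upper bound on it , and that practical proxy is already enough to separate an irregular ( coin - tossing - like ) string from a regular one , in the spirit of the two example sequences discussed above ; the strings below are illustrative stand - ins chosen here .

```python
import zlib
import random

def compressed_bits_per_symbol(bits):
    """Practical upper-bound proxy for algorithmic complexity per symbol:
    bit-length of the zlib-compressed record divided by the number of symbols.
    The true complexity K is uncomputable; a compressor only bounds it from above."""
    raw = "".join(str(b) for b in bits).encode("ascii")
    return 8 * len(zlib.compress(raw, 9)) / len(bits)

if __name__ == "__main__":
    n = 100000
    irregular = [random.randint(0, 1) for _ in range(n)]   # incompressible in practice
    regular = [0, 1] * (n // 2)                            # trivially compressible
    print("irregular:", round(compressed_bits_per_symbol(irregular), 3), "bits/symbol")
    print("regular  :", round(compressed_bits_per_symbol(regular), 3), "bits/symbol")
```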
however , kolmogorov proved the existence of a universal computer , , able to perform the same computation that a program makes on , with a modification of that depends only on .this implies that for all finite strings : where is the complexity with respect to the universal computer and depends only on the machine .we can consider the algorithmic complexity with respect to a universal computer dropping the -dependence in the symbol for the algorithmic complexity , .the reason is that we are interested in the limit of very long sequences , , for which one defines the algorithmic complexity per unit symbol : that , because of ( [ eq : kolmocomplex ] ) , is an intrinsic quantity , i.e. independent of the machine . now coming back to the -sequences ( [ eq : seq1 ] ) and ( [ eq : seq2 ] ), it is obvious that the latter can be obtained with a minimal program of length and therefore when taking the limit in ( [ eq : acomplexity ] ) , one obtains . of course not exceed , since the sequence can always be generated by a trivial program ( of bit length ) therefore , in the case of a very irregular sequence , e.g. , ( [ eq : seq1 ] ) , one expects ( i.e. ) , and the sequence is named complex ( i.e. of non zero algorithmic complexity ) or random .algorithmic complexity can not be computed , and the un - computability of may be understood in terms of gdel s incompleteness theorem . beyond the problem of whether or not is computable in a specific case , the concept of algorithmic complexity brings an important improvement to clarify the vague and intuitive notion of randomness . between the shannon entropy , , and the algorithmic complexity , there exists the straightforward relationship where , being the algorithmic complexity of the -words , in the ensemble of sequences , , with a given distribution of probabilities , . therefore the expected complexity is asymptotically equal to the shannon entropy ( modulo the factor ) .it is important to stress again that , apart from the numerical coincidence of the values of and , there is a conceptual difference between the information theory and the algorithmic complexity theory .the shannon entropy essentially refers to the information content in a statistical sense , i.e. it refers to an ensemble of sequences generated by a certain source .the algorithmic complexity defines the information content of an individual sequence .the notion of algorithmic complexity can be also applied to the trajectories of a dynamical system .this requires the introduction of finite open coverings of the phase space , the corresponding encoding of trajectories into symbolic sequences , and the searching of the supremum of the algorithmic complexity per symbol at varying the coverings .brudno s and white s theorems state that the complexity for a trajectory starting from the point , is for almost all with respect to the natural invariant measure .the factor stems again from the conversion between natural logarithms and bits .this result indicates that the ks - entropy quantifies not only the richness of a dynamical system but also the difficulty of describing its typical sequences .let us consider a chaotic map the transmission of the sequence , accepting only errors smaller than a tolerance , is carried out by using the following strategy : 1 . transmit the rule ( [ eq : mappa ] ) : for this task one has to use a number of bits independent of the sequence length .2 . specify the initial condition with a precision using a finite number of bits which is independent of .3 . 
let the system evolve till the first time such that the distance between two trajectories , that was initially , equals and then specify again the new initial condition with precision .4 . let the system evolve and repeat the procedure ( 2 - 3 ) , i.e. each time the error acceptance tolerance is reached specify the initial conditions , , with precision . the times are defined as follows : putting , is given by the minimum time such that and so on . following the steps , the receiver can reconstruct , with a precision , the sequence , by simply iterating on a computer the evolution law ( [ eq : mappa ] ) between and , and , and so on .the amount of bits necessary to implement the above transmission ( 1 - 4 ) can be easily computed . for simplicity of notationwe introduce the quantities which can be regarded as a sort of _ effective _ lyapunov exponents .the le can be written in terms of as follows where is the average time after which we have to transmit the new initial condition .note that to obtain from the s requires the average ( [ eq : liapt ] ) , because the transmission time , , is not constant . if is large enoughthe number of transmissions , , is .therefore , noting that in each transmission , a reduction of the error from to requires the employ of bits , the total amount of bits used in the transmission is in other words the number of bits for unit time is proportional to . in more than one dimension , we have simply to replace with in ( [ eq : bits ] ) , because the above transmission procedure has to be repeated for each of the expanding directions .lyapunov exponents and ks - entropy are properly defined only in specific asymptotic limits : very long times and arbitrary accuracy . however , predictability problem in realistic situations entails considering finite time intervals and limited accuracy .the first obvious way for quantifying the predictability of a physical system is in terms of the _ predictability time _ , i.e. the time interval on which one can typically forecast the system .a simple argument suggests however , the above relation is too naive to be of practical relevance , in any realistic system . indeed , it does not take into account some basic features of dynamical systems . the lyapunov exponent is a global quantity , because it measures the average rate of divergence of nearby trajectories .in general there exist finite - time fluctuations and their probability distribution functions ( pdf ) is important for the characterization of predictability .generalized lyapunov exponents _ have been introduced with the purpose to take into account such fluctuations . moreover ,the lyapunov exponent is defined for the linearized dynamics , i.e. , by computing the rate of separation of two infinitesimally close trajectories . on the other hand , in measuring the predictability time ( [ eq:2.1 - 1 ] )one is interested in a finite tolerance , because the initial error is finite .a recent generalization of the lyapunov exponent to _ finite size _errors extends the study of the perturbation growth to the nonlinear regime , i.e. both and are not infinitesimal .we discuss now an example where the lyapunov exponent is of little relevance for characterizing the predictability .this problem can be illustrated by considering the following coupled map model : where , , is a rotation matrix of arbitrary angle , is a vector function and is a chaotic map . for simplicitywe consider a linear coupling and the logistic map . 
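returning for a moment to the naive predictability - time estimate quoted above , a short worked example ( the numbers are illustrative choices made here , not values from the text ) shows how weak the dependence on the initial error really is :

```latex
T_p \;\simeq\; \frac{1}{\lambda}\,\ln\!\frac{\Delta}{\delta_0}
  \;=\; \frac{1}{0.5}\,\ln\!\frac{10^{-2}}{10^{-6}} \;\approx\; 18.4 ,
\qquad
\frac{1}{0.5}\,\ln\!\frac{10^{-2}}{10^{-9}} \;\approx\; 32.2 ,
```

so improving the initial accuracy by three orders of magnitude buys less than a factor of two in the predictability time , which is why the limitations of this estimate discussed next matter in practice .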
for vanishing coupling we have two independent systems : a regular and a chaotic one . thus the lyapunov exponent of the regular subsystem is zero , i.e. , it is completely predictable . on the contrary , the other subsystem is chaotic with a positive exponent . the switching on of a small coupling yields a single three - dimensional chaotic system with a positive global lyapunov exponent . a direct application of ( [ eq:2.1 - 1 ] ) would give a predictability time set by that global exponent , but this result is clearly unacceptable : the predictability time for the regular subsystem would seem to be independent of the value of the coupling . this is not due to an artifact of the chosen example ; indeed , the same argument applies to many physical situations . a well known example is the gravitational three body problem , with one body ( asteroid ) much smaller than the other two ( planets ) . when the gravitational feedback of the asteroid on the two planets is neglected ( restricted problem ) , one has a chaotic asteroid in the regular field of the planets . as soon as the feedback is taken into account ( i.e. a small coupling in the example ) one has a non - separable three body system with a positive le . of course , intuition correctly suggests that , in the limit of small asteroid mass , a forecast of the planet motion should be possible even for very long times . the apparent paradox arises from the misuse of formula ( [ eq:2.1 - 1 ] ) , strictly valid for tangent vectors , to the case of non infinitesimal regimes . as soon as the errors become large , the full nonlinear evolution of the three body system has to be taken into account . this situation is clearly illustrated by the model ( [ eq:2.3 - 1 ] ) in figure [ fig:2.3 - 1 ] . the evolution of the uncertainty on the regular variable is given by eq . ( [ eq:2.3 - 4 ] ) . at the beginning , both uncertainties grow exponentially . however , the available phase space for the chaotic variable is finite and its uncertainty reaches the saturation value in a relatively short time . at larger times the two realizations of the chaotic variable are completely uncorrelated and their difference in ( [ eq:2.3 - 4 ] ) acts as a noisy term . as a consequence , the growth of the uncertainty on the regular variable becomes diffusive , with a diffusion coefficient proportional to the square of the coupling , so that the predictability time now diverges as the coupling goes to zero . this example shows that , even in simple systems , the lyapunov exponent can be of little relevance for the characterization of the predictability . in more complex systems , in which different scales are present , one is typically interested in forecasting the large scale motion , while the le is related to the small scale dynamics . a familiar example of that is weather forecast : despite the fact that the le of the atmosphere is rather large , due to the small scale convective motion , large - scale weather predictions are possible for several days . it is thus natural to seek a generalization of the le to finite perturbations , from which one can obtain a more realistic estimate of the predictability time . it is worth underlining the important fact that finite errors are not confined to the tangent space but are governed by the complete nonlinear dynamics . in this sense the extension of the le to finite errors will give more information on the system . aiming to generalize the le to non infinitesimal perturbations , let us now define the finite size lyapunov exponent ( fsle ) . consider a reference and a perturbed trajectory with a small initial separation . one integrates the two trajectories and computes the time necessary for the separation to grow from one threshold to the next .
at that time the distance between the trajectories is rescaled back to its initial value and the procedure is repeated in order to compute the next doubling time . the threshold ratio must be larger than one , but not too large , in order to avoid contributions from different scales . a typical choice is a ratio of two ( for which one properly measures a `` doubling '' time ) or slightly larger . in the same spirit of the discussion leading to eqs . ( [ eq : mars1 ] ) and ( [ eq : liapt ] ) , we may introduce an effective finite size growth rate . after having performed the error - doubling experiments , we can define the fsle in terms of the average doubling time at each scale ; see for details . in the infinitesimal limit , the fsle reduces to the standard lyapunov exponent . in practice this limit means that the fsle displays a constant plateau at the lyapunov exponent for sufficiently small errors ( fig . [ fig:2.3 - 2 ] ) . for finite values of the error , the behavior of the fsle depends on the details of the non linear dynamics . for example , in the model ( [ eq:2.3 - 1 ] ) the diffusive behavior ( [ eq:2.3 - 5 ] ) , by simple dimensional arguments , corresponds to a fsle decreasing with the error size . [ figure [ fig:2.3 - 2 ] : the fsle as a function of the error size for the coupled map ( [ eq:2.3 - 1 ] ) ; the perturbation has been initialized as in fig . [ fig:2.3 - 1 ] ; for small errors the fsle is constant ( horizontal line ) , while the dashed line shows the diffusive behavior . ] since the fsle measures the rate of divergence of trajectories at finite errors , one might wonder whether it is just another way to look at the average response as a function of time . the answer is negative , because taking the average at fixed time is not the same as computing the average doubling time at _ fixed scale _ , as in ( [ eq:2.3 - 10 ] ) . this is particularly clear in the case of strongly intermittent systems , in which the doubling time can be very different in each realization . in the presence of intermittency , averaging over different realizations at fixed times can produce a spurious regime due to the superposition of exponential and diffusive contributions by different samples at the same time . the fsle method can be easily applied to data analysis . for other approaches addressing the problem of non - infinitesimal perturbations , see . for most systems , the computation of the kolmogorov - sinai entropy ( [ eq : ks ] ) is practically impossible , because it involves the limit of arbitrarily fine resolution and infinite times . however , in the same philosophy as the fsle , by relaxing the requirement of arbitrary accuracy , one can introduce the -entropy , which measures the amount of information needed to reproduce a trajectory with finite accuracy in phase - space . roughly speaking , the -entropy can be considered the counterpart , in information theory , of the fsle . such a quantity was originally introduced by shannon , and by kolmogorov . recently , gaspard and wang made use of this concept to characterize a large variety of processes . we start with a continuous - time variable , which represents the state of a -dimensional system ; we discretize the time by introducing a sampling interval and consider the new ( vector ) variable , which corresponds to a trajectory lasting for a finite time .
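before developing the -entropy , a minimal numerical sketch of the doubling - time procedure just described ; for simplicity each threshold is treated as an independent experiment ( rather than a single run through a ladder of thresholds ) , and the logistic map is used as a stand - in system , so the small - error plateau should sit near its lyapunov exponent ln 2 , while at large errors the finite size of the attractor is felt . because time is discrete for a map , the doubling times are integers and the plateau is only a rough one .

```python
import math
import random

def fsle_map(f, deltas, ratio=2.0, n_samples=200, n_transient=500, t_max=10000):
    """Finite Size Lyapunov Exponent for a 1D map f on [0,1], estimated as
    lambda(delta) = ln(ratio) / <tau(delta -> ratio*delta)>, with the doubling
    time tau averaged over n_samples independent experiments per scale."""
    out = {}
    for delta in deltas:
        taus = []
        for _ in range(n_samples):
            x = random.random()
            for _ in range(n_transient):      # relax onto the attractor
                x = f(x)
            y = min(max(x + delta * random.choice((-1.0, 1.0)), 0.0), 1.0)
            t = 0
            while abs(y - x) < ratio * delta and t < t_max:
                x, y, t = f(x), f(y), t + 1
            if 0 < t < t_max:
                taus.append(t)
        if taus:
            out[delta] = math.log(ratio) / (sum(taus) / len(taus))
    return out

if __name__ == "__main__":
    logistic = lambda x: 4.0 * x * (1.0 - x)
    for d, lam in fsle_map(logistic, [1e-8, 1e-6, 1e-4, 1e-2, 0.1]).items():
        print(f"delta = {d:.0e}   lambda(delta) = {lam:.3f}")
```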
in data analysis ,the space where the state of the system lives is unknown and usually only a scalar variable can be measured .then , one considers vectors , that live in and allow a reconstruction of the original phase space , known as delay embedding in the literature , and it is a special case of ( [ eq:2 - 1 ] ) .introduce now a partition of the phase space , using cells of edge in each of the directions .since the region where a bounded motion evolves contains a finite number of cells , each can be coded into a word of length , out of a finite alphabet : where labels the cell in containing . from the time evolution oneobtains , under the hypothesis of ergodicity , the probabilities of the admissible words .we can now introduce the -entropy per unit time , : where is the block entropy of blocks ( words ) with length : for the sake of simplicity , we ignored the dependence on details of the partition . to make partition - independent one has to consider a generic partition of the phase space and to evaluate the shannon entropy on this partition : .the -entropy is thus defined as the infimum over all partitions for which the diameter of each cell is less than : note that the time dependence in ( [ def : eps ] ) is trivial for deterministic systems , and that in the limit one recovers the kolmogorov - sinai entropy the previous sections , we discussed the characterization of dynamical behaviors when the evolution laws are known either exactly or with some degree of uncertainty . in experimental investigations , however , only time records of some observable are available , while the equations of motion for the observable are generally unknown . the predictability problem of this latter case , at least from a conceptual point of view ,can be treated as if the evolution laws were known .indeed , in principle , the embedding technique allows for a reconstruction of the phase space .nevertheless there are rather severe limitations for high dimensional systems and even in low dimensional ones non trivial features appear in the presence of noise . in this section we show that an entropic analysis at different resolution scales provides a pragmatic classification of a signal and gives suggestions for modeling of systems . in particular we illustrate , using some examples , how quantities such as the -entropy or the fsle can display a subtle transition from the large to the small scales .a negative consequence of this is the difficulty in distinguishing , only from data analysis , a genuine deterministic chaotic system from one with intrinsic randomness . 
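a compact sketch of the chain just described : a scalar record is delay - embedded , coarse - grained on a grid of cell size eps , and the block entropies of the resulting symbol stream give the finite - resolution entropy per step . the logistic - map record is a stand - in chosen here , and with finite data the estimate degrades quickly as eps shrinks or the block length grows .

```python
from collections import Counter
from math import log

def delay_embed(u, m, tau=1):
    """Delay-embedding reconstruction: X_t = (u_t, u_{t+tau}, ..., u_{t+(m-1)tau})."""
    return [tuple(u[t + k * tau] for k in range(m)) for t in range(len(u) - (m - 1) * tau)]

def eps_entropy_rate(u, eps, m=2, n_block=3, tau=1):
    """h(eps) ~ H_{n+1} - H_n computed on the eps-coarse-grained embedded record
    (the sampling time is set to one, so 'per unit time' means 'per step')."""
    cells = [tuple(int(x // eps) for x in v) for v in delay_embed(u, m, tau)]
    def H(n):
        words = Counter(tuple(cells[i:i + n]) for i in range(len(cells) - n + 1))
        tot = sum(words.values())
        return -sum(c / tot * log(c / tot) for c in words.values())
    return H(n_block + 1) - H(n_block)

if __name__ == "__main__":
    x, record = 0.41, []
    for _ in range(100000):                  # scalar record from the logistic map (stand-in)
        x = 4.0 * x * (1.0 - x)
        record.append(x)
    for eps in (0.5, 0.2, 0.05):
        print(f"eps = {eps:4}   h(eps) = {eps_entropy_rate(record, eps):.3f}")
```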
on the other hand , the way the -entropy or the fsle depends on the ( resolution ) scale allows for a classification of the stochastic or chaotic character of a signal , and this gives some freedom in modeling the system . the `` true character '' of the number sequence obtained by a ( pseudo ) random number generator ( prng ) on a computer is an issue of paramount importance in computer simulations and modeling . one would like to have a sequence with as random a character as possible , but one is forced to use deterministic algorithms to generate it . this subsection is mainly based on the paper . a simple and popular prng is the multiplicative congruential one : with an integer multiplier and modulus . the outputs are integer numbers from which one hopes to generate a sequence of random variables , uncorrelated and uniformly distributed in the unit interval . a first problem arises from the periodic nature of the rule ( [ prng ] ) , a consequence of its discrete nature . note that the rule ( [ prng ] ) can also be interpreted as a deterministic dynamical system , i.e. a map which has a uniform invariant measure and a positive ks entropy . when imposing the integer arithmetic of eq . ( [ prng ] ) onto this system , we are , in the language of dynamical systems , considering an unstable periodic orbit of eq . ( [ xdyn ] ) , with the particular constraint that , to achieve the maximal period ( i.e. all integers should belong to the orbit of eq . ( [ prng ] ) ) , it has to contain all admissible values . since the natural invariant measure of eq . ( [ xdyn ] ) is uniform , such an orbit represents the measure of a chaotic solution in an optimal way . every sequence of a prng is characterized by two quantities : its period and its positive lyapunov exponent , which is identical to the entropy of a chaotic orbit of the equivalent dynamical system . of course a good random number generator must have a very large period , and as large as possible an entropy . it is natural to ask how this apparent randomness can be reconciled with the facts that ( a ) the prng is a deterministic dynamical system and ( b ) it is a discrete state system . if the period is long enough , on shorter times only point ( a ) matters and it can be discussed in terms of the behavior of the -entropy . at high resolutions , it seems rather reasonable to think that the true deterministic chaotic nature of the congruential rule shows up , so that the -entropy approaches the ks entropy . on the other hand , at low resolutions , one expects to observe the `` apparent random '' behavior of the system , see fig . [ fig : rng_entropy ] . [ figure [ fig : rng_entropy ] : the -entropies at varying the embedding dimension for the multiplicative congruential random number generator , eq . ( [ prng ] ) , for different choices of the multiplier and modulus . ] we now discuss an example of a high - dimensional system with a non - trivial behavior at varying the resolution scale , namely the emergence of nontrivial collective behavior . let us consider a globally coupled map ( gcm ) defined as follows , where is the total number of elements and the local dynamics is a chaotic map on the unit interval , depending on a control parameter . the evolution of a macroscopic variable , e.g. , the center of mass , upon varying the coupling and the control parameter in eq . ( [ eq:3.38 ] ) , displays different behaviors : ( a ) _ standard chaos _ : the center of mass obeys a gaussian statistics with a standard deviation that shrinks with the system size ; ( b ) _ macroscopic periodicity _ : the center of mass is a superposition of a periodic function and small fluctuations ; ( c ) _ macroscopic chaos _ : the center of mass exhibits an irregular motion , as seen by plotting successive values of it one against the other .
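a sketch of how such a macroscopic signal can be produced numerically , assuming the standard mean - field form of the globally coupled map ( each element is updated by its local map and then pulled toward the instantaneous mean of the mapped values ) ; the tent map , the coupling strength and the system size used here are illustrative choices , not the values of the text , and the pairs of successive center - of - mass values are exactly the data for the kind of return plot discussed next .

```python
import random

def tent(x):
    """Tent map on [0, 1], used here as the local chaotic map."""
    return 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)

def gcm_center_of_mass(n_elements=1000, coupling=0.1, n_steps=2000, n_transient=500):
    """Globally coupled maps in the standard mean-field form
    x_i(t+1) = (1 - eps) f(x_i(t)) + (eps / N) sum_j f(x_j(t));
    returns the time series of the center of mass m(t) = (1/N) sum_i x_i(t)."""
    xs = [random.random() for _ in range(n_elements)]
    ms = []
    for t in range(n_steps + n_transient):
        fx = [tent(x) for x in xs]
        mean_fx = sum(fx) / n_elements
        xs = [(1.0 - coupling) * v + coupling * mean_fx for v in fx]
        if t >= n_transient:
            ms.append(sum(xs) / n_elements)
    return ms

if __name__ == "__main__":
    m = gcm_center_of_mass()
    return_map = list(zip(m[:-1], m[1:]))   # points (m(t), m(t+1)) for the return plot
    print("first return-map points:", [(round(a, 4), round(b, 4)) for a, b in return_map[:5]])
```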
the plot sketches a structured function ( with a finite thickness ) , and suggests a chaotic motion for the macroscopic variable . in the case of _ macroscopic chaos _ , the center of mass is expected to evolve with typical times longer than the characteristic time of the full dynamics ( the microscopic dynamics ) , the latter being set by the lyapunov exponent of the gcm . indeed , conceptually , macroscopic chaos for gcm can be thought of as the analogue of hydro - dynamical chaos for molecular motion . in spite of a huge microscopic lyapunov exponent ( of the order of the inverse collision time ) , one can have rather different behaviors at a hydro - dynamical ( coarse grained ) level : regular motion or chaotic motion . in principle , if the hydrodynamic equations were known , a characterization of the macroscopic behavior would be possible by means of standard dynamical system techniques . however , in a generic cml there are no general systematic methods to build up the macroscopic equations , apart from particular cases . we recall that for chaotic systems , in the limit of infinitesimal perturbations , the fsle displays a plateau at the lyapunov exponent for sufficiently small errors . however , for non infinitesimal errors , one can expect that the scale dependence of the fsle may give information on the characteristic time - scales governing the system and , hence , that it could be able to characterize the macroscopic motion . in particular , at large scales the fast microscopic components saturate and the fsle approaches a smaller value , which can be fairly called the `` macroscopic '' lyapunov exponent . the fsle has been determined by looking at the evolution of the separation , which has been initialized by shifting all the elements of the unperturbed system by the same small quantity , for each realization . the computation has been performed by choosing the tent map as the local map , but similar results can be obtained for other maps . the main result can be summarized as follows : * at small scales ( small compared with a crossover that depends on the number of elements ) , the `` microscopic '' lyapunov exponent is recovered ; * at large scales , another plateau appears , which can be much smaller than the microscopic one . the emerging scenario is that , at a coarse - grained level , the system can be described by an `` effective '' hydro - dynamical equation ( which in some cases can be low - dimensional ) , while the `` true '' high - dimensional character appears only at very high resolution . let us now consider the following map , which generates a diffusive behavior on the large scales : the map acts as a lift , with the integer part of the coordinate labeling the unit cell and the reduced map inside each cell given by eq . ( [ eq : mappaf ] ) . the largest lyapunov exponent can be obtained immediately from the slope of the local map . one expects the following scenario for the fsle : a plateau at the lyapunov exponent at small scales , and a diffusive decrease , controlled by the diffusion coefficient , at large scales . [ figure [ map ] : the map ( [ eq : mappaf ] ) shown with , superimposed , the approximating ( regular ) map ( [ eq:3 - 5 ] ) obtained by using intervals of slope smaller than one . ] consider now a stochastic system , namely a noisy map in which , as shown in fig . [ map ] , the deterministic part is a piecewise linear map which approximates the original map , and the noise is a stochastic process uniformly distributed in a small interval , with no correlation in time .
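before comparing the two models below , a sketch of how the diffusive behavior of such a lifted map can be measured from an ensemble of trajectories ; the piecewise - linear cell map used here , with slope larger than two so that orbits can escape to neighboring cells , is one standard illustrative choice and is not necessarily the exact map of eq . ( [ eq : mappaf ] ) .

```python
import random

def lifted_map_step(x, a=3.0):
    """One step of a lifted piecewise-linear map generating deterministic diffusion:
    inside each unit cell the local map has slope a > 2, and the lift satisfies
    F(x + 1) = F(x) + 1.  (An illustrative stand-in, not the exact map of the text.)"""
    n, y = divmod(x, 1.0)
    fy = a * y if y < 0.5 else a * y + 1.0 - a
    return n + fy

def diffusion_coefficient(a=3.0, n_particles=2000, n_steps=2000):
    """Estimate D from the mean squared displacement, <(x_t - x_0)^2> ~ 2 D t."""
    xs = [random.random() for _ in range(n_particles)]
    x0 = list(xs)
    for _ in range(n_steps):
        xs = [lifted_map_step(x, a) for x in xs]
    msd = sum((x - y) ** 2 for x, y in zip(xs, x0)) / n_particles
    return msd / (2.0 * n_steps)

if __name__ == "__main__":
    print("estimated diffusion coefficient D =", round(diffusion_coefficient(), 4))
```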
when the slopes of the approximating map are smaller than one , as is the case we consider , the map ( [ eq:3 - 5 ] ) , in the absence of noise , gives a non - chaotic time evolution . [ figure [ fslediff ] : lyapunov exponent versus the error size obtained for the map ( [ eq : mappaf ] ) and for the noisy ( regular ) map ( [ eq:3 - 5 ] ) ; the straight lines indicate the lyapunov exponent and the diffusive behavior . ] now we compare the finite size lyapunov exponent for the chaotic map ( [ eq:3 - 1 ] ) and for the noisy one ( [ eq:3 - 5 ] ) . in the latter the fsle has been computed using two different realizations of the noise . in fig . [ fslediff ] we show the fsle versus the error size for the two cases . the two curves are practically indistinguishable over a wide region . the differences appear only at very small scales , where one has a fsle which grows as the error decreases for the noisy case , while it remains at the same value for the chaotic deterministic case . both the fsle and the -entropy analysis show that we can distinguish three different regimes observing the dynamics of ( [ eq:3 - 5 ] ) on different length scales . on the large length scales we observe diffusive behavior in both models . on intermediate length scales both models show chaotic deterministic behavior , because the entropy and the fsle are independent of the scale and larger than zero . finally , on the smallest length scales we see stochastic behavior for the system ( [ eq:3 - 5 ] ) , while the system ( [ eq:3 - 1 ] ) still shows chaotic behavior . the above examples show that the distinction between chaos and noise can be a highly non trivial task , which makes sense only in very peculiar cases , e.g. , very low dimensional systems . nevertheless , even in this case , the entropic analysis can be unable to recognize the `` true '' character of the system due to the lack of resolution . again , the comparison between the diffusive map ( [ eq:3 - 1 ] ) and the noisy map ( [ eq:3 - 5 ] ) is an example of these difficulties : over a range of scales both systems , in spite of their `` true '' character , will be classified as chaotic , while at larger scales both can be considered as stochastic . in high - dimensional chaotic systems , with many degrees of freedom , one typically finds a plateau of the -entropy only below a resolution that shrinks as the number of degrees of freedom grows , while at larger scales the -entropy decreases , often with a power law . since also in some stochastic processes the -entropy obeys a power law , this can be a source of confusion . these kinds of problems are not abstract ones , as a recent debate on `` microscopic chaos '' demonstrates . the detection of microscopic chaos by data analysis has been recently addressed in a work of gaspard et al . these authors , from an entropic analysis of an ingenious experiment on the position of a brownian particle in a liquid , claim to give empirical evidence for microscopic chaos . in other words , they state that the diffusive behavior observed for a brownian particle is the consequence of chaos at a molecular level . their work can be briefly summarized as follows : from a long record of the position of a brownian particle they compute the -entropy with the cohen - procaccia method , from which they obtain the power - law behavior ( [ eq : gasp ] ) , controlled by the diffusion coefficient . then , _ assuming _ that the system is deterministic , and making use of the inequality between the ks entropy and the -entropy , they conclude that the system is chaotic . however , their result does not give direct evidence that the system is deterministic and chaotic . indeed , the power law ( [ eq : gasp ] ) can be produced by different mechanisms : 1 . a genuine chaotic system with diffusive behavior , as the map ( [ eq : mappaf ] ) ; 2 .
a non chaotic system with some noise , as the map ( [eq:3 - 5 ] ) , or a genuine brownian system ; 3 .a deterministic linear non chaotic system with many degrees of freedom ( see for instance ) ; 4 . a `` complicated '' non chaotic system as the ehrenfest wind - tree model where a particle diffuses in a plane due to collisions with randomly placed , fixed oriented square scatters , as discussed by cohen et al . in their comment to ref . .it seems to us that the weak points of the analysis in ref . are : \a ) the explicit assumption that the system is deterministic ; \b ) the limited number of data points and therefore limitations in both resolution and block length .the point ( a ) is crucial , without this assumption ( even with an enormous data set ) it is not possible to distinguish between 1 ) and 2 ) .one has to say that in the cases 3 ) and 4 ) at least in principle it is possible to understand that the systems are `` trivial '' ( i.e. not chaotic ) but for this one has to use a huge number of data .for example cohen et al . estimated that in order to distinguish between 1 ) and 4 ) using realistic parameters of a typical liquid , the number of data points required has to be at least . concluding , we have the apparently paradoxical result that `` complexity '' helps in the construction of models .basically , in the case in which one has a variety of behaviors at varying the scale resolution , there is a certain freedom on the choice of the model to adopt .for some systems the behavior at large scales can be realized both with chaotic deterministic models or suitable stochastic processes . from a pragmatic point of view ,the fact that in certain stochastic processes can be indeed extremely useful for modeling such high - dimensional systems .perhaps , the most relevant case in which one can use this freedom in modeling is the fully developed turbulence whose non infinitesimal ( the so - called inertial range ) properties can be successfully mimicked in terms of multi - affine stochastic process ( see ref .the guideline of this paper has been _ the interpretation of different aspects of the predictability of a system as a way to characterize its complexity_. we have discussed the relation between chaoticity , the kolmogorov - sinai entropy and algorithmic complexity . as clearly exposed in the seminal works of alekseev and yakobson and ford , the time sequences generated by a system with sensitive dependence on initial conditions have non - zero algorithmic complexity .a relation exists between the maximal compression of a sequence and its ks - entropy .therefore , one can give a definition of complexity , without referring to a specific description , as an intrinsic property of the system .the study of these different aspects of predictability constitutes a useful method for a quantitative characterization of `` complexity '' , suggesting the following equivalences : the above point of view , based on dynamical systems and information theory , quantifies the complexity of a sequence considering each symbol relevant but it does not capture the structural level .let us clarify this point with the following example .a binary sequence obtained with a coin tossing is , from the point of view adopted in this review , complex since it can not be compressed ( i.e. it is unpredictable ) . on the other handsuch a sequence is somehow trivial , i.e. 
with low `` organizational '' complexity .it would be important to introduce a quantitative measure of this intuitive idea .the progresses of the research on this intriguing and difficult issue are still rather slow .we just mention some of the most promising proposals as the logical depth and the sophistication .f. takens , `` detecting strange attractors in turbulence '' in _ dynamical systems and turbulence ( warwick 1980 ) _ , vol .898 of _ lecture notes in mathematics _ ,rand and l .- s .young ( eds . ) , pg . 366 , springer - verlag , berlin ( 1980 ) . | some aspects of the predictability problem in dynamical systems are reviewed . the deep relation among lyapunov exponents , kolmogorov - sinai entropy , shannon entropy and algorithmic complexity is discussed . in particular , we emphasize how a characterization of the unpredictability of a system gives a measure of its complexity . a special attention is devoted to finite - resolution effects on predictability , which can be accounted with suitable generalization of the standard indicators . the problems involved in systems with intrinsic randomness is discussed , with emphasis on the important problems of distinguishing chaos from noise and of modeling the system . _ all the simple systems are simple in the same way , each complex system has its own complexity _ ( freely inspired by _ anna karenina _ by lev n. tolstoy ) |
there has been growing interest in the non - extensive statistical mechanics based on tsallis generalized entropy ( in unit ) : at least formally , tsallis entropy is an extension of conventional boltzmann - shannon ( bs ) entropy with one - real - parameter of . in the limit of , tsallis entropy eq .( [ sq ] ) reduces to bs entropy , , since .the parameter may be interpreted as a quantity characterizing the degree of non - extensivity of tsallis entropy through the so - called _ pseudo - additivity _ : where and denote two statistically independent sub - systems .it is worth while to realize that the pseudo - additivity of is one of the crucial ingredients of tsallis non - extensive statistical mechanics .in fact , the uniqueness of tsallis entropic form is proved for an entropy that fulfills the generalization of the shannon - kinchin axioms based on the pseudo - additive conditional entropy obeying the pseudo - additivity instead of additivity .then what is a role of the pseudo - additivity ? by rewriting eq .( [ sq ] ) as the following form , we see that the pseudo - additivity of tsallis entropy comes from the -logarithmic function , which is defined by since it equips the pseudo - additivity as the inverse function of the -logarithmic function is -exponential one , which is defined by ^{\frac{1}{1-q}},\ ] ] for and otherwise .as tsallis has already pointed out , the parameter plays a similar role as the light velocity in special relativity or planck s constant in quantum mechanics in the sense of a one - parameter extension of classical mechanics . unlike or ,however , does not seem to be a universal constant .thus it is a natural question whether is merely an adjustable parameter or not in the non - extensive statistical mechanics . in some cases the parameter has no physical meaning , but when it is used as an adjustable parameter the resulting distributions give excellent agreement with experimental data . in other but a few cases , uniquely determined by the constraints of the problem and thereby may have a physical meaning .recent studies of the characterization of mixing in one - dimensional ( 1d ) dissipative maps and of symbolic sequences seems to provide a positive answer to the above question since there exists the special value such that becomes linear .for example , in the studies of mixing in simple logistic map , can be obtained by three different methods based on : i ) the upper bound of a power - law sensitivity to initial conditions ; ii ) the singularity indices in multi - fractal structure ; and iii ) the rate of information loss in terms of .the remarkable fact is that all methods lead to the same value of , which may shed some light on the physical meaning of .they established some connections among the sensitivity to initial conditions , tsallis entropy and the proper entropic index .in particular we focus on the work of buiatti _et al . _ , in which they have shown , for the symbolic sequences with length of , that the generalized block entropy is proportional to when the proper entropic index is used . in other words, there may exist a proper entropic index , which may statistically characterize a non - extensive system .in this work we study a reason why the generalized block entropy in the work of buiatti _et al . 
_ is proportional to the length of symbolic sequences when we use the proper entropic index .in particular we focus our attention on a role of the pseudo - additivity of the conditional entropy in characterizing a non - extensive system with the proper entropic index .we reformulate the constraint of obtaining the proper entropic index as follows : the pseudo - additive conditional entropy becomes additive with respect to when the proper entropic index is used . in other words , for the special value of entropic index , the additivity of the conditional entropy is held in spite of that the involved subsystems are not statistically independent of each other .the rest of the paper is organized as follows : in the next section we explain the constraint of obtaining the proper entropic index in the work of buiatti _et al . _ , and propose our constraint of obtaining .we then show the equivalence of the two constraints and discuss the underlying simple mechanism of why the conditional entropy becomes additive for the proper entropic index under the assumption of equi - probability . in section 3 , we discuss long - range correlation expressed by -exponential function .section 4 is devoted to our conclusions .buiatti _ et al . _ showed , by studying a symbolic binary sequence with a long - range correlation , that for the probability of each path with length of , the generalized block entropy , is proportional to if and only if the variable index equals the proper entropic index .we reformulate this constraint as the following .the pseudo - additive conditional entropy , which is defined by should satisfy the _ additivity _, when is equal to the proper entropic index . at first sightour constraint seems to be paradoxical , since is pseudo - additive in general .we explain in the followings that the equivalence of the two constraints which determine , and discuss an underlying simple mechanism connecting the variable entropic index with the proper index .the method of buiatti _et al . _is rephrased as follows : is a linear function of when , i.e. where is a proportional constant , or equivalently subtracting from the both sides and after a little bit algebra , eq .( [ linear - sq ] ) is rewritten as dividing the both sides of eq .( [ linear - sq2 ] ) by and using the definition of the conditional entropy of eq .( [ cond - ent ] ) , it is obvious that eq .( [ linear - sq ] ) is equivalent to eq .( [ additive - sq ] ) .hence we have reformulated the method of buiatti __ as follows : the conditional entropy becomes additive when we use the proper entropic index .having explained the equivalence of buiatti _ et al . _ and our methods of obtaining a proper entropic index , we now consider why the conditional entropy is proportional to the length of symbolic sequences when . under the assumption of equi - probability , tsallis entropy can be written in terms of the number of states for the symbolic sequences with the length of as then the conditional entropy of eq .( [ cond - ent ] ) is expressed in terms of as suppose that the number of states obeys a power - law evolution , which can be well described by with the proper of a system of interest , where is another constant . the relation between and of eq .( [ linear - sq ] ) is discussed in the next section .substituting eq .( [ w ] ) into eq .( [ cond - sq ] ) , the corresponding conditional entropy can be written as = \ln_q [ \exp_{q^*}(l'_{q^ * } \ ; n ) ] .\ ] ] now we readily see that is proportional to , if and only if we set to . 
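a small numerical sketch of the statement just made , using the q - deformed functions defined earlier in the text ; the value of the proper index and the two constants in the assumed q - exponential growth of the number of states are illustrative choices made here . under equi - probability the entropy is the q - logarithm of the number of states , so its increments are constant ( the linear , additive behavior of the text ) only when the variable index matches the proper one .

```python
def ln_q(x, q):
    """Tsallis q-logarithm, (x**(1-q) - 1)/(1-q); reduces to ln x as q -> 1."""
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def exp_q(x, q):
    """Tsallis q-exponential, the inverse of ln_q (set to zero when the bracket is negative)."""
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

# pseudo-additivity of the q-logarithm: ln_q(xy) = ln_q x + ln_q y + (1-q) ln_q x ln_q y
q, xa, xb = 1.5, 2.0, 7.0
lhs = ln_q(xa * xb, q)
rhs = ln_q(xa, q) + ln_q(xb, q) + (1.0 - q) * ln_q(xa, q) * ln_q(xb, q)
print("pseudo-additivity of ln_q holds:", abs(lhs - rhs) < 1e-12)

# equiprobability: S_q(N) = ln_q W(N); assume W(N) grows q*-exponentially.
q_star, a, b = 0.6, 0.5, 0.3                    # illustrative constants chosen here
W = lambda N: exp_q(a + b * N, q_star)
for q_try in (0.4, q_star, 0.8, 1.2):
    S = [ln_q(W(N), q_try) for N in range(1, 7)]
    increments = [round(S[i + 1] - S[i], 4) for i in range(5)]
    print(f"q = {q_try:<4}  increments of S_q(N): {increments}")
```

only the line with q equal to the proper index shows constant increments , which is the equivalence discussed above .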
in other words , if obeys the -exponential evolution of eq .( [ w ] ) , then it is reasonable to use its inverse function in order to define the conditional entropy .tsallis entropic description may be well suited for a long - range correlated system which obeys a power - law evolution described by -exponential function .then how can -exponential function express long - range correlation ?we here explain that long - range correlation may be expressed by the non - factorizability of -exponential function into independent terms .the long - range correlation in this case means the initial condition dependency of long duration .it is known that -exponential function can not be resolved into a product of independent terms unless .for example is not resolved into the independent factors as .instead it can be expressed as the product of the dependent factors as .let us focus on the long - range correlation associated with the power - law evolution of described by eq .( [ w ] ) . using eqs .( [ lin - sq ] ) and ( [ equi - prob ] ) , the -exponential dependency of can be expressed as = \exp_q [ s_q(1 ) + l_qn ] \nonumber \\ & = & \exp_q[s_q(1 ) ] \cdot \exp_q[\frac{l_q n}{1+(1-q)s_q(1 ) } ] \nonumber \\ & = & \exp_q[s_q(1 ) ] \cdot \exp_q[\frac{l_q}{1+(1-q)s_q(1 ) } ] \nonumber \\ & \times & \cdots \times \exp_q [ \frac{l_q}{1 + ( 1-q)\{s_q(1)+n-1\ } } ] .\label{long - range}\end{aligned}\ ] ] note that appears in all terms and this reflects the initial condition dependency of long duration .this feature is consistent with the single - trajectory approach by montangero __ in which they fix a given initial condition in order to obtain the of the non - extensive version of kolmogorov - sinai entropy for the dynamics of the logistic map at the chaotic threshold .because of the initial condition dependency of long duration , an averaging over many different initial conditions is not appropriate. now let us focus on the relation between the proportional constants and in the previous section . from the second line of eq .( [ long - range ] ) , we see that .\ ] ] comparing this with eq .( [ w ] ) , and are related by which is the same relation of between the lagrange multiplier of optimal lagrange multipliers ( olm ) method and that of tsallis - mendes - plastino one in canonical ensemble formalism , where denotes partition function .we have proposed a constraint of obtaining the proper tsallis entropic index in describing the evolutions of correlated symbolic sequences with length .the proper entropic index can be determined by requiring that the conditional entropy should be proportional to if and only if equals the proper entropic index . in other words becomes _ additive _ for the proper .it is the non - factorizability of -exponential function into independent terms that can express a long - range correlation .one of the authors ( t. w ) acknowledges s. abe , t. arimitsu and n. arimitsu for useful comments and valuable discussion at the 9th symposium on non - equilibrium statistical physics held at tsukuba , japan . c. tsallis , cond - mat/0010150 .r. j. v. dos santos , j. math .* 38 * ( 1997 ) 4104 ; k. s. fa , j. phys .a * 31 * ( 1998 ) 8159 ; s. abe , phys . lett .a * 271 * ( 2001 ) 74 . c. e. shannon and w. weaver , _ the mathematical theory of communication _ ( university of illinois pres , urbana , 1963 ) ; a. i. khinchin , _ mathematical foundations of information theory _( dover , new york , 1957 ) . u. m. s. costa , m. l. lyra , a. r. plastino and c. tsallis , phys .e * 56 * ( 1997 ) 245 .v. 
latora , a. rapisarda , c. tsallis and m. baranger , phys .lett . a * 273 * ( 2000 ) 97 .f. a. b. f. de moura , u. tirnakli and m. l. lyra , cond - mat/0002163 .m. buiatti , p. grigoline and l. palatella , physica a * 268 * ( 1999 ) 214 .t. arimitsu and n. arimitsu , phys .e * 61 * , ( 2000 ) 3273 ; preprint ( 2001 ) .s. abe and a. k. rajagopal , physica a * 289 * ( 2001 ) 157 ; quant - ph/0003145 . m. ignaccolo and p. grigolini , cond - mat/0004155. s. martnez , f. nicols , f. pennini and a. plastino , physica a * 286 * ( 2000 ) 489 . | tsallis non - extensive entropy enables us to treat both a power and exponential evolutions of underlying microscopic dynamics on equal footing by adjusting the variable entropic index to proper one . we propose an alternative constraint of obtaining the proper entropic index that the non - additive conditional entropy becomes additive if and only if in spite of that the associated system can not be decomposed into statistically independent subsystems . long - range ( time ) correlation expressed by -exponential function is discussed based on the nature that -exponential function can not be factorized into independent factors when . and non - extensivity , tsallis entropy , pseudo - additivity , power law 05.20.-y , 05.90.+m , 05.45.-a |
the classic black hole thermodynamics relates the mass , surface gravity , and outer horizon area of a black hole solution to the energy , temperature , and entropy ( , , and , resp . ) according to the relations ( [ oldtd ] ) , in units where the fundamental constants have been set to unity . the formalism has been extended by allowing the cosmological constant of the theory to be dynamical , supplying a pressure proportional to minus the cosmological constant , along with its conjugate volume . now , the black hole mass is related to the enthalpy of the system instead of the energy . the first law now reads as in equation ( [ newtd ] ) , while the temperature and the entropy remain related to the surface gravity and area of the black hole as in equations ( [ oldtd ] ) . the gauge charges and angular momenta appear together with their conjugate potentials and angular velocities , respectively . the black holes may have other parameters and they enter additively with their conjugates into the first law ( [ newtd ] ) in the usual way . this formalism works in multiple dimensions . interestingly , for the static black holes , the thermodynamic volume is just the naive `` geometric '' volume of the black holes : the volume of the ball whose radius is the horizon radius ( in our notation in this paper ) . in this _ extended _ black hole thermodynamics , since the pressure and volume are now in play , alongside temperature and entropy ( [ newtd ] ) , it is natural to study devices which can extract useful mechanical work from heat energy , _ i.e. _ , traditional heat engines . these devices were named `` holographic heat engines '' , since for negative cosmological constant ( _ i.e. _ with positive pressure ) such cycles represent a journey through a family of holographically dual non gravitational field theories ( at large ) defined in one dimension fewer . although we have holographic applications in mind for some of this work , for this paper our focus will be on the black hole side of the story , an interesting context in its own right . so for the purposes of the gravitational theory , the working substance of the heat engine is a particular black hole solution of the gravity system . it supplies an equation of state through the relation between its temperature and the black hole parameters , defined in the usual way ( we will give examples below ) . the precise form of all these relations depends on the type of black hole , and on the parent theory of gravity under discussion . one may extract mechanical work from such an engine _ via _ the pressure volume term in the first law of thermodynamics in the classic way : define a closed cycle in state space during which there is a net input heat flow , a net output heat flow , and a net output work . so the net work output equals the difference between the input and output heat flows . a central quantity , the efficiency of the cycle , is defined as the ratio of the work output to the heat input . its value is sensitive to the details of the equation of state of the system and also to the choice of cycle in state space . consider the cycle given in figure [ fig : prototype ] . in refs . it is explained why this is a natural choice for static black holes . for such holes , the entropy and the volume are not independent , both being simple functions of the horizon radius . so isochores are adiabats , and hence the only heat flows are along the top and bottom lines . computing the efficiency boils down to evaluating the integral of the specific heat at constant pressure along those isobars . in general , calculation of the efficiency is a difficult task to perform exactly using this approach , and high temperature or high pressure computations are used to get approximate results . recently , however , ref .
showed a much simpler way to evaluate the efficiency . the first law is written in terms of the enthalpy , and along the isobars the pressure term drops out . therefore the total heat flow along an isobar is simply the enthalpy change . normally , that might not be a useful rewriting , but in extended gravitational thermodynamics a precise expression for the enthalpy is readily available , since it is just the black hole mass . this results in a remarkably simple exact formula , ( [ simplemass ] ) , where the black hole mass is evaluated at each corner of the rectangle , with the labelling given in figure [ fig : prototype ] . the mass is usually presented as a function of other thermodynamic variables . in the examples of this paper , since the entropy is a simple function of the volume , we will easily be able to write down the mass as a function of pressure and volume . it was also shown in ref . that the result ( [ simplemass ] ) can be used as the basis for an algorithm for computing the efficiency of a cycle of _ arbitrary _ shape to any desired accuracy . any closed shape in the state space can be approximated by tiling with a regular lattice of rectangles . this is possible because cycles are additive ( see figure [ fig : addlaw ] ) . consequently , only the cells at the edge contribute . any mismatch between the edge of the cycle 's contour and the tiling 's edge can be reduced by simply shrinking the size of the unit cell . edge cells are called hot cells if they have their upper edges open , and cold cells if they have their lower edges open . summing all the hot cell mass differences ( evaluated at the top edges ) will give the total input heat , and summing all the cold cell mass differences ( evaluated at the bottom edges ) will yield the total output heat . so the efficiency is given by ( [ fullmass ] ) , where we have labelled all cell corners in the same way as for the prototype cycle in figure [ fig : prototype ] . an example with a triangular cycle was given in ref . to show the algorithm in action , supporting the previous argument . as already stated , a given black hole , thought of as a working substance for a heat engine , supplies a particular equation of state . the efficiency will depend upon this choice . moreover , the efficiency will also depend upon the details of the choice of cycle . for maximizing the efficiency , certain choices of cycle will be better adapted to a particular working substance ( choice of black hole ) than others . ( for example , for the same cycle of figure [ fig : prototype ] , a non - static black hole will generically have a larger exhaust heat due to non - zero heat flows along the isochores , and therefore a smaller efficiency .
)so a natural question arises : how does one compare the efficiency of different working substances ?we have in mind a comparison that depends as little as possible on special choices of cycle .in other words , in comparing working substances for making a heat engine , we should _ not _ choose a special cycle that favours one black hole s particular properties over another .notice that this requirement requires us to make a choice that is in opposition to what is normally done : cycles are usually chosen in a way that is naturally adapted to the equation of state in order to simplify computation .so we are asking that a more difficult choice of cycle be made , by necessity .this is where the exact formula and algorithm reviewed above come in .we can pick a benchmark cycle of whatever shape seems appropriate and implement the algorithm to compute to any desired accuracy .this freedom allows us to make the following choice of benchmark : we choose the cycle to be a _ circle _ in the plane .the logic of this choice is that the circle is a simply parametrised shape which is also unlikely to favour _ any _ species of black hole ( working substance ) whatsoever . no thermodynamic variable is unchanged on any segment of the cycle , so it is , in some sense , a difficult cycle for all black holes .all that needs be specified is the origin of the circle and its radius .these properties make it an excellent choice of benchmark .the outline of this paper is as follows . in section [ sec : circle ], we set up the circular cycle as our benchmarking tool , and explain our implementation of the exact formula and algorithm of ref. for calculating the efficiency .we then discuss , in section [ sec : ideal ] , a very special case of working substance : an ideal gas " like system .it allows us to derive some exact results that help test our implementation , and which also set a new benchmark standard for later use . in section [ sec : comparison ] , we compare three examples of black holes as working substances for heat engines : charged ( reissner nordstrom like ) black holes , gauss bonnet black holes , and born infeld black holes .we conclude in section [ sec : conclusions ] with a brief discussion of future applications of our benchmarking procedure .for our circle , we implemented the algorithm and exact formula of ref. , with the aid of a computer , as follows : imagine that we have chosen the origin and radius , , of the circle in the plane .we next overlaid it onto the regular lattice of squares of total side length . for simplicity , we used even so that there are same number of squares both in the upper half and in the lower half of the circle . next we computed the pressure and volume at each corner of all the squares . using simple geometry ,we determined which squares intersect the circle .we checked for cases where two squares share a common isobar and both intersect the circle .then , if we are in the upper part of the circle , we remove the one below and keep only the upper square .we did this check in the lower half of the circle in a similar fashion .this allowed us to identify all the hot cells and cold cells of the approximation , and their coordinates .the black hole mass is a function of pressure and volume only ( with some parameters that we have already fixed ) , so we can compute its value at each corner .then we use the formula ( [ fullmass ] ) to give us the approximate for that level of granularity . 
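a minimal sketch of the basic building block entering these sums , the exact rectangle formula with the two heat flows written as mass ( enthalpy ) differences along the isobars , assuming corners 1 and 2 sit on the upper isobar and 4 and 3 on the lower one as in figure [ fig : prototype ] . the mass function used in the demonstration , M = P V , is only the ideal - gas - like limiting form discussed in section [ sec : ideal ] , not a full black hole equation of state .

```python
def rectangle_efficiency(mass, p_top, p_bottom, v_left, v_right):
    """Exact efficiency of a rectangular cycle in the (P, V) plane,
    eta = 1 - Q_C / Q_H, with the heat flows written as mass differences:
    Q_H = M(p_top, v_right) - M(p_top, v_left)     (hot isobar),
    Q_C = M(p_bottom, v_right) - M(p_bottom, v_left)  (cold isobar)."""
    q_hot = mass(p_top, v_right) - mass(p_top, v_left)
    q_cold = mass(p_bottom, v_right) - mass(p_bottom, v_left)
    return 1.0 - q_cold / q_hot

if __name__ == "__main__":
    # stand-in equation of state: the ideal-gas-like limit, taken here as M = P V
    ideal_mass = lambda p, v: p * v
    eta = rectangle_efficiency(ideal_mass, p_top=4.0, p_bottom=1.0, v_left=10.0, v_right=30.0)
    print("eta =", eta, " (for M = P V this reduces to 1 - p_bottom / p_top =", 1 - 1.0 / 4.0, ")")
```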
increasing the value of the size of the unit cell smaller , making the path traced by the hot and cold cells a better fit to our circle , reducing the error in .indeed , we found that just as for the triangle prototype of ref. , the efficiency converges nicely for large .( see the examples in section [ sec : comparison ] . )( 100 squares ) .red lines are the tops of hot cells and blue lines are the bottoms of cold cells .as increases , these lines converge to the boundary of the circle .the dashed black lines are sample isotherms.,width=336 ] we can even do more . since temperature is also a function of and , we can compute it at each corner . then while we run over all the cells to compute , we can keep track of the maximum and the minimum temperatures ( and ) achieved in the entire cycle .hence , we can compute the carnot efficiency for this engine .this will be a check of our results because no cycle can have a greater efficiency than a carnot cycle .figure [ fig : ccn10 ] shows an example for .the green crosses show the points of the square lattice .the circle is our circular cycle .red segments are the tops of the hot cells and blue segments are the bottoms of cold cells .the black dashed lines show a few sample isotherms determined from the underlying equation of state of the system in question .( this example is a snapshot of the einstein hilbert maxwell case more fully explored in section [ ehm ] ) . in choosing our benchmark cycle to compare different black holes, we should fix the circle origin and radius .generically , the choices do nt matter , as long as they are the same across the comparison .we chose here , and in the following sections , purely arbitrarily , except for making sure that we avoided any regions where the equations of state of the black holes under comparison had any multi valuedness that would signal non trivial phase transitions .such regimes require a separate , more careful study in this heat engine context that are beyond the scope of this paper .one might worry that since the circular cycle is presumably not even close to a cycle for which one has an analytically computable result , if there was an error , it might not be noticed .the carnot test above is useful , but it is a rather weak upper bound on the efficiency .we derive some complementary tests , and a stronger ( exact ) bound , in the next section .before we proceed to study some black hole examples we briefly pause to study a simple but instructive case .it is in fact a limiting form of all of the black hole solutions we ll discuss shortly . as discussed in ref. it deserves to be called an ideal gas case , and as such , sets an additional standard by which we might assess other working substances . in dimension ,the leading large horizon radius ( ) limit of all the asymptotically anti de sitter black holes we will discuss is rather simple , with dependence for the mass and temperature as follows : where is the volume ( _ i.e. , _ surface area ) of the unit round sphere .the exact thermodynamic volume for all of the static black holes under study is : and so we have the familiar ideal gas " behaviour in this large limit : a family of hyperbolae in the plane where . this ideal gas can be obtained as a limit for any of our black holes ( in later sections ) as either a large limit or as a high temperature limit . 
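The next passage studies this limiting case in its own right. For use with the sketch above, one can encode it directly; the closed forms below are our reconstruction of the elided large-horizon-radius formulas, namely that the enthalpy reduces to the product pV (consistent with the later remark that the pressure factors out of every mass difference) and that the temperature scales as p V^{1/(D-1)}, whose overall normalisation cancels in the Carnot ratio. The spacetime dimension D is a placeholder, since its value is not reproduced in this extract.

```python
D = 5  # placeholder spacetime dimension (the value used in the paper is not reproduced here)

def mass_ideal(p, V):
    # large horizon radius limit: the enthalpy reduces to p*V (our reconstruction)
    return p * V

def temp_ideal(p, V):
    # isotherms of the limiting "ideal gas": T proportional to p * V**(1/(D-1));
    # the proportionality constant drops out of the ratio T_min / T_max
    return p * V**(1.0 / (D - 1))
```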
before moving on to those cases, we can study this in its own right , taking the above as the equation of state everywhere in the plane .notice first that the efficiency of any cell such as the prototype of figure [ fig : prototype ] simplifies nicely in this case .this is because the mass is simply , and hence factors out in each mass difference , leaving only a volume difference .so for figure [ fig : prototype ] is just . turning to the efficiency of the circle, the factorization into sums of volume differences means that there is no dependence of the result on the volume coordinate of the circle s origin : any shift in the origin will cancel out everywhere .we can say even more in this case however .in fact , the terms in the sums in the algorithm ( [ fullmass ] ) are actually entirely geometrical in interpretation ! for example , for a hot cell a term is of the form .this is simply the area of the rectangular strip that starts on the axis and is bounded above by the top of the cell .this is a clue to writing an _ exact _formula for the efficiency in the case of our ideal gas .the simplest way to do it is to rewrite as the ratio of work to heat flowing in , .now while from our observation above , is , in the large limit , exactly the area underneath the upper semi circle of the circular path : , so our result is : where is the pressure at the centre of the circle .this exact formula is rather surprising .notably , in addition to being independent of it is also independent of spacetime dimension , but the real surprise is that the algorithm assembled itself into a purely geometric result that yielded an exact formula for what is , on the face of it , a difficult shape of cycle .in fact , this exact geometrical result will work for _ any cycle shape_. perhaps there can be other surprises of this sort for other systems besides this special ideal gas case .the formula is also a rather useful check on our methods for a number of reasons .the first is that the and dependence are non trivial predictions , and so we were obliged to check to see if our discrete algorithm reproduces such dependence , and indeed it did .for example , figure [ fig : effrad ] shows , for , some example points computed by inserting the ideal gas into our algorithm .the red curve is the exact result of equation ( [ eq : idealgasefficiency ] ) . .the ideal gas ( see text ) equation of state was used in the algorithm for with the circle origin at , and radius .the blue crosses plot the result for .the red curve is a plot of the exact result from equation ( [ eq : idealgasefficiency]).,width=240 ] the second reason this is a strong check is that it presents a lower upper bound on our results than the upper bound given by carnot ( discussed in the previous section ) . our black holes , in the regions where we study them ,can be thought of as perturbations of this ideal gas case , and so we should expect that the efficiencies we obtain approach ( but do not exceed ) the ideal gas result .we have , for the comparisons to come , the circle s origin at , , and its radius as , for which the ideal gas efficiency is ( to six significant figures ) .it is worth noting that using the discretisation algorithm to compute the ideal gas case gives at and at .( moving significantly beyond to see further convergence proved beyond the numerical capabilities of the system we were using . 
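As a check of both the lattice algorithm and the geometric argument above, the short sketch below (reusing benchmark_efficiency and mass_ideal from the earlier sketches) compares the discretised efficiency with the purely geometric ratio described in the text: the enclosed area divided by the area under the upper semicircle which, on our reading, is measured down to the p = 0 axis. The circle centre and radius are placeholder values, since the ones used for the figures are not reproduced in this extract; note that, as observed above, the volume coordinate of the origin drops out of the result.

```python
import numpy as np

p0, V0, r = 20.0, 110.0, 10.0        # placeholder benchmark circle (assumed values)

eta_lattice = benchmark_efficiency(mass_ideal, V0=V0, p0=p0, radius=r, N=1000)

W = np.pi * r**2                           # work: area enclosed by the circle
Q_H = 2.0 * r * p0 + 0.5 * np.pi * r**2    # heat in: area under the upper semicircle
eta_geometric = W / Q_H

print(eta_lattice, eta_geometric)    # the two values should agree increasingly well as N grows
```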
)we now apply our benchmark cycle to a sampling of different black holes acting as working substances .we will only briefly introduce the black holes since they are well known in the literature .they were used in heat engines in refs. , with some analysis and comparison presented there , but now we have a clearer , more systematic benchmarking procedure .we will work in for definiteness ( it is trivial to insert the formulae for other dimensions into our algorithm ; we saw no compelling reason to present the results for other dimensions here ) , and our benchmark circle will be centred at , , with radius . in each casewe list the bulk action in dimensions and the mass and temperature of the black hole . for static black holesthe volume is simply : . also , recall that the cosmological constant is related to pressure _ via _ , and in dimensions sets a length scale through .so in dimensions , .the mass and temperature formulae we present will have had eliminated in favour of .note that in presenting our results for the efficiency , the engine s actual efficiency will be denoted by ( without a subscript ; the surrounding text will make it clear which case is being discussed ) and the associated carnot efficiency will be denoted ( again with context making it clear as to which case is being discussed ) .this will help us avoid a proliferation of subscripts . the bulk action for the einstein hilbert maxwell system in is : . heremaxwell black holes are used as the working substance .blue crosses represent the carnot efficiency , while black squares represent . for , and to 0.6674942748 and 0.5653677678 respectively.,width=288 ] we can now write the mass and the temperature of the einstein hilbert maxwell ( _ i.e. , _ reissner nordstrom like ) black hole solution , parametrized by a charge ( which we will later choose as ) : and we can write them entirely in terms of and , using . figure [ fig : emn500 ] shows the results of the algorithm for computing and for the benchmark circle in this case . in the presence of a gauss bonnet sector ,the action becomes : where is the gauss bonnet parameter which has dimensions of .if we set in ( [ gbaction ] ) we go back to the previous case of einstein hilbert maxwell system ( [ ehmaction ] ) . . herebonnet black holes are used as the working substance .blue crosses represent the carnot efficiency , while black squares represent . for , and to 0.6674954523 and 0.5653678245 respectively.,width=288 ] the mass and temperature of the black hole , parametrized by and are : where .we will again work with and we choose a sample value of the coupling as .see figure [ fig : gbn500 ] for and from the benchmark analysis .the so called born infeld action is a non - linear generalization of the maxwell action , controlled by the parameter : if we take the limit in ( [ bisector ] ) we recover old maxwell action . the einstein hilbert born infeld bulk action in is obtained by replacing the maxwell sector in equation [ ehmaction ] with this action . .here born infeld black holes are used as the working substance .blue crosses represent the carnot efficiency , while black squares represent . for , and to 0.6674942730 and 0.5653678967 respectively.,width=288 ] the exact results for the born infeld black hole s mass and temperature are known , but for our purposes , it is enough to expand them in , keeping only leading non trivial terms . 
for the mass : and the temperature : the exact formulae are computationally intensive , and for any significant , there are far too many computations to allow computation of the efficiency in a reasonable amount of time ( especially in ) and so we chose to make this truncation at the outset .we worked with and in our benchmark studies , the results of which are shown in figure [ fig : bin500 ] for and . in figure[ fig : pc ] we gather all the efficiencies computed using the benchmark cycle together .the gauss bonnet and born infeld cases , thought of as perturbations of the einstein maxwell case , have higher efficiencies , although it is interesting that the differences begin to show only in the 8th significant figure , for the parameter values chosen for and .we explored other parameter values ( while making sure to stay in the physical range allowed by reality of the mass for the gauss bonnet case ) and found a very weak dependence of the efficiency as they varied .( this all matches observations made in refs. in the high temperature limit . )they all in turn have significantly lower efficiency than the ideal gas case listed at the end of section [ sec : ideal ] . with circle origin at and radius .for additional comparison , the ideal gas case of section [ sec : ideal ] has at , and is exactly.,width=192 ]we ve defined a new way of comparing different black holes , given meaning in the context of defining black hole heat engines in extended thermodynamics .our benchmarking allowed us to compare four important cases against each other , and we found results consistent with earlier studies reported in refs. , but here we ve established a more robust framework for comparison ( a standard circular cycle ) facilitated by the exact formula and algorithm of ref. . along the way , we found a fascinating case where the algorithm itself collapses to another exact result , this time the exact efficiency of an ideal gas " example. it would be fascinating to see if other exact results of this kind can be obtained for other non trivial systems. it would be interesting to study other black holes using this same benchmarking scheme in order to compare more properties of heat engine working substances .extend all this to non static cases would be particularly worthwhile .finally , the possible applications of all of this to holographically dual strongly coupled field theories is worth exploring .we hope to report on some of this elsewhere .we would like to thank the us department of energy for support under grant de cvj would like to thank amelia for her patience and support .j. d. bekenstein , `` black holes and entropy , '' http://dx.doi.org/10.1103/physrevd.7.2333[_phys.rev._ * d7 * ( 1973 ) 23332346 ] . j. d. bekenstein , `` generalized second law of thermodynamics in black hole physics , '' http://dx.doi.org/10.1103/physrevd.9.3292[_phys.rev._ * d9 * ( 1974 ) 32923300 ] .s. hawking , `` particle creation by black holes , '' http://dx.doi.org/10.1007/bf02345020[_commun.math.phys._ * 43 * ( 1975 ) 199220 ] . s. hawking , `` black holes and thermodynamics , '' http://dx.doi.org/10.1103/physrevd.13.191[_phys.rev._ * d13 * ( 1976 ) 191197 ] .m. m. caldarelli , g. cognola , and d. klemm , `` thermodynamics of kerr - newman - ads black holes and conformal field theories , '' http://dx.doi.org/10.1088/0264-9381/17/2/310[_class.quant.grav._ * 17 * ( 2000 ) 399420 ] , http://arxiv.org/abs/hep-th/9908022[arxiv:hep-th/9908022 [ hep - th ] ] .s. wang , s .- q .wu , f. xie , and l. 
dan , `` the first laws of thermodynamics of the ( 2 + 1)-dimensional btz black holes and kerr - de sitter spacetimes , '' http://dx.doi.org/10.1088/0256-307x/23/5/009[_chin.phys.lett._ * 23 * ( 2006 ) 10961098 ] , http://arxiv.org/abs/hep-th/0601147[arxiv:hep-th/0601147 [ hep - th ] ] .y. sekiwa , `` thermodynamics of de sitter black holes : thermal cosmological constant , '' http://dx.doi.org/10.1103/physrevd.73.084009 [ _ phys.rev . _ * d73 * ( 2006 ) 084009 ] , http://arxiv.org/abs/hep-th/0602269[arxiv:hep-th/0602269 [ hep - th ] ] . e. a. larranaga rubio , `` stringy generalization of the first law of thermodynamics for rotating btz black hole with a cosmological constant as state parameter , '' http://arxiv.org/abs/0711.0012[arxiv:0711.0012 [ gr - qc ] ] .d. kastor , s. ray , and j. traschen , `` enthalpy and the mechanics of ads black holes , '' http://dx.doi.org/10.1088/0264-9381/26/19/195011 [ _ class.quant.grav . _* 26 * ( 2009 ) 195011 ] , http://arxiv.org/abs/0904.2765[arxiv:0904.2765 [ hep - th ] ] .b. p. dolan , `` the cosmological constant and the black hole equation of state , '' http://dx.doi.org/10.1088/0264-9381/28/12/125020 [ _ class.quant.grav ._ * 28 * ( 2011 ) 125020 ] , http://arxiv.org/abs/1008.5023[arxiv:1008.5023 [ gr - qc ] ] .m. cvetic , g. gibbons , d. kubiznak , and c. pope , `` black hole enthalpy and an entropy inequality for the thermodynamic volume , '' http://dx.doi.org/10.1103/physrevd.84.024037[_phys.rev._ * d84 * ( 2011 ) 024037 ] , http://arxiv.org/abs/1012.2888[arxiv:1012.2888 [ hep - th ] ] .b. p. dolan , `` compressibility of rotating black holes , '' http://dx.doi.org/10.1103/physrevd.84.127503[_phys.rev._ * d84 * ( 2011 ) 127503 ] , http://arxiv.org/abs/1109.0198[arxiv:1109.0198 [ gr - qc ] ] . b. p. dolan , `` pressure and volume in the first law of black hole thermodynamics , '' http://dx.doi.org/10.1088/0264-9381/28/23/235017 [ _ class.quant.grav ._ * 28 * ( 2011 ) 235017 ] , http://arxiv.org/abs/1106.6260[arxiv:1106.6260 [ gr - qc ] ] .b. p. dolan , `` where is the pdv term in the fist law of black hole thermodynamics ?, '' http://arxiv.org/abs/1209.1272[arxiv:1209.1272 [ gr - qc ] ]. n. altamirano , d. kubiznak , r. b. mann , and z. sherkatghanad , `` thermodynamics of rotating black holes and black rings : phase transitions and thermodynamic volume , '' http://dx.doi.org/10.3390/galaxies2010089[_galaxies_ * 2 * ( 2014 ) 89159 ] , http://arxiv.org/abs/1401.2586[arxiv:1401.2586 [ hep - th ] ] .d. kubiznak , r. b. mann , and m. teo , `` black hole chemistry : thermodynamics with lambda , '' http://arxiv.org/abs/1608.06147[arxiv:1608.06147 [ hep - th ] ] .m. henneaux and c. teitelboim , `` the cosmological constant as a canonical variable , '' http://dx.doi.org/10.1016/0370-2693(84)91493-x[_phys.lett._ * b143 * ( 1984 ) 415420 ] . c. teitelboim ,`` the cosmological constant as a thermodynamic black hole parameter , '' http://dx.doi.org/10.1016/0370-2693(85)91186-4[_phys.lett._ * b158 * ( 1985 ) 293297 ] .m. henneaux and c. teitelboim , `` the cosmological constant and general covariance , '' http://dx.doi.org/10.1016/0370-2693(89)91251-3[_phys.lett._ * b222 * ( 1989 ) 195199 ] .m. k. parikh , `` the volume of black holes , '' http://dx.doi.org/10.1103/physrevd.73.124021[_phys.rev._ * d73 * ( 2006 ) 124021 ] , http://arxiv.org/abs/hep-th/0508108[arxiv:hep-th/0508108 [ hep - th ] ] . c. v. 
johnson , `` holographic heat engines , '' http://dx.doi.org/10.1088/0264-9381/31/20/205002[_class .* 31 * ( 2014 ) 205002 ] , http://arxiv.org/abs/1404.5982[arxiv:1404.5982 [ hep - th ] ] .j. m. maldacena , `` the large n limit of superconformal field theories and supergravity , '' _ adv .* 2 * ( 1998 ) 231252 , http://arxiv.org/abs/hep-th/9711200[hep-th/9711200 ] .e. witten , `` anti - de sitter space and holography , '' _ adv .* 2 * ( 1998 ) 253291 , http://arxiv.org/abs/hep-th/9802150[hep-th/9802150 ] .s. s. gubser , i. r. klebanov , and a. m. polyakov , `` gauge theory correlators from non - critical string theory , '' _ phys ._ * b428 * ( 1998 ) 105114 , http://arxiv.org/abs/hep-th/9802109[hep-th/9802109 ] .e. witten , `` anti - de sitter space , thermal phase transition , and confinement in gauge theories , '' _ adv ._ * 2 * ( 1998 ) 505532 , http://arxiv.org/abs/hep-th/9803131[hep-th/9803131 ] .o. aharony , s. s. gubser , j. m. maldacena , h. ooguri , and y. oz , `` large n field theories , string theory and gravity , '' http://dx.doi.org/10.1016/s0370-1573(99)00083-6[_phys . rept . _* 323 * ( 2000 ) 183386 ] , http://arxiv.org/abs/hep-th/9905111[arxiv:hep-th/9905111 [ hep - th ] ] . c. v. johnson , `` gauss - bonnet black holes and holographic heat engines beyond large n , '' http://arxiv.org/abs/1511.08782[arxiv:1511.08782 [ hep - th ] ] . c. v. johnson , `` born - infeld ads black holes as heat engines , '' http://arxiv.org/abs/1512.01746[arxiv:1512.01746 [ hep - th ] ] .a. belhaj , m. chabab , h. el moumni , k. masmar , m. b. sedra , and a. segui , `` on heat properties of ads black holes in higher dimensions , '' http://dx.doi.org/10.1007/jhep05(2015)149[_jhep_ * 05 * ( 2015 ) 149 ] , http://arxiv.org/abs/1503.07308[arxiv:1503.07308 [ hep - th ] ] . j. sadeghi and k. jafarzade , `` heat engine of black holes , '' http://arxiv.org/abs/1504.07744[arxiv:1504.07744 [ hep - th ] ] .e. caceres , p. h. nguyen , and j. f. pedraza , `` holographic entanglement entropy and the extended phase structure of stu black holes , '' http://dx.doi.org/10.1007/jhep09(2015)184[_jhep_ * 09 * ( 2015 ) 184 ] , http://arxiv.org/abs/1507.06069[arxiv:1507.06069 [ hep - th ] ] .m. r. setare and h. adami , `` polytropic black hole as a heat engine , '' http://dx.doi.org/10.1007/s10714-015-1979-0[_gen ._ * 47 * ( 2015 ) no . 11 , 133 ] . m. zhang and w .- b . liu , `` f(r ) black holes as heat engines , '' http://dx.doi.org/10.1007/s10773-016-3134-4[_int . j. theor . phys . _ * 55 * ( 2016 ) no . 12 , 51365145 ] . c. bhamidipati and p. k. yerra , `` heat engines for dilatonic born - infeld black holes , '' http://arxiv.org/abs/1606.03223[arxiv:1606.03223 [ hep - th ] ] .wei and y .- x .liu , `` implementing black hole as efficient power plant , '' http://arxiv.org/abs/1605.04629[arxiv:1605.04629 [ gr - qc ] ] .j. sadeghi and k. jafarzade , `` the modified horava - lifshitz black hole from holographic engine , '' http://arxiv.org/abs/1604.02973[arxiv:1604.02973 [ hep - th ] ] . c. v. johnson , `` an exact efficiency formula for holographic heat engines , '' http://dx.doi.org/10.3390/e18040120[_entropy_ * 18 * ( 2016 ) 120 ] , http://arxiv.org/abs/1602.02838[arxiv:1602.02838 [ hep - th ] ] .a. chamblin , r. emparan , c. v. johnson , and r. c. myers , `` charged ads black holes and catastrophic holography , '' _ phys ._ * d60 * ( 1999 ) 064018 , http://arxiv.org/abs/hep-th/9902170[hep-th/9902170 ] .a. chamblin , r. emparan , c. v. johnson , and r. c. 
myers , `` holography , thermodynamics and fluctuations of charged ads black holes , '' _ phys ._ * d60 * ( 1999 ) 104026 , http://arxiv.org/abs/hep-th/9904197[hep-th/9904197 ] .d. kubiznak and r. b. mann , `` p - v criticality of charged ads black holes , '' http://dx.doi.org/10.1007/jhep07(2012)033[_jhep_ * 1207 * ( 2012 ) 033 ] , http://arxiv.org/abs/1205.0559[arxiv:1205.0559 [ hep - th ] ] .cai , l .- m .cao , l. li , and r .- q .yang , `` p - v criticality in the extended phase space of gauss - bonnet black holes in ads space , '' http://dx.doi.org/10.1007/jhep09(2013)005[_jhep_ * 09 * ( 2013 ) 005 ] , http://arxiv.org/abs/1306.6233[arxiv:1306.6233 [ gr - qc ] ] .m. born , `` modified field equations with a finite radius of the electron , '' http://dx.doi.org/10.1038/132282a0[_nature_ * 132 * ( 1933 ) 282282 ] .m. born , `` quantum theory of the electromagnetic field , '' http://dx.doi.org/10.1098/rspa.1934.0010[_proc .lond . _ * a143 * ( 1934 ) 410437 ] .m. born and l. infeld , `` foundations of the new field theory , '' http://dx.doi.org/10.1098/rspa.1934.0059[_proc .lond . _ * a144 * ( 1934 ) 425451 ] .s. fernando and d. krug , `` charged black hole solutions in einstein - born - infeld gravity with a cosmological constant , '' http://dx.doi.org/10.1023/a:1021315214180[_gen .* 35 * ( 2003 ) 129137 ] , http://arxiv.org/abs/hep-th/0306120[arxiv:hep-th/0306120 [ hep - th ] ] .cai , d .- w .pang , and a. wang , `` born - infeld black holes in ( a)ds spaces , '' http://dx.doi.org/10.1103/physrevd.70.124034[_phys ._ * d70 * ( 2004 ) 124034 ] , http://arxiv.org/abs/hep-th/0410158[arxiv:hep-th/0410158 [ hep - th ] ] .t. k. dey , `` born - infeld black holes in the presence of a cosmological constant , '' http://dx.doi.org/10.1016/j.physletb.2004.06.047 [ _ phys .lett . _ * b595 * ( 2004 ) 484490 ] , http://arxiv.org/abs/hep-th/0406169[arxiv:hep-th/0406169 [ hep - th ] ] . | we present the results of initiating a benchmarking scheme that allows for cross comparison of the efficiencies of black holes used as working substances in heat engines . we use a circular cycle in the plane as the benchmark engine . we test it on einstein maxwell , gauss bonnet , and born infeld black holes . also , we derive a new and surprising exact result for the efficiency of a special ideal gas " system to which all the black holes asymptote . arxiv:1612 * avik chakraborty and clifford v. johnson * _ department of physics and astronomy _ _ university of southern california _ _ los angeles , ca 90089 - 0484 , u.s.a . _ avikchak , johnson1 , [ at ] usc.edu |
in this paper we consider semiparametric models defined by conditional mean and conditional variance estimating equations .models defined by estimating equations for the first and second order conditional moments are widely used in applications .see , for instance , ziegler ( 2011 ) for a recent reference . herewe consider a model that extends the framework considered by cui , hrdle and zhu ( 2011 ) . to provide some insight on the type of models we study ,consider the following semiparametric extension of the classical poisson regression model with unobserved heterogeneity : the observed variables are where denotes the count variable and is the vector of explanatory variables .let we assume that there exists such that the parameter and the function are unknown .given and an unobserved error term the variable has a poisson law of mean if and then .\label{mom2}\end{aligned}\ ] ] this model is a semiparametric single - index regression model ( e.g. , powell , stock and stoker ( 1989 ) , ichimura ( 1993 ) , hrdle , hall and ichimura ( 1993 ) , sherman ( 1994b ) ) where a second order conditional moment is specified as a nonlinear function of the conditional mean and an additional unknown parameter .this extends the framework of cui , hrdle and zhu ( 2011 ) where the conditional variance of the response is proportional to a given function of the conditional mean .our first contribution is to propose a new semiparametric estimation procedure for single - index regression which incorporates the additional information on the conditional variance of . for thiswe extend the quasi - generalized pseudo maximum likelihood method introduced by gouriroux , monfort and trognon ( 1984a , 1984b ) to a semiparametric framework .more precisely , we propose to estimate and the function through a two - step pseudo - maximum likelihood ( pml ) procedure based on _ linear exponential families _ _ with nuisance parameter _ densities .such densities are parameterized by the mean and a nuisance parameter that can be recovered from the variance .although we use a likelihood type criterion , no conditional distribution assumption on given is required for deriving the asymptotic results . as an example of application of our procedureconsider the case where is a count variable .first , write the poisson likelihood where the function is replaced by a kernel estimator and maximize this likelihood with respect to to obtain a semiparametric pml estimator of . use this estimate and the variance formula ( [ mom2 ] ) to deduce a consistent moment estimator of in a second step , estimate through a semiparametric negative binomial pml where is again replaced by a kernel estimator and the variance parameter of the negative binomial is set equal to the estimate of finally , given the second step estimate of , build a kernel estimator for the regression . for simplicity, we use a nadaraya - watson estimator to estimate .other smoothers like local polynomials could be used at the expense of more intricate technical arguments .the occurrence of a nonparametric estimator in a pseudo - likelihood criterion requires a rule for the smoothing parameter . while the semiparametric index regression literature contains a large amount of contributions on how to estimate an index , there are much less results and practical solutions on the choice of the smoothing parameter . 
even if the smoothing parameter does not influence the asymptotic variance of a semiparametric estimator of , in practicethe estimate of and of the regression function may be sensitive to the choice of the smoothing parameter .another contribution of this paper is to propose an automatic and natural choice of the smoothing parameter used to define the semiparametric estimator . for this, we extend the approach introduced by hrdle , hall and ichimura ( 1993 ) ( see also xia and li ( 1999 ) , xia , tong and li ( 1999 ) and delecroix , hristache and patilea ( 2006 ) ) .the idea is to maximize the pseudo - likelihood simultaneously in and the smoothing parameter , that is the bandwidth of the kernel estimator .the bandwidth is allowed to belong to a large range between and . in some sense, this approach considers the bandwidth an auxiliary parameter for which the pseudo - likelihood may provide an estimate . using a suitable decomposition of the pseudo - log - likelihoodwe show that such a joint maximization is asymptotically equivalent to separate maximization of a purely parametric ( nonlinear ) term with respect to and minimization of a weighted ( mean - squared ) cross - validation function with respect to the bandwidth .the weights of this cross - validation function are given by the second order derivatives of the pseudo - log - likelihood with respect to .we show that the rate of our ` optimal ' bandwidth is , as expected for twice differentiable regression functions .the paper is organized as follows . in section [ metoda ]we introduce a class of semiparametric pml estimators based on linear exponential densities with nuisance parameter and we provide a natural bandwidth choice .moreover , we present the general methodology used for the asymptotics .section [ rezultat ] contains the asymptotic results .a bound for the variance of our semiparametric pml estimators is also derived . in section [ twostep ]we use the semiparametric pml estimators to define a two - step procedure that can be applied in single - index regression models where an additional variance condition like ( [ mom2 ] ) is specified .section [ simulsec ] examines the finite - sample properties of our procedure via monte carlo simulations .we compare the performances of a two - step generalized least - squares with those of a negative binomial pml in a poisson single - index regression model with multiplicative unobserved heterogeneity . even if the two procedures considered lead to asymptotically equivalent estimates , the latter procedure seems preferable in finite samples .an application to real data on the frequency of recreational trips ( see cameron and trivedi ( 2013 ) , page 246 ) is also provided .section [ concl ] concludes the paper .the technical proofs are postponed to the appendix .[ metoda ] consider that the observations are independent copies of the random vector assume that there exists , unique up to a scale normalization factor , such that the single - index model ( sim ) condition holds . in this paper , we focus on single - index models where the conditional second order moment of given is a known function of ] ( denotes the derivative with respect to the argument ) recall that for any given the following identity holds : if is fixed , a lefn becomes a linear exponential family ( lef ) of densities .gouriroux , monfort and trognon ( 1984a , 1984b ) used lefn densities to define a two - step pml procedure in nonlinear regression models where a specification of the conditional variance is given . 
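As an illustration of the family just described, the sketch below writes down the log-density of the negative binomial LEFN member with mean m and nuisance parameter alpha, so that the variance is m(1 + alpha*m); the Poisson case is recovered in the limit alpha -> 0. The parameterisation is standard, but the helper itself is only illustrative and is not part of the original text.

```python
import numpy as np
from scipy.special import gammaln

def negbin_logpdf(y, m, alpha):
    """log f(y | m, alpha) for the negative binomial with mean m and variance m(1 + alpha*m)."""
    if alpha <= 0.0:
        # alpha -> 0 limit: Poisson log-density with mean m
        return y * np.log(m) - m - gammaln(y + 1.0)
    r = 1.0 / alpha                      # "size" parameter of the negative binomial
    return (gammaln(y + r) - gammaln(r) - gammaln(y + 1.0)
            + r * np.log(r / (r + m)) + y * np.log(m / (r + m)))
```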
herein , we extend their approach to a semiparametric framework . in the case of the sim defined by equation ( [ mom2 ] ) , the conditional variance is given by with and in this case take which define a negative binomial distribution of mean and variance .note that the limit case corresponds to a poisson distribution . as another example ,consider with and now , take the lefn density given by and which is the density of a gamma law of mean and variance . in order to define our semiparametric pml estimator in the presence of a nuisance parameterlet us introduce some notation : given a sequence of numbers growing slowly to infinity ( e.g. , ) , let be the range from which the ` optimal ' bandwidth will be chosen .define the set , with some sequence decreasing to zero .let be some real value of the nuisance parameter typically , if the conditional variance formula ( [ var1 ] ) is correctly specified . otherwise , is some pseudo - true value of the nuisance parameter .suppose that a sequence such that , in probability , is given .set where ] for some and to be more precise , define and by little algebra , for all and where let with or without loss of generality , consider that ( since is the logarithm of a lefn density for any given and , the map attains its maximum at thus , up to a translation with a function depending only on and we may consider ) in this case we have we show that uniformly over and uniformly in provided that and on the other hand , we prove that provided that slowly enough and faster than and slower than some .( see lemma [ idic ] in the appendix ; in that lemma we distinguish two types of assumptions depending on whether is bounded or not . )deduce that is asymptotically equivalent to the maximizer of over therefore , hereafter , we simply write instead of and we consider the semiparametric pseudo - log - likelihood can be split into a purely parametric ( nonlinear ) part , a purely nonparametric one and a reminder term , where i_{a}\left ( z_{i}\right ) , \label{deco } \\ t\left ( h;\alpha ^{\ast } \right ) & = \dfrac{1}{n}\sum\limits_{i=1}^{n}\psi \left ( y_{i},\hat{r}_{h}^{i}\left ( z_{i}^{t}\theta _ { 0};\theta _ { 0}\right ) ; \alpha ^{\ast } \right ) i_{a}\left ( z_{i}\right ) , \notag \\ r\left ( \theta , h;\widetilde{\alpha } _ { n}\right ) & = \dfrac{1}{n } \sum\limits_{i=1}^{n}\left [ \psi \left ( y_{i},\hat{r}_{h}^{i}\left ( z_{i}^{t}\theta ; \theta \right ) ; \widetilde{\alpha } _ { n}\right ) -\psi \left ( y_{i},r\left ( z_{i}^{t}\theta ;\theta \right ) ; \widetilde{\alpha } _ { n}\right ) \right ] i_{a}\left ( z_{i}\right ) \notag \\& \hspace*{0.5 cm } -\dfrac{1}{n}\sum\limits_{i=1}^{n}\left [ \psi \left ( y_{i},\hat{r } _ { h}^{i}\left ( z_{i}^{t}\theta _ { 0};\theta _ { 0}\right ) ; \alpha ^{\ast}\right ) -\psi \left ( y_{i},r\left ( z_{i}^{t}\theta _ { 0};\theta _ { 0}\right ) ; \alpha ^{\ast } \right ) \right ] i_{a}\left ( z_{i}\right ) \notag\end{aligned}\ ] ] ( see hrdle , hall and ichimura ( 1993 ) for a slightly different splitting ) . 
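Before analysing this decomposition, it may help to see the object being maximised in schematic form: a leave-one-out Nadaraya-Watson estimate of the index regression is plugged into a LEFN log-density (for instance negbin_logpdf above) and the resulting pseudo-log-likelihood is treated as a function of both the index parameter and the bandwidth. The quartic kernel, the crude trimming rule and the plain grid search below are implementation choices of ours, meant only to convey the idea of the joint maximisation rather than to reproduce the estimation routine of the paper.

```python
import numpy as np

def nw_loo(index, y, h, i):
    """Leave-one-out Nadaraya-Watson estimate of E[y | index] at index[i] (quartic kernel)."""
    u = (index - index[i]) / h
    w = np.where(np.abs(u) < 1.0, (15.0 / 16.0) * (1.0 - u**2) ** 2, 0.0)
    w[i] = 0.0
    return np.sum(w * y) / max(np.sum(w), 1e-12)

def pseudo_loglik(theta, h, z, y, alpha, logdens):
    """Semiparametric pseudo-log-likelihood evaluated at a given (theta, h)."""
    index = z @ theta
    total = 0.0
    for i in range(len(y)):
        r_hat = nw_loo(index, y, h, i)
        if r_hat > 1e-8:                 # crude stand-in for the trimming indicator
            total += logdens(y[i], r_hat, alpha)
    return total / len(y)

def fit(z, y, alpha, logdens, theta_grid, h_grid):
    """Joint maximisation over the index parameter and the bandwidth (plain grid search)."""
    best = max(((pseudo_loglik(th, h, z, y, alpha, logdens), th, h)
                for th in theta_grid for h in h_grid), key=lambda t: t[0])
    return best[1], best[2]
```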
given this decomposition ,the simultaneous optimization of is asymptotically equivalent to separately maximizing with respect to and with respect to , provided that is sufficiently small .a key ingredient for proving that is negligible with respect to and uniformly in and for any is represented by the orthogonality conditions = 0 \label{orth1}\ ] ] and = 0,\ ] ] that must hold for any where denotes the derivative with respect to the second argument of and is the derivative with respect to all occurrences of that is given , and ( see also sherman ( 1994b ) and delecroix , hristache and patilea ( 2006 ) for similar conditions ) . if with then and thus ( [ orth1 ] ) is a consequence of the sim condition ( [ sim ] ) . to check the second orthogonality condition note that = e\left [ \partial _ { 22}^{2}\psi \left ( y,\;r\left ( z^{t}\theta _ { 0};\theta _ { 0}\right ) ; \alpha \right ) \mid z^{t}\theta _ { 0}\right]\ ] ] and = e\left [ r^{\prime } \left ( z^{t}\theta _ { 0};\theta _ { 0}\right ) \left ( z - e\left [ z\mid z^{t}\theta _ { 0}\right ] \right ) \mid z^{t}\theta _ { 0}\right ] , \ ] ] where is the derivative of the last identity is always true under the sim condition ( e.g. , newey ( 1994 ) , page 1358) let us point out that conditions ( [ orth1])-([orth2 ] ) hold even if the variance condition ( [ var1 ] ) is misspecified . since is negligible with respect to and not contain the parameter of interest , the asymptotic distribution of will be obtained by standard arguments used for in the presence of nuisance parameters applied to the objective function .we deduce that behaves as follows :i ) if the sim condition ( [ sim ] ) holds and for some then is asymptotically normal ; ii ) if sim condition holds , the conditional variance ( [ var1 ] ) is correctly specified and then is asymptotically normal and it has the lowest variance among the semiparametric pml estimators based on lef densities . in any case , the asymptotic distribution of does not depend on the choice of let us point out that in our framework we only impose convergent in probability without asking a rate of convergence as it is usually supposed for in the presence of nuisance parameters .this because the usual orthogonality condition = 0 ] as well as the asymptotic behavior of , with defined in ( [ deeff ] ) .a consistent estimator for the asymptotic variance matrix of is proposed moreover , a lower bound for the asymptotic variance matrix of is derived . 
for the identifiability of the parameter of interest , hereafter fix its first component ,that is therefore , we shall implicitly identify a vector with its last components and redefine the symbol as being the vector of the first order partial derivatives with respect to the last components of let if the sim assumption and variance condition ( [ var1 ] ) hold , then for a given let and denote the first and second order derivatives of the function similarly , is the derivative of define thus , can be replaced by in the definition of the constants and ^{2}\ i_{a}\left ( z\right ) \right\ } \notag \\ c_{2 } & = -\,k_{2}\ e\left\ { \frac{1}{2}\ \partial _ { r}c\left ( r\left ( z^{t}\theta _ { 0};\theta _ { 0}\right ) ; \alpha ^{\ast } \right ) \,\,\frac{1}{f\left ( z^{t}\theta _ { 0};\theta _ { 0}\right ) } \ v\left ( z^{t}\theta_{0};\theta _ { 0}\right ) \ i_{a}\left ( z\right ) \right\ } , \notag\end{aligned}\ ] ] with , and consider the matrices ^{2}v\left ( z^{t}\theta_{0};\theta _ { 0}\right ) \partial _ { \theta } r\left ( z^{t } \theta _ { 0};\theta_{0}\right ) \partial _ { \theta } r\left ( z^{t}\theta _ { 0};\theta _ { 0}\right ) ^{t}i_{a}\left ( z\right ) \right\}\ ] ] .\ ] ] note that if the variance condition ( [ var1 ] ) holds and now , we deduce the asymptotic normality of the semiparametric pml estimator in the presence of a nuisance parameter .moreover , we obtain the rate of decay to zero of the ` optimal ' bandwidth the proof of the following result is given in appendix refproof .[ param1 ] suppose that the assumptions in appendix [ assu ] hold .define the set , , with and such that .fix if is defined as in ( [ deeff])-([deeff2 ] ) , then in probability , and if is bounded , the same conclusion remains true for any sequence . in applications is unknown and therefore it has to be consistently estimated . to this end, we propose an usual sandwich estimator of the asymptotic variance ( e.g. , ichimura ( 1993 ) ) .let denote the kernel estimator for the density of define ^{2}\left [ y_{i}-\widehat{r}_{\widehat{h}}\left ( z_{i}^{t}\widehat{\theta } ; \widehat{\theta } \right ) \right ] ^{2 } \\_ { \theta } \widehat{r}_{\widehat{h}}\left ( z_{i}^{t}\widehat{\theta } ; \widehat{\theta } \right ) \partial _ { \theta } \widehat{r}_{\widehat{h } } \left ( z_{i}^{t } \widehat{\theta } ; \widehat{\theta } \right ) ^{t}i_{\left\{z:\,\widehat{f}_{\widehat{h}}\left ( z^{t}\widehat{\theta } ; \,\widehat{\theta } \right ) \geq c\right\ } } ( z_{i})\end{gathered}\ ] ] [ estasv ] suppose that the conditions of theorem [ param1 ] hold .then , in probability .the arguments are quite standard ( e.g. , ichimura ( 1993 ) , section 7 ) . on one hand , the convergence in probability of and and , on the other hand , the convergence in probability of and uniformly over in neighborhoods shrinking to and uniformly over ( e.g. 
, andrews ( 1995 ) , delecroix , hristache and patilea ( 2006 ) ) imply and in probability .theorem [ param1 ] shows , in particular , that is asymptotically equivalent to the semiparametric pml based on the lef pseudo - log - likelihood as in the parametric case , we can deduce a lower bound for the asymptotic variance with respect to semiparametric pml based on lef densities .this bound is achieved by if the sim assumption and the variance condition ( [ var1 ] ) hold and the proof of the following proposition is identical to the proof of property 5 of gouriroux , monfort and trognon ( 1984a , page 687 ) and thus it will be skipped .[ bound ] the set of asymptotic variance matrices of the semiparametric pml estimators based on linear exponential families has a lower bound equal to , where ^{-1}\partial _ { \theta } r\left ( z^{t}\theta _ { 0};\theta _ { 0}\right ) \partial _ { \theta } r\left ( z^{t}\theta_{0 } ; \theta _ { 0}\right ) ^{t}i_{a}\left ( z\right ) \right\ } .\ ] ] concerning the nonparametric part , we have the following result on theasymptotic distribution of the nonparametric estimator of the regression .the proof is omitted ( see hrdle and stoker ( 1989 ) ) .[ nonpar ] assume that the conditions of theorem [ param1 ] are fulfilled .then , for any such that where . ] on the other hand , if then ( cf . property 4 , gouriroux , monfort and trognon ( 1984a , page 684 ) ) .deduce that for any , \ ] ] and is the unique maximizer .hence , condition ( [ idfg ] ) holds for any set this leads us to the following definition of a preliminary estimator. * step 1 ( preliminary step ) . *consider a sequence of bandwidths such that and for some moreover , let be a lef density .delecroix , hristache and patilea ( 2006 ) showed that , under the regularity conditions required by theorem [ param1 ] , we have using the preliminary estimate and the variance condition ( [ gmm ] ) we can build such that , in probability ( see the end of this section ) .let denote a lefn density with mean and variance consider ( e.g. , ) , define .moreover , consider , with as in theorem [ param1 ] .fix some small * step 2 . *define with and from step 1 .the following result is a direct consequence of theorem [ param1 ] .[ 2step ] suppose that the assumptions of theorem [ param1 ] hold .if and are obtained as in step 2 above , then with ^{-1}\partial _ { \theta } r\left ( z^{t}\theta _ { 0};\theta _ { 0}\right ) \partial _ { \theta } r\left ( z^{t}\theta _ { 0};\theta _ { 0}\right ) ^{t}i_{a}\left ( z\right ) \right\ } .\ ] ] moreover , in probability , where and are defined as in ( [ a1a2 ] ) with _ remark 1 ._ let us point out that simultaneous optimization of the semiparametric criterion in step 1 with respect to , and ( or with respect to and for a given ) is not recommended , even if the conditional variance is correctly specified .indeed , if the true conditional distribution of given is not the one given by the lefn density joint optimization with respect to and leads , in general , to an inconsistent estimate of ( this failure is well - known in the parametric case where is a known function ; see comments of cameron and trivedi ( 2013 ) , pages 84 - 85 . in view of decomposition ( [ deco ] ) we deduce that this fact also happens in the semiparametric framework where has to be estimated . 
) in this case the matrices and defined in section [ rezultat ] are no longer equal and thus the asymptotic variance of the one - step semiparametric estimator of obtained by simultaneous maximization of the criterion in step 1 with respect to , does not achieve the bound .however , when the sim condition holds and the true conditional law of is given by the lefn density our two - step estimator and the semiparametric mle of obtained by simultaneous optimization with respect to are asymptotically equivalent ._ remark 2 ._ note that if we ignore the efficiency loss due to trimming , is equal to the efficiency bound in the semiparametric model defined _ only _ by the single - index condition when the variance condition ( [ gmm ] ) holds . to see this ,apply the bound of newey and stoker ( 1993 ) with the true variance given by ( [ gmm ] ) .our two - stage estimator achieves this sim efficiency bound ( if the variance is well - specified ) .however , this sim bound is not necessarily the two moment conditions model bound .the latter should take into account the variance condition ( see newey ( 1993 ) , section 3.2 , for a similar discussion in the parametric nonlinear regression framework ) .in other words our two - stage estimator has some optimality properties but it may not achieve the semiparametric efficiency bound of the two moment conditions model .the same remark applies for the two - stage semiparametric generalized least squares ( gls ) procedure of hrdle , hall and ichimura ( 1993 ) [ see also picone and butler ( 2000 ) ] . achieving semiparametric efficiency when the first two moments are specified would be possible , for instance , by estimating higher orders conditional moments nonparametrically .however , in this case we face again the problem of the curse of dimensionality that we tried to avoid by assuming the sim condition .to complete the definition of the two - step procedure above , we have to indicate how to build a consistent sequence . such a sequence can be obtained from the moment condition ( [ gmm ] ) after replacing by a suitable estimator .this kind of procedure is commonly used in the semiparametric literature ( e.g. , newey and mcfadden ( 1994 ) ) . for simplicity ,let us only consider the negative binomial case where , for any we have = r\left ( z^{t}\theta_{0 } ; \theta _ { 0}\right ) \left [ 1+\alpha _ { 0}r\left ( z^{t}\theta_{0};\theta _ { 0}\right ) \right ] .\ ] ] consider a set such that , for any and any we have we can write i_{b}\left ( z\right ) \right\ } = \alpha_{0 } e\left\ { r\left ( z^{t}\theta_{0};\theta _ { 0}\right ) ^{2}i_{b}\left ( z\right ) \right\ } .\ ] ] consequently , we may estimate .this is indeed confirmed by the simulation experiments we report in section [ simulsec ] . ] by i_{b}\left ( z_{i}\right ) } { \frac{1}{n } \sum_{i=1}^{n}\widehat{r}_{h_{n}}\left ( z_{i}^{t}\theta _ { n};\theta _ { n}\right ) ^{2}i_{b}\left ( z_{i}\right ) } \label{nuis}\]]with and from step 1 and the nadaraya - watson estimator with bandwidth .since deduce that in probability ( see also the arguments we used in subsection [ relax ] ) .now , let us comment on what happens with our two - step procedure if the second order moment condition is misspecified , while the sim condition still holds .in general , the sequence one may derive from the conditional variance condition and the preliminary estimate of is still convergent to some pseudo - true value of the nuisance parameter . 
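Before turning to what happens under misspecification, here is a small sketch of the moment estimator of the nuisance parameter defined in ([nuis]): given the first-step index estimate and the leave-one-out smoother sketched earlier, alpha is estimated by the ratio of the two trimmed empirical moments appearing in that display. Replacing the trimming set B by a simple condition on the fitted regression, and keeping the estimate nonnegative (the text further suggests a small positive floor), are implementation choices of ours.

```python
import numpy as np

def estimate_alpha(index, y, h, trim=1e-3):
    """Moment estimator of alpha from E[((y - r)^2 - r) I_B] = alpha * E[r^2 I_B]."""
    n = len(y)
    r_hat = np.array([nw_loo(index, y, h, i) for i in range(n)])   # first-step regression fit
    keep = r_hat > trim                                            # crude stand-in for I_B(z)
    num = np.mean(((y - r_hat) ** 2 - r_hat)[keep])
    den = np.mean((r_hat ** 2)[keep])
    return max(num / den, 0.0)       # nonnegative estimate; a small positive floor may be imposed
```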
defined in ( [ nuis ] ) is convergent in probability to -e[r\left ( z^{t}\theta _ { 0};\theta _ { 0}\right ) i_{b}(z)]}{e[r(z^{t}\theta _ { 0};\theta _ { 0})^{2 } i_{b}(z)]}.\ ] ] to ensure that the limit of is positive , one may replace by for some small but positive then , the behavior of yielded by step 2 is described by theorem [ param1 ] , that is is still normal and is still of order finally , if the sim condition does not hold , then estimates a kind of first projection - pursuit direction . in this case, our procedure provides an alternative to minimum average ( conditional ) variance estimation ( mave ) procedure of xia _ et al_. ( 2002 ) .the novelty would be that the first projection direction is defined through a more flexible pml function than the usual least - squares criterion .this case will be analyzed elsewhere .in our empirical section we consider the case of a count response variable .a benchmark model for studying event counts is the poisson regression model .different variants of the poisson regression have been used in applications on the number of patents applied for and received by firms , bank failures , worker absenteeism , airline or car accidents , doctor visits , _ etc . _cameron and trivedi ( 2013 ) provide an overview of the applications of poisson regression . in the basic setup ,the regression function is log - linear .an additional unobserved multiplicative random error term in the conditional mean function is usually used to account for unobserved heterogeneity . in this sectionwe consider semiparametric single - index extensions of such models . to evaluate the finite sample performances of our estimator and of the optimal bandwidth , we conduct a simulation experiment with 500 replications .we consider three explanatory variables with {3\times3} ] . to estimate the parameter and the regression we use two semiparametric two - step estimation procedures as defined in section [ twostep ] : i ) a procedure with a poisson pml in the first step and a negative binomial pml in the second step ; let denote the two - step estimator .ii ) a procedure with a least - squares method in the first step and a gls method in the second step ; let be the two - step estimator . note that and have the same asymptotic variance . in both two - step procedures considered , we estimate using the estimator defined in ( [ nuis ] ) .the bandwidth is equal to we also consider the parametric two - step gls method as a benchmark . in this casethe link function and the variance parameter are considered given ;let denote the corresponding estimator .table 2 . the same setup as in table 1 but with and the true conditional variance of given equal to [ cols="^,^,^,^,^,^,^,^,^,^ " , ]we consider a semiparametric single - index model ( sim ) where an additional second order moment condition is specified . 
to estimate the parameter of interest we introduce a two - step semiparametric pseudo - maximum likelihood ( pml ) estimation procedure based on linear exponential families with nuisance parameter densities .this procedure extends the quasi - generalized pseudo - maximum likelihood method proposed by gouriroux , monfort and trognon ( 1984a , 1984b ) .we also provide a natural rule for choosing the bandwidth of the nonparametric smoother appearing in the estimation procedure .the idea is to maximize the pseudo - likelihood of the second step simultaneously in and the smoothing parameter .the rate of the bandwidth is allowed to lie in a range between and .we derive the asymptotic behavior of , the two - step semiparametric pml we propose . if the sim condition holds , then is normal .we also provide a consistent estimator of its variance . when the sim condition holds and the conditional variance is correctly specified , then has the best variance amongst the semiparametric pml estimators .the ` optimal ' bandwidth obtained by joint maximization of the pseudo - likelihood function in the second step is shown to be equivalent to the minimizer of a weighted cross - validation function . from thiswe deduce that converges to a positive constant , in probability .in particular , our optimal bandwidth has the rate expected when estimating a twice differentiable regression function nonparametrically .we conduct a simulation experiment in which the data were generated using a poisson single - index regression model with multiplicative unobserved heterogeneity .the simulation confirms the significant advantage of estimators that incorporate the information on the conditional variance .we also applied our semiparametric approach to a benchmark real count data set and we obtain a much better fit than the standard parametric regression models for count data .let with a compact subset of with nonvoid interior . depending on the context , is considered a subset of or a subset of [ asiid]the observations are independent copies of a random vector let there exists a unique interior point of such that [ as41]for every , the random variable admits a density with respect to the lebesgue measure on .[ as42] < \infty , ] such that [ as45 ] a ) the function satisfies a lipschitz condition , that is there exists ] is a lefn density with mean and variance ^{-1}. ] and = 0 .\ ] ]let , ] and such that then the proof of lemma [ lemrate ] can be distilled from many existing results ( e.g. , andrews ( 1995 ) , sherman ( 1994b ) , delecroix , hristache and patilea ( 2006 ) ) and therefore it will be omitted . 
[ idic ] a ) if then where and \b ) suppose that and satisfy the assumptions of lemma [ lemrate ] for some .moreover , assume that either i ) is bounded and or , ii ) < \infty ] .then \a ) we have for any and we can write and which proves the inequality .b ) it suffices to prove that first consider the case of unbounded note that , for any and this inequality and lemma [ lemrate ] and write the other hand , we can write \\ & \leq & n\frac{e^{\lambda } e\left [ \exp \left ( \lambda \left\| z_{i}\right\| \right ) \right ] } { \exp \left ( \lambda \delta _ { n}^{1/a}/d_{n}\right ) } .\end{aligned}\]]since and deduce that if lies in a compact , condition ( [ lip1 ] ) implies that for any in the support of , with some constant independent of in this case thus provided that such that and the proofs of the following three lemmas are lengthy and technical .these proofs are provided in delecroix , hristache and patilea ( 2006 ) and therefore it will be omitted herein .the key ingredients for the three proofs are the results on uniform rates of convergence for indexed by euclidean families ; see sherman ( 1994a ) .see also pakes and pollard ( 1989 ) for the definition and the properties of euclidean families of functions .the first of the three lemmas is a refined version of a standard result for cross - validation in nonparametric regression ( e.g. , hrdle and marron ( 1985 ) ) .the result holds uniformly in , and for in , ] , for some , and ] . for and define ^{2}\!i_{\left\ { z:\,f\left ( z^{t}\theta ; \theta \right ) \geq c\right\ } } \left ( z_{i}\right ) \quad ; \]]the kernel is a continuous probability density function with the support in . ] and < \infty ] with and uniformly in for the moment that are defined by maximization of which is defined with the fixed trimming ( see equation ( [ infeasi ] ) ) .at the end of the proof we show that the same conclusions hold for defined in ( [ deeff ] ) with the data - driven trimming . _part i : _ normality of _ _ _ _ _ _ by the decomposition ( [ deco ] ) we have objective is to show that is negligible when compared to from which we deduce that behaves as the maximizer of .define i_{a}\left ( z_{i}\right ) \;\]]and use taylor expansion to write + \left [ r_{1}\left ( \theta _ { 0},h;\widetilde{\alpha } _ { n}\right ) -r_{1}\left( \theta _ { 0},h;\alpha ^{\ast } \right ) \right ] .\end{aligned}\]]apply lemma [ rr ] to obtain the order of next , note that does not depend on .deduce that \times o_{p}\left ( \left\| \theta -\theta _ { 0}\right\| \right ) \\ & & + \left [ \!o\left ( h^{2}\right ) + o_{p}\left ( \frac{1}{\sqrt{n}h^{2}}\right ) \!\right ] \times o_{p}\left ( \left\| \theta -\theta _ { 0}\right\| ^{2}\right ) \\ & & + \left\ { \text{terms not depending on } \theta \right\ } , \end{aligned}\]]uniformly in , ] , for some small .note that shrinks to the set as therefore , the constants appearing in the dominating terms of the decomposition of vanishes as , provided that = 0 $ ] .consequently , the orders are transformed in orders and thus uniformly in , and a sequence convergent to in probability , provided that .the proof is complete .newey , w.k .efficient estimation of models with conditional moment restrictions , in g.s .maddala , c.r .rao and h.d .vinod ( eds . ) _ handbook of statistics , vol .11 , _ pp .419- 454 , new - york : north - holland .newey , w.k . and mcfadden , d. ( 1994 ) . large sample estimation and hypothesis testing , in r.f .engle and d.l .mcfadden ( eds . 
) _ handbook of econometrics , vol . iv , _ pp . 2111 - 2245 , new - york : north - holland . | we propose a two - step pseudo - maximum likelihood procedure for semiparametric single - index regression models where the conditional variance is a known function of the regression and an additional parameter . the poisson single - index regression with multiplicative unobserved heterogeneity is an example of such models . our procedure is based on linear exponential densities with nuisance parameter . the pseudo - likelihood criterion we use contains a nonparametric estimate of the index regression and therefore a rule for choosing the smoothing parameter is needed . we propose an automatic and natural rule based on the joint maximization of the pseudo - likelihood with respect to the index parameter and the smoothing parameter . we derive the asymptotic properties of the semiparametric estimator of the index parameter and the asymptotic behavior of our ` optimal ' smoothing parameter . the finite sample performances of our methodology are analyzed using simulated and real data . * keywords : * semiparametric pseudo - maximum likelihood , single - index model , linear exponential densities , bandwidth selection .
it is usually not seriously discussed in normal textbooks on quantum mechanics about _ how to prepare an initial state_. it is , however , becoming an important subject not only from a view point of foundation of quantum mechanics , but also from a practical point of view , since we are rushing towards experimental realizations of the ideas for quantum information and computation . without establishing particular initial states assumed in several algorithms , we can not start any processes of the attractive ideas .state preparation is one of the key elements to quantum information processing , and there are several theoretical proposals and experimental attempts . in the ideas for quantum information and computation , quantum states with high coherence , especially _ entangled states _ , play significant and essential roles . butsuch `` clean '' states required for quantum information technologies are not easily found in nature , since many of them are fragile against environmental perturbations and suffer from _decoherence_. therefore , there would often be a demand for preparing a desired _ pure state _ out of an arbitrary _ mixed state_. several schemes have been proposed for it , which are called `` purification , '' `` distillation , '' `` concentration , '' `` extraction , '' etc . .one of the simplest and easiest ways of state preparation is to resort to a projective measurement : a quantum system shall be in a pure state after it is measured and confirmed to be in the state .such a strategy is not possible , however , in cases where the desired state can not be directly measured or where the relevant system is not available after the confirmation .this is often the case for entangled states , which are the key resources to quantum information and computation .this is why more elaborate purification protocols are required and several schemes of _ entanglement purification/ preparation _ have been proposed . recently , a novel mechanism to purify quantum states has been found and reported : _ purification through zeno - like measurements _ .a pure state is extracted in a quantum system through a series of repeated measurements ( zeno - like measurements ) on another quantum system in interaction with the former .since the relevant system to be purified is not directly measured in this scheme , it would be suitable for such situations mentioned above . in this article, we discuss this scheme in detail and explore , on a heuristic basis , its potential as a useful and effective method of purification of qubits . the examples considered here are quite simple but still possess potential and practical applicability .this article is organized as follows .first , the basic framework of the purification is described in a general setting , and the conditions for the purification and its optimization are summarized in sec .[ sec : framework ] , where some details which are not discussed in the first report are included .it is then demonstrated in sec .[ sec : single ] how it works and how it can be made optimal in a simplest example , i.e. , _ single - qubit purification _ , and a generalization to a multi - qubit case is considered in sec .[ sec : initialization ] , which would afford us a useful method of _ initialization of multiple qubits_. 
one of the interesting applications of the present scheme is _ entanglement purification _ , which is discussed in sec .[ sec : entanglementpurification ] and shown to be actually possible .concluding remarks are given in sec .[ sec : summary ] with some comments on possible extensions and future subjects .appendices a e are supplied in order to demonstrate detailed calculations and proofs , that are not described in the text .let us recapitulate the framework of the purification reported in .we consider two quantum systems x and a interacting with each other ( fig .[ fig : coupledsystem ] ) . the total system x+a is initially in a _ mixed _ state , from which we try to extract a pure state in a by controlling x. we first perform a measurement on x ( the zeroth measurement ) _ to confirm that it is in a state . if it is found in the state , the state of the total system is projected by the projection operator to yield where is the state of a after this zeroth confirmation and is the probability for this to happen .we then let the total system start to evolve under a total hamiltonian and repeat the same measurement on x at regular time intervals .after repetitions of successful confirmations , i.e. , after x is confirmed to be in the state _ successively _ times , the state of the total system , , is cast into the following form : [ eqn : state ] \\ & = { { |{\phi}\rangle}_{\text{x}}\hspace*{-0.2truemm}{\langle{\phi}|}}\otimes \varrho_\text{a}^{(\tau)}(n ) , \label{eqn : statetotal}\displaybreak[0]\\ \varrho_\text{a}^{(\tau)}(n ) & = \bm{(}v_\phi(\tau)\bm{)}^n\varrho_\text{a } \bm{(}v_\phi^\dag(\tau)\bm{)}^n/\tilde{p}^{(\tau)}(n ) , \label{eqn : statea}\end{aligned}\ ] ] where , defined by is a projected time - evolution operator acting on the hilbert space of a , and is the normalization factor , \nonumber\displaybreak[0]\\ & = { \mathop{\text{tr}}\nolimits}_\text{a}[\bm{(}v_\phi(\tau)\bm{)}^n\varrho_\text{a } \bm{(}v_\phi^\dag(\tau)\bm{)}^n ] .\label{eqn : yield}\end{aligned}\ ] ] note that we retain only those events where x is found in the state at _ every _ measurement ( including the zeroth one ) ; other events , resulting in failure to purify a , are discarded .the normalization factor multiplied by , i.e. , , is nothing but the probability for the _ successful events _ and is the probability of obtaining the state given in ( [ eqn : state ] ) . for definiteness ,let us restrict ourselves on finite - dimensional systems throughout this article and consider the spectral decomposition of the operator .since the operator is not a hermitian operator , we should set up both right and left eigenvalue equations the eigenvalues are complex in general and bounded as ( see appendix [ app : bound ] ) .here we assume for simplicity that the spectrum of the operator is not degenerate .in such a case , the eigenvectors are orthogonal to each other in the sense and form a complete set in the hilbert space of system a , which readily leads to the spectral decomposition of the operator , ( in the following , we also normalize the right eigenvectors as . ) even in a general situation where the spectrum of the operator is degenerate , the diagonalization ( [ eqn : spectraldecomp ] ) is possible when and only when all the right eigenvectors are linearly independent of each other and form a complete basis . 
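for reference, the key relations of the scheme, which are garbled in the extraction above, can be restated from the surviving latex fragments (the left-hand side of the first displayed line and the identity operators are supplied from context):

\begin{aligned}
V_\phi(\tau) &= \bigl({}_{\mathrm{X}}\langle\phi|\otimes\mathbb{1}_{\mathrm{A}}\bigr)\, e^{-iH\tau}\,\bigl(|\phi\rangle_{\mathrm{X}}\otimes\mathbb{1}_{\mathrm{A}}\bigr),\\
\varrho^{(\tau)}(n) &= |\phi\rangle_{\mathrm{X}}\langle\phi| \otimes \varrho_{\mathrm{A}}^{(\tau)}(n), \qquad
\varrho_{\mathrm{A}}^{(\tau)}(n) = \frac{\bigl(V_\phi(\tau)\bigr)^{n}\,\varrho_{\mathrm{A}}\,\bigl(V_\phi^{\dagger}(\tau)\bigr)^{n}}{\tilde{p}^{(\tau)}(n)},\\
\tilde{p}^{(\tau)}(n) &= \operatorname{tr}_{\mathrm{A}}\!\bigl[\bigl(V_\phi(\tau)\bigr)^{n}\,\varrho_{\mathrm{A}}\,\bigl(V_\phi^{\dagger}(\tau)\bigr)^{n}\bigr],
\end{aligned}

so that the probability of the whole run of successful confirmations is p^{(\tau)}(n) = p^{(\tau)}(0)\,\tilde{p}^{(\tau)}(n). as noted above, the diagonal (spectral) form of v_\phi(\tau) requires its right eigenvectors to form a complete basis.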
otherwise , the spectral decomposition is not like ( [ eqn : spectraldecomp ] ) , but in the `` jordan canonical form '' .the diagonalizability of the operator is , however , not an essential assumption as clarified in appendix [ app : jordandecomposition ] .it is now easy to observe the asymptotic behavior of the state of a , in ( [ eqn : statea ] ) .since the eigenvalues are bounded like ( [ eqn : bound ] ) , each term in the expansion decays out and a single term dominates asymptotically as the number of measurements , , increases , _ provided _\text{\textit{unique , discrete and non\-degenerate}}. \label{eqn : condition}\end{gathered}\ ] ] [ the word `` unique '' means that there is only one eigenvalue that has the maximum modulus and `` nondegenerate '' means that there is only one right eigenvector ( and a corresponding left eigenvector ) belonging to that maximal ( in magnitude ) eigenvalue . ]thus , the state of a in ( [ eqn : statea ] ) approaches a pure state , this is the purification scheme proposed recently : extraction of a pure state through a series of repeated measurements on x. since we repeat measurements ( on x ) as in the case of the quantum zeno effect , we call such measurements `` zeno - like measurements '' . the final pure state is the eigenstate of the projected time - evolution operator belonging to the largest ( in magnitude ) eigenvalue and depends on the parameters , , and those in the hamiltonian .it is , however , independent of the initial state .the pure state is extracted from an _ arbitrary _ mixed state through the zeno - like measurements . by tuning such parameters mentioned above, we have a possibility of extracting a desired pure state .the above observation shows that the assumption of the diagonalizability in ( [ eqn : spectraldecomp ] ) is not essential but condition ( [ eqn : condition ] ) , i.e. , the existence of the _unique , discrete and nondegenerate largest ( in magnitude ) eigenvalue _ , is crucial to the purification . for our purification mechanism to work ,it is crucial that a single state is extracted and this is accomplished when these qualifications , i.e. , the uniqueness of the largest eigenvalue and the nondegeneracy of the eigenvector , are both met .the diagonalizability of is not relevant to these conditions and is not essential to the purification .this point is clarified in appendix [ app : jordandecomposition ] .furthermore , note the asymptotic behavior of the success probability : it decays asymptotically as where stands for and .the decay is governed by the eigenvalue , and therefore , an efficient purification is possible if satisfies the condition which suppresses the decay in ( [ eqn : decay ] ) to give the final ( nonvanishing ) success probability it is worth stressing that the condition ( [ eqn : optimizationi ] ) allows us to repeat the measurement as many times as we wish without running the risk of losing the success probability . in other words , high fidelity to the target state and nonvanishing success probability do not contradict each other in this scheme , but rather they can be achieved simultaneously . at the same time , if the other eigenvalues are much smaller than in magnitude , purification is achieved quickly . equations ( [ eqn : optimizationi ] ) and ( [ eqn : optimizationii ] ) are the conditions for the _ optimal purification _ , which we try to accomplish by adjusting parameters , , and those in the hamiltonian . 
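as an illustration of the convergence just described, the following minimal numpy sketch iterates the projected evolution on an initial mixed state and compares the result with the right eigenvector belonging to the largest-magnitude eigenvalue. the 2x2 matrix v and the initial state are arbitrary illustrative choices, not taken from the paper; since here |lambda_max| < 1, the sketch also shows the decay of the success probability that the optimization condition |lambda_max| = 1 would suppress:

import numpy as np

# illustrative (non-hermitian) projected evolution operator v and mixed initial state;
# both are arbitrary choices for demonstration only.
V = np.array([[0.95 + 0.05j, 0.10],
              [0.05,         0.60 - 0.10j]])
rho = np.array([[0.3, 0.0],
                [0.0, 0.7]], dtype=complex)

# right eigenvector of v belonging to the largest-magnitude eigenvalue
evals, evecs = np.linalg.eig(V)
k = np.argmax(np.abs(evals))
u = evecs[:, k] / np.linalg.norm(evecs[:, k])
target = np.outer(u, u.conj())

state = rho.copy()
p_cumulative = 1.0
for n in range(1, 21):
    unnorm = V @ state @ V.conj().T          # one more successful confirmation
    p_step = np.real(np.trace(unnorm))       # conditional success probability at this step
    state = unnorm / p_step
    p_cumulative *= p_step                   # probability that all n confirmations after the zeroth succeed
    fidelity = np.real(np.trace(target @ state))
    print(f"n={n:2d}  success prob={p_cumulative:.3f}  fidelity={fidelity:.4f}")

the fidelity approaches 1 as long as the largest-magnitude eigenvalue is unique and nondegenerate, which is exactly the condition stated above.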
in the following sections , we discuss the above purification scheme in more detail addressing a few specific examples , which are so simple butstill possess potential and practical applications in quantum information and computation .let us first observe how the above mechanism works in the simplest example : we consider two qubits ( two two - level systems ) x and a interacting with each other , whose total hamiltonian is given by where are the pauli operators , are the ladder operators , and the frequencies and the coupling constant are real parameters .we repeatedly confirm the state of x and purify qubit a , i.e. , we discuss a purification of a single qubit .the four eigenvalues of the total hamiltonian in ( [ eqn : hamiltoniansingle ] ) are given by \\ e^{(1)}_\pm&=(\omega_\text{x}+\omega_\text{a})/2\pm\delta , \displaybreak[0]\\ e^{(2)}&=\omega_\text{x}+\omega_\text{a},\end{aligned}\ ] ] and the corresponding eigenstates are [ eqn : eigenstatessingle ] \\ { |{e^{(1)}_\pm}\rangle}_\text{xa } = { } & \frac{1}{\sqrt{2}}\,\biggl ( \epsilon(g)\sqrt{1\pm\frac{\omega_\text{x}-\omega_\text{a}}{2\delta}}{|{\uparrow\downarrow}\rangle}_\text{xa } \nonumber\displaybreak[0]\\ & \phantom{\frac{1}{\sqrt{2}}\,\biggl ( } { } \pm\sqrt{1\mp\frac{\omega_\text{x}-\omega_\text{a}}{2\delta } } { |{\downarrow\uparrow}\rangle}_\text{xa } \biggr),\displaybreak[0]\\ { |{e^{(2)}}\rangle}_\text{xa } = { } & { |{\uparrow\uparrow}\rangle}_\text{xa},\end{aligned}\ ] ] where is the sign function , and is the eigenstate of the operator belonging to the eigenvalue with the phase convention . hence , when the state of x , , is confirmed repeatedly at time intervals , the relevant operator to be investigated , the projected time - evolution operator , reads \\ = { } & { { |{\uparrow}\rangle}_{\text{a}}\hspace*{-0.2truemm}{\langle{\uparrow}| } } e^{-i(\omega_\text{x}+\omega_\text{a})\tau}\left [ \cos^2\!\frac{\theta}{2 } + e^{i(\omega_\text{x}+\omega_\text{a})\tau/2}\left ( \cos\delta\tau + i\frac{\omega_\text{x}-\omega_\text{a}}{2\delta}\sin\delta\tau \right)\sin^2\!\frac{\theta}{2 } \right]\nonumber\displaybreak[0]\\ & { } + { { |{\downarrow}\rangle}_{\text{a}}\hspace*{-0.2truemm}{\langle{\downarrow}|}}\left [ \sin^2\!\frac{\theta}{2 } + e^{-i(\omega_\text{x}+\omega_\text{a})\tau/2}\left ( \cos\delta\tau -i\frac{\omega_\text{x}-\omega_\text{a}}{2\delta}\sin\delta\tau \right)\cos^2\!\frac{\theta}{2 } \right]\nonumber\displaybreak[0]\\ & { } -i\left ( { { |{\uparrow}\rangle}_{\text{a}}\hspace*{-0.2truemm}{\langle{\downarrow}|}}e^{-i\varphi } + { { |{\downarrow}\rangle}_{\text{a}}\hspace*{-0.2truemm}{\langle{\uparrow}|}}e^{i\varphi } \right)\frac{g}{\delta }e^{-i(\omega_\text{x}+\omega_\text{a})\tau/2 } \sin\delta\tau\sin\frac{\theta}{2}\cos\frac{\theta}{2 } , \label{eqn : vsingle}\end{aligned}\ ] ] where the state is parameterized as and the set of angles characterizes the `` direction of ` spin ' x. 
'' if one of the two eigenvalues of the operator ( [ eqn : vsingle ] ) is larger in magnitude than the other , the condition for purification ( [ eqn : condition ] ) is fulfilled , and qubit a is purified into the eigenstate belonging to the larger ( in magnitude ) eigenvalue .furthermore , if condition ( [ eqn : optimizationi ] ) , , is satisfied , we can purify with a nonvanishing success probability , and another condition ( [ eqn : optimizationii ] ) , , enables us to accomplish quick purification .we try to achieve these conditions by tuning the parameters .the first adjustment for the optimal purification is ( see appendix [ app : optimalityproof ] ) .actually , if we choose , the eigenvalues of the projected time - evolution operator are given by and the eigenvectors belonging to them are it is clear that the magnitude of the eigenvalue is unity and that of , is less than unity provided both conditions ( [ eqn : condition ] ) and ( [ eqn : optimizationi ] ) are thus satisfied , and according to the theory presented in sec .[ sec : framework ] , we have an optimal purification after the repeated confirmations of the state , qubit a is purified into with a _ nonvanishing _ probability .similarly , another choice in ( [ eqn : optimizationsingle ] ) , i.e. , a series of repeated confirmations of the state , drives a into with a nonvanishing probability : the final success probability for the former choice or for the latter means that the target state or contained in the initial state is fully extracted . in this sense , the purification is optimal .the second adjustment is for the fastest purification , which is realized by the condition at which in ( [ eqn : secondeigenvalue ] ) is the smallest : .we can achieve it by tuning the time interval , for instance . andsuccess probability for single - qubit purification .the pure state is extracted from the initial mixed state after repeated confirmations of the state .parameters are , , for ( a ) and , for ( b ) , in the unit such that .the time interval is tuned so as to satisfy the condition for the fastest purification ( [ eqn : fastestsingle ] ) in each case.,title="fig:",scaledwidth=45.0% ] + and success probability for single - qubit purification .the pure state is extracted from the initial mixed state after repeated confirmations of the state .parameters are , , for ( a ) and , for ( b ) , in the unit such that .the time interval is tuned so as to satisfy the condition for the fastest purification ( [ eqn : fastestsingle ] ) in each case.,title="fig:",scaledwidth=45.0% ] to be more explicit , let us demonstrate the extraction of the pure state from the initial mixed state after x is confirmed to be in the state successfully times at time intervals , the state of qubit a and the probability for the successful confirmations read ^n { { |{\downarrow}\rangle}_{\text{a}}\hspace*{-0.2truemm}{\langle{\downarrow}| } } } { 1+[1-(g/\delta)^2\sin^2\!\delta\tau]^n},\\ \displaystyle p^{(\tau)}(n ) = \frac{1}{2}\{1+[1-(g/\delta)^2\sin^2\!\delta\tau]^n\ } , \end{cases}\ ] ] respectively , which clearly confirm the limits ( [ eqn : toup ] ) unless ( ) , and the convergences are the fastest when the condition ( [ eqn : fastestsingle ] ) is satisfied .( note that for the initial state considered here . 
) in fig .[ fig : fidelityyieldsingle](a ) , the success probability and the so - called fidelity to the target state , defined by \\ & = { { } _ { \text{a}}\hspace*{-0.2truemm}\langle{\uparrow}|}\varrho_\text{a}^{(\tau)}(n ) { |{\uparrow}\rangle}_\text{a},\end{aligned}\ ] ] are shown as functions of the number of measurements , , for the initial state ( [ eqn : initialstatesingle ] ) , with the parameters , , , .since the condition ( [ eqn : optimizationi ] ) , , is fulfilled , the decay of the success probability is suppressed to yield the finite value , and since the time interval is tuned so as to satisfy the condition for the fastest purification ( [ eqn : fastestsingle ] ) ( ) , the pure state is extracted after only or measurements . in an extreme casewhere is possible , the extraction is achieved just after one measurement .such a situation is depicted in fig .[ fig : fidelityyieldsingle](b ) for the same initial state as in fig .[ fig : fidelityyieldsingle](a ) with the parameter set , , .the single - qubit purification in the previous section is too simple but is easily extended for multi - qubit cases . in the above example, one may realize that the state is an eigenstate of the total hamiltonian ( [ eqn : hamiltoniansingle ] ) [ see ( [ eqn : eigenstatessingle ] ) ] and this is why the optimization condition ( [ eqn : optimizationi ] ) , , is achieved with and irrespectively of the choice of the time interval .( the same argument applies to the case there . ) in the case of a multi - qubit system in fig .[ fig : multiqubits ] , with nearest - neighbor interactions , \\ & { } + g_\text{xa}(\sigma_+^\text{x}\sigma_-^\text{a } + \sigma_-^\text{x}\sigma_+^\text{a } ) + g_\text{ab}(\sigma_+^\text{a}\sigma_-^\text{b } + \sigma_-^\text{a}\sigma_+^\text{b})\nonumber\displaybreak[0]\\ & \phantom{{}+{}}{}+\cdots \label{eqn : hamiltonianmultiple}\end{aligned}\ ] ] ( ) , the state is an eigenstate of this total hamiltonian , and it is readily expected that the pure state is extracted by repeated projections onto the state , with the optimal success probability . similarly , repeated projections onto set every qubit into state , i.e. , into , optimally .this would be useful for _ initialization of multiple qubits _ in a quantum computer . in order to make this idea more concrete ,let us discuss in detail with a three - qubit system .the important point is whether the condition for the purification ( [ eqn : condition ] ) is achievable , i.e. , whether all the eigenvalues except for the relevant one , associated with the eigenstate ( or ) , can actually be less than unity in magnitude . 
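as a quick numerical check of this question, the sketch below builds a three-qubit chain hamiltonian of the kind suggested above, evolves it over one interval tau, projects qubit x onto the confirmed state |down>, and lists the magnitudes of the four eigenvalues of the resulting operator on a and b. the explicit hamiltonian (equal frequencies and xy-type nearest-neighbour couplings) and all parameter values are our own assumptions for illustration, since the formula is garbled in the extraction:

import numpy as np

# single-qubit operators in the basis (|up>, |down>)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma_+
sm = sp.conj().T                                  # sigma_-
id2 = np.eye(2, dtype=complex)

def op(a, b, c):
    # tensor product a (x) b (x) c on the chain x (x) a (x) b
    return np.kron(a, np.kron(b, c))

# assumed chain hamiltonian; omega, couplings and tau are illustrative values only
omega, g_xa, g_ab, tau = 1.0, 1.0, 1.0, 2.0
H = 0.5 * omega * (op(sz, id2, id2) + op(id2, sz, id2) + op(id2, id2, sz)) \
    + g_xa * (op(sp, sm, id2) + op(sm, sp, id2)) \
    + g_ab * (op(id2, sp, sm) + op(id2, sm, sp))

# evolution over one interval tau, then projection of qubit x onto |down>
w, Q = np.linalg.eigh(H)
U = Q @ np.diag(np.exp(-1j * w * tau)) @ Q.conj().T
down = np.array([[0.0], [1.0]], dtype=complex)
P = np.kron(down, np.eye(4, dtype=complex))       # isometry |down>_x (x) 1_ab, shape 8x4
V = P.conj().T @ U @ P                            # projected evolution operator on a and b

mags = np.sort(np.abs(np.linalg.eigvals(V)))[::-1]
print("eigenvalue magnitudes of v:", np.round(mags, 4))
# expected: one magnitude equal to 1 (|down,down> of a and b is an eigenstate of this
# hamiltonian) and, for generic tau, three magnitudes strictly below 1, so the
# condition for initializing a and b is met.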
for simplicity , we consider the case where .the eight eigenvalues of the total hamiltonian are given by [ eqn : eigenvaluesmultiple ] \\ e^{(1)}_0&=\omega , & e^{(1)}_\pm&=\omega\pm\sqrt{2}\bar{g},\displaybreak[0]\\ e^{(2)}_0&=2\omega , & e^{(2)}_\pm&=2\omega\pm\sqrt{2}\bar{g},\displaybreak[0]\\ e^{(3)}&=3\omega,\end{aligned}\ ] ] and the corresponding eigenstates are [ eqn : eigenstatesmultiple ] \\ { |{e^{(1)}_0}\rangle}_\text{xab } = { } & \cos\chi{|{\downarrow\downarrow\uparrow}\rangle}_\text{xab } -\sin\chi{|{\uparrow\downarrow\downarrow}\rangle}_\text{xab } , \displaybreak[0]\\ { |{e^{(1)}_\pm}\rangle}_\text{xab } = { } & \frac{1}{\sqrt{2 } } ( \sin\chi{|{\downarrow\downarrow\uparrow}\rangle}_\text{xab } + \cos\chi{|{\uparrow\downarrow\downarrow}\rangle}_\text{xab } \nonumber\displaybreak[0]\\ & \phantom{\frac{1}{\sqrt{2 } } ( } { } \pm{|{\downarrow\uparrow\downarrow}\rangle}_\text{xab } ) , \displaybreak[0]\\ { |{e^{(2)}_0}\rangle}_\text{xab } = { } & \cos\chi{|{\uparrow\uparrow\downarrow}\rangle}_\text{xab } -\sin\chi{|{\downarrow\uparrow\uparrow}\rangle}_\text{xab } , \displaybreak[0]\\ { |{e^{(2)}_\pm}\rangle}_\text{xab } = { } & \frac{1}{\sqrt{2 } } ( \sin\chi{|{\uparrow\uparrow\downarrow}\rangle}_\text{xab } + \cos\chi{|{\downarrow\uparrow\uparrow}\rangle}_\text{xab } \nonumber\displaybreak[0]\\ & \phantom{\frac{1}{\sqrt{2 } } ( } { } \pm{|{\uparrow\downarrow\uparrow}\rangle}_\text{xab } ) , \displaybreak[0]\\ { |{e^{(3)}}\rangle}_\text{xab } = { } & { |{\uparrow\uparrow\uparrow}\rangle}_\text{xab},\end{aligned}\ ] ] where \\ \cos\chi=\frac{g_\text{xa}}{\sqrt{g_\text{xa}^2+g_\text{ab}^2 } } , \quad \sin\chi=\frac{g_\text{ab}}{\sqrt{g_\text{xa}^2+g_\text{ab}^2}}.\end{gathered}\ ] ] aiming at initializing qubits a and b into , we repeatedly project x onto the state at time intervals , and the relevant operator to be investigated reads \\ = { } & { { |{\downarrow\downarrow}\rangle}_{\text{ab}}\hspace*{-0.2truemm}{\langle{\downarrow\downarrow}|}}\nonumber\displaybreak[0]\\ & { } + { { |{\uparrow\downarrow}\rangle}_{\text{ab}}\hspace*{-0.2truemm}{\langle{\uparrow\downarrow}| } } e^{-i\omega\tau}\cos\sqrt{2}\bar{g}\tau \nonumber\displaybreak[0]\\ & { } + { { |{\downarrow\uparrow}\rangle}_{\text{ab}}\hspace*{-0.2truemm}{\langle{\downarrow\uparrow}| } } e^{-i\omega\tau}(\cos^2\!\chi + \sin^2\!\chi\cos\sqrt{2}\bar{g}\tau ) \nonumber\displaybreak[0]\\ & { } -i{{|{\uparrow\downarrow}\rangle}_{\text{ab}}\hspace*{-0.2truemm}{\langle{\downarrow\uparrow}| } } e^{-i\omega\tau}\sin\chi\sin\sqrt{2}\bar{g}\tau \nonumber\displaybreak[0]\\ & { } -i{{|{\downarrow\uparrow}\rangle}_{\text{ab}}\hspace*{-0.2truemm}{\langle{\uparrow\downarrow}| } } e^{-i\omega\tau}\sin\chi\sin\sqrt{2}\bar{g}\tau \nonumber\displaybreak[0]\\ & { } + { { |{\uparrow\uparrow}\rangle}_{\text{ab}}\hspace*{-0.2truemm}{\langle{\uparrow\uparrow}| } } e^{-2i\omega\tau}(\sin^2\!\chi + \cos^2\!\chi\cos\sqrt{2}\bar{g}\tau).\end{aligned}\ ] ] the target state is an eigenstate of this operator belonging to the eigenvalue , which satisfies the optimization condition ( [ eqn : optimizationi ] ) , and the other three eigenvalues are give by [ eqn : initialize2eigenvalues ] \\ & { } \mp\sin\frac{\bar{g}\tau}{\sqrt{2 } } \sqrt{\cos^4\!\chi\sin^2\!\frac{\bar{g}\tau}{\sqrt{2 } } -4\sin^2\!\chi\cos^2\!\frac{\bar{g}\tau}{\sqrt{2 } } } \biggr),\displaybreak[0]\\ \lambda_{\uparrow\uparrow } = { } & e^{-2i\omega\tau}\left ( 1 - 2\cos^2\!\chi\sin^2\!\frac{\bar{g}\tau}{\sqrt{2 } } \right ) .\label{eqn : initialize2eigenvaluesupup}\end{aligned}\ ] ] 
if these three eigenvalues are all less than unity in magnitude , the condition for the purification ( [ eqn : condition ] ) is satisfied , and the initialized state is extracted from an arbitrary mixed state , with a nonvanishing success probability .( note that the left eigenvector belonging to the eigenvalue is . )such a situation is realized provided which is clearly seen from fig .[ fig : eigenvaluesmultiple ] and a proof in appendix [ app : eigenvaluesmultiple ] .( solid line ) , ( dashed line ) , and ( dotted line ) in ( [ eqn : initialize2eigenvalues ] ) , as functions of . in this figure, we set .note that within each range ( ) , where ( ) is defined by , and when .,scaledwidth=45.0% ] and success probability for two - qubit initialization . through the repeated confirmations of the state ,qubits a and b are initialized into from the thermal equilibrium state of the total system at temperature , i.e. , with .parameters are , , , .the time interval is tuned so as to make the smallest , which is for the fastest initialization ( see fig . [fig : eigenvaluesmultiple]).,scaledwidth=45.0% ] the final success probability is again optimal , in the sense that the target state contained in the initial state is fully extracted .the above argument reveals the possibility of initialization at least for two qubits .initialization of two qubits into from the thermal equilibrium state of the total system at temperature , i.e. , with , is demonstrated in fig .[ fig : initialize2 ] . note that it is effective when , since in such a case , is the ground state of the total system .the analytic formula for the final success probability is ^{-1} ] becomes largest at .,title="fig:",scaledwidth=45.0% ] + and success probability for entanglement purification .the entangled state is extracted from ( a ) a product state and ( b ) the thermal state at temperature , through repeated confirmations of the state .parameters are , for ( a ) , and , , for ( b ) , in the unit such that , where is defined in the caption of fig .[ fig : eigenvaluesmultiple ] . for the initial thermal state in( b ) with , the success probability for the zeroth confirmation is given by for any set of parameters , and the final value ^{-1} ] .seeking a solution with unit magnitude , we insert into this equation to obtain \\ & \phantom{\sin\theta\,\biggl [ } { } -\left(2-\frac{1}{2}\sin^2\!\theta\right ) \sin^2\!\frac{g\tau}{\sqrt{2}}\left ( 2-\sin^2\!\frac{g\tau}{\sqrt{2 } } \right ) \biggr]=0,\displaybreak[0]\\ & \cos\theta\,\biggl [ 2\cos^2\!\theta-\left(3 - 4\sin^2\!\frac{g\tau}{\sqrt{2}}\right)\cos\theta \nonumber\displaybreak[0]\\ & \phantom{\sin\theta\,\biggl [ } { } -\left(2-\frac{1}{2}\sin^2\!\theta\right ) \sin^2\!\frac{g\tau}{\sqrt{2}}\left ( 2-\sin^2\!\frac{g\tau}{\sqrt{2 } } \right ) \biggr]\nonumber\displaybreak[0]\\ & \quad\phantom{= { } } { } + 1 - 2\sin^4\!\frac{g\tau}{\sqrt{2 } } -\frac{1}{2}\sin^2\!\theta\sin^2\!\frac{g\tau}{\sqrt{2}}\left ( 2 - 3\sin^2\!\frac{g\tau}{\sqrt{2 } } \right)\nonumber\displaybreak[0]\\ & \quad=0,\end{aligned}\ ] ] extraction of entanglement is not possible when ( [ eqn : entanglementnogoi ] ) or ( [ eqn : entanglementnogoii ] ) is satisfied , and therefore , the condition for the entanglement purification in sec .[ sec : entanglementpurification ] is given by ( [ eqn : conditionentanglement ] ) .k. vogel , v. m. akulin , and w. p. schleich , phys .. lett . * 71 * , 1816 ( 1993 ) ; a. s. parkins , p. marte , p. zoller , and h. j. kimble , _ ibid . _ * 71 * , 3095 ( 1993 ) ; b. m. 
garraway , b. sherman , h. moya - cessa , p. l. knight , and g. kurizki , phys .a * 49 * , 535 ( 1994 ) ; c. k. law and j. h. eberly , phys .rev . lett . * 76 * , 1055 ( 1996 ) ; b. kneer and c. k. law , phys .a * 57 * , 2096 ( 1998 ) , and references therein .j. i. cirac and p. zoller , phys .a * 50 * , 2799(r ) ( 1994 ) ; m. freyberger , p. k. aravind , m. a. horne , and a. shimony , _ ibid . _ * 53 * , 1232 ( 1996 ) ; m. b. plenio , s. f. huelga , a. beige , and p. l. knight , _ ibid ._ * 59 * , 2468 ( 1999 ) ; j. hong and h .- w .lee , phys .lett . * 89 * , 237901 ( 2002 ) ; c. marr , a. beige , and g. rempe , phys .a * 68 * , 033817 ( 2003 ) . c. cabrillo, j. i. cirac , p. garca - fernndez , and p. zoller , phys .a * 59 * , 1025 ( 1999 ) ; l .- m .duan , m. d. lukin , j. i. cirac , and p. zoller , nature ( london ) * 414 * , 413 ( 2001 ) ; a. messina , eur .j. d * 18 * , 379 ( 2002 ) ; d. e. browne and m. b. plenio , phys .a * 67 * , 012325 ( 2003 ) ; x .-feng , z .- m .zhang , x .- d .li , s .- q .gong , and z .- z .xu , phys .lett . * 90 * , 217902 ( 2003 ) ; l .- m . duan and h. j. kimble , _ ibid . _ * 90 * , 253601 ( 2003 ) ; d. e. browne , m. b. plenio , and s. f. huelga , _ ibid . _ * 91 * , 067901 ( 2003 ) .b. demarco , a. ben - kish , d. leibfried , v. meyer , m. rowe , b. m. jelenkovi , w. m. itano , j. britton , c. langer , t. rosenband , and d. j. wineland , phys .lett . * 89 * , 267901 ( 2002 ) ; a. ben - kish , b. demarco , v. meyer , m. rowe , j. britton , w. m. itano , b. m. jelenkovi , c. langer , d. leibfried , t. rosenband , and d. j. wineland , _ ibid . _* 90 * , 037902 ( 2003 ) , and references therein .f. schmidt - kaler , h. hffner , m. riebe , s. gulde , g. p. t. lancaster , t. deuschle , c. becher , c. f. roos , j. eschner , and r. blatt , nature ( london ) * 422 * , 408 ( 2003 ) ; f. schmidt - kaler , h. hffner , s. gulde , m. riebe , g. p. t. lancaster , t. deuschle , c. becher , w. hnsel , j. eschner , c. f. roos , and r. blatt , appl .b * 77 * , 789 ( 2003 ) , and references therein .y. nakamura , yu . a. pashkin , and j. s. tsai , nature ( london ) * 398 * , 786 ( 1999 ) ; t. yamamoto , yu .a. pashkin , o. astafiev , y. nakamura , and j. s. tsai , _ ibid ._ * 425 * , 941 ( 2003 ) , and references therein . c. h. bennett , g. brassard , s. popescu , b. schumacher , j. a. smolin , and w. k. wootters , phys .lett . * 76 * , 722 ( 1996 ) ; * 78 * , 2031(e ) ( 1997 ) ; c. h. bennett , d. p. divincenzo , j. a. smolin , and w. k. wootters , phys .a * 54 * , 3824 ( 1996 ) .t. yamamoto , m. koashi , .k. zdemir , and n. imoto , nature ( london ) * 421 * , 343 ( 2003 ) ; z. zhao , t. yang , y .- a .chen , a .- n .zhang , and j .- w .pan , phys .rev . lett . *90 * , 207901 ( 2003 ) ; a. vaziri , j .- w .pan , t. jennewein , g. weihs , and a. zeilinger , _ ibid . _* 91 * , 227902 ( 2003 ) . for reviews , see h. nakazato , m. namiki , and s. pascazio , int . j. mod .b * 10 * , 247 ( 1996 ) ; d. home and m. a. b. whitaker , ann .* 258 * , 237 ( 1997 ) ; p. facchi and s. pascazio , in _ progress in optics _ , edited by e. wolf ( elsevier , amsterdam , 2001 ) , vol .42 , p. 
147 .it should be noted , however , that the time interval in this scheme is not necessarily small as in the ordinary zeno measurements , and the purification ( [ eqn : purification ] ) is not due to the quantum zeno effect .if the ordinary zeno limit and ( fixed ) is taken in the present scheme , a quantum zeno effect appears yielding the so - called `` quantum zeno dynamics '' , which is unitary and provides us with a quite different effect from the one discussed in this article .p. facchi , a. g. klein , s. pascazio , and l. s. schulman , phys .a * 257 * , 232 ( 1999 ) ; p. facchi , v. gorini , g. marmo , s. pascazio , and e. c. g. sudarshan , _ ibid . _ * 275 * , 12 ( 2000 ) ; p. facchi , s. pascazio , a. scardicchio , and l. s. schulman , phys .a * 65 * , 012108 ( 2001 ) ; p. facchi and s. pascazio , phys .lett . * 89 * , 080401 ( 2002 ) .s. bose , phys .* 91 * , 207901 ( 2003 ) ; l. amico , a. osterloh , f. plastina , r. fazio , and g. m. palma , phys .a * 69 * , 022304 ( 2004 ) ; v. subrahmanyam , _ ibid . _ * 69 * , 034304 ( 2004 ) ; m. christandl , n. datta , a. ekert , and a. j. landahl , quant - ph/0309131 ( 2003 ) . | a novel method of purification , _ purification through zeno - like measurements _ [ h. nakazato , t. takazawa , and k. yuasa , phys . rev . lett . * 90 * , 060401 ( 2003 ) ] , is discussed extensively and applied to a few simple qubit systems . it is explicitly demonstrated how it works and how it is optimized . as possible applications , schemes for _ initialization of multiple qubits _ and _ entanglement purification _ are presented , and their efficiency is investigated in detail . simplicity and flexibility of the idea allow us to apply it to various kinds of settings in quantum information and computation , and would provide us with useful and practical methods of state preparation . |
we address the widespread problem of how to take into account differences in standards, confidence and bias in assessment panels, such as those evaluating research quality or grant proposals, employment or promotion applications and classification of university degree courses, in situations where it is not feasible for every assessor to evaluate every object to be assessed. a common approach to assessment of a range of objects by such a panel is to assign to each object the average of the scores awarded by the assessors who evaluate that object. this approach is represented by the cell labelled "simple averaging" (sa) in the top left of the matrix of approaches listed in table 1, but it ignores the likely possibility that different assessors have different levels of stringency, expertise and bias. some panels shift the scores of each assessor to make their averages take a normalised value, but this ignores the possibility that the set of objects assigned to one assessor may be of a genuinely different standard from that assigned to another. for an experimental scientist, the issue is obvious: _calibration_. one way is to seek to calibrate the assessors beforehand on a common subset of objects, perhaps disjoint from the set to be evaluated: each assessor evaluates all the objects in the subset and some rescaling is then agreed to bring the assessors into line as far as possible. this would not work well, however, in a situation where the range of objects is broader than the expertise of a single assessor. also, regardless of how well the assessors are trained, differences between individuals' assessments of objects remain in such ad hoc approaches.

table 1: panel assessment methods: the matrix of four approaches according to use of calibration and/or confidences. simple averaging (sa) is the base for comparisons. fisher's iba does not deal with varying degrees of confidence and confidence-weighted averaging does not achieve calibration. the method proposed herein (cwc) accommodates both calibration and confidences.

on this basis, the most precise results are given by cwc. none of them are very precise, however. a posterior uncertainty of 8 means that we should consider values for the objects to have a chance of differing by more than 8 from the outputted values. this means that for iba and cwc, only the top three proposals of table [ranks] are reasonably assured of being in the top ten. as the object of the competition was only to choose the best 10 proposals to fund, rather than to assign values to each proposal, it might have been more appropriate to design just a classifier system (with a tunable parameter to make the right number fall in the "fund" class), but our goal was to use the competition as a test of cwc. the fact that three different methods with roughly equal evidence lead to drastically different allocations of the grants, and with large posterior uncertainties, highlights that better design of the panel assessment was required. a moral of our analysis is that to achieve a reliable outcome, the assessment procedure needs substantial advance design. we continue the discussion of design in appendices c and f. we also tested the method on undergraduate examination results for a degree with a flexible options system and on the assessment of a multi-lecturer postgraduate module.
in the former case, as surrogates for the confidences in the marks we took the number of credit accumulation and transfer scheme (cats) points for the module, which indicate the amount of time a student is expected to devote to the module (for readers used to the european credit transfer and accumulation system, 2 cats points are equivalent to 1 ects point). the amount of assessment for a module is proportional to the cats points. if it can be regarded as consisting of independent assessments of subcomponents, e.g. one per cats point, with roughly equal variances, then the variance of the total score would be proportional to the number of cats points. as the score is then normalised by the cats points, the variance becomes inversely proportional to the cats points, making confidence directly proportional to cats points. the outcome indicated significant differences in standards for the assessment of different modules, but as most modules counted for 15 or 18 cats, this was not a strong test of the merits of including confidences in the analysis, so we do not report on it here. for the postgraduate module, there were four lecturers plus the module coordinator, who each assessed oral and written reports for some but not all of the students, according to availability and expertise (except that the coordinator assessed them all). each assessor provided a score and an uncertainty for each assessment. the results were combined using our method and the resulting value for each student was reported as the final mark. the lecturers agreed that the outcome was fair.

we have presented and tested a method to calibrate assessors, taking account of the differences in confidence that they express in their assessments. from a test on simulated data we found that calibration with confidence (cwc) generated closer estimates of the true values than additive incomplete block analysis or simple averaging.
a test on some real data suggested that the assessment procedure for that context needed more robust design; nevertheless, cwc came ahead on posterior precision. there are a number of refinements which one could introduce to the core method. these include how to deal with different types of bias, different scales for confidence, different ways to remove the degeneracy in the equations, how to deal with the endpoints on a marking scale, and how to choose the assessment graph. some suggestions are made in the appendices, along with a mathematical treatment of the robustness of the method and of the computation of the bayesian evidence for the models. an advantage of this type of calibration is that it does not produce the artificial discontinuities across field boundaries that tend to arise if the domain is partitioned into fields and the evaluation in each field is carried out separately. we suggest that a method such as this, which takes into account declared confidences in each assessment, is well suited to a multitude of situations in which a number of objects is assessed by a panel.

we are grateful to the mathematics department, university of warwick, for providing us with examination data to perform an early test of the method, to the applied mathematics research centre, coventry university, for funding to make a professional implementation of the method, and to marcus ong and daniel sprague of spectra analytics for producing it. we also thank john winn for pointing us to the sigkdd09 method, and david mackay for pointing us to the nips method. software implementing the method is free to download from the website . software and data for the two case studies are available from .

rm conceived and developed the theory. sp tested it using an early case study. rl performed case study 1. rk performed case study 2. rm, sp, rk and rl discussed and interpreted the results and wrote the paper.

the work of rm was supported by the esrc under the network on integrated behavioural science (es/k002201/1) and the centre for evaluation of complexity in the nexus (es/n012550/1). rk was supported by the eu marie curie irses network pirses-ga-2013-612707 dionicos - dynamics of and in complex systems, funded by the european commission within the fp7-people-2013-irses programme (2014-2018). we have no competing interests.

references:
meadows m. can we predict who will be a reliable marker? manchester: aqa centre for education research and policy, 2006.
bayesian methods for calibration of examiners. british journal of mathematical and statistical psychology (1981) *34*, 213-223.
næs t, brockhoff p and tomic o. statistics for sensory and consumer science. wiley, chichester, 2010.
fisher ra. an examination of the different possible solutions of a problem in incomplete blocks. annals of eugenics (1940) *10*, 52-75.
giesbrecht fg. analysis of data from incomplete block designs. biometrics (1986) *42*, 437-448.
flach pa, spiegler s, golenia b, price s, guiver j, herbrich r, graepel t, zaki mj. novel tools to streamline the conference review process; experiences from sigkdd09, http://research.microsoft.com/pubs/122784/reviewercalibration.pdf
guiver j. calibrating reviews of conference submissions. http://blogs.msdn.com/b/infernet_team_blog/archive2011/09/30/calibrating-reviews-of-conference-submissions.aspx
platt j, burges c.
regularised least squares to remove reviewer bias .http:// research.microsoft.com/en-us/um/people/cburges/papers/reviewerbias.pdf ge h , welling m , ghahramani z , a bayesian model for calibrating conference review scores .http://mlg.eng.cam.ac.uk/hong/nipsrevcal.pdf , 2013 .thorngate w , dawes rm and foddy m. judging merit .psychology press , new york , 2008 .hubbard dw . how to measure anything .wiley , 2007 , 2010 , 2014 .matrix analysis and applied linear algebra .siam philadelphia 2001 .golub gh and van loan cf . matrix computations .johns hopkins university press , baltimore , 1996 .parker s. a test of a method for calibration of assessors .final year undergraduate project report , university of warwick , april 2014 .chung frk , spectral graph theory ( am math soc , 1996 ) mackay djc , information theory , inference and learning algorithms ( cambridge univ press , 2003 ) .song t , wolfe ew , hahn l , less - petersen m , sanders r and vickers d. relationship between rater background and rater performance .+ http://researchnetwork.pearson.com/wp-content/uploads/ song_raterbackground_04_21_2014.pdf fuchs d and fuchs ls .test procedure bias : a meta - analysis of examiner familiarity effects .review of educational research ( 1986 ) * 56 * , 243 - 262 .we motivated the model by proposing that the noise terms be of the form with the independent zero - mean random variables with unit variance , so that the are standard deviations .nevertheless , multiplying all the confidences by the same number does not change the results of the least squares fit , nor our quantifications of robustness ( appendices c and d ) .thus the can be taken to have any variance , as long as it is the same for all assessments .it is only ratios of confidences that have significance .the fitting procedure can be extended to infer a best fit value for . even if the assessors provide confidences based on assuming , the best fit for is not 1 in general .assuming independent gaussian errors , the maximum likelihood value for comes out to be where is the residual from the least squares fit ( for and is the total number of assessments . the posterior distribution for , given a prior distribution , is obtained in appendix d.we can remove the degeneracy in the equations ( [ eq : system1 ] ) and ( [ eq : system1b ] ) in different manners equation ( [ bias ] ) used here .indeed , use of ( [ bias ] ) can lead to an average shift from the scores to the true values .this does not matter if only a ranking is required , but if the actual values are important , then a better choice of degeneracy - breaking condition is needed .a preferable confidence - weighted degeneracy - breaking condition is which from ( [ eq : system1b ] ) automatically implies , thus avoiding the possibility of such systematic shifts . 
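to make the fitting procedure concrete, here is a minimal sketch of the confidence-weighted least-squares fit with a confidence-weighted degeneracy-breaking condition. the additive form score ≈ value + bias, the sign convention for the bias, the particular condition (weighted biases summing to zero) and all numerical data are our own assumptions for illustration; they are not reproduced from the paper's equations, which are stripped in this extraction:

import numpy as np

# toy data: each assessment is (assessor index, object index, score, confidence)
assessments = [
    (0, 0, 62.0, 1.0), (0, 1, 55.0, 0.5), (1, 1, 60.0, 1.0),
    (1, 2, 71.0, 1.0), (2, 2, 66.0, 0.5), (2, 0, 58.0, 1.0),
]
n_assessors, n_objects = 3, 3

rows, rhs, weights = [], [], []
for a, o, s, c in assessments:
    row = np.zeros(n_objects + n_assessors)
    row[o] = 1.0                      # value of object o
    row[n_objects + a] = 1.0          # bias of assessor a (assumed additive)
    rows.append(row)
    rhs.append(s)
    weights.append(np.sqrt(c))        # weight each equation by sqrt(confidence)

# degeneracy-breaking: confidence-weighted biases sum to zero, imposed as a heavily
# weighted extra row (one natural reading of the condition discussed above)
C_a = np.zeros(n_assessors)
for a, o, s, c in assessments:
    C_a[a] += c
rows.append(np.concatenate([np.zeros(n_objects), C_a]))
rhs.append(0.0)
weights.append(1e6)

A = np.array(rows) * np.array(weights)[:, None]
y = np.array(rhs) * np.array(weights)
sol, *_ = np.linalg.lstsq(A, y, rcond=None)
values, biases = sol[:n_objects], sol[n_objects:]
print("calibrated values:", np.round(values, 2))
print("assessor biases:  ", np.round(biases, 2))

the fit is well posed here because the assessor-object graph of the toy data is connected; without the extra row the system would retain the one-dimensional degeneracy described in the text.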
from a theoretical perspective, however , the best choice of degeneracy - breaking condition is to choose a reference value ( think of a notional desired mean ) and require where using the notation in ( [ eq : vob ] ) and ( [ eq : voc ] ) this can equivalently be written as to reduce the possible average shift from confidence - weighted average scores to true values , the reference value should be chosen near the confidence - weighted average score choosing exactly equal to gives ( [ eq : avbias ] ) , which makes the confidence - weighted average bias come out to 0 and the confidence - weighted average value come out to .we will show in appendix c , however , that the results are a factor more robust to changes in the scores if is chosen to be fixed rather than dependent on the scores .here we present our approach to the quantification of the robustness of our method to small changes in the scores , using norms that take into account the confidences . for , define the operator by , \ ] ] as a shorthand for the definitions in equations ( [ referee1a ] ) and ( [ referee1b ] ) , so that the equations ( [ eq : system1 ] , [ eq : system1b ] ) can be written as = k s .\ ] ]thus , if a change is made to the scores , we obtain changes , of magnitude bounded by where is defined by restricting the domain of to ( [ bias ] ) and its range to , and appropriate norms are chosen . in this appendix, we propose that appropriate choices of norms are and the associated operator norm from scores to results for . with the confidence - weighted degeneracy - breaking condition ( [ eq : avbias ] ) instead of ( [ bias ] ) we obtain where is the second smallest eigenvalue of a certain matrix formed from the confidences ( see ( [ eq : m ] ) ) .in particular , this gives the factor of can be removed if one switches to an ideal degeneracy - breaking condition as in ( [ tightest ] ) of appendix b. as a consequence , to maximise the robustness of the results , the task for the designer of is to make none of the much smaller than the others and to make significantly larger than 0 .the former is evident ( no object should receive significantly less assessment or less expert assessment than the others ) .the latter is the mathematical expression of how well connected is the graph ( equivalently ) . to design the graph requires a guess of the confidence levels that assessors are likely to give to their assessments ( based on knowing their areas of expertise and their thoroughness or otherwise ) and a compromise between assigning an object to only the most expert assessors for that object and the need to achieve a chain of comparisons between any pair of assessors .we now go into detail , derive the above bounds and describe some computational shortcuts . 
one can measure the size of a change to a score by comparing it to the declared uncertainty .thus we take the size of to be .we propose to measure the size of an array of changes to the scores by the square root of the sum of squares of the sizes of the changes to each score , as in ( [ eq : scoresnorm ] ) .supremum or sum - norms could also be considered but we will stick to this choice here .it is also reasonable to measure the size of a change to a true value by comparing it to the uncertainty implied by the sum of confidences in the scores for object .thus the size of is defined to be , where is the total confidence in the assessment of object .similarly , we measure the size of a change in bias by where is the total confidence expressed by a given assessor .finally , we measure the size of a change to the vector of values and biases by the square root of sum of squares of the individual sizes , as in ( [ eq : resultsnorm ] ) .the size of the operator is measured by the operator norm from scores to results , i.e. the operator is equivalent to orthogonal projection with respect to the norm ( [ eq : scoresnorm ] ) from the scores to the subspace of the form with a degeneracy - breaking condition to eliminate the ambiguity in direction of the vector . the tightest bounds in ( [ bound ] )are obtained by choosing the degeneracy - breaking condition to correspond to a plane perpendicular to this vector with respect to the inner product corresponding to equation ( [ eq : resultsnorm ] ) .thus we choose degeneracy - breaking condition ( [ tightest ] ) .1ex * theorem * : for a connected graph and with the degeneracy - breaking condition ( [ tightest ] ) , the size of the change resulting from a given array of changes in scores is bounded by where is the second smallest eigenvalue of the matrix , \label{eq : m}\ ] ] , are the numbers of assessors and objects respectively , and for is the identity matrix of rank .1ex * proof * : firstly , the orthogonal projection in metric ( [ eq : scoresnorm ] ) from to the subspace never increases length . secondly , if with then where is the vector with components then , because we restricted to the orthogonal subspace to the null vector in results - norm and is non - negative and symmetric , where index ranges over all objects and assessors .positivity of holds as soon as the graph is connected , because is a transformation of the weighted graph - laplacian to scaled variables , so dividing by and taking the square root yields the result . 1ex the computation of the eigenvalue of can be reduced from dimension to dimension by 1ex * proposition * : if , the second smallest eigenvalue of is related to the second largest eigenvalue of by if and then . if both are 1 then .1ex * proof * : the equations for an eigenvalue - eigenvector pair of are applying to the first equation , multiplying the second by , and then substituting for in the second yields thus either or is an eigenvalue of .in the first case , equation ( [ eq : evec ] ) implies , so if then is an eigenvalue of .conversely , if is an eigenvalue - eigenvector pair for with then because is non - negative , so put to see that is an eigenvector of with eigenvalue .if and then is an eigenvalue of with eigenvector for any with , e.g. . 
thus there is a two - to - one correspondence between eigenvalues of not equal to 1 and positive eigenvalues of ( counting multiplicity ) : .any remaining eigenvalues are 1 for and 0 for .the degeneracy gives an eigenvector of with eigenvalue 0 and it corresponds to an eigenvalue 1 of .all other eigenvalues of are non - negative because is .all other eigenvalues of are less than or equal to 1 by the cauchy - schwarz inequality .so if the second largest eigenvalue of ( counting multiplicity ) is positive then the second smallest eigenvalue of ( counting multiplicity ) is .if then because existence of implies so has dimension at least 3 and we have only two simple eigenvalues and from the simple eigenvalue 1 of , so must have another one but any other value than 1 would give a positive ; so the same formula holds . if there is no second eigenvalue of (because ) then if the second largest eigenvalue of must be 1 by the same argument . if both and are 1 then the second largest eigenvalue of is the other one associated with the eigenvalue 1 of , namely 2 . 1ex note that is a similarity transformation of ( [ eq : aweights ] ) . as examples of second eigenvalues ,putting unit confidences on the graphs in the left column of figure [ fig1:graphs ] we calculate for cases ( a),(b),(c ) in the right column , giving , respectively .finally , a user may prefer to use the degeneracy - breaking condition ( [ eq : avbias ] ) rather than ( [ tightest ] ) , perhaps out of uncertainty about what value of to use . or a user may be happy to use ( [ tightest ] ) with equal to the confidence - weighted average score , but wants to follow this average score if changes are made to the scores .that comes out equivalent to using ( [ eq : avbias ] ) .so we extend our discussion of robustness to treat this case .we find it makes the bounds increase by a factor of only .1ex * proposition * : for connected and using degeneracy - breaking condition ( [ eq : avbias ] ) , the size of resulting from changes to the scores is at most .1ex * proof * : if the degeneracy - breaking condition ( [ tightest ] ) gives a change for a change to the scores , then switching to degeneracy - breaking condition ( [ eq : avbias ] ) just adds an amount of the null vector to achieve , i.e. in the results metric , the null vector has length .thus the correction has length . using the condition ( [ tightest ] )we can write which one can recognise as one half of the inner product of with in results - norm , so it is bounded by .thus the length of the correction vector is at most that of .the correction is perpendicular to , thus the vector sum has length at most . 1ex one may also ask about robustness with respect to changes in the confidences . if an assessor declares extra high confidence for an evaluation , for example , that can significantly skew the resulting and . the analysis is more subtle , however , because of how the appear in the equations and we do not treat it here .another point of view on robustness is the bayesian one . from a prior probability on and a model for the ,one can infer a posterior probability for , whose inverse width tells one how robust is the inference . 
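as a computational aside, the connectivity quantity that controls these robustness bounds can be obtained directly from the confidences. the sketch below uses the symmetrically scaled weighted graph laplacian of the bipartite assessor-object graph; this is our reading of "the weighted graph-laplacian transformed to scaled variables", and the exact matrix used in the paper may differ in its scaling, so the number is indicative only:

import numpy as np

# confidences c[a, o] > 0 where assessor a assessed object o, 0 otherwise (toy numbers)
c = np.array([
    [1.0, 0.5, 0.0],
    [0.0, 1.0, 1.0],
    [1.0, 0.0, 0.5],
])
n_assessors, n_objects = c.shape

# bipartite adjacency (assessors then objects), weighted by the confidences
n = n_assessors + n_objects
W = np.zeros((n, n))
W[:n_assessors, n_assessors:] = c
W[n_assessors:, :n_assessors] = c.T

# symmetrically scaled weighted graph laplacian and its second smallest eigenvalue
d = W.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(n) - D_inv_sqrt @ W @ D_inv_sqrt
lambda2 = np.sort(np.linalg.eigvalsh(L))[1]
print("second smallest eigenvalue (connectivity indicator):", round(lambda2, 4))
# lambda2 > 0 exactly when the assessment graph is connected; the larger it is,
# the tighter the bound on how much the fitted values can move when scores change.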
in the case offlat prior on , prescribed , gaussian noise , and an affine degeneracy - breaking condition , the posterior is gaussian with mean at the value solving equations ( [ eq : system1 ] ) , ( [ eq : system1b ] ) and the degeneracy - breaking condition , and with covariance matrix related to .specifically , the posterior probability density for is proportional to constrained to the degeneracy - breaking hyperplane , where using ( [ eq : gtmg ] ) and ( [ eq : r ] ) , this can be written as with being the deviations of from the least squares fit .thus the covariance matrix in these scaled variables is . using the degeneracy - breaking condition ( [ tightest ] ) or equivalently ( [ eq : tightest ] ), we obtain widths for the posterior on in the eigendirections of , where are the positive eigenvalues of .thus the robustness of the inference is again determined by , but scaled by .a slightly more sophisticated approach is to consider to be unknown also . given a prior density for ( which could be peaked around 1 if the assessors are assigning confidences via uncertainties , but following jeffreys would be better chosen to be if there is no information about the scale for the confidences ) , the posterior density for is proportional to where again is the number of assessments .the maximum of the posterior probability density is determined by the least squares fit for ( which is independent of ) and the following equation for : for large , the peak of the posterior has near the previously determined maximum likelihood value . , taking jeffreys prior , the peak is at . integrating over ( with jeffreys prior )e find the marginal posterior for to be proportional to incorporating an affine degeneracy - breaking condition , this is a -variate student distribution with degrees of freedom .its covariance matrix is with and interpreted by imposing the chosen degeneracy - breaking condition .so for the degeneracy - breaking condition ( [ tightest ] ) , the robustness of the inference is given by widths for , in the eigendirections of on .in particular , the confidence - weighted root mean square uncertainty for the components of the vector is where denotes the trace and , again , is interpreted by restricting to the degeneracy - breaking plane .marginal posteriors for each and can be extracted , but it must be understood that in general they are significantly correlated . for the case of simple averaging , the root mean - square posterior uncertainty in the values , weighted by the numbers of assessors for object , is where is defined in ( [ eq : rsa ] ) of appendix e. this can be derived in an analogous fashion to ( [ eq : sigma ] ) via a student distribution again , but with .here we describe the method used in case study 2 to compare the three models .bayesian model comparison is based on computing how much evidence there is for each proposed model , e.g. 
ch.28 of .the evidence for a model given data is .given strength of belief in model prior to the data ( relative to other models ) , one can multiply it by the evidence to obtain the posterior strength of belief in model .it is convenient to replace multiplication by addition , thus we define the log - evidence if the model has free parameters then where is a prior probability density on .let there be objects , assessors , let be the score returned by assessor for object , the confidence in this score in the case of calibration with confidence , be the collection of scores and be their number .first we compute the evidence for simple averaging ( sa ) .then we treat calibrate with confidence ( cwc ) and lastly incomplete block analysis ( iba ) because it is a special case of cwc . for simple averaging ( sa ) ,the model is that for some unknown vector of `` true '' values , with iid normal for some unknown variance .then the probability density for the scores is with the product and sum being over the assessments that were carried out . to work out the evidence for sathe model must include a prior probability density for and .the simplest proposal would be on ] , where and .this is the product of a `` box '' prior on and jeffreys prior on ( truncated to an interval and normalised ) . for comparison with the other models ,however , it is easier to replace the box prior on by a `` ball '' prior , giving on for some anticipated average score and upper estimate of the width of the distribution of values .the normalisation is where is the gamma function . for it reasonable to choose where is the smallest change any assessor could contemplate .for it is reasonable to choose . for each object , where is the number of assessors for object , is the mean of their scores , and the residual thus where to integrate this over and , we assume the bulk of the probability distribution lies in the product of the ball and the interval , and so approximate by extending the range of integration to . integrating the exponential over produces a factor thus , integrating over all components of yields integrating this over , we obtain the evidence and the log - evidence for calibrate with confidence ( cwc ) , the model is for some unknown vectors of true values , and of assessor biases , with iid normal for some unknown variance .the uncertainties correspond to confidences by , which are considered as given ( one could propose a generative model for them too , but that would require further analysis ) . then the probability density for is for prior probability density over the parameters , we want to build in a degeneracy - breaking condition .we used in our calculation , thus we take prior `` density '' on the product of the balls ( [ eq : ballo ] ) and and interval ] , e.g. .then any model for bias really ought to be nonlinear to respect the endpoints .one way to treat this is to apply a nonlinear transformation to map a slightly larger interval onto , e.g. or apply our method to the transformed scores , scaling the confidences by the inverse square of the derivative of the transformation , and then apply the inverse transformation to the true " values . on the other hand , it may be inadvisable to specify a fixed range because it requires an assessor to have knowledge of the range of the objects before starting scoring. 
thus one could propose asking assessors to use any real numbers and then use equation ( [ eq : model2 ] ) to extract true values .a simpler strategy that might work nearly as well is to allow assessors to use any positive numbers but then to take logarithms and fit equation ( [ eq : model1 ] ) to the log - scores .the assessor biases would then be like logarithms of exchange rates .the confidences would need translating appropriately too .one issue with our method is that the effect of an assessor who assesses only one object is only to determine their own bias , apart from an overall shift along the null vector for the rest . to rectifythis one could incorporate a prior probability distribution for the biases ( indeed , this was done by in the form of a regulariser ) .an interesting future project is to design the graph optimally , given advance guesses of confidences and constraints or costs for the number of assessments per assessor .`` optimality '' would mean to achieve maximum precision or robustness of the resulting values .for instance , in each case of figure [ fig1:graphs ] , each assessor has the same amount of work and each object receives the same amount of attention , but ( a ) achieves full connectivity with a resulting value for of , whereas ( b ) achieves moderate connectivity and a smaller value of , and ( c ) is not even connected and has . | frequently , a set of objects has to be evaluated by a panel of assessors , but not every object is assessed by every assessor . a problem facing such panels is how to take into account different standards amongst panel members and varying levels of confidence in their scores . here , a mathematically - based algorithm is developed to calibrate the scores of such assessors , addressing both of these issues . the algorithm is based on the connectivity of the graph of assessors and objects evaluated , incorporating declared confidences as weights on its edges . if the graph is sufficiently well connected , relative standards can be inferred by comparing how assessors rate objects they assess in common , weighted by the levels of confidence of each assessment . by removing these biases , true " values are inferred for all the objects . reliability estimates for the resulting values are obtained . the algorithm is tested in two case studies , one by computer simulation and another based on realistic evaluation data . the process is compared to the simple averaging procedure in widespread use , and to fisher s additive incomplete block analysis . it is anticipated that the algorithm will prove useful in a wide variety of situations such as evaluation of the quality of research submitted to national assessment exercises ; appraisal of grant proposals submitted to funding panels ; ranking of job applicants ; and judgement of performances on degree courses wherein candidates can choose from lists of options . keywords : calibration , evaluation , assessment , confidence , uncertainty , model comparison . |
the vision of automated driving systems holds a promise to change the transportation reality .current deployments that focus on autonomous solutions pose a variety of sensors and actuators for safe driving on the road , e.g. , volvo _ drive me _ project in gothenburg and _ google car _ in california .these autonomous solutions are based on the vehicles ability to observe obstacles in their line - of - sight .vehicle - to - vehicle communication has the potential to improve the system confidence on the sensory information and support advanced vehicular coordination .e.g. , when changing lanes and crossing intersections , as well as improving the road capacity by reducing the inter - vehicle distances .however , communication failures can result in hazardous situations due to coordination based on inconsistent information shared by the participating vehicles .consider an architecture , which figure [ fig : architecture ] ( left ) depicts , for implementing cooperative driving systems .the communication protocol implements the mechanisms for exchanging information with other vehicles .the control algorithm plans the vehicle motion according to the sensory information from on - board and remote sources .note that the local control algorithms depend on the ( in general vectorial ) variable ( service level ) .thus , is a common piece of information that all vehicles share in order to establish correct cooperation .for instance , in vehicular platooning , might include the maximum acceleration levels imposed to all vehicles by the limited braking capabilities of one of them .clearly , message loss when a new value of is established may lead to an inconsistent value in one or more vehicles , and thus , result in an unsafe operation of the entire cooperative system .it is then necessary to have an additional layer , shown in figure [ fig : architecture ] right , between the communication layer and the control algorithm .we propose to base this additional layer on a timed protocol for cooperation with disagreement correction that resolves disagreements on variable among the system vehicles . specifically , we address the following research question : how can cooperative systems be used to attain the highest performance without compromising safety in the presence of communication failures ?we consider applications in which the individual vehicles estimate their ability to cooperate according to the sensory information quality and communicate their maximum supported cooperative level .the vehicular system then decides on its cooperative service level according only to the received information .however , communication failures can cause the arrival of the needed information not to occur by the deadline .this can bring the vehicles to operate at distinct levels .it is a critical issue to guarantee that the uncertainty period along vehicles occurs only in short time periods .therefore , we address problem [ pr:1 ] .[ pr:1 ] is there an upper - bound on the longest period in which the cooperative system may have inconsistent operation service level ?we note that we can not solve problem [ pr:1 ] using distributed ( uniform ) consensus algorithms . 
in the uniform consensus problem, every component (vehicle) proposes a value and the objective is to select exactly one of these proposed values. it is well-known that this problem is not deterministically solvable in unreliable synchronous networks, and any -communication rounds algorithm has a probability of disagreement of at least (theorem 5.1 and theorem 5.5). therefore, when the communication failures are too frequent and severe, the uncertainty period cannot be bounded, since the components (vehicles) can disagree for an unbounded number of protocol executions. this work presents a communication protocol that guarantees the shortest possible uncertainty period, i.e., a constant time, in the presence of communication failures. our solution is based on a communication protocol that collects values from all system components. once this proposed set is delivered to all the components, the protocol employs a deterministic function to decide on a single value from it that all system components are to use. the protocol identifies the periods in which there is a clear risk of disagreement due to temporary communication failures, i.e., a period in which the set was not delivered by the due time to the entire system. once such a risk is identified, the protocol triggers a correction strategy against the risk of having disagreement for more than a constant number of rounds. namely, after the occurrence of communication failures that jeopardize safety, all system components rapidly start a period in which they reestablish their confidence by returning to the default value. once the network becomes stable again and no communication failures occur, within a constant time the protocol behaves as if no communication failure had ever occurred. the correctness proof and its validation show that the proposed solution provides a trade-off between the uncertainty period (in the order of milliseconds) and the occurrence of communication failures. in other words, for a shorter round length (and consequently a shorter uncertainty period), the vehicles fall back to a low service level more frequently, whereas for a longer round length they do so less frequently. however, the longer the round length is, the longer the vehicles may spend in disagreement, and therefore the higher the risk of accidents. this paper also discusses a safety-critical application that facilitates cooperation using the proposed protocol. we assume a baseline adaptive cruise control (acc) that does not require communication. then, we extend it to a cooperative one that attains higher vehicle performance, but relies on a higher confidence level about the position and velocity of nearby vehicles. we explain how the protocol can provide a timed and distributed mechanism for deciding when the vehicles should plan their trajectories according to the baseline application and when according to the extended one that fully utilizes the cooperative functionality. the distributed (uniform) consensus problem considers the selection of a single value from a set of values proposed by members of, say, a vehicular system. the solution is required to terminate within a bounded time by which all system components have decided on a common value. the use of the exact (uniform) vs.
approximate consensus approaches is explain here , where they recommend the use of exact ( uniform ) consensus due to the simplicity of the system design from the application programmer perspective .the exact consensus approach , in contrast to the approximate one , rests on a foundation of clearly defined requirements and is amenable to formal methods and analytical validation .a number of impossibility results consider distributed consensus in general ( see ) . in , the author shows that the presence of communication failures makes impossible to deterministically reach consensus ( theorem 5.1 ) and any -round algorithm has probability of disagreement of at least ( theorem 5.5 ) .this implies that there are no guarantees that vehicles can reach consensus on bounded time since vehicle - to - vehicle communications are prone to failures .moreover , when the communication failures are too frequent and severe , vehicles can fail to reach consensus for an unbounded number of consecutive times .we therefore abandon consensus - based decision algorithms , and prefer to focus on solutions that offer early fall - back strategies against the risk of having disagreement for more than a constant number of rounds .the existing literature on distributed ( uniform ) consensus algorithms with real - time requirements does consider processor failures .however , it often assumes timed and reliable communication .for example , in the authors give an algorithm that reaches agreement in the worst case in time that is sublinear in the number of processors and maximum message delay . in ,the authors provide a time optimal consensus algorithm that reaches consensus in time in the worst case where is the maximum message delay and the maximum number of processors that can crash . in this paper , we do not assume reliable communication .thus , message drops can occur independently among processors at any time .group communication systems treat a group of participants as a single communication endpoint .the group membership service monitors the set of recently live and connected participating system components whereas the multicast service delivers messages to that group under some delivery guarantees , such as delivery acknowledgment . in this paperwe assume the existence of a membership service and a best - effort ( single round solution ) dissemination ( multicast ) protocol that has no delivery acknowledgment .there exists literature on adaptive cruise control as well as vehicle platooning . in ,the author considers vehicle platooning and lane merging , and bases his construction on distributed high level communication primitives .we consider a different failure model for which there is no deterministic implementation for these communication primitives .the studied problem is motivated by the karyon project .the karyon project aims to provide a predictable and safe coordination of intelligent vehicles that interact in inherently uncertain environments .it proposes the use of a safety kernel that enforces the service level that the vehicle can safely operate .a cooperative service level can ensure that vehicles follow the same performance level . in this paper , we study a communication protocol that implements the karyon s cooperative service level evaluator . 
in , we present the architecture that considers the interactions between the safety kernel , a local dynamic map and the cooperative service level evaluator .unlike the earlier abstract presentation of the cooperative service level evaluator , this paper provides in detail , the design and analysis of the communication protocol .we study an elegant solution for cooperative vehicular systems that have to deal with communication uncertainties .we base the solution on a communication protocol that , we believe , can be well understood by designers of safety - critical , automated and cyber - physical systems .we explain how the designers of fault - tolerant cooperative applications can use this solution to deal with communication failures when uniformly deciding on a shared value , such as .we consider cooperative applications that must periodically decide on a shared values .since the consensus problem can not be deterministically solved in the presence of communication failures , the system is doomed to disagreed on the value of ( in the presence communication failures that are frequent and severe ) .we bound the period in which the vehicles can be unaware of such disagreements with respect to . we prove and validate that this bound is no more than one communication round ( in a vehicular system that deploys a single - hop network of wireless ad hoc communication ) .we also study the percentage of time during which the system avoids disagreement on using ns-3 simulations .we exemplify how the proposed solution helps to guarantee safety .we consider vehicles that operate in a cooperative operational mode as long as they are aware that all the nearby vehicles are also in the same mode ( with at most one communication round period of disagreement ) .however , if at least one vehicle is suspecting that another vehicle is not , all vehicles switch , within one communication round period , to a baseline operational mode so that the safety standards are met .we list our assumptions and define the problem statement ( section [ s : sys ] ) , before providing the timed protocol for cooperation with disagreement correction ( section [ s : dcp ] ) and its correctness proof ( section [ s : c ] ) . as protocol validation study , we consider computer simulation ( section [ s : eva ] ) . we discuss cooperative vehicular application ( section [ s : cva ] ) and an example before the conclusions ( section [ s : con ] )[ s : sys ] we consider a message passing system that includes a set of communicating prone - resilient vehicles .we refer to the vehicles with i d as .we assume that all vehicles have access to a common global clock with a sub - microsecond offsets by calling the function .this could be implemented , for example , using global positioning systems ( gps ) .hence , we assume that the maximum time difference along vehicles is at most .we consider that the system runs on top of a timed and fault - tolerant , yet unreliable , dissemination protocol , such as , that uses to broadcast message from vehicle to all vehicles in .we assume that end - to - end message delay is at most time .thus , messages are either delivered within time or omitted . 
the constant depends on distinct factors such as the mac protocol that is used , vehicle velocity , interference , etc .for example , this bound can be set to or less using , for example , dedicated short - range communications ( dsrc ) .vehicle receives from by raising the event .we consider a fully connected network topology .however , the network can arbitrarily decide to omit messages , but not to delay them for more than time .these assumptions allow the protocol to run in a synchronous round based fashion .we consider rounds of time where .every vehicle executes a program that is a sequence of _ ( atomic ) steps_. an input event can be either the receipt of a message or a periodic timer going off triggering to start a new iteration of the do forever loop .we define the _ uncertainty period _ as the period that vehicles can disagree .we say that there was a _ communication failure _ at round if there exists a vehicle that has not received the messages from all vehicles during round .the system s task is to satisfy requirements [ prp : dis1 ] to [ prp : dis3 ] , which consider definition [ def : scp ] .[ def : scp ] a stable communication period ] .we say that a stable communication period ] , see figure [ fig : be ] .thus , in any run , the communication may go through maximal stable and unstable periods ( and then perhaps back to stable ) for an unbounded number of times .requirements [ prp : dis1 ] to [ prp : dis3 ] deal with what the system output at every vehicle should be when it goes between the different periods . ] . ][ prp : dis1 ] during a stable period no two vehicles use different values .moreover , within a bounded prefix of every stable period , there is a suffix during which no uses the default return value .[ prp : dis2 ] every unstable period has a suffix named the _ disagreement correction period _ during which no two vehicles use different values . during this periodall vehicles use the default return value .[ prp : dis3 ] the suffix of a stable period during which some vehicles may use different values is called the _uncertainty period_. we require it to be bounded .we show that any system run of the proposed solution fulfills requirements [ prp : dis1 ] to [ prp : dis3 ] .specifically , we demonstrate theorem [ th : main ] ( section [ s : dcp ] ) .[ th : main ] the proposed protocol ( algorithm [ alg : tddp ] ) fulfills requirements [ prp : dis1 ] , [ prp : dis2 ] and [ prp : dis3 ] , where the uncertainty period is bounded by one round .moreover , if vehicles do no experience communication failures , the disagreement correction holds for at most one round .[ s : dcp ] we present the communication protocol in which the participants exchange messages until a deadline .these messages can include information , for example , about nearby vehicles as well as the confidences that each vehicle has about its information .once everybody receives the needed information from each other , the participants can locally and deterministically decide on their actions . 
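Before walking through the pseudocode and its line-by-line description below, it may help to see the round-based decision logic in compressed form. The following Python sketch is an illustrative simplification only: it ignores clocks, clock skew, and gossip forwarding; the message-drop model and the choice of the minimum as the deterministic agreement function are assumptions; and all names are hypothetical.

```python
import random

DEFAULT = 0     # default (lowest) service level, the protocol's fall-back value
HIGHEST = 2     # highest cooperative service level (read_input() always returns it here)

def agreement_function(proposals):
    # Deterministic choice over a complete proposal set; taking the minimum supported
    # level is one natural choice and returns DEFAULT whenever any proposal is DEFAULT.
    return min(proposals.values())

class LossyNetwork:
    """Toy broadcast medium: every message is dropped independently with probability p_drop."""
    def __init__(self, n_vehicles, p_drop, rng):
        self.n, self.p_drop, self.rng = n_vehicles, p_drop, rng
        self.inboxes = [dict() for _ in range(n_vehicles)]

    def broadcast(self, sender, value):
        for vid in range(self.n):
            if vid == sender or self.rng.random() > self.p_drop:
                self.inboxes[vid][sender] = value

    def deliver(self, vid):
        msgs, self.inboxes[vid] = self.inboxes[vid], {}
        return msgs

class Vehicle:
    def __init__(self, vid, net):
        self.vid, self.net = vid, net
        self.output = DEFAULT          # value handed to the local control algorithm
        self.next_proposal = HIGHEST   # what to announce in the coming round

    def send(self):                    # phase 1: broadcast before the round deadline
        self.net.broadcast(self.vid, self.next_proposal)

    def end_of_round(self):            # round boundary: decide locally
        received = self.net.deliver(self.vid)
        if len(received) == self.net.n:              # heard from every vehicle in time
            self.output = agreement_function(received)
            self.next_proposal = HIGHEST
        else:                                        # communication failure detected
            self.output = DEFAULT                    # impose the default value ...
            self.next_proposal = DEFAULT             # ... and announce it, so others lower too

rng = random.Random(1)
net = LossyNetwork(n_vehicles=4, p_drop=0.05, rng=rng)
vehicles = [Vehicle(i, net) for i in range(4)]
for rnd in range(6):
    for v in vehicles:
        v.send()
    for v in vehicles:
        v.end_of_round()
    print("round", rnd, "->", [v.output for v in vehicles])
```

With a drop probability of a few percent, occasional rounds appear in which some vehicles fall back to DEFAULT while others still output HIGHEST; this is the one-round disagreement window that the analysis below bounds.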
in case of a communication failure , each participant that experiences a failure imposes the default return value for one round .each vehicle executes the protocol ( that algorithm [ alg : tddp ] presents ) .it uses a do forever loop for implementing a round base solution .it accesses the global clock ( line [ ln : clock ] ) and checks whether it is time for the vehicle to send information about the current round ( line [ ln : roundclock1 ] ) .a vehicle starts sending messages at time from the beginning of each round and before of the end of each round using the interface ( line [ ln : gossipcall ] ) .recall that is the maximum time difference over the vehicles and is the longest time that a message can live in the network .next , it tests whether the current round number points to the current round in time ( line [ ln : roundclock ] ) .a new round starts when is greater than . at the beginning of every round ,the protocol first keeps a copy of the collected data and the received information , and updates the round counter , as well as nullifying and ( line [ ln : roundnull ] ) .then , it tests whether it has received all the needed information for the previous round ( line [ ln : arrivaltest ] ) .suppose that a communication failure occurred in the previous round , the protocol sets the data to be sent to the default return value ( line [ ln : assingment1 ] ) .it also writes to interface the received information as well as the default return value ( line [ ln : writeoutput1 ] ) .however , in the case that all messages of the previous round have arrived on time , the system reads the application information using interface .it also writes to interface the received information as well as the value that the deterministic function returns ( line [ ln : imposeend ] ) . the proposed protocol interfaces with the gossip ( dissemination ) protocol by sending messages ( ) and receiving them ( ) periodically .the protocol locally stores the arriving information from on each round in ] and ) .the correctness proof shows that , in the presence of a single communication failure , there could be at most one disagree round in which different system components use different values .moreover , the influence of that single failure will last for at most two rounds , which is the shortest period possible .note that algorithm [ alg : tddp ] handles well any sequence of communication failures .[ bt ] : denotes a void ( initialized ) entry , as well as the default return value . : the maximum time difference among vehicle clocks . : the maximum time that a message time can live in the network . : the length of a round .+ : current communication round . : current clock . = \{\ldots \} ] is the data received at round from member . = \{false , \ldots \} ] is true if has received ( directly or indirectly ) the message from of the current round .+ : disseminate information to the system members . : dispatch arriving messages . : return a datum to be sent . : write decided output . deterministically determines an item from .we assume that whenever , then .[ al : fun ] + + [ ln : gossipreceive ] ,ack[k ] ) \gets ( data_j[k ] , true)][ln : roundnull ] [ ln : arrivaltest ] \gets \bot ] [ ln : assingment3 ] [ ln : imposeend ][ s : c ] we prove that algorithm [ alg : tddp ] follows requirements [ prp : dis1 ] to [ prp : dis3 ] .[ th : boundeduncertainty ] let ] .the following three statements hold .\(1 ) * bounded uncertainty period*. vehicles may have disagreements at round .\(2 ) * disagreement correction*. 
all vehicles use the default return value during ] .let be the set of messages that vehicle receives from all the vehicles , either directly or indirectly , and that has sent during round .observe that each vehicle decides the value to be used on round based on the received information at round ( lines [ ln : writeoutput1 ] and [ ln : imposeend ] ) .we claim that for and ] .first we show that each vehicle maintains consistent its own information over each round .observe that lines between [ ln : roundnull ] and [ ln : imposeend ] are executed once during round since is set to and always returns larger values .therefore , each vehicle loads its message on the register ] when receiving a message from vehicle .since the condition ensures that it loads ] is consistent on during round .we say that a message is sent transitively , if receives from where .we show that the message transitivity maintains the consistency of the messages during a stable communication period .we argue by contradiction .assume that there are two messages , and such that .consider the first time that were sent .observe that sent the two messages . a contradictionsince maintains consistent its own information over each round .the claim follows by showing that at the end of the current round , it holds that . indeed ,since messages of each round are sent time units before the end of and after time units after the beginning of , vehicles receive messages only from the current round .recall that is the maximum difference time among vehicle clocks and is the maximum time that a message can live in the network .\(1 ) * bounded uncertainty period*. consider round .since ] to and uses it ( lines [ ln : assingment1 ] and [ ln : writeoutput1 ] , respectively ) .therefore , as long as no vehicle misses s message , the first default return value of arrives along round .thus , during round , uses a distinct value than .\(2 ) * disagreement correction*. we show that all vehicles use the default return value in round ] .assume that at round ] ( lines [ ln : assingment1 ] and [ ln : imposeend ] ) .this is due to the definition of the function ( line [ al : fun ] ) and the fact that each vehicle writes the default return value if it experiences a communication failure .\(3 ) * certainty period*. we show that during ] . it remains to show that they use the same value in each round ] since all vehicles received the information from each other vehicle .the lemma follows since vehicles decide the value to be used on round based on the received information at round ( lines [ ln : writeoutput1 ] and [ ln : imposeend ] ) using the deterministic function .it follows directly from lemma [ th : boundeduncertainty ] .[ s : eva ] we consider a cooperative system that has two service levels where the lowest one is the default service level to which the system falls - back to in the presence of communication failures .for example , this can be a vehicular system in which the cooperative service level is the highest , and the autonomous service level is the lowest ( default ) one .since we focus on communication failures , the experiments assume that every system component can always support the highest service level , and thus read input ( ) always returns the highest service level .we use computer simulation to validate the protocol as well as its efficiency . 
for the efficiency , we consider the _ reliability _ performance measure which we define as the percentage of communication rounds during which the protocol allows the system to run at its highest service level .first , we validate that the disagreement period is of at most one round and next the reliability of the protocol .we simulate the protocol using ns-3 .we choose ieee 802.11p as the communication channel with a log - distance path loss model and nakagami fading channel model .since dsrc technologies support end - to - end message delay of less than , we fix the message delay to .we consider a synchrony bound of , say , using gps or a distributed clock - synchronization protocol .we implement a straightforward gossip protocol in which every node retransmits message every .we validate that the disagreement period is of at most one round .we plot in figure [ fig : be ] the decision that vehicles took independently during rounds using the protocol under frequent communication failures .we set the round length to so that messages can be transmitted twice in each round .observe that at round vehicles and reduce the service level due to a communication failure , but vehicles and still continue in the highest level of service .however , at round , they lower their service level .although vehicles do not operate on distinct service levels for more than one round , the service level of some vehicles may be oscillating .we can reduce this effect by increasing the round length . however , the uncertainty period also increases .rounds using the protocol . ]note the trade - off between the upper bound on the disagreement period , which is one communication round , and the success rate of the gossip protocol , which decreases as the round length becomes shorter .the type of gossip protocol as well as the number of system components also influences this success rate .we use computer simulation to study how these trade - offs work together and present the reliability .we consider three round lengths between and with intervals of so that vehicles can transmit and messages in each round , respectively .we variate the number of vehicles between two and eight .the reliability of the system is plotted in figure [ fig : rel ] .we run each experiment for simulation seconds . during the simulations ,we observe a packet drop average of .the packet drop rate per number of vehicles is presented in table [ fig : packetdrop ] .further , the percentage of time that all vehicles agree on the highest service level is greater than with round lengths of at least with at least four vehicle .observe that the reliability is higher with more vehicles than with less .this is because of the transitivity property . .packet drop rate . [cols="^,^",options="header " , ] the protocol that algorithm [ alg : tddp ] presents . .locallos) ] , it is unaware whether continues operating on platooning .thus , continues operating on platooning and assumes that it is the last vehicle in it . at time , starts loosing messages from and consequently switches to the back - off strategy in the next round .however , since requires to brake , uses the acceleration bounds in . but continues operating on platooning during ] , since the platoon has only been proved to be safe when the headway is at most and acceleration bounds are in . 
*platooning using algorithm [ alg : platoonadaptivecruisecontrol ] .* from the algorithm property that the uncertainty does not hold for more than one round , and will be aware that at least one vehicle has a communication failure in the next round .therefore , all switch to the lowest service level and start opening space to keep a headway of .thus , at time they have larger inter - vehicle distances which reduce the cascade effects .observe that for an less than two round lengths , the problem also occurs in this approach .indeed , every cooperative vehicular application that relies on communication suffers from this problem .however , we believe that our approach minimizes the effects .[ s : con ] we have proposed an efficient protocol that can be used in safety - critical cooperative vehicular applications that have to deal with communication uncertainties .the protocol guarantees that all vehicles will not be exposed , for more than a constant time , to risks that are due to communication failures .we demonstrate correctness , evaluate performance and validate our results via ns-3 simulations .we also showed how vehicular platooning can use the protocol for maintaining system safety .the proposed solution can be also extended to other cooperative vehicular applications , such as intersection crossing , coordinated lane change , as we demonstrated using the gulliver test - bed during the karyon project .moreover , we have considered the simplest multi - hop communication primitive , i.e. , gossip with constant retransmissions . however , that communication primitive can be substitute with a gossip protocol that facilitate a greater degree of fault - tolerance and better performance .this work opens the door for the algorithmic design and safety analysis of many cooperative applications that use different high - level communication primitives .marcos kawazoe aguilera , grard le lann , and sam toueg . on the impact offast failure detectors on real - time fault - tolerant systems . in dahliamalkhi , editor , _ disc _ , volume 2508 of _ lecture notes in computer science _ , pages 354370 .springer , 2002 .christian berger , oscar morales ponce , thomas petig , and elad michael schiller . driving with confidence : local dynamic maps that provide los for the gulliver test - bed . in andrea bondavalli , andrea ceccarelli , and frank ortmeier , editors ,_ computer safety , reliability , and security - safecomp 2014 workshops : ascoms , decsos , devvarts , isse , resa4ci , sassur .florence , italy , september 8 - 9 , 2014 .proceedings _ , volume 8696 of _ lecture notes in computer science _ , pages 3645 .springer , 2014 .antonio casimiro , jrg kaiser , johan karlsson , elad michael schiller , philippas tsigas , pedro costa , jos parizi , rolf johansson , and renato librino .brief announcement : karyon : towards safety kernels for cooperative vehicular systems . in andraw. richa and christian scheideler , editors , _ sss _ , volume 7596 of _ lncs _ , pages 232235 .springer , 2012 .antonio casimiro , oscar morales ponce , thomas petig , and elad michael schiller .vehicular coordination via a safety kernel in the gulliver test - bed . in _34th international conference on distributed computing systems workshops ( icdcs 2014 workshops ) , madrid , spain , june 30 - july 3 , 2014 _ , pages 167176 .ieee , 2014 .antnio casimiro , jos rufino , ricardo c. pinto , eric vial , elad m. schiller , oscar morales - ponce , and thomas petig . 
a kernel - based architecture for safe cooperative vehicular functions . in9th ieee international symposium on industrial embedded systems ( sies14 ) _ , 2014 .se lee , e llaneras , s klauer , and j sudweeks .analyses of rear - end crashes and near - crashes in the 100-car naturalistic driving study to support rear - signaling countermeasure development . , 810:846 , 2007 .oscar morales - ponce , elad m. schiller , and paolo falcone .cooperation with disagreement correction in the presence of communication failures . in _intelligent transportation systems ( itsc ) , 2014 ieee 17th international conference on _ , pages 11051110 , oct 2014 .mitra pahlavan , marina papatriantafilou , and elad michael schiller .gulliver : a test - bed for developing , demonstrating and prototyping vehicular systems .in jos d. p. rolim , jun luo , and sotiris e. nikoletseas , editors , _ proceedings of the 9th acm international workshop on mobility management & wireless access , mobiwac 2011 , october31- november 4 , 2011 , miami beach , fl , usa _ , pages 18 .acm , 2011 .mitra pahlavan , marina papatriantafilou , and elad michael schiller .gulliver : a test - bed for developing , demonstrating and prototyping vehicular systems . in _ proceedings of the 75th ieeevehicular technology conference , vtc spring 2012 , yokohama , japan , may 6 - 9 , 2012 _ , pages 12 .ieee , 2012 .steven e shladover , charles a desoer , j karl hedrick , masayoshi tomizuka , jean walrand , w - b zhang , donn h mcmahon , huei peng , shahab sheikholeslam , and nick mckeown .automated vehicle control developments in the path program ., 40(1):114130 , 1991 . | vehicle - to - vehicle communication is a fundamental requirement for maintaining safety standards in high - performance cooperative vehicular systems . the vehicles periodically exchange critical information among nearby vehicles and determine their maneuvers according to the information quality and the established strategies . however , wireless communication is failure prone . thus , participants can be unaware that other participants have not received the needed information on time . this can result in conflicting ( unsafe ) trajectories . we present a deterministic solution that allows all participants to use a fallback strategy in the presence of communication delays . we base our solution on a timed distributed protocol . in the presence of message omission and delay failures , the protocol disagreement period is bounded by a constant ( in the order of milliseconds ) that may depend on the message delay in the absence of these failures . we demonstrate correctness and perform experiments to corroborate its efficiency . we explain how vehicular platooning can use the proposed solution for providing high performance while meeting the safety standards in the presence of communication failures . we believe that this work facilitates the implementation of cooperative driving systems that have to deal with inherent ( communication ) uncertainties . |
consider the the beta distribution , with the density function , the mean of is readily obtained by the formula , but there is no general closed formula for the median .the median function , here denoted by , is the function that satisfies , the relationship holds . only for the special cases or we may obtain an exact formula : and .moreover , when , the median is exactly .there has been much literature about the incomplete beta function and its inverse ( see e.g. for a review ) .the focus in literature has been on finding accurate numerical results , but a simple and practical approximation that is easy to compute has not been found .trivial bounds for the median can be derived , which are a consequence of the more general mode - median - mean inequality . in the case of the beta distribution with , the median is bounded by the mode and the mean : for the formula for the mode does not hold as there is no mode . if , the order of the inequality is reversed .equality holds if and only if ; in this case the mean , median , and mode are all equal to .this inequality shows that if the mean is kept fixed at some , and one of the shape parameters is increased , say , then the median is sandwiched between and , hence the median tends to . from the formulas forthe mode and mean , it can be conjectured that the median could be approximated by for some , as this form would satisfy the above inequality while agreeing with the symmetry requirement , that is , .since a variate can be expressed as the ratio where and ( both with unit scale ) , it is useful to have a look at the median of the gamma distribution . studied the median function of the unit - scale gamma distribution median function , denoted here by , for any shape parameter , and obtained , rapidly approaching as increases .it can therefore be conjectured that the distribution median may be approximated by , figure ( [ fig - betaerrors ] ) shows that this approximation indeed appears to approach the numerically computed median asymptotically for all distribution means as the ( smaller ) shape parameter . for ,the relative error is less than 4% , and for this is already less than 1% .figure ( [ fig - betaerrp ] ) shows the relative error over all possible distribution means , as the smallest of the two shape parameters varies from to .this illustrates how the relative error tends uniformly to zero over all as the shape parameters increase .the figure also shows that the formula consistently either underestimates or overestimates the median depending on whether or .however , the function approximates the median fairly accurately if some other close to ( say ) is chosen .figure ( [ fig - betadisterr ] ) displays curves of the logarithm of the absolute difference from the numerically computed median for a fixed , as the shape parameter increases .the absolute difference has been scaled by before taking the logarithm : due to this scaling , the error stays approximately constant as decreases so the picture and its scale will not essentially change even if the error is computed for other values of .the figure shows that although some approximations such as has a lower absolute error for some , the error of tends to be lower in the long run , and moreover performs more consistently by decreasing at the same rate on the logarithmic scale . 
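A quick numerical check of this conjecture is straightforward. In the sketch below (Python with SciPy), the closed-form expression is assumed to be the (a - 1/3)/(a + b - 2/3) form that the text alludes to; since the inline formulas are not reproduced above, this identification is an assumption. It is compared against the numerically computed median.

```python
from scipy.stats import beta

def median_approx(a, b):
    # Assumed closed-form approximation for the median, intended for a, b >= 1.
    return (a - 1.0 / 3.0) / (a + b - 2.0 / 3.0)

for a, b in [(1.0, 1.0), (1.5, 4.0), (2.0, 5.0), (5.0, 2.0), (10.0, 30.0)]:
    exact = beta.ppf(0.5, a, b)              # numerically computed median
    approx = median_approx(a, b)
    rel_err = abs(approx - exact) / exact
    print(f"a={a:5.1f} b={b:5.1f}  numeric={exact:.5f}  approx={approx:.5f}  rel.err={100*rel_err:.3f}%")
```

For shape parameters of a few units the relative error is already well below one percent, consistent with the error levels quoted above.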
in practical applications, should be a sufficiently good approximation of . another measure of the accuracy is the tail probability of a variate : good approximators of the median should yield probabilities close to . figure ([fig-betatail]) shows that as long as the smallest of the shape parameters is at least 1, the tail probability is bounded between and . as the shape parameters increase, the probability tends rapidly and uniformly to . finally, let us have a look at a well-known paper that provides further support for the uniqueness of . and provide approximations for the probability function of a variate . although they do not provide a formula for the inverse, one can evaluate the probability function at the approximate median . according to , it is well approximated by , where is the standard normal probability function, and is a function of the shape parameters and the quantile . consider : it should be close to zero, and should at least tend to zero fast as and increase . now assume that is fixed, varies and . rewritten with the notation of this paper, the function equals \left(\frac{1+f(a,p;d)}{m(1-m)}\right)^{1/2}, where the function tends to zero as increases, being exactly zero only when or . it is evident that for the fastest convergence rate to zero, one should choose . this is of the order ; if, for example, we choose the mean as the approximation of the median ( ), the rate is at most . | a simple closed-form approximation for the median of the beta distribution is introduced : for both shape parameters larger than it has a relative error of less than 4%, rapidly decreasing to zero as both shape parameters increase . keywords : beta distribution , distribution median
achieving high directivity with compact radiators has been a major concern of the antenna community since its early days .still today , many modern applications , such as automotive radars , satellite communication , millimetre - wave point - to - point communication , and microwave imaging , strive for simple and efficient low - profile antennas producing the narrowest possible beams . extending the size of the radiating aperture leads to an enhanced directivity , but only if the aperture is efficiently excited .to date , uniform illumination of large apertures is achievable with reflectors and lenses ; although these can be made compact using concurrent metamaterial concepts , they still require substantial separation between the source and the aperture , resulting in a large overall antenna size .in addition , feed blockage and spillover effects must also be considered , usually complicating the design and reducing the device efficiency .high aperture efficiencies can also be achieved using antenna arrays ; nevertheless , the requirement for elaborated feed network significantly increases the complexity of this solution and limits its compactness , and may also introduce considerable feed - network losses .leaky - wave antennas ( lwas ) , on the other hand , can produce directive beams using a low - profile structure fed by a simple single source .their typical configuration consists of a guiding structure with a small perturbation , facilitating coupling of guided modes to free - space radiation . in the much - discussed fabry - prot ( fp )lwas , a localized source is sandwiched between a perfect electric conductor ( pec ) and a partially - reflecting surface ( prs ) , forming a longitudinal fp cavity . by tuning the cavity height at the design frequency , favourable coupling of the source to a single parallel - plate waveguide modeis achieved , forming a dominant leaky wave emanating from the source ; the typical device thickness lies around half of a wavelength .the leaky mode is characterized by a transverse wavenumber whose real part corresponds to the waveguide dispersion , and is accompanied by a small imaginary part determined by the prs .this leads to an azimuthally - symmetric directive radiation through the prs towards the direction defined by , where is the free - space wavenumber , with a beamwidth proportional to .broadside radiation is achieved when is small enough such that the splitting condition is satisfied , and the peaks of the conical beam merge .another class of lwas which has received significant attention lately is based on modulated impedance metasurfaces ( mometas ) .these so - called holographic antennas use a point source to excite surface waves on a thin dielectric sheet covered with metallic patches and backed by a pec ground plane , establishing effective surface impedance boundary conditions ; guiding surface waves , these structures can be very thin , below fifth of a wavelength .similar to fp - lwas , the guiding structure is designed such that only a single surface mode is allowed to propagate ; small modulation of the surface impedance , implemented by variation of patch sizes or dielectric thickness , couples the bound modes to radiative modes . to facilitate such coupling in the case of surface waves ,whose transverse momentum is greater than that of free - space , the impedance modulation should have a periodicity comparable to the wavelength . 
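The relation between the leaky-mode wavenumber and the radiated beam sketched above can be reproduced with a few lines of numerics. The Python sketch below evaluates the far-field pattern of a symmetric, exponentially decaying aperture field (a generic 2-D scalar model, not the paper's actual structure); the aperture length and the particular beta/k0 and alpha/k0 values are arbitrary assumptions, chosen to show the beam peaking near arcsin(beta/k0) and the two beams merging at broadside once beta becomes comparable to alpha.

```python
import numpy as np

k0 = 2 * np.pi                              # free-space wavenumber (wavelength = 1)
L = 30.0                                    # assumed aperture length in wavelengths
theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
x = np.linspace(-L / 2, L / 2, 1501)
dx = x[1] - x[0]

def power_pattern(beta, alpha):
    """Far-field power pattern of the aperture field exp((j*beta - alpha)*|x|),
    i.e. a leaky wave launched in both directions from a central source."""
    aperture = np.exp((1j * beta - alpha) * np.abs(x))
    af = (aperture[None, :] * np.exp(-1j * k0 * np.sin(theta)[:, None] * x)).sum(axis=1) * dx
    p = np.abs(af) ** 2
    return p / p.max()

for b, a in [(0.50, 0.02), (0.10, 0.02), (0.01, 0.02)]:     # (beta/k0, alpha/k0)
    p = power_pattern(b * k0, a * k0)
    peak_deg = abs(np.degrees(theta[np.argmax(p)]))
    merged = p[len(theta) // 2] > 0.5       # broadside above half power: beams merged
    print(f"beta/k0={b:4.2f}: peak near {peak_deg:5.1f} deg, "
          f"{'single broadside beam' if merged else 'split conical beam'}")
```

With these numbers the beam peaks close to arcsin(beta/k0) and collapses into a single broadside beam once beta drops to roughly alpha or below, which is the splitting condition referred to above.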
the interaction between the surface wave and the perturbation results in an infinite number of floquet - bloch ( fb ) harmonics ; the periodicity should be designed such that one of them radiates to the desirable direction , while the others become evanescent , ensuring good directivity .the leakage rate , and correspondingly the beamwidth , are determined by the depth of the modulation .both fp - lwas and mometas have an appealing compact configuration and their radiation characteristics can be rather simply controlled by tuning the properties of the guiding structure and the perturbation . nonetheless , due to their leaky - wave nature , they suffer from a fundamental efficiency limitation when considering practical finite apertures : designing a moderate leakage rate with respect to the aperture length yields uniform illumination of the aperture ( high aperture efficiency ) but results in considerable losses from the edges ( low radiation efficiency ) ; on the other hand , large values of lead to high radiation efficiencies but in this case only a portion of the aperture is used for radiation , leading to a wider beam . to mitigate edge - taper losses , shielded fp - lwa structures have been recently proposed , using pec side walls which form a lateral cavity .nevertheless , the tight coupling between the propagation of the leaky mode inside the fp cavity and the angular distribution of the radiated power manifested by poses serious limitations on the achievable aperture efficiency .this is most prominent for antennas radiating at broadside , in which only low - order lateral modes , carrying transverse wavenumbers which are small enough to satisfy the splitting condition , can be used .consequently , such antennas are designed to excite exclusively the lateral mode , which inherently limits the aperture efficiency , defined as the relative directivity with respect to the case of uniform illumination , to about .although this problem can be partially solved by the use of artificial magnetic conductor ( amc ) side walls instead of pecs , one of the most limiting constraints on the antenna design which still remains is the requirement for mode purity . as the dominant spectral components of the field in the cavity directly translate to prominent lobes in the radiation pattern , only a single mode should be allowed to propagate in the cavity to guarantee high directivity . however , as demonstrated by , suppression of parasitic modes in a cavity is usually a very difficult problem .in particular , single mode excitation becomes increasingly challenging as the desirable aperture size increases , due to the small differences between the wavenumbers associated with low - order modes ; thus , in practice , this solution can not be used for arbitrarily - large apertures . from the discussionso far it follows that it would be very beneficial if we could optimize separately the fields inside the cavity and the fields formed on the aperture .this would allow us to achieve good illumination of the aperture without the necessity to meet restricting conditions ( e.g. , the splitting condition , or single mode excitation ) , stemming from the coupling between excitation and radiation fields . 
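This tradeoff can be made concrete with the textbook leaky-wave relations for an exponentially decaying aperture field |E(x)| proportional to exp(-alpha*x) on an aperture of length L: a radiation efficiency of 1 - exp(-2*alpha*L) and a taper (aperture) efficiency measured relative to uniform illumination. The sketch below is a generic illustration under those assumptions; the aperture length and leakage rates are arbitrary and not taken from any specific design in the paper.

```python
import numpy as np

L = 20.0                                   # assumed aperture length in wavelengths
x = np.linspace(0.0, L, 4001)
dx = x[1] - x[0]

def lwa_efficiencies(alpha):
    """Exponentially decaying aperture field |E(x)| ~ exp(-alpha*x), 0 <= x <= L,
    with the power not radiated by x = L assumed lost at the open edge."""
    illum = np.exp(-alpha * x)
    radiation_eff = 1.0 - np.exp(-2.0 * alpha * L)          # fraction radiated before the edge
    aperture_eff = (illum.sum() * dx) ** 2 / (L * (illum ** 2).sum() * dx)   # taper efficiency
    return radiation_eff, aperture_eff

for alpha in (0.01, 0.05, 0.20):           # leakage rate in 1/wavelength
    r, a = lwa_efficiencies(alpha)
    print(f"alpha*L = {alpha * L:4.1f}:  radiation eff = {r:4.2f},  "
          f"aperture eff = {a:4.2f},  product = {r * a:4.2f}")
```

The product of the two efficiencies peaks at an intermediate leakage rate, which is precisely the compromise that the shielded and metasurface-based designs discussed next seek to avoid.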
but how to achieve such a separation ?the equivalence principle suggests that for a given field exciting a surface , desirable ( arbitrary ) aperture fields can be formed by inducing suitable electric and magnetic surface currents , supporting the required field discontinuities .based on this idea , the concept of huygens metasurfaces ( hmss ) has been recently proposed as a means for versatile wavefront manipulation .huygens metasurfaces are planar structures composed of subwavelength elements ( meta - atoms ) , engineered to generate the surface currents required by the equivalence principle to achieve a prescribed functionality . in general , for a given incident field and desirable transmitted field , elements exhibiting effective loss and gain are required for the implementation .however , for certain applications , the fields can be judiciously stipulated such that the metasurface can be constructed from passive and lossless elements , i.e. electric and magnetic polarizable particles . in fact , we have recently shown that if the reflected and transmitted fields are set such that the wave impedance and the real power are continuous across the two facets of the metasurface , the aperture phase can be tailored by a passive and lossless hms to produce directive radiation towards a prescribed angle , for any given excitation source ; the design procedure is straightforward once the source plane - wave spectrum is assessed .indeed , in this paper we propose to harness the equivalence principle to efficiently convert fields excited in a cavity by a localized source to highly - directive radiation using a huygens metasurface : cavity - excited hms antenna .the device structure resembles a typical shielded fp - lwa configuration , with an electric line source surrounded by three pec walls and a huygens metasurface replacing the standard prs ( fig .[ fig : physical_configuration ] ) . for a given aperture length and a desirable transmission angle ,we optimize the fp cavity thickness and source position to predominantly excite the highest - order mode of the lateral cavity , with the hms reflection coefficient ensuring the wave impedance is equalized along the metasurface ; this guarantees the aperture is well illuminated .once the source configuration is established , we stipulate the aperture fields to follow the power profile of the cavity mode , ensuring the real power is conserved at each point , and impose the suitable linear phase to promote radiation towards . with the cavity fields and aperture fields in hand , we invoke the equivalence principle and evaluate the electric surface impedance and magnetic surface admittance required to support the resultant field discontinuity .our previous work guarantees that these would be purely reactive , hence could be implemented using passive and lossless meta - atoms . 
utilizing the equivalence principle as described results in formation of aperture fields , the magnitude of which follows the power distribution inside the cavity , whereas their phase is independently determined to vary in a plane - wave - like fashion .this has two important implications .first , as the power profile of the highest - order lateral mode creates hot spots of radiating surface currents approximately half a wavelength apart , a uniform virtual phased - array is formed on the hms aperture ; based on array theory , such excitation profile is expected to yield very high directivity with no grating lobes regardless of the scan angle .second , in contrast to lwas of any type , the antenna directivity does not deteriorate significantly even if other modes are partially excited , as these would merely vary the amplitude of the virtual array elements , without affecting the phase purity .this semianalytical design procedure can be applied to arbitrarily - large apertures , yielding near - unity aperture efficiencies , in agreement with full - wave simulations ; due to the pec side walls , no power is lost via the edges .this offers an effective way to overcome the efficiency tradeoff inherent to fp - lwas and mometas , while preserving the advantages of a single - feed low - profile antenna .to design the hms - based antenna , we simply apply the general methodology developed in to the source configuration of fig . [fig : physical_configuration ] ; for completeness , we recall briefly its main steps .we consider a 2d scenario ( ) with the hms at and a given excitation geometry at embedded in a homogeneous medium ( , ) . under these circumstances ,the incident , reflected and transmitted fields in the vicinity of the hms can be expressed via their plane - wave spectrum where is the inverse spatial fourier transform of , is the source spectrum , is the hms reflection coefficient , and ] [ fig .[ fig : mode_analysis_prs_hms](c ) ] .the transverse wavenumber corresponding to the lowest - order mode is small enough such that the two symmetric beams merge , which enables the prs aperture to radiate a single beam at broadside .indeed , small - aperture shielded fp - lwas utilize this mode to generate broadside radiation .however , as demonstrated by , the aperture efficiency of this mode is inherently limited to about , due to the non - optimal cosine - shaped aperture illumination of the lowest - order mode , leading to broadening of the main beam [ inset of fig .[ fig : mode_analysis_prs_hms](f ) ] .this highlights a key benefit of using an hms - based antenna , as it is clear from fig .[ fig : mode_analysis_prs_hms](f ) that we can use high - order mode excitations , which provide a more uniform illumination of the aperture , for generating narrow broadside beams with enhanced directivities . in fact , as the index of the mode exciting the hms increases , the autocorrelation of eq . drives the second harmonic peaks outside the visible region of the spectrum [ shaded region in fig .[ fig : mode_analysis_prs_hms](b),(e ) ] , funnelling all the radiated power to the broadside beam , subsequently increasing the overall directivity .this improvement in radiation properties can be explained using ordinary array theory . 
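The array-theory explanation developed in the next paragraph can also be checked with a generic array-factor computation. The sketch below (plain NumPy, with arbitrary element counts and spacings that are not taken from the paper) counts how many beams of a uniform-amplitude line array exceed the -3 dB level, illustrating that grating lobes appear only once the element (hot-spot) separation exceeds one wavelength.

```python
import numpy as np

k0 = 2 * np.pi                                  # free-space wavenumber (wavelength = 1)
theta = np.linspace(-np.pi / 2, np.pi / 2, 2001)

def array_factor(n_elements, spacing, scan_deg=0.0):
    """Normalized array factor of a uniform-amplitude line array, a stand-in
    for the virtual array formed by the hot spots on the HMS aperture."""
    n = np.arange(n_elements)
    phase = k0 * spacing * n * (np.sin(theta)[:, None] - np.sin(np.deg2rad(scan_deg)))
    return np.abs(np.exp(1j * phase).sum(axis=1)) / n_elements

for spacing in (0.55, 0.95, 1.10):              # element separation in wavelengths
    af = array_factor(n_elements=20, spacing=spacing)
    above = af > 10 ** (-3 / 20)                # amplitude level corresponding to -3 dB
    n_beams = int(above[0]) + np.count_nonzero(np.diff(above.astype(int)) == 1)
    print(f"d = {spacing:.2f} wavelengths -> {n_beams} beam(s) above -3 dB")
```

With 20 hypothetical elements, a single beam is obtained for separations of 0.55 and 0.95 wavelengths, while a 1.10-wavelength separation produces two additional grating lobes.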
as seen from fig .[ fig : mode_analysis_prs_hms](d ) , the peaks of the field profile generated by the mode on the hms aperture form hot spots of radiating currents separated by a distance of .the radiation from such an aperture profile would resemble the one of a uniform array with the same element separation .as known from established array theory , to avoid grating lobes the element separation should be smaller than a wavelength . for an aperture length of , where is an integer , the hot spot separation satisfies this condition for mode indices ; specifically , for ( fig . [ fig : mode_analysis_prs_hms ] ) , grating lobes would not be present in the radiation pattern for mode indices . in agreement with this argument , fig .[ fig : mode_analysis_prs_hms](f ) shows that for grating lobes still exist , while for the highest - order mode they indeed vanish .excited by a _single _ mode as a function of the mode index .solid lines denote the respective radiation characteristics of a uniformly excited aperture and dash - dotted lines mark the hpbw ( blue ) and directivity ( red ) of _ multimode _ excitation corresponding to the hms antenna of fig .[ fig : physical_configuration ] with , , and .,width=264 ] these observations are summarized in fig .[ fig : mode_performance ] , where the radiation characteristics of an hms aperture of excited by a single mode are plotted as a function of the mode index ( only fast modes are considered ) . for comparison ,solid lines denoting the half - power beamwidth ( hpbw ) and directivity values achieved for a uniformly - illuminated aperture of the same size are presented as well .indeed , it can be seen that the lowest - order lateral mode exhibits the worst performance by far , and the performance improves as the mode index increases .while the half - power beamwidth saturates quickly , the directivity values continue to increase with until the point in which grating lobes disappear is crossed ; for mode indices the radiation characteristics of the hms aperture are comparable with those of the optimal uniformly - excited aperture . from an array theory point of view ,excitation of the highest - order mode is preferable , as the corresponding equivalent element separation approaches , implying that such aperture profile would be suitable for directing the radiation to large oblique angles without generating grating lobes .another reason to prefer excitation of the highest - order mode in the case of cavity - excited hms antennas is that the hms reflection coefficient grows larger with ; therefore , the power carried by the highest - order mode is best - trapped in the cavity , guaranteeing uniform illumination even in the case of very large apertures .nevertheless , generating a single - mode excitation of a cavity via a localized source can be very problematic .fortunately , the cavity - excited hms antenna can function very well also with multimode excitation , as long as high - order modes dominate the transmission spectrum .this is demonstrated by the dot - dashed lines in figs .[ fig : mode_analysis_prs_hms ] and [ fig : mode_performance ] , corresponding to a multimode excitation generated by the configuration depicted in fig .[ fig : physical_configuration ] with , , and .as expected from the expression for the source spectrum [ eq . 
] , for a given aperture length , the field just below the aperture due to a line source would be a superposition of lateral modes , the weights of which are determined by the particular source configuration , namely the cavity thickness and source position . the multimode transmission spectrum in fig .[ fig : mode_analysis_prs_hms](b ) indicates that for the chosen parameter values , high - order modes ( ) predominantly populate the aperture spectrum , however low - order modes ( ) are present as well , to a non - negligible extent ( this takes into account the fact that the transmission coefficient is higher for lower - order modes ) .considering that the far - field angular power distribution is proportional to , the multimode excitation of the prs aperture results in a radiation pattern resembling the one corresponding to single mode excitation of the highest - order mode ( ) but with significant lobes around broadside [ fig .[ fig : mode_analysis_prs_hms](c ) ] ; consequently , even if a conical beam is desirable , multimode excitation would result in significant deterioration of the directivity .on the other hand , the same multimode excitation does not degrade substantially the performance of the hms antenna . the autocorrelated spectrum , relevant to the field induced on the hms aperture , results in merging of all spectral components into a sharp dc peak , with the high - order grating lobes pushed to the evanescent region of the spectrum [ fig .[ fig : mode_analysis_prs_hms](e ) ] .this retains a beamwidth comparable with that resulting from a single - mode excitation of the highest - order mode , with only slight deterioration of the directivity due to increased side - lobe level [ fig .[ fig : mode_analysis_prs_hms](f ) and inset ] . continuing the analogy to array theory ,such multimode excitation introduces slight variations to the magnitude of the array elements , forming an equivalent non - uniform array . the corresponding multimode hpbw and directivity valuesare marked , respectively , by blue and red dash - dotted lines in fig .[ fig : mode_performance ] , verifying that indeed , cavity - excited hms antennas achieve near - unity aperture efficiencies with a practical multimode excitation ; this points out another key advantage of the cavity - excited hms antenna with respect to shielded fp - lwas . with these observations in hand ,we are finally ready to formulate guidelines for optimizing the cavity excitation for maximal directivity . for a given aperture length , with respect to eq . , we maximize the coupling to the mode ( which exhibits the best directivity ) by tuning the cavity thickness as to minimize the denominator of the corresponding coupling coefficient ; equally important , we minimize the coupling to the mode ( which exhibits the worst directivity ) by tuning the source position as to minimize the numerator of the corresponding coupling coefficient . to achieve these with minimal device thickness we derive the following design rules this is somewhat analogous to the typical design rules for ( unshielded ) fp - lwas , the key difference is that for hms - based antennas we optimize the source configuration _regardless _ of the desirable transmission angle .this difference is directly related to the utilization of the equivalence principle for the design of the proposed device , which provides certain decoupling between its excitation and radiation spectrum [ _ cf . 
_[ fig : mode_analysis_prs_hms](b),(e ) ] .this decoupling becomes very apparent when the hms antenna is designed to radiate towards oblique angles , in which case the same cavity excitation yields optimal directivity as well ( see appendix [ sec : oblique_angle ] ) .two important comments are relevant when considering these design rules .first , even though following eq. maximizes the coupling coefficient of the highest - order mode and minimizes the coupling coefficient of the lowest - order mode , it does not prohibit coupling to other modes .the particular superposition of lateral modes exhibits a tradeoff between beamwidth and side - lobe level ( as for non - uniform arrays ) .thus , final optimization of the cavity illumination profile is achieved by fine - tuning the source position for the cavity thickness derived in eq . , with the aid of the efficient semianalytical formulas . in fact , the source position is another degree of freedom that can be used to optimize the radiation pattern for other desirable performance features , such as minimal side - lobe level ; this feature is further discussed and demonstrated in appendix [ sec : reduced_sll ] .second , although the optimal device thickness increases with increasing aperture length , the increase is sublinear , following an asymptotic square - root proportion factor .therefore , applying the proposed concept to very large apertures would still result in a relatively compact device , while efficiently utilizing the aperture for producing highly - directive pencil beams .we follow the design procedure and the considerations discussed in section [ sec : theory ] to design cavity - excited hms antennas for broadside radiation with different aperture lengths : , , and .the cavity thickness was determined via eq . to be , , and , respectively ; the source position was set to , , and , respectively , exhibiting maximal directivity .( ) .the electric response is controlled by the capacitor width of the electric dipole , while the magnetic response is determined by the magnetic dipole arm length ( see appendix [ sec : spider_cells_modelling ] for a detailed description ) . the required electric surface impedance and magnetic surface admittance modulations are implemented using the `` spider '' unit cells depicted in fig .[ fig : unit_cell ] . at the design frequency ( ) , the unit cell transverse dimensions are and the longitudinal thickness is .each unit cell consists of 3 layers of metal traces defined on two bonded laminates of high - dielectric - constant substrate ( see appendix [ sec : spider_cells_modelling ] ) .the two ( identical ) external layers provide the magnetic response of the unit cell , corresponding to the magnetic surface susceptance , which is tuned by modifying the arm length ( affects magnetic currents induced by tangential magnetic fields ) .analogously , the middle layer is responsible for the electric response of the meta - atom , corresponding to the electric surface reactance , which is tuned by modifying the capacitor width ( affects electric currents induced by tangential electric fields ) .
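as a side note , the hpbw and directivity trends discussed in this section can be reproduced qualitatively with a very small scalar model . the following sketch is not the semianalytical formalism of the paper ; it simply treats the radiated far field as the spatial fourier transform of a 1 - d aperture profile , and every numerical value ( wavelength , aperture length , sampling ) is an illustrative assumption . replacing the uniform profile by a superposition of lateral cosine modes reproduces the single - mode versus multimode comparison of fig . [ fig : mode_performance ] .

    # minimal scalar sketch : hpbw and directivity of a 1 - d aperture profile ,
    # assuming the far field is the spatial fourier transform of the aperture field .
    # all values are illustrative and unrelated to the actual designs reported here .
    import numpy as np

    wavelength = 1.0                      # work in units of the wavelength
    aperture_length = 20.0 * wavelength   # assumed aperture size
    k0 = 2 * np.pi / wavelength

    x = np.linspace(-aperture_length / 2, aperture_length / 2, 2001)
    dx = x[1] - x[0]
    aperture = np.ones_like(x)            # uniform illumination as the reference profile

    theta = np.linspace(-np.pi / 2, np.pi / 2, 4001)
    dtheta = theta[1] - theta[0]
    kx = k0 * np.sin(theta)
    far_field = (aperture[None, :] * np.exp(1j * kx[:, None] * x[None, :])).sum(axis=1) * dx
    power = np.abs(far_field) ** 2
    power /= power.max()

    above = theta[power >= 0.5]           # -3 db region around the main beam
    hpbw_deg = np.degrees(above.max() - above.min())
    directivity_2d = 2 * np.pi / (power.sum() * dtheta)   # line - source ( 2 - d ) directivity estimate

    print(f"hpbw ~ {hpbw_deg:.2f} deg , directivity ~ {10 * np.log10(directivity_2d):.1f} db")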
by controlling the lengths of and , the spider unit cells can be designed to exhibit huygens source behaviour , with balanced electric and magnetic responses ranging from to ( see appendix [ sec : spider_cells_modelling ] ) .figure [ fig : antenna_results ] presents the design specifications , field distributions , and radiation patterns for the three cavity - excited hms antennas ; table [ tab : antenna_performance ] summarizes the antenna performance parameters ( for reference , parameters for uniformly - excited apertures are also included ) .the semianalytical predictions are compared to full - wave simulations conducted with a commercially - available finite - element solver ( ansys hfss ) , where the hms was implemented using the aforementioned spider cells ( see appendix [ sec : antenna_simulations ] ) . as demonstrated by fig .[ fig : antenna_results](a)-(c ) , the realized unit cells are capable of reproducing the required surface impedance modulation , except maybe around large values of ; however , it has been shown that such discrepancies usually have little effect on the performance of huygens metasurfaces . [ table [ tab : antenna_performance ] : antenna performance parameters for the three aperture lengths , with full - wave , semianalytical , and uniform - aperture columns for each . ] ( a ) hms design specifications ( black solid line ) derived from eq . , and the realized electric surface reactance ( blue circles ) and magnetic surface susceptance ( red circles ) using the spider unit cells .( b ) radiation patterns produced by semianalytical formalism ( blue dashed line ) and full - wave simulations ( red solid line ) . ( c ) field distribution produced by full - wave simulations .( d ) semianalytical prediction of . the spider unit cells depicted in fig .[ fig : unit_cell ] were defined in ansys electromagnetic suite 15.0 ( hfss 2014 ) with two 25mil - thick ( ) rogers rt / duroid 6010lm laminates ( green boxes in fig .[ fig : unit_cell ] ) bonded by 2mil - thick ( ) rogers 2929 bondply ( white box in fig .[ fig : unit_cell ] ) .the electromagnetic properties of these products at , e.g. permittivity tensor and dielectric loss tangent , as provided to us by rogers corporation , have been inserted into the model .specifically , a uniaxial permittivity tensor with and loss tangent of were considered for rogers rt / duroid 6010lm laminates , while an isotropic permittivity of and loss tangent were considered for rogers 2929 bondply .the copper traces corresponded to oz .cladding , featuring a thickness of ; the standard value of bulk conductivity was used in the model .
to comply with standard pcb manufacturing processes ,all copper traces were 3mil ( ) wide , and a minimal distance of 3mil was kept between adjacent traces ( within the cell or between adjacent cells ) .this implies that the fixed gaps between the capacitor traces ( along the axis ) of the electric dipole in the middle layer , as well as between the two arms ( along the axis ) of the magnetic dipole in the top and bottom layer ( fig .[ fig : unit_cell ] ) , were fixed to a value of ( ) ; the distance from the arm edge to the edge of the unit cell was fixed to ( ) .unit cells with different values of magnetic dipole arm length and electric dipole capacitor width were simulated using periodic boundary conditions ; hfss floquet ports were placed at and used to characterize the scattering of a normally - incident plane wave off the periodic structure ( the interface between the bondply and the bottom laminate was defined as the plane ) . for each combination of and ,the corresponding magnetic surface susceptance and electric surface reactance were extracted from the simulated impedance matrix of this two - port configuration , following the derivation in .the magnetic response was found to be proportional to the magnetic dipole arm length , with almost no dependency in . thus , to create an adequate lookup table for implementing , we varied by constant increments , and for a given , plotted as a function of .the value of for which the two curves intersected corresponded to a balanced - impedance point ( ) , where the unit cell acts as a huygens source , and thus suitable for implementing our metasurface . a lookup table composed of pairs and the corresponding unit cell geometries was constructed , and refined through interpolation .the interpolated unit cell geometries were eventually simulated again , to verify the interpolation accuracy and finalize the lookup table entries , as presented in fig .[ fig : spider_cell_lookup_tables ] .finally , for a given hms with prescribed surface impedance modulation , a corresponding structure could be defined in hfss using the unit cells found via the lookup table in terms of least - squares - error . and .capacitor width values required for achieving balanced electric and magnetic responses are presented as a function of the magnetic dipole arm length for ( blue open squares ) and ( red open triangles ) radiators , as obtained by finite - element simulations .the corresponding values are denoted using a blue dashed line for ( ) and using a red solid line for ( ) ., width=302 ]to verify our semianalytical design via full - wave simulations , each of the cavity - excited hms antennas designed in this paper was defined in hfss using a single strip of unit cells implementing the metasurface , occupying the region , ( being the aperture length of the antenna ) , and ( in correspondence to the laminate and bondply thicknesses ) .the simulation domain included , , and ( being the cavity thickness ) , where pec boundary conditions were applied to the planes to form the equivalence of a 2d scenario .pec boundary conditions were also applied to the plane , and to two -thick side - walls at , forming the cavity .the line - source excitation was modelled by a -wide current sheet at , with the current aligned with the axis .radiation boundary conditions were applied to the rest of the simulation space boundaries , namely , and , allowing proper numerical evaluation of the fields surrounding the antenna . 
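the lookup - table step described above ( sweeping the arm length , locating the capacitor width at which the electric and magnetic responses balance , and interpolating ) is straightforward to automate . in the sketch below the impedance samples and the balance relation are placeholders standing in for the hfss - extracted data , so only the crossing search and the interpolation logic are meaningful .

    # sketch of the lookup - table construction : for each magnetic - dipole arm length ,
    # sweep the capacitor width , locate the crossing with an assumed balance condition ,
    # and interpolate geometries for any prescribed response .
    import numpy as np

    arm_lengths = np.linspace(1.0, 3.0, 21)        # assumed sweep of the arm length
    cap_widths = np.linspace(0.1, 1.0, 46)         # assumed sweep of the capacitor width
    # placeholders for the simulated electric reactance x_e(arm, width) and magnetic susceptance b_m(arm)
    x_e = np.tile(np.linspace(-3.0, 3.0, 46), (21, 1))
    b_m = np.linspace(-2.0, 2.0, 21)

    def balance_target(bm):
        # placeholder for the balanced ( huygens ) condition relating x_e to b_m ;
        # substitute the actual relation used in the unit - cell characterization .
        return bm

    lookup = []                                    # rows : ( b_m , arm length , width at the crossing )
    for i, lm in enumerate(arm_lengths):
        diff = x_e[i] - balance_target(b_m[i])
        idx = np.where(np.diff(np.sign(diff)) != 0)[0]
        if idx.size == 0:
            continue                               # no balanced point for this arm length
        j = idx[0]
        w_star = cap_widths[j] - diff[j] * (cap_widths[j + 1] - cap_widths[j]) / (diff[j + 1] - diff[j])
        lookup.append((b_m[i], lm, w_star))

    lookup = np.array(lookup)
    # read geometries off the ( refined ) table for any prescribed susceptance profile
    required_bm = np.linspace(lookup[0, 0], lookup[-1, 0], 200)
    arm_of_bm = np.interp(required_bm, lookup[:, 0], lookup[:, 1])
    width_of_bm = np.interp(required_bm, lookup[:, 0], lookup[:, 2])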
to reduce the computational effort required to solve this configuration , we utilized the symmetries of our te scenario .specifically , we placed a perfect - magnetic - conductor ( pmc ) symmetry boundary condition at the plane , and a pec symmetry boundary condition at the plane ( the pmc symmetry boundary condition is only applicable for broadside radiators , _ cf . _ appendix [ sec : oblique_angle ] ) .we also noticed that adding a thin layer ( ) of copper between the electric dipole edges and the pec parallel - plates at enhanced the convergence of the simulation results . with that minor modification , all of the simulated antennas converged within less than 40 iterations ( maximum refinement of per pass ) , where the stop condition was 3 consecutive iterations in which .several assumptions made during the derivation of the hms design formulas contribute to discrepancies between predicted and actual performance of the presented antennas .first , the predicted fields are derived assuming the hms is capable of implementing continuous surface impedance boundary conditions , with unbounded surface impedance values ; nevertheless , the physical implementation requires discretization of the continuous modulation into unit - cells , and the range of achievable surface impedance values is limited ( see fig .[ fig : spider_cell_lookup_tables ] ) .second , the hms is assumed to be passive and lossless ; however , realistic conductors and dielectrics , used for the implementation of the devices in ansys hfss , include unavoidable losses .third , to facilitate the plane - wave - like relation between the transmitted fields on the aperture [ eq . ] , while still guaranteeing they obey maxwell's equations , we have used the approximation of eq .( [ equ : sve_1 ] ) , which is satisfied when , and which is a refinement of the slowly - varying envelope ( sve ) approximation utilized in .this approximation is self - consistent with our design scheme , as when the transmitted fields are directive towards as desirable , the dominant components of the transmission spectrum are in the vicinity of , where the reflection coefficient completely vanishes ( the numerator of eq . vanishes ) .interestingly , the impacts of these three assumptions can be assessed by reviewing the predicted and simulated power flow across the metasurface .the -directed power profiles below ( blue ) and above ( red ) the hms as predicted by the semianalytical formalism ( open circles and squares , respectively ) and as extracted from full - wave simulations ( dashed and solid lines , respectively ) are presented in fig .[ fig : power_profiles ] , for the three antennas reported in section [ sec : results ] .the fact that the general trend and quantitative data of the semianalytical and simulated results compare well ( note that the profiles are plotted using a common unit scale ) indicates that the first assumption is valid ( this is also supported by ) .the semianalytical predictions , made based on homogenized continuous surface impedance boundary conditions , mostly agree with the simulation data recorded below and above the metasurface , where effective medium theory predicts discretization effects to be negligible .violations of the second assumption , regarding the lossless nature of the hms , would manifest themselves as differences between the simulated power profile below and above the metasurface , which must originate in dissipation in the unit - cell conductors and dielectrics .
on the other hand , violations of the third assumption , related to the sve approximation , would manifest themselves as differences in the semianalytically predicted power profile below and above the metasurface , as they correspond to violations of local power conservation .although local deviations from these two assumptions are found to be rather small , they still contribute to a non - negligible reduction of the total power flow across the metasurface ( integrated over the aperture length ) .the values denoted in the legends of fig .[ fig : power_profiles ] indicate that according to full - wave simulations about of the power below the hms is dissipated in the lossy conductors and dielectrics , while the semianalytical predictions reveal about discrepancy between the power below and above the metasurface .while these relative deviations can be considered small albeit non - negligible , it seems that they actually balance each other .the theoretical derivation assumes and prescribes a lossless hms , but the minor violations of the sve approximation contribute to predicted ( maxwellian ) fields which must be supported by small losses . on the other hand ,the implemented hms does include realistically unavoidable losses , which turn out to dissipate a comparable amount of power .we hypothesize that this balance allows overcoming the minor deviations from the theoretical assumptions , facilitating the very good agreement between predicted and simulated results reported herein .
| one of the long - standing problems in antenna engineering is the realization of highly - directive beams using low - profile devices . in this paper we provide a solution to this problem by means of huygens metasurfaces ( hmss ) , based on the equivalence principle . this principle states that a given excitation can be transformed to a desirable aperture field by inducing suitable electric and magnetic surface currents . building on this concept , we propose and demonstrate cavity - excited hms antennas , where the is designed to optimize aperture illumination , while the hms facilitates the current distribution that ensures phase purity of aperture fields . the hms breaks the coupling between the excitation and radiation spectrum typical to standard partially - reflecting surfaces , allowing tailoring of the aperture properties to produce a desirable radiation pattern . |
recent advances in nanofabrication technology increasingly enable the construction of devices operating in the quantum regime .however , to utilize coherence effects for practical applications such as quantum information processing and communication tasks requires the ability to engineer their dynamics with high precision .considerable progress in the area of laser technology and optimal control has shown that precise coherent manipulation of the dynamics is not infeasible for a variety of quantum systems , but such control requires accurate knowledge of the system s dynamical behavior and response to external fields , which can be used to construct accurate models from which effective control designs can be engineered .the problem is particularly acute for manufactured systems , due to inevitable variations in the manufacturing processes , which ensure that the exact behavior of each device is unique and must be individually measured and characterized . for the manufacture of large - scale practical devices ,the design and operation of each device should be as simple as possible meaning that the physical resources available to initialize and measure the state of a system are usually restricted to a single basis set defined by static electrode geometry . in normal operation , any state can be produced from an initial fiducial one by applying a suitable unitary rotation .this also enables us to effectively perform measurements in an arbitrary basis , and given both these abilities , one can perform quantum process tomography .however , the problem of characterizing a device is not trivial since initially , if one does not yet know the control response of the system , one can not generate the unitary rotations required in the first place , leading to a catch-22 situation .what is required is a method of bootstrapping the control and characterization process so that the system dynamics and response can be incrementally assessed until full control and process tomography is possible , and only using the _ in situ _ resources .hence , we have developed techniques based upon the analysis of generalized coherent oscillation data from rabi or ramsey - type experiments .there are several approaches to the analysis of such experimental data including frequency - domain and time - domain analysis . in the regime of a single system transition ,fourier analysis is effective but in the presence of multiple signals , it ceases to an optimal estimator . in previous work ,we have shown how bayesian signal analysis can be effective in determining accurate model parameters in generic two - qubit hamiltonian systems where multiple frequencies are present . 
in this work ,we extend the technique to systems with dephasing and use bayesian signal analysis to reconstruct the underlying dynamics , which are now non - unitary .we apply this technique to three - level ( qutrit ) systems and analyze its performance for a range of dephasing rates and find that as long as coherent dynamics dominate , which would be the case for quantum information purposes , signal parameters can generally be reliably extracted and the system effectively reconstructed .the evolution of a closed quantum system is governed by a time - dependent unitary operator obeying the schrodinger equation .the evolution of an open quantum system can be highly complicated but under certain conditions it can be described by a master equation +l_d\rho(t),\ ] ] where =ab - ba ] , where are operators on and the superoperators $ ] are defined by \rho = v_k \rho v_k^\dag -\frac{1}{2}(v_k^\dag v_k\rho + \rho v_k^\dag v_k).\ ] ] under certain conditions we can make further simplifying assumptions . for example , dissipative effects in open systems weakly coupled to an environment are often dominated by a certain types of decoherence such as pure phase relaxation or population relaxation processes such as the spontaneous emission of photons or phonons .these types of processes can be described by relatively simple master equations . in the case of pure dephasing the dissipation superoperatoris often determined by a single hermitian operator . in this case , it is easy to show that the master equation simplifies - \frac{1}{2 } [ v,[v,\rho(t)]].\ ] ] even with these simplifying assumptions on the open system dynamics we see that full system identification now requires the identification two generally independent hermitian operators and , which in general means the identification of real parameters .fortunately , dephasing often acts in the eigenbasis of the hamiltonian , in which case and commute and are simultaneously diagonalizable , i.e. , there exists a basis such that where and are real , and in this case the identification problem reduces to finding a joint eigenbasis and the corresponding eigenvalues and of and , respectively .this simplifies the problem .if , and is the representation of the state in a joint eigenbasis of and then it is easy to see that the master equation ( [ eq : lme ] ) gives where and , i.e. , we have and if is the unitary basis transformation that maps the measurement basis to the joint eigenbasis of and , then the evolution of the density operator with respect to the measurement basis is given by . thus the evolution is determined by the transition frequencies , dephasing rates and the relation between the system and measurement basis , which are to be determined .as in previous work we assume that we can prepare and measure the system in a fixed set of ( orthonormal ) computational basis states , where is the hilbert space dimension . no other measurements or resources such as non - basis states are assumed to be available initially .the basic protocol is to prepare the system in a computational state , let it evolve for a period of time , then measure the probabilities that the system ends up in one of the computational basis states , repeating it for different times and all computational basis states .the experimental data thus consists of time traces , , with , which represents the probability that the system , initially in state is measured in state after evolving under the system hamiltonian for time . if it was original initialized in . 
at long times , noise dominates the signal which leads to an optimal total sampling time.,scaledwidth=50.0% ] when we include dephasing in the hamiltonian eigenbasis , it can be shown that the observable probabilities are where the coefficients are [ eq : coeff ] here and are the amplitude and phase of the complex number and is the phase difference . if the hamiltonian is known to be real - symmetric in the computational basis , which is the case for many systems including atomic and molecular systems , where the off - diagonal elements of the hamiltonian are usually real transition strengths or dipole moments , and spin systems , the problem can be simplified .the eigenvectors of a real - symmetric matrix are real , thus the phases must be multiples of so that , and since the sine of a multiple of vanishes , we have . in many cases the signs of the off - diagonal matrix elements are also known , e.g. for a spin chain in an anti - ferromagnetic material , the off - diagonal elements are positive , as the case for many atomic or molecular systems .we then have with and , which further simplifies the reconstruction .this shows that the identification problem for dephasing that acts in the system s natural basis is similar to the hamiltonian identification problem except that we also have to determine the dephasing rates . from the measurement results obtained from timetraces like in fig .[ fig : datasignals ] , we must extract signal frequencies and damping rates as well as the amplitudes , and in order to be able to perform reconstruction of the system dynamics . we can do this again by bayesian estimation , maximizing the likelihood that a particular process generated the observed signal . for conveniencewe label the transition frequencies of the system , assuming , and the corresponding dephasing rates , and define the vectors , , and where range from to and from to the number of transition frequencies . according to eq .[ eq : probs ] , the traces should be linear combinations of the basis functions [ eq : case1 ] or in the case where is real - symmetric , the basis functions [ eq : case2 ] and our objective is to find parameters , , , and that maximize the likelihood of the measured data .\ ] ] we can eliminate the explicit dependence on the linear coefficients , , and the noise variances by integration over suitable priors to obtain an explicit expression for the probability of a particular model given the observed data that depends only on the transition frequencies and corresponding dephasing rates . following standard bayesian analysis obtain ^{(m_b - n)/2},\ ] ] where the averages are defined by the components are essentially the orthogonal projections of the data onto a set of orthonormal basis vectors derived from the ( non - orthogonal ) basis functions defined above , evaluated at the respective sample times , via where is a matrix whose columns are the normalized eigenvectors of the matrix with thus , the parameter estimation problem for a system with decoherence acting in the hamiltonian basis is similar to that for a hamiltonian system , except that the sine and cosine basis functions for the bayesian analysis must be modified to damped sinusoids with unknown damping rates . 
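to make the expressions above concrete , the sketch below builds the damped sine / cosine basis for one candidate set of frequencies and damping rates , orthonormalizes it through the eigen - decomposition of its gram matrix , projects a synthetic single trace onto it , and evaluates the resulting marginal log - likelihood ( up to an additive constant ) . the synthetic trace and noise level are illustrative assumptions ; in the actual procedure every measured trace contributes its own projections .

    # bayesian evaluation of one candidate ( omega , gamma ) : damped sinusoid basis ,
    # orthonormalization via the gram - matrix eigenvectors , projections of the data ,
    # and the marginal log - likelihood built from the mean - square projections .
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 20.0, 400)                  # sample times ( arbitrary units )
    true_w, true_g = [1.3, 2.1], [0.05, 0.08]        # assumed frequencies and damping rates
    data = 0.5 + sum(0.2 * np.exp(-g * t) * np.cos(w * t) for w, g in zip(true_w, true_g))
    data = data + 0.01 * rng.normal(size=t.size)     # stand - in for projection noise

    def log_likelihood(omegas, gammas, d, t):
        cols = [np.ones_like(t)]                     # constant term plus damped cos / sin pairs
        for w, g in zip(omegas, gammas):
            cols += [np.exp(-g * t) * np.cos(w * t), np.exp(-g * t) * np.sin(w * t)]
        g_mat = np.array(cols)                       # ( m_b , n ) matrix of sampled basis functions
        evals, evecs = np.linalg.eigh(g_mat @ g_mat.T)
        ortho = (evecs.T @ g_mat) / np.sqrt(evals[:, None])   # orthonormal basis vectors
        h = ortho @ d                                # projections of the data
        m_b, n = g_mat.shape
        h2bar, d2bar = np.mean(h ** 2), np.mean(d ** 2)
        return 0.5 * (m_b - n) * np.log1p(-m_b * h2bar / (n * d2bar))

    print(log_likelihood(true_w, true_g, data, t))
    print(log_likelihood([1.0, 2.5], [0.1, 0.1], data, t))   # a poorer guess scores lower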
the objective is to find the frequencies and damping rates that maximize , or equivalently , the log - likelihood function . given a solution and that maximizes this log - likelihood , it can be shown that the corresponding optimal coefficients in the general case ( [ eq : case1 ] ) are where is shorthand notation for the expectation values of the linear coefficients of the basis functions , given the optimal frequencies and damping rates and the data . similarly in the special case ( [ eq : case2 ] ) since the log - likelihood function is sharply peaked with generally many local extrema , finding the global optimum using gradient - type optimization algorithms starting with a completely random guess for and is inefficient .a global optimization method such as pattern search or evolutionary algorithms might circumvent this problem , but neither proved very effective in our case , especially for higher - dimensional search spaces .alternatively , starting with a somewhat reasonable initial guess , especially for the frequencies , a standard quasi - newton optimization method with cubic line search proved generally very effective in finding the global maximum . to obtain an initial estimate for the frequencies we used the sum of the power spectra of the signals . although the peaks in the power spectrum are not optimal frequency estimators when there are multiple frequencies and the exact peak locations can be difficult to ascertain even for systems with only three frequencies , as fig .[ fig : spectrum ] shows , rough estimates of the peak locations usually seem to provide a reasonable initial guess for the gradient - based likelihood optimization routine . in principle the damping rates could be estimated from the peak widths as well , but these estimates can be tricky , especially for overlapping and minor peaks ; hence we chose multiple runs with random initial guesses for the damping rates and selected the run with the highest final likelihood ( `` global '' maximum ) . given the extracted signal parameters we have to solve two further inverse problems : ( i ) reconstructing the level structure from the frequencies and ( ii ) constructing the matrix that relates the hamiltonian basis to the computational basis .the former usually involves analyzing the relationships between the frequencies as illustrated in . in general this can be tricky , but for a qutrit system this analysis is essentially trivial .the basis reconstruction requires solving further optimization problems to find the coefficients such that eqs .( [ eq : coeff ] ) are satisfied given the estimates for the parameters , , and derived in the previous step .due to finite sampling and noise , the inversion may not be exact , hence we recast it as a constrained optimization problem and solve it as described in .our previous analysis also shows that we can only identify a single generic hamiltonian up to equivalence where is a diagonal unitary matrix , in the basis of the measurement . however , if the off - diagonal elements in the hamiltonian are known to be real and positive , for instance , then the hamiltonian will be uniquely determined up to a global energy level shift , at least in the generic case , since we have for . for a quantum control situation ,the system dynamics can be controlled and hence different hamiltonians can be applied , and in that case subsequent hamiltonians can be fully determined up to the gauge fixed by the initial hamiltonian .
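returning to the search strategy just described , a minimal implementation takes the frequency guesses from the peaks of the summed power spectrum and refines the frequencies and damping rates with a quasi - newton routine , repeating the refinement from several random damping - rate guesses . the sketch below reuses the synthetic trace and the log_likelihood helper of the previous sketch , and scipy routines stand in for the actual optimizer used .

    # two - stage search : fft peak detection for initial frequencies , then bfgs
    # refinement of ( omega , gamma ) with random restarts for the damping rates .
    import numpy as np
    from scipy.optimize import minimize
    from scipy.signal import find_peaks

    dt = t[1] - t[0]
    spectrum = np.abs(np.fft.rfft(data - data.mean())) ** 2
    freqs = 2 * np.pi * np.fft.rfftfreq(t.size, d=dt)        # angular frequency grid
    peaks, _ = find_peaks(spectrum)
    w0 = np.sort(freqs[peaks[np.argsort(spectrum[peaks])[-2:]]])   # two strongest peaks

    rng = np.random.default_rng(2)
    best = None
    for _ in range(10):                                       # random restarts for the damping rates
        g0 = rng.uniform(0.01, 0.3, size=2)
        res = minimize(lambda p: -log_likelihood(p[:2], p[2:], data, t),
                       np.concatenate([w0, g0]), method="BFGS")
        if best is None or res.fun < best.fun:
            best = res
    print("estimated frequencies :", np.sort(best.x[:2]))
    print("estimated damping rates :", best.x[2:])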
by varying control parameters and tracking the change in the system dynamics , a dynamical control model can be built of the system .we randomly generated 100 real - symmetric qutrit hamiltonians and dephasing operators with different spectral properties and the geometric average of the system -factors ranging from 12 to 72 .from these we generated various data traces corresponding to the stroboscopic sampling described in section [ sect : exp ] .we considered three cases , the zero noise case ( samples per point ) , fixed finite sampling with experimental repetitions per time point , and an adaptive sampling strategy which varies the number of samples per point to reach an estimated target signal to noise ratio of for all and with an upper limit of for each data point .we then applied our parameter estimation and reconstruction algorithms to the resulting data traces .a range of dephasing rates was studied to see the effect on the reconstruction of the hamiltonian part of the dynamics . for the purposes of control ,accurate determination of the hamiltonian is much more important than a precise determination of the dephasing rate , usually it suffices to know that they are below certain limits ..median likelihoods ( ) and error rates ( ) for qutrit systems . for the 100 qutrit systems we compared the case with and without dephasing ( superscript h ) for different samples ( ) per data point . with no sampling noise , there was a small change in the median errors . for the case ,the median errors increase due to the sampling noise , the addition of dephasing increases the final error by an order of magnitude to the region .a simple adaptive scheme does similarly .the hamiltonian is reconstructed using several runs of the optimization routine , and the solution with the minimum basis error is chosen .[ cols="<,^,^,^,^,^,>",options="header " , ] table [ tab : median ] shows the median errors for various cases . comparing the dephasing / no dephasing cases , the errors are similar in the absence of projection noise ) .the frequency estimation is slightly more accurate but estimation of the signal amplitudes is slightly less accurate since the basis functions depend on , hence errors in both and contribute to errors in the coefficients . for reduced signal to noise, dephasing decreases the maximum likelihood and increases frequency , basis and reconstructed hamiltonian errors with a marked increase in median of the amplitude errors .adaptive sampling overall increases the accuracy of the parameter estimation step and the reconstructed hamiltonian for both hamiltonian and dephasing systems but the improvement is more pronounced for dephasing systems. this may be due to adaptive sampling being more beneficial for small signal amplitudes i.e. , decaying signals .this suggests the use of adaptive sampling to increase the signal to noise ratio for samples at increasing times .alternatively , the sample data can be weighted to give precedence to earlier samples .further exploration of these methods will be the subject of future study .dephasing leads to a reduction in signal at long times which can lead us to fitting noise . 
for strong dephasing ,this leads to reduced accuracy in the estimation of the frequencies , and hence increased errors in the other parameters .the spread in the fourier peaks can also lead to problems for closely spaced frequencies .this in itself is not a problem _ per se _ for the bayesian parameter estimation step , except that it can lead to inaccurate initial search parameters coming from peak detection in the power spectrum .this can be obviated somewhat by trying different initial parameters assuming that either of the two remaining peaks were doublets and using the most likely result . for systems of interest for quantum information processing, the dephasing rates should be sufficiently low so that the damping of the rabi - type oscillations do not impact the scheme greatly .for very small dephasing rates , however , it can be a problem if the algorithm overestimates the dephasing rates which means that the basis functions used are not suitable , and this is reflected in errors of the estimated amplitudes . for such systems , it is a simple enough matter to test models which are purely hamiltonian to see which gives the larger likelihood. for various samplings shows a strong correlation and suggests that the total error in the hamiltonian is dominated by errors in the basis reconstruction step that comes about from the separate optimization of each basis function from the amplitude estimation , which may not lead to orthogonal data vectors.,scaledwidth=50.0% ] one factor which limits the reconstruction is that we may obtain a set of matrices , which ideally should be projectors onto orthogonal eigenspaces , but may not always form an orthogonal set of projectors .we can quantify this basis error by fig .[ fig : serror ] shows that there is a strong correlation between and the ( relative ) error in the final reconstructed hamiltonian .thus , we can use to choose the best reconstructed hamiltonian from multiple optimization runs and as a rough indication of the likely accuracy of the reconstructed hamiltonian .the data also suggests that there is little direct correlation between the likelihood and errors in the parameter estimation step and the final hamiltonian error , suggesting that the final error in the hamiltonian is dominated by errors in the basis reconstruction step .the reconstruction step obviously depends on the parameter estimates obtained in the first step , and poor estimates for the parameters will generally result in large hamiltonian errors , but in some cases the basis reconstruction produces poor results even when the individual errors in the estimated parameters are small .it should be possible to improve the reconstruction step by solving the optimization problems for the simultaneously rather than independently and enforcing orthonormality constraints for the basis vectors , but doing so would require solving a rather more complicated optimization problem with several nontrivial constraints .other researchers have also begun to address the problem of system characterization with limited resources .for example , leghtas et al . 
also consider estimating parameters of three - level quantum systems using weak continuous population measurements .however , in their case it is assumed that most of the system is already known including the transition frequencies and the precise structure of the hamiltonian , and there is no intrinsic decoherence .they consider extracting only two real parameters of the system , the dipole transition strengths between levels 1 - 2 and 2 - 3 , which simplifies the problem enormously .burgarth et al . also consider hamiltonian characterization with restricted resources for heisenberg spin chains where only a small subset of spins are individually addressable .the form and structure of the hamiltonian is known _ a priori _ to be of a particular class , and only the coupling strengths and anisotropy of the system hamiltonian are to be determined . the sign of the couplings is also known beforehand .characterization is achieved in this case by preparing different initial states of the first spin , letting the system evolve and then performing quantum state tomography on the accessible spins .if we consider a system of three spins , the first excitation subspace acts as a qutrit .our protocol could be applied to this problem with some modifications .our scheme does not require state tomography , only the determination of position of the up - spin , and there is no requirement to know the network topology .it would be interesting to explore bayesian analysis of the response of such systems for hamiltonian characterization , and especially the role of topology in identifiability , and whether it is possible to relax the requirement for addressability of all spins . in summary, we have shown that our current two - step procedure of bayesian parameter estimation followed by a reconstruction via optimization works in the presence of dephasing on three - level systems . however , we find that the reconstruction step is a weak point of our current implementation. it may be possible to eliminate the parameter estimation step and directly apply bayesian maximum likelihood estimation upon the dynamical system parameters .this would have the advantage of always giving admissible solutions at all steps .another direction which should be explored is adaptive sampling , not only varying experimental repetitions per data point , but also using non - uniform time - domain sampling for better frequency discrimination . | we consider how to characterize the dynamics of a quantum system from a restricted set of initial states and measurements using bayesian analysis . previous work has shown that hamiltonian systems can be well estimated from analysis of noisy data . here we show how to generalize this approach to systems with moderate dephasing in the eigenbasis of the hamiltonian . we illustrate the process for a range of three - level quantum systems . the results suggest that the bayesian estimation of the frequencies and dephasing rates is generally highly accurate and the main source of errors are errors in the reconstructed hamiltonian basis . |
most of modern equity exchanges are organized as _ order driven _ markets . in such type of markets , the price formation exclusively results from operating a _ limit order book _ ( lob ) , an order crossing mechanism where _ limit orders _ are accumulated while waiting to be matched with incoming _ market orders_. any market participant is able to interact with the lob by posting either market orders or limit orders is an order to buy ( sell ) units of the asset being traded at the lowest ( highest ) available price in the market , its execution is immediate ; a limit order of size at price is an order to buy ( sell ) units of the asset being traded at the specified price , its execution is uncertain and achieved only when it meets a counterpart market order . given a security , the _ best bid _ ( resp . _ask _ ) price is the highest ( resp .lowest ) price among limit orders to buy ( resp . to sell ) that are active in the lob .the _ spread _ is the difference , expressed in numraire per share , of the best ask price and the best bid price , positive during the continuous trading session ( see ) . ] . in this context , _ market making _ is a class of strategies that consists in simultaneously posting limit orders to buy and sell during the continuous trading session . by doing so , market makers provide counterpart to any incoming market orders : suppose that an investor wants to sell one share of a given security at time and that an investor wants to buy one share of this security at time ; if both use market orders , the economic role of the market maker is to buy the stock as the counterpart of at time , and carry until date when she will sell the stock as a counterpart of .the revenue that obtains for providing this service to final investors is the difference between the two quoted prices at ask ( limit order to sell ) and bid ( limit order to buy ) , also called the market maker s spread .this role was traditionally fulfilled by specialist firms , but , due to widespread adoption of electronic trading systems , any market participant is now able to compete for providing liquidity . moreover , as pointed out by empirical studies ( e.g. , ) and in a recent review from amf , the french regulator , this renewed competition among liquidity providers causes reduced effective market spreads , and therefore reduced indirect costs for final investors .empirical studies ( e.g. ) also described stylized features of market making strategies . first , market making is typically not directional , in the sense that it does not profit from security price going up or down .second , market makers keep almost no overnight position , and are unwilling to hold any risky asset at the end of the trading day . finally , they manage to maintain their _ inventory _ , i.e. their position on the risky asset close to zero during the trading day , and often equilibrate their position on several distinct marketplaces , thanks to the use of high - frequency order sending algorithms .estimations of total annual profit for this class of strategy over all u.s .equity market were around g ] , s.t . , independent of , and representing the random spread in tick time . the spread process in calendar timeis then defined as the time - change of by , i.e. 
[ dyns ] s_t & = & s_n_t , t 0 .hence , is a continuous time ( inhomogeneous ) markov chain with intensity matrix , where for , and .we assume that and are independent .the best - bid and best - ask prices are defined by : , .we consider an agent ( market maker ) , who trades the stock using either limit orders or market orders .she may submit limit buy ( resp .sell ) orders specifying the quantity and the price she is willing to pay ( resp .receive ) per share , but will be executed only when an incoming sell ( resp .buy ) market order is matching her limit order .otherwise , she can post market buy ( resp .sell ) orders for an immediate execution , but , in this case obtain the opposite best quote , i.e. trades at the best - ask ( resp . best bid ) price , which is less favorable . _ limit orders strategies ._ the agent may submit at any time limit buy / sell orders at the current best bid / ask prices ( and then has to wait an incoming counterpart market order matching her limit ) , but also control her own bid and ask price quotes by placing buy ( resp .sell ) orders at a marginal higher ( resp .lower ) price than the current best bid ( resp .ask ) , i.e. at ( resp . ) .such an alternative choice is used in practice by a market maker to capture market orders flow of undecided traders at the best quotes , hence to get priority in the order execution w.r.t .limit order at current best / ask quotes , and can be taken into account in our modelling with discrete spread of tick size .there is then a tradeoff between a larger performance for a quote at the current best bid ( resp .ask ) price , and a smaller performance for a quote at a higher bid price , but with faster execution .the submission and cancellation of limit orders are for free , as they provide liquidity to the market , and are thus stimulated . actually , market makers receive some fixed rebate once their limit orders are executed .the agent is assumed to be small in the sense that she does not influence the bid - ask spread .the limit order strategies are then modelled by a continuous time predictable control process : _ t^make & = & ( q_t^b , q_t^a , l_t^b , l_t^a ) , t 0 , where valued in ^ 2 ] , , and giving the number of stocks purchased at the best - ask price if , or selled at the best - bid price if at these times .again , we assumed that the agent is small so that her total market order will be executed immediately at the best bid or best ask price . in other words , we only consider a linear market impact , which does not depend on the order size . when posting a market order strategy , the cash holdings and the inventory jump at times by : y__n & = & y__n^- + _ n , [ sauty ] + x__n & = & x__n^- - c(_n , p__n , s__n ) [ sautx ] where c(e , p , s ) & = & ep + |e| + represents the ( algebraic ) cost function indicating the amount to be paid immediately when passing a market order of size , given the mid price , a spread , and a fixed fee .we shall denote by for , . one can also include proportional fees paid at each market order trading by considering a cost function in the form : , or fixed fees per share with . in most order - driven markets ,available data are made up of _ level 1 data _ that contain transaction prices and quantities at best quotes , and of _ level 2 data _ containing the volume updates for the liquidity offered at the first order book slices ( usually ranges from 5 to 10 ) . 
in this section ,we propose some direct methods for estimating the intensity of the spread markov chain , and of the execution point processes , based only on the observation of _ level 1 data_. this has the advantage of low computational cost , since we do not have to deal with the whole volume of _ level 2 data_. yet , we mention some recent work on parameters estimation from the whole order book data , but involving heavier computations based on integral transforms . * _ estimation of spread parameters ._ * assuming that the spread is observable , let us define the jump times of the spread process : _ 0 = 0 , _n+1 & = & t > _ n : s_t s_t- , n 1 . from these observable quantities , one can reconstruct the processes : n_t & = & # _ j > 0 : _ j t , t 0 , + _ n & = & s__n , n 0 . then , our goal is to estimate the deterministic intensity of the poisson process , and the transition matrix of the markov chain from a path realization with high frequency data of the tick - time clock and spread in tick time over a finite trading time horizon , typically of one day .from the observations of samples of , , and since the markov chain is stationary , we have a consistent estimator( when goes to infinity ) for the transition probability ] given by : [ estimrho ] _ ij & = & for the estimation of the deterministic intensity function of the ( non)homogeneous poisson process , we shall assume in a first approximation a simple natural parametric form .for example , we may assume that is constant over a trading day , and more realistically for taking into account intra - day seasonality effects , we consider that the tick time clock intensity jumps e.g. every hour of a trading day .we then assume that is in the form : ( t ) & = & _ k 1_t_kt < t_k+1 where is a fixed and known increasing finite sequence of with , and is an unknown finite sequence of . in other words ,the intensity is constant equal to over each period _$1.pdf[http://www.amf-france.org/documents/general/9530_1.pdf ] . | we propose a framework for studying optimal market making policies in a limit order book ( lob ) . the bid - ask spread of the lob is modelled by a markov chain with finite values , multiple of the tick size , and subordinated by the poisson process of the tick - time clock . we consider a small agent who continuously submits limit buy / sell orders at best bid / ask quotes , and may also set limit orders at best bid ( resp . ask ) plus ( resp . minus ) a tick for getting the execution order priority , which is a crucial issue in high frequency trading . by trading with limit orders , the agent faces an execution risk since her orders are executed only when they meet counterpart market orders , which are modelled by cox processes with intensities depending on the spread and on her limit prices . by holding non - zero positions on the risky asset , the agent is also subject to the inventory risk related to price volatility . then the agent can also choose to trade with market orders , and therefore get immediate execution , but at a least favorable price because she has to cross the bid - ask spread . the objective of the market maker is to maximize her expected utility from revenue over a short term horizon by a tradeoff between limit and market orders , while controlling her inventory position . this is formulated as a mixed regime switching regular / impulse control problem that we characterize in terms of quasi - variational system by dynamic programming methods . 
in the case of a mean - variance criterion with martingale reference price or when the asset price follows a levy process and with exponential utility criterion , the dynamic programming system can be reduced to a system of simple equations involving only the inventory and spread variables . calibration procedures are derived for estimating the transition matrix and intensity parameters for the spread and for cox processes modelling the execution of limit orders . several computational tests are performed both on simulated and real data , and illustrate the impact and profit when considering execution priority in limit orders and market orders . * keywords : * market making , limit order book , inventory risk , point process , stochastic control . |
the harmonic oscillator is the simplest approximation to a physical oscillator and , when perturbation terms are taken into account , the resulting _ anharmonic oscillator _ is governed by the nonlinear differential equation where denotes the derivative with respect to the independent time or space variable , a damping factor , a time - dependent frequency coefficient , the simplest possible anharmonic term , a forcing term . as to the anharmonicity exponent , it can be either real if is real positive , which is the case for lane - emden gas dynamics equilibria , rational , like in the thomas and fermi atomic model , or more usually integer : for the ermakov or pinney equation , for the duffing oscillator . for generic values of the coefficients ,this equation is equivalent to a third order autonomous dynamical system , which generically admits no closed form general solution .the purpose of this article is to review all the nongeneric situations for which there exist exact analytic results , such as a first integral or a closed form solution , either particular or general .this can only happen when the coefficients satisfy some constraints .the paper is organized as follows . in section [ sectionlh ], we give a lagrangian and a hamiltonian formulation for any value of the coefficients .this generalizes all the previous particular results , obtained for values of equal to : , ( * ? ? ?* eq . ( 3.7 ) ) , , , , , ( * ? ? ?* section 6.74 , vol . 1 ) .in section [ sectionparticularfirstintegral ] , we provide two conditions on which are sufficient to ensure the existence of a first integral . in section [ sectioninterpretation ] , we give a natural interpretation of these two conditions .finally , in section [ sectionpa ] , we perform the painlev analysis of ( [ eq1 ] ) . most of this work has already been done by painlev and gambier . indeed , the ordinary differential equation ( ode ) ( [ eq1 ] ) belongs , at least for specific values of and maybe after a change of the dependent variable in case is not an integer , to the class of second order odes which they studied and classified .however , as opposed to these classical authors , we do not request the full painlev integrability of the ode , only some partial integrability , and this requires some additional work .in particular , we compute the condition for the absence of any infinite movable branching , a multivaluedness which occurs at a location depending on the initial conditions . such a condition , like for linear odes ,arises from any integer value of the difference of the two fuchs indices , whether positive or negative , and we check that this condition is a differential consequence of the two conditions for the existence of a particular first integral .this detailed painlev analysis of equation ( [ eq1 ] ) happens to be an excellent example for several features of painlev analysis which are most of the time overlooked . 
for convenience, we use the notation and the convention that function implicitly contains an arbitrary multiplicative constant ; letter , with or without subscript , denotes an arbitrary constant .function frequently occurs , for the way to suppress term in ( [ eq1 ] ) is to perform the change of function .for every value of , including the logarithmic case , the anharmonic oscillator can be put in lagrangian form or in hamiltonian form as shown by the explicit expressions for + \frac{1}{2 } \left ( h u^2 \right ) ' , \label{eq5 } \\ & & h(q , p , x ) = g_1 \left[u'^2 + 2 g_3 \int_0^u u^n \d u + g_2 u^2 + 2 g_4 u \right ] - \frac{1}{2 } h ' u^2 , \label{eq6 } \\ & & q = u,\ p=2 g_1 u ' + h u , \label{eq7}\end{aligned}\ ] ] in which is an arbitrary gauge function of .according to noether theorem , one can find first integrals by looking at the infinitesimal symmetries of the lagrangian . for a detailed review of thislie symmetries approach to the anharmonic oscillator , the interested reader can refer to .since the dependence of ode ( [ eq1 ] ) in is rather simple , let us determine under which conditions on parameters there exists a particular first integral containing the same kind of terms than the hamiltonian in which the six functions of are to be determined .eliminating between and , we obtain out of the nine monomials , only eight are linearly independent since , thus generating eight linear homogeneous differential equations in six unknowns , hence generically two conditions on .note that , even in the logarithmic case , the first generated equation is .functions to are given by and function must be a nonzero solution common to the three linear equations f_1 + ( n+3 ) g_3 f_1 ' = 0 , \label{eq11a } \\ & & { \hskip -1.0truecm } ( -2 g_1 g_2 + 2 g_1 g_1 ' -g''_1 + g_2 ' ) f_1 + ( g_1 ^ 2 + 2 g_2 - \frac{5}{2 } g_1 ' ) f_1 ' - \frac{3}{2 } g_1 f_1 '' + \frac{1}{2 } f_1 ' '' = 0 , \label{eq11b } \\ & & { \hskip -1.0truecm } ( -2 g_1 g_4 + 2 g'_4 ) f_1 + 3 g_4 f_1 ' = 0 .\label{eq11c}\end{aligned}\ ] ] each above equation can be integrated once , \d x , \label{eq12b } \\ & & k_3=f_1 ^ 3 g_1^{-2 } g_4 ^ 2. \label{eq12c}\end{aligned}\ ] ] whatever be , the function can always be computed from ( [ eq11b ] ) ; depending on , it is also given by '=0 \label{eq13c}\end{aligned}\ ] ] where the constants and have been absorbed in the definition of .the only case in which equation ( [ eq13c ] ) needs to be considered is , and its solution can be found in ( * ? ? ?* eq . e12 ) .once is determined , and are given by ( [ eq10a ] ) , ( [ eq10d ] ) , and by the three following expressions , corresponding to cases ( [ eq13a ] ) , ( [ eq13b ] ) , ( [ eq13c ] ) respectively , parameters must satisfy the conditions , polynomial in , resulting from the elimination of between the three linear equations ( [ eq11a])-([eq11c ] ) .there are two such conditions when is nonzero , and only one when it is zero .the simplest choice of these two conditions is ( the labelling refers to the contributing s ) : uniquely defined as , respectively , the condition independent of and the one independent of . 
by elimination ,one obtains the condition independent of and the one independent of , = 0 , \label{eq15c } \\g_4 \not= 0 : c_{234 } \equiv & & { \hskip -0.4 truecm }n^3 g_2 ' + n^2 ( -g_2 \gamma_3'- \gamma_3'''+\frac{3}{2 } \gamma_3 '' \gamma_4 ' + \frac{1}{2 } \gamma_3 ' \gamma_4 '' -2 \gamma_4 ' \gamma_4 '' + \gamma_4 ' '' ) \nonumber \\ & & { \hskip -0.4 truecm } -\frac{3}{2 } \gamma_3'^2 \gamma_4 ' + \frac{1}{2 } \gamma_3'^3 -n^2(n-1 ) g_2 \gamma_4 ' \nonumber \\ & & { \hskip -0.4 truecm } -\frac{1}{2 } ( n^2 - 3 ) \gamma_3 ' \gamma_4'^2 + \frac{1}{2 } ( n^2 - 1 ) \gamma_4'^3 = 0 . \label{eq15d}\end{aligned}\ ] ] for , any two of the above four conditions are functionally independent . for ,one has and independent conditions are and .all above conditions admit an integrating factor , a natural consequence of the integrated forms ( [ eq12a])([eq12c ] ) .this is evident for ; for each of the three others , it is sufficient to integrate it as a first order linear inhomogeneous ode in , g_1^{-8/3 } g_4^{-4/3 } , \label{eq16b } \\k_{123 } \equiv & & { \hskip -0.5truecm } [ ( n+3)^2 g_2 - ( n+3 ) ( 2 g_1 ' + \gamma''_3 ) -2(n+1)g_1 ^ 2 \nonumber \\ & & { \hskip -0.5truecm } -(n-1 ) g_1 \gamma'_3 + \gamma'^2_3]^{n+3 } g_1^{2(n-1 ) } g_3^{-4 } , \label{eq16c } \\g_4 \not= 0 : k_{234 } \equiv & & { \hskip -0.5truecm } \left [ g_2 + \frac { ( n+2 ) \gamma_3 ' \gamma_4 ' -\gamma_3'^2-(n+1 ) \gamma_4'^2 + 2 n ( \gamma_3 '' - \gamma_4'')}{2 n^2 } \right ] \times \nonumber \\ & & { \hskip -0.5truecm } g_3^{-4/3 } g_4^{4/n}. \label{eq16d}\end{aligned}\ ] ] in the duffing case , condition has already been given , together with its integrated form .a very simple interpretation can be given for the two conditions .indeed , the form of equation ( [ eq1 ] ) is invariant under the simultaneous change of dependent and independent variables where and are two arbitrary gauge functions .the transformed ode reads u ' + \frac{1}{\xi'^2 } \left[g_2+g_1\frac{\alpha'}{\alpha}+\frac{\alpha''}{\alpha } \right ] u \nonumber \\ & & + \frac{\alpha^{n-1}}{\xi'^2 } g_3 u^n + \frac{1}{\alpha \xi'^2}g_4 = 0 . \label{eq18}\end{aligned}\ ] ] let us adjust the two functions so as to make two of the four new coefficients as simple as possible .one of the three possible ways is to cancel the damping term by the choice , which reduces ode ( [ eq18 ] ) to u + \alpha^{n+3 } g_1 ^ 2 g_3 u^n + \alpha^3 g_1 ^ 2 g_4 = 0 . \label{eq19}\end{aligned}\ ] ] canceling the new coefficient amounts to solving the general linear second order ode for , which is possible ( from the point of view of painlev , adopted here ) but does not lead to an explicit value of .this reduced form is then and this means that one can freely set in ( [ eq1 ] ) without altering its global properties ( existence of first integrals , painlev property , etc ) . 
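the content of these conditions is easiest to appreciate in the most degenerate sub - case . the sympy sketch below only checks that , for constant coefficients and no damping or forcing , the familiar energy - type combination is conserved along solutions of ( [ eq1 ] ) ; it is a special instance of the first integral discussed above , not the general time - dependent construction .

    # check of the constant - coefficient , undamped , unforced sub - case :
    # for u'' + g2*u + g3*u**n = 0 the energy - type combination below is a first integral .
    import sympy as sp

    x = sp.symbols('x')
    g2, g3, n = sp.symbols('g2 g3 n', positive=True)
    u = sp.Function('u')(x)

    u_xx = -g2 * u - g3 * u**n                     # u'' expressed from the equation of motion
    first_integral = sp.Rational(1, 2) * sp.diff(u, x)**2 \
        + sp.Rational(1, 2) * g2 * u**2 + g3 * u**(n + 1) / (n + 1)

    d_dx = sp.diff(first_integral, x).subs(sp.diff(u, x, 2), u_xx)
    print(sp.simplify(d_dx))                       # prints 0 : the combination is conserved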
instead of that , one can make constant either the reduced coefficient iff by choosing , or the reduced coefficient iff by the choice ( let us recall that implicitly contains an arbitrary multiplicative constant ) .we are thus led to the reduced forms \times \nonumber \\ & & { \hskip -0.4truecm } \phantom{g_2 \mapsto } g_1^{-8/3 } g_4^{-4/3},\ \nonumber \\ & & { \hskip -0.4truecm } g_3 \mapsto g_3 g_1^{-2n/3 } g_4^{-(n+3)/3 } , \nonumber \\ n\not=-3 : & & { \hskip -0.4truecm } g_1 \mapsto 0,\ g_3 \mapsto 1,\ \label{eq20b } \\ & & { \hskip -0.4truecm } g_2 \mapsto \big [ g_2 -\frac{1}{n+3 } ( 2 g_1 ' + \gamma_3 '' ) + \frac{1}{(n+3)^2 } ( -2(n+1)g_1 ^ 2 \nonumber \\ & & { \hskip -0.4truecm } \phantom{g_2 \mapsto } -(n-1 ) g_1 \gamma_3 ' + \gamma_3'^2 ) \big ] g_1^{2(n-1)/(n+3 ) } g_3^{-4/(n+3)},\ \nonumber \\ & & { \hskip -0.4truecm } g_4 \mapsto g_4 g_1^{2n/(n+3 ) } g_3^{-3/(n+3 ) } , \nonumber \\ n=-3 ,g_4=0 : & & { \hskip -0.4truecm } g_1 \mapsto 0,\ g_3 \mapsto g_3 g_1 ^ 2,\ \nonumber \\ & & { \hskip -0.4truecm } g_2 \mapsto \left[g_2+g_1\frac{\alpha'}{\alpha}+\frac{\alpha''}{\alpha } \right ] \alpha^4 g_1^{2 } \mapsto 0 .\label{eq20c}\end{aligned}\ ] ] then the interpretation is obvious : any reduced coefficient distinct from or is the of one of the integrated conditions ( [ eq16a])([eq16d ] ) .conversely , any integrated condition is one of the remaining coefficients when two coefficients have been made constant by a choice of gauge .for instance , is the reduced coefficient associated to reduced coefficients and equal to unity .this can also be seen in a more elementary way . in a gauge that , an expression for the first integral is and , from the relation one deduces that the two other coefficients and or must be constant .the hamiltonian ( [ eq6 ] ) is a first integral if and only if and all other s are constant .painlev set up the problem of finding nonlinear differential equations able to define functions , just like the first order elliptic equation defines the elliptic function of weierstrass , a doubly periodic function which includes as particular cases the well known trigonometric and hyperbolic functions . for a tutorial introduction , see the books .a by - product of this quest for new functions has been the construction of exhaustive lists of nonlinear differential equations , the general solution of which can be made singlevalued ( in more technical terms , without movable critical singularities , this is the so - called _ painlev property _ ( pp ) ) , which implies that their general solution is known in closed form .in particular , the list of second order first degree algebraic equations , i.e. with rational in , algebraic in , analytic in , which possess the pp has been established by painlev and gambier .these classical results apply to our problem only for those values of for which eq .( [ eq1 ] ) , maybe after a monomial change of the dependent variable , belongs to the class ( [ eqgambierclass ] ) . these values , which include at least all the integers , are determined below .then , the way those classical results can be applied is twofold . 1 .require the pp for our equation or its transform under .2 . 
restricting to the values of for which the first integral ( [ eq8 ] ) exists , check that the two conditions for the existence of this first integral imply the identical satisfaction of the necessary condition that eq .( [ eqgambierclass ] ) have no movable logarithmic branch points .indeed , this is a classical result of poincar that the movable singularities ( those which depend on the initial conditions ) of first order algebraic odes can only be algebraic , , and never logarithmic , with some term .let us do that without too many technical considerations .the above mentioned necessary condition that eq .( [ eqgambierclass ] ) have no movable logarithmic branch points can only be computed after performing the following steps ( for the unabribged procedure , see ) ._ step 1_. for each _ family of movable singularities _ determine the _ leading behaviour _ .this is achieved by balancing the highest derivative with a nonlinear term. therefore , there exist two leading behaviours , denoted `` family '' ( balancing of and ) and `` family '' ( balancing of and ) ^{\displaystyle{\frac{1}{n-1}}},\ n\not=-1,\ \\( g_4 ) \ : \ & & p=2,\ u_0=-\frac{1}{2 } g_4,\ g_4 \not=0.\end{aligned}\ ] ] _ step 2_. for each family , compute the fuchs indices , the roots of the indicial equation of the linear equation obtained by linearizing ( [ eq1 ] ) near its leading behaviour , and require every fuchs index to be integer .this linearized equation is and the fuchs indices are obtained by requiring the solution , the diophantine condition that be integer has a countable number of solutions since we have not yet put restrictions on . _step 3_. for each family , compute all the necessary conditions for the absence of movable logarithms ( in short , no - log conditions ) , which might occur when one computes the successive coefficients of ( [ eqlaurentu ] ) .one can check that the family can never generate such no - log conditions .these conditions need not be computed on the original equation ( [ eq1 ] ) , they can be computed on any algebraic transform if this proves more convenient ( indeed , movable logarithms are not affected by an algebraic transform on ) , such as the transformed powers are , and the fuchs indices are unchanged .the computation of the no - log conditions is impossible unless there exists a making all the powers of in ( [ eq125 ] ) at least rational . in order to avoid the technical complications of dealing with rational values of the leading exponent , we restrict to those values of for which there exists a making and , if is nonzero , integer .the useful transforms are which are polynomial if and only if the original ode ( [ eq1 ] ) is identical to ( [ eq126a ] ) for and to ( [ eq126b ] ) for . to summarize , let us compute the no - log condition on the ode for ( [ eq126a ] ) .unfortunately , one does not know how to obtain the dependence of on , since must first be given a numerical value before is computed ; this makes uneasy the comparison with conditions ( [ eq15a])([eq15b ] ) , which depend on .to fix the ideas , a list of useful values of is displayed in table [ table1 ] .= 1.5truemm = 0.5truemm = 0.8truemm .values of for integer $ ] . [ cols="^,^,^,^,^,^,^,^,^,^,^,^,^,^,^,^",options="header " , ] [ table1 ]the computation of for positive values of is classical . 
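Step 2 above can be reproduced symbolically. For the (g3) family, whose leading behaviour balances u'' against the g3 u^n term, the indicial equation and the Fuchs indices follow from a few lines of sympy. The closed form of the second index is derived here under that dominant-balance assumption; it is not transcribed from the paper's (garbled) expressions.

```python
# Sketch of Step 2 for the (g3) family: dominant balance u'' ~ -g3 u^n with
# leading behaviour u ~ u0*chi^p, chi = x - x0, and additive perturbation
# w ~ chi^(p+j); solve the indicial equation (p+j)(p+j-1) = n p(p-1) for j.
import sympy as sp

n, j, p = sp.symbols('n j p')

p_lead = -2/(n - 1)                              # from p - 2 = n*p
indicial = sp.expand((p + j)*(p + j - 1) - n*p*(p - 1)).subs(p, p_lead)

indices = sp.solve(sp.together(indicial), j)
print(indices)        # -1 and 2*(n+1)/(n-1) (possibly written as (2*n+2)/(n-1))

# Duffing case n = 3: indices -1 and 4, both integers as required
print([sp.simplify(ix.subs(n, 3)) for ix in indices])
```

The requirement that the second index be an integer is the Diophantine condition on n mentioned in the text.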
denoting for shortness ,one finds the following expressions for the indicated values of , ^{-1/2 } [ ( \gamma_3 ' - g_1 ) c_1 - c_1 ' ] + \frac{1}{6 } c_2 , \label{eq26.4 } \\ ( \frac{7}{3},0 ) : q_5= & & { \hskip -0.4 truecm } ( .g_1 ^ 2 + .g_1 \gamma'_3 + .\gamma_3'^2 ) c_1 + ( .\gamma_3 ' ) c'_1 + .c''_1 , \label{eq26.5 } \\ ( 2 , g_4 ) : q_6= & & { \hskip -0.4 truecm } .c_1 ^ 2 + .c'_2 , \label{eq26.6 } \\( \frac{9}{5},0 ) : q_7= & & { \hskip -0.4 truecm } .c_1 + \dots + .c_1^{(4 ) } , \label{eq26.7 } \\ ( \frac{5}{3},0 ) : q_8= & & { \hskip -0.4 truecm } .c_1 + \dots + .c_1^{(5)}. \label{eq26.8}\end{aligned}\ ] ] where dots stand for rational numbers when and polynomials of , , , when .similar relations have been checked for and ( thomas - fermi case ) but are not reproduced here .condition contains a sign arising from the two possible choices for and is equivalent to the two conditions we therefore check the property that each is indeed a differential consequence of the two conditions for the existence of a first integral ( [ eq8 ] ) for negative values of the fuchs index , the results are the following : the family never generates any no - log condition , and , for the family , a no - log condition arises from the fuchs index , and this condition is a differential consequence of conditions ( [ eq15a])([eq15b ] ) , at least for the examples handled .this is also an experimental verification of and this relation can not be reversed , as proven by painlev and gambier .for instance , in the case of the duffing oscillator , condition implies the reducibility of to the second painlev transcendent whereas the stronger conditions imply the reducibility of to an elliptic function . _remark_. when one includes the contribution of the schwarzian in the definition of the gradient of the expansion variable , as done in the invariant painlev analysis , all the computed no - log conditions , equations ( [ eq26.1])([eq26.8 ] ) , are independent of this schwarzian , as opposed e.g. to the lorenz model .this certainly indicates some hierarchy between the level of nonintegrability of these two dynamical systems . _remark_. for some small values of , there is equivalence between the no - log condition and ( [ eq15a])([eq15b ] ) .this nongeneric situation occurs only for the following values of , , i.e. the ermakov - pinney equation , , i.e. an equation considered by lane and emden , chandrasekhar and logan , , an equation which could deserve more study .this work generalizes all previous results on the partial integrability of the anharmonic oscillator .it gives a natural interpretation of the two conditions for the existence of a particular first integral , in terms of reduced coefficients .finally , this system is an excellent example to study several features of painlev analysis . a good, recent bibliography can be found in ref . .the author wishes to thank m. musette for fruitful discussions during the completion of this work and for her encouragement to publish these lecture notes , a first draft of which was delivered at a meeting in dijon .r. conte , unification of pde and ode versions of painlev analysis into a single invariant version , _ painlev transcendents , their asymptotics and physical applications _, 125144 , eds .d. levi and p. winternitz ( plenum , new york , 1992 ) .r. conte , the painlev approach to nonlinear ordinary differential equations , _ the painlev property , one century later _, 77180 , ed .r. 
conte , crm series in mathematical physics ( springer , new york , 1999 ) .solv - int/9710020 .r. conte and m. musette , a simple method to obtain first integrals of dynamical systems , _ solitons and chaos _ ( research reports in physics nonlinear dynamics ) 125128 , eds .i. a. antoniou and f. j. lambert ( springer , berlin , 1991 ) .e. l. ince , _ ordinary differential equations _ ( longmans , green , and co. , london and new york , 1926 ) . reprinted ( dover , new york , 1956 ) .see errata in cosgrove 1993 , preprint .russian translation ( gtiu , kharkov , 1939 ) .e. kamke , _ differentialgleichungen : lsungsmethoden und lsungen _ ,vol . 1 , 243 pages ; vol . 2 , 668 pages .akademische verlagsgesellschaft , geest & portig k .-g . , leipzig 1947 . reprinted ( chelsea , new york , 1948 ) . | we consider the anharmonic oscillator with an arbitrary - degree anharmonicity , a damping term and a forcing term , all coefficients being time - dependent : its physical applications range from the atomic thomas - fermi model to emden gas dynamics equilibria , the duffing oscillator and numerous dynamical systems . the present work is an overview which includes and generalizes all previously known results of partial integrability of this oscillator . we give the most general two conditions on the coefficients under which a first integral of a particular type exists . a natural interpretation is given for the two conditions . we compare these two conditions with those provided by the painlev analysis . [ firstpage ] |
one dimensional single chirp signal , defined as for , is frequently used in different field of sciences , for example , sonar , radar , communications systems , as well as in oceanography and geology .one may see abatzoglou ( 1986 ) , kumaresan and verma ( 1987 ) , djuric and kay(1990 ) , gini et al .( 2000 ) , lin and djuric ( 2000 ) , lahiri et al .( 2012 , 2014 ) and the references cited therein for details .recently various types of parameter estimation techniques and their various properties have been studied for the signal ( [ eq1:model 1 ] ) , for example see kumaresan and verma ( 1987 ) , djuric and kay ( 1990 ) , gini et al .( 2000 ) , nandi and kundu ( 2004 ) , kundu and nandi(2008 ) , lahiri et al .( 2014 ) , saha and kay ( 2002 ) and references cited therein .kumerasan and verma ( 1987 ) used rank reduction technique for estimating parameters of the model .djuric and kay ( 1990 ) proposed a linear regression technique after phase unwrapping .gini et al .( 2000 ) used maximum likelihood ( ml ) technique as one of their estimation technique .saha and kay ( 2002 ) used ml technique on superimposed chirp signals .they have used mcmc importance sampling for find maximum likelihood estimates .lin et al .( 2004 ) has found the maximum likelihood estimates of the parameters of chirp signal using simulated annealing technique .it is seen that most of the methods concentrated on ml technique in recent past .recently some other techniques have drawn attention to the statistics community .for example , nandi and kundu ( 2004 ) first provided the asymptotic properties of least square estimates ( lse ) of the parameters involved in one dimensional chirp signal with i. i. d. error structure .kundu and nandi ( 2008 ) extended those result in case of linear stationary errors with known auto covariance function .lahiri et al .( 2014 ) has used the least absolute deviation ( lad ) technique to find the estimates of the parameters involved in the model .they also gave the asymptotic properties of lad estimates under i. i. d. error structure .although , similar to kundu and nandi ( 2008 ) , lahiri et al .( 2014 ) assumed that the error variance is known .therefore , it is seen that considerable amount of classical estimation techniques have been used for estimating the parameters of the chirp signal and their theoretical properties have studied in different circumstances for a while .some bayesian analysis of the chirp signal are found in the literature also .lin and djuric ( 2000 ) has done estimation of parameters of multiple of chirp signal using mcmc technique .however , they have only taken i. i. d. error structure into account . moreover , it is important to mention that none of the methods , proposed so far , has taken the prediction issue into consideration .here we have analysed the one dimensional single chirp signal for forecasting in bayesian paradigm . to be precise , our main aim , in this paper ,is to predict a future observation through the bayesian analysis of one dimensional single chirp signal .the advantage of using the bayesian analysis for purpose of prediction is that it gives not only a single value or an interval , but also a complete density , which is known as posterior predictive density .it is also well known that the posterior predictive density is used for checking whether the model and the prior give a reasonable clarification of the uncertainty in a study ( see box e. p. george and tiao c. george ( 1973 ) and bickel j. peter and doksum a. 
kjella ( 2007 ) ) .to achieve posterior predictive density we have used mcmc technique suitably and in the path of getting posterior predictive density , posterior densities of the parameters involved in the model have been found as by product . using these posterior densities one may perform the bayesian inference of the parameters involved in the model , when required .the first part of the work mainly focuses on the i. i. d. error structure where we have simulated four different samples from the model ( [ eq1:model 1 ] ) and have illustrated the mcmc based bayesian analysis of the model .moreover , this mcmc based bayesian method is applied on three different real data sets , obtained from http://archive.ics.uci.edu/ml , to see how our method is performing in practice , and in particular one of these three data set is used for multiple step forecasting . in second part , we deal with the dependent error structure though mcmc based methodologies with the same goal of forecasting .kundu and nandi ( 2008 ) has dealt with the model ( 1 ) assuming stationary error structure in great detail using classical inference , focusing estimation of the model parameters . however , in their numerical studies they have assumed that the auto covariance function ( acf ) is completely known . in our discussionit is assumed that the covariance structure of the error is exponentially decaying but unknown . in discrete time, it is known that exponentially decaying acf corresponds to stationary auto regressive process of order one ( ar(1 ) ) and kundu and nandi ( 2008 ) has presented the ar(1 ) example in their paper in numerical studies . with the same choice of the parameter values ,as done in kundu and nandi ( 2008 ) , a simulation study is done in our paper for purpose of illustration .the remaining part of the paper is designed as follows . in section 2we describe the parameter spaces and give a overview of mcmc based bayesian methodology . in subsection ( 2.1.1 )we provide the required details for gibbs sampling , used in getting sample from joint posterior of the parameters . in the next subsection ( 2.1.2 )prior specifications for the parameters are made and the full conditional density functions , which are required for gibbs sampling , are evaluated for the cases where the closed form of the conditional densities are available . in all other cases random walk mcmc is proposed ( see gamerman and lopes ( 2006 ) and liu , s. jun ( 2008 ) ) to update the parameters . in section 3we give the results of simulation studies based on our method , and in section 4 we show the performance of our method when applied to real data .section 5 deals with the dependent error structure where we assume an exponentially decaying covariance function with respect to time .finally we give conclusion and future work in section 6 .one dimensional single chirp signal ( defined as equation ( [ eq1:model 1 ] ) ) , assuming be random with = 0 , and var( ) = for all , has 5 parameters , namely , , , , and .following lahiri et .al ( 2012 ) we assume the following conditions on the parameters , , and : 1 . + , for some known real number .2 . , . for purpose of ease in computation , we further reparametrize the above structure as follows .we take = and = , with and ] . * , . 
* .it needs to be noted that lahiri et .al ( 2012 , 2014 ) assumed is known , unlike us .although nandi and kundu ( 2004 ) provided an estimate of in their theoretical study , but for numerical studies they took to be known .for purpose of bayesian analysis , we assume that the parameters are random and each having a prior distribution .our main goal is to get the posterior predictive distribution of given the data in bayesian paradigm .we assume that \sim n(0,\sigma^2_{\mbox{\scriptsize }}) ] , the conditional distribution of given the data = .using augmentation technique y \epsilon \epsilon y \epsilon ] , ] , the corresponding samples drawn from ] , gibbs sampler method is used . in the next subsectionwe give a brief description how we apply gibbs sampler technique in the present situation .we denote the prior densities of , and as ] , ] and \epsilon y \epsilon ] , are needed and given by \propto [ r][{\mbox{\boldmath{ } } } |r , \theta , \alpha , \beta , \sigma_{\mbox{\scriptsize }}^2],\end{aligned}\ ] ] \propto [ \theta][{\mbox{\boldmath{ } } } |r , \theta , \alpha , \beta , \sigma_{\mbox{\scriptsize }}^2],\end{aligned}\ ] ] \propto [ \alpha][{\mbox{\boldmath{ } } } |r , \theta , \alpha , \beta , \sigma_{\mbox{\scriptsize }}^2],\end{aligned}\ ] ] \propto [ \beta][{\mbox{\boldmath{ } } } |r , \theta , \alpha , \beta , \sigma_{\mbox{\scriptsize }}^2],\end{aligned}\ ] ] \propto [ \sigma_{\mbox{\scriptsize }}^2][{\mbox{\boldmath{ } } } |r , \theta , \alpha , \beta , \sigma_{\mbox{\scriptsize }}^2].\end{aligned}\ ] ] we assume the following prior distributions on the parameters & \sim \mbox{uniform}(0,m)\\[1ex ] [ \theta ] & \sim \mbox{uniform}(0,2\pi)\\[1ex ] [ \alpha ] & \sim \mbox{vonmises}(\alpha_0,\alpha_1)\\[1ex ] [ \beta ] & \sim \mbox{vonmises}(\beta_0,\beta_1)\\[1ex ] [ \sigma^2_{\mbox{\scriptsize } } ] & \sim \mbox{inverse gamma}(\sigma_0,\sigma_1)\end{aligned}\ ] ] the closed form of the full conditional densities of , , can not be obtained in closed form .so , we have used random walk mcmc to update these parameters .however , the conditional density of given all the others , i.e. , \epsilon ] follows inverse gamma distribution with the parameters and .in this section we have done four simulation studies to illustrate our method .we have given the true values of the parameters of simulated samples taken for our experiment in the table [ table1:simu details ] .in each of four samples we keep the last observation for purpose of prediction .so , we have basically 100 observations for first three samples and 19 observation for last sample .we have applied the random walk mcmc algorithm for updating parameters , , and . for all practical purposes the true values of not known so , we decide to take a sufficiently large value of to be in safe side .we choose to be equal to .to run mcmc simulations it is needed to choose the prior parameters appropriately . for choosing mean directions in the prior distributions of , special technique is used .loglikelihood function is maximized using simulated annealing technique with respect to the parameters and ( for details see robert , p. and casella , g. ( 2004 ) and liu , s. ( 2008 ) ) , separately , and these values are used as initial values for mcmc iterations as well , for and , respectively . 
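A minimal sketch of the sampler just described, assuming the reparametrised chirp model y_t = A cos(αt + βt²) + B sin(αt + βt²) + ε_t with A = r cos θ, B = r sin θ and i.i.d. N(0, σ²) errors: σ² is drawn from its inverse-gamma full conditional, while r, θ, α and β are updated with Gaussian random-walk Metropolis steps. All function names, proposal scales, parameter supports and hyperparameters are illustrative placeholders; in particular the flat priors used inside log_target stand in for the uniform/von Mises priors listed above, which would contribute extra log-prior terms.

```python
import numpy as np

def mean_signal(r, theta, alpha, beta, t):
    phase = alpha * t + beta * t**2
    return r * np.cos(theta) * np.cos(phase) + r * np.sin(theta) * np.sin(phase)

def log_target(y, t, r, theta, alpha, beta, sig2, M=10.0):
    """Log prior x likelihood up to a constant (flat priors over assumed supports;
    the von Mises priors of the paper would add kappa*cos(angle - mu) terms)."""
    if not (0 < r < M and 0 <= theta < 2*np.pi and
            0 <= alpha < np.pi and 0 <= beta < np.pi):   # supports assumed here
        return -np.inf
    resid = y - mean_signal(r, theta, alpha, beta, t)
    return -0.5 * np.sum(resid**2) / sig2

def sampler(y, n_iter=20000, step=0.05, a0=2.0, b0=1.0, rng=None):
    rng = rng or np.random.default_rng(0)
    t = np.arange(1, len(y) + 1)
    r, theta, alpha, beta, sig2 = 1.0, 1.0, 1.0, 0.1, np.var(y)  # crude starts
    draws = np.empty((n_iter, 5))
    for it in range(n_iter):
        # random-walk Metropolis updates for r, theta, alpha, beta
        for k in range(4):
            prop = [r, theta, alpha, beta]
            prop[k] += step * rng.standard_normal()
            lp_new = log_target(y, t, *prop, sig2)
            lp_old = log_target(y, t, r, theta, alpha, beta, sig2)
            if np.log(rng.uniform()) < lp_new - lp_old:
                r, theta, alpha, beta = prop
        # conjugate inverse-gamma draw for sigma^2
        resid = y - mean_signal(r, theta, alpha, beta, t)
        shape = a0 + 0.5 * len(y)
        scale = b0 + 0.5 * np.sum(resid**2)
        sig2 = scale / rng.gamma(shape)        # inverse-gamma via 1/gamma
        draws[it] = (r, theta, alpha, beta, sig2)
    return draws
```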
below we discuss about the choice of prior parameters for , and in details ..description of parameters for simulated samples [ cols="^,^,^,^,^,^,^ " , ] with the above choice of hyper parameters , mcmc iteration have been done with burning period .we use the normal random walk proposal with variance to update , , and , for all these four simulated samples . the choice of this variance is set based on a pilot run of mcmc iteration .we mention here that once the sample observations are obtained from and , we transform the sample values to that of = and = .details about the results of mcmc iteration for each of the sample are discussed here .posterior densities along with the true values are provided in the figures ( [ fig : post of a , b , alpha , beta , sigma for sample1 ] ) , ( [ fig : post of a , b , alpha , beta , sigma for sample2 ] ) , ( [ fig : post of a , b , alpha , beta , for sample3 ] ) and ( [ fig : post of a , b , alpha , beta , for sample4 ] ) , for sample 1 , 2 , 3 and 4 , respectively . except for in the figure ( [ fig : post of a , b , alpha , beta , for sample4 ] ) , all other true values are well within the high probability region .we have taken only 20 observations for sample 4 .so , it is not unusual to notice such an incident , specially when the posteriors are not unimodal .it can also be noted that as soon as the number of observations are increased to , the problem of is solved ( figure ( [ fig : post of a , b , alpha , beta , for sample3 ] ) ) .true signals along with 95% credible intervals , obtained based on mcmc simulations , are provided in the figure ( [ fig : fit for sample1,2,3,4 ] ) .it is seen that in all the cases the true signal falls well within the 95% credible intervals . finally , posterior predictive densities for observations of samples 1 , 2 , 3 and observation of sample 4 , are given in the figure ( [ fig : post predictive for sample1,2,3,4 ] ) .it is seen that true future values are well within the 95% credible interval in each of the cases , which is our main aim for this paper .three real data sets have been taken from http://archive.ics.uci.edu/ml of which two are of type sonar rocks and one is of type sonar mines .each signal contains 60 observations .bache , k. and lichman , m. ( 2013 ) mainly used the data for classification purpose .they got sonar signals from two different substances one is mine , other is rocks . herewe use these data sets for showing performance of our method for purpose of one step and multiple step forecasting .first we consider two different signals , the one from sonar mine and one of the two signals from sonar rock and keep the last observations for purpose of prediction .therefore , we use observations for our analysis for the two above mentioned signals .we analyse another sonar rock signal in a different mode , in the sense that we keep last 5 observations for purpose of multiple step forecasting .that means that for this data set we only use observations for analysis . for first two signals ( the sonar mine and one of the sonar rock , for which observations are considered for analysis ) , we give the 95% credible interval based on sample observations obtained from mcmc simulations for purpose of fitting and the posterior predictive densities for purpose of prediction . 
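The posterior predictive densities used throughout can be approximated directly from the MCMC output: for each retained draw of (r, θ, α, β, σ²), simulate y_{T+1} from the model and form a histogram or kernel density of the simulated values. A sketch for the i.i.d. case, re-using the hypothetical sampler and mean_signal helpers from the previous sketch.

```python
def posterior_predictive(y, burn=5000, rng=None):
    """One-step-ahead predictive draws for y_{T+1} under i.i.d. normal errors."""
    rng = rng or np.random.default_rng(1)
    draws = sampler(y, rng=rng)[burn:]
    t_next = len(y) + 1
    y_pred = np.array([
        mean_signal(r, th, al, be, t_next) + np.sqrt(s2) * rng.standard_normal()
        for r, th, al, be, s2 in draws
    ])
    return y_pred

# density estimate of y_pred approximates the posterior predictive density;
# a 95% credible interval is e.g. np.percentile(y_pred, [2.5, 97.5])
```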
for the last sonar rock signal five posterior predictive densitiesare given to show how more than one true future values being captured by 95% credible intervals .we follow the same path for choosing the prior parameter values for as we have done in the case of simulated samples . here in particularwe choose the value of ( prior mean ) to be , and for the three data sets respectively . has set to be as earlier , so that the variance becomes half of the square of the mean , for all the data sets considered here .accordingly we find the values of for each cases .the above choice of means have been done after running a pilot mcmc iterations . for as well as for ,the scale parameters for vonmises distributions have been chosen to be for each of the data sets .the mean directions of vonmises for have been set to be and for the sonar mine signal and the first sonar rock signal , respectively .similarly , for , we choose the mean directions of vonmises distributions to be and for the sonar mine signal and the first sonar rock signal , respectively .these values are obtained based on a small iteration ( iterations ) of simulated annealing technique , separately , on and for each of the data sets .finally , for the second sonar rock signal ( in which case observations are considered for analysis ) , the choice of the mean directions for and are taken to be and , obtained as a result of small number of iterations ( iterations ) of simulated annealing . for ,the value of needs to be given however , the true value of is not known here so , we choose a large value of , , for all these real data sets . with these choices of the prior parameters we run mcmc iterations with burning period , and the following results are noted .figure ( [ fig : sonar_data_mines_1_2_rocks_3_fit ] ) provides the 95% fit for the sonar mine signal , and the first sonar rock signal based on 59 observations .there are 60 observations for each of these signals .we have taken 59 observations for purpose of fitting and have kept 60th observation for prediction purpose .figure ( [ fig : sonar_data_1_2_posterior_predictive_rocks_3_predictive ] ) gives the posterior predictive densities of 60th observations for the sonar mine signal and the first sonar rock signal .we have noted from figure ( [ fig : sonar_data_mines_1_2_rocks_3_fit ] ) that 95% credible intervals mostly contain the true signals in both the two cases .95% credible completely contains the true sonar rock signal .however , three true observations ( 5th , 29th and 54th ) fall outside the 95% credible interval for the sonar mine signal ( first graph of figure ( [ fig : sonar_data_mines_1_2_rocks_3_fit ] ) ) . at the same timeit is noticed that the pattern of the signal has been best captured for the sonar mine signal . 
on the other hand , from figure ( [ fig : sonar_data_1_2_posterior_predictive_rocks_3_predictive ] ) it is observed that the true values of 60th observation fall well within the credible intervals for each of the two signals , the sonar mine signal and the first sonar rock signal .the second sonar rock signal , consisting of 60 observations , is analysed as follows .we keep first 55 observations as the known data and last 5 observations for purpose of multiple step prediction , as discussed earlier .now , in figures ( [ fig : sonar rocks predictions_1st three ] ) and ( [ fig : sonar rocks predictions_last two ] ) the five posterior predictive densities are given for last five observations , respectively .it is interesting to observe that true values of 56th , 57th , 58th , 59th and 60th observations fall well within the 95% credible region .it is notable to see that even with only observations we can predict next observations in a reasonable way .in this section we assume that = , given and has a multivariate normal distribution with mean and covariance matrix where = is the correlation matrix of order , with the following structure \exp{(-\rho |i - j| ) } & \mbox { otherwise } , \end{cases } \label{eq30:correlation of epsilon}\ ] ] with . under the above assumptions ] can be written as = \int \int \int \int \int \int [ y_{t+1}|{\mbox{\boldmath{ } } } , r,\theta,\alpha,\beta,\sigma_{\mbox{\scriptsize }},\rho ] [ r,\theta,\alpha,\beta,\sigma_{\mbox{\scriptsize }},\rho|{\mbox{\boldmath{ } } } ] \ , dr\ , d\theta\ , d\alpha\ , d\beta\ , d\sigma_{\mbox{\scriptsize }}\ , d\rho,\ ] ] as done in equation ( [ eq3:posterior predictive ] ) for independent error structure .it has to be noted that now the number of parameter increases to from ( the number of parameters present in the i. i. d. case ) .as mentioned earlier in section ( [ 2.1:mcmc mth ] ) , it is not possible to get an analytical form of the above integration .the same simulation technique , as done in ( [ 2.1:mcmc mth ] ) is implemented here .it is easily seen that ] , the corresponding samples drawn from ] as well for multiple step forecasting , with a little generalization of augmentation technique , adding each simulated to the previous set of data to get , denoted by , for = .then the above mcmc technique can be used . to be precise , at each augmentation stagea single mcmc sample is required to draw from ] , denoted as ] will follow a normal distribution with mean and variance where is the mean at time , is the expectation of , is the correlation matrix of , and is a vector of order containing the covariances between and .we give details of simulations in one step forecasting here which can be easily generalized for multiple step forecasting . to get the sample from ] can be written as ] ] ] ] .the choice of the prior distributions for and is taken to be the same as done in section ( [ 2.1.2:priors ] ) . for , we assume that \sim \mbox{gamma}(\rho_0,\rho_1)\ ] ] and obtain the full conditional density of as \propto [ \rho ] [ { \mbox{\boldmath{ } } } |\ldots].\ ] ] the forms of the full conditional distributions of , , , and remain the same as equations ( [ eq7:posterior of r ] ) , ( [ eq8:posterior of theta ] ) , ( [ eq9:posterior of alpha ] ) , ( [ eq10:posterior of beta ] ) , ( [ eq11:posterior of sigma ] ) , respectively . in the current scenarioalso , the closed form of the full conditional densities are available only for and . 
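For the dependent-error model, given a parameter draw the one-step predictive distribution is the usual Gaussian conditional: mean μ_{T+1} + c'R⁻¹(y − μ) and variance σ²(1 − c'R⁻¹c), with R_{ij} = exp(−ρ|i − j|) and c the vector of correlations between ε_{T+1} and (ε_1, …, ε_T). A sketch of these moments is given below; mean_signal is the hypothetical helper from the earlier sketch and the names are illustrative.

```python
import numpy as np

def exp_corr(T, rho):
    """Exponentially decaying correlation matrix R_ij = exp(-rho*|i-j|)."""
    idx = np.arange(1, T + 1)
    return np.exp(-rho * np.abs(idx[:, None] - idx[None, :]))

def predictive_moments(y, r, theta, alpha, beta, sig2, rho):
    """Mean and variance of y_{T+1} | y, parameters, for error covariance
    sig2 * exp(-rho*|i-j|)."""
    T = len(y)
    t = np.arange(1, T + 1)
    mu = mean_signal(r, theta, alpha, beta, t)
    mu_next = mean_signal(r, theta, alpha, beta, T + 1)
    R = exp_corr(T, rho)
    c = np.exp(-rho * (T + 1 - t))          # Corr(eps_{T+1}, eps_t)
    Rinv_c = np.linalg.solve(R, c)
    mean = mu_next + Rinv_c @ (y - mu)
    var = sig2 * (1.0 - c @ Rinv_c)
    return mean, var
```

Drawing y_{T+1} from N(mean, var) for each retained parameter draw gives the predictive sample; the multiple-step case follows by augmenting the data with each simulated value as described above.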
for rest of the parameters we use the normal random walk mcmc with variance , as earlier , for updating .the full conditional distribution of , ]. then ] , follows a truncated normal distribution with truncation between and with the mean parameter and variance parameter as specified in equation and , respectively .we note that & \propto [ r][{\mbox{\boldmath{ } } } |\ldots ] \\ & \propto \exp \left[-\frac{1}{2\sigma^2_{\mbox{\scriptsize }}}\sum_{t=1}^{t}\left\{y_t - r(\cos \theta \cos(\alpha t+\beta t^2)+\sin \theta\sin(\alpha t+\beta t^2))\right\}^2\right]\chi_{(0,m)}(r)\end{aligned}\ ] ] we simplify the exponent term ( without ) below .therefore , & \propto \exp \left[-\frac{1}{\sigma_r^2 } \left\ { r-\frac{\sum_{t=1}^{t } y_t \left(\cos \theta \cos(\alpha t+\beta t^2)+\sin \theta \sin(\alpha t+\beta t^2)\right)}{\sum_{t=1}^{t } \left(\cos \theta \cos(\alpha t+\beta t^2)+\sin \theta \sin(\alpha t+\beta t^2)\right)^2}\right\}^2\right]\chi_{0,m}(r),\end{aligned}\ ] ] where hence the proof follows. [ thm2:full conditional of r in dependent case ] let , where = , with = , , and be the correlation matrix or order , with the elements as specified in equation .also we assume that , for some known real and \sim \mbox { uniform } ( 0,m) ] , denoted as \epsilon y b y b b b \epsilon y b b b $ } } } _ { t}}\right)^2\right]\chi_{(0,m)}(r)\end{aligned}\ ] ] hence the proof follows . , , , and for sample 1 of table 1 , where true values are indicated with vertical lines.,title="fig:",width=144,height=144 ] , , , and for sample 1 of table 1 , where true values are indicated with vertical lines.,title="fig:",width=144,height=144 ] , , , and for sample 1 of table 1 , where true values are indicated with vertical lines.,title="fig:",width=144,height=144 ] , , , and for sample 1 of table 1 , where true values are indicated with vertical lines.,title="fig:",width=144,height=144 ] , , , and for sample 1 of table 1 , where true values are indicated with vertical lines.,title="fig:",width=144,height=144 ] , , , and for sample 2 of table 1 , where true values are indicated with vertical lines.,title="fig:",width=144,height=144 ] , , , and for sample 2 of table 1 , where true values are indicated with vertical lines.,title="fig:",width=144,height=144 ] , , , and for sample 2 of table 1 , where true values are indicated with vertical lines.,title="fig:",width=144,height=144 ] , , , and for sample 2 of table 1 , where true values are indicated with vertical lines.,title="fig:",width=144,height=144 ] , , , and for sample 2 of table 1 , where true values are indicated with vertical lines.,title="fig:",width=144,height=144 ] , , , and for sample 3 of table 1 , where true values are indicated with vertical lines.,title="fig:",width=144,height=144 ] , , , and for sample 3 of table 1 , where true values are indicated with vertical lines.,title="fig:",width=144,height=144 ] , , , and for sample 3 of table 1 , where true values are indicated with vertical lines.,title="fig:",width=144,height=144 ] , , , and for sample 3 of table 1 , where true values are indicated with vertical lines.,title="fig:",width=144,height=144 ] , , , and for sample 3 of table 1 , where true values are indicated with vertical lines.,title="fig:",width=144,height=144 ] , , , and for sample 4 of table 1 , where true values are indicated with vertical lines.,title="fig:",width=144,height=144 ] , , , and for sample 4 of table 1 , where true values are indicated with vertical lines.,title="fig:",width=144,height=144 ] , , , and for sample 4 of table 1 , 
where true values are indicated with vertical lines. (The remainder of this block is repeated figure-caption residue; the recoverable captions are: posterior densities of the parameters for samples 1-4 of table 1, with true values marked by vertical lines; posterior predictive densities of the held-out observation for each sample, with true values marked by long vertical lines and 95% credible intervals by short vertical lines; and posterior densities of the parameters for the dependent-error example with exponentially decaying covariance structure.) | chirp signals are frequently used in different areas of science and engineering . mcmc - based bayesian inference is carried out here for the purpose of one - step and multiple - step prediction for a one dimensional single chirp signal with an i. i. d. error structure as well as a dependent error structure with exponentially decaying covariances . we use the gibbs sampling technique and random walk mcmc to update the parameters . we perform a total of five simulation studies for illustration , and we also analyse some real data to show how the method works in practice . * key words and phrases * : bayesian inference , chirp signal , gibbs sampling , posterior predictive density , random walk mcmc . |
the technique of interferometry has been widely used in radio astronomy to image the sky using arrays of antennas . by correlating the complex voltage signals between pairs of antennas ,the field - of - view of a single element can be sub - divided into `` synthesized beams '' of higher angular resolution . in the small - angle approximation ,the interferometer forms the fourier transform of the sky convolved with the autocorrelation of the aperture voltage patterns .in standard radio interferometric data analysis , as described for example in the text by and the proceedings of the nrao synthesis imaging school , the correlations or visibilities are inverse fourier transformed back to the image plane . however , there are applications such as estimation of the angular power spectrum of fluctuations in the cosmic microwave background ( cmb ) where it is the distribution of and correlation between visibilities in the aperture or -plane that is of most interest . in standard cosmological models ,the cmb is assumed to be a statistically homogeneous gaussian random field . in this case , the spherical harmonics of the field are independent and the statistical properties are determined by the power spectrum where labels the component of the legendre polynomial expansion ( and is roughly in inverse radians ) . showed that in cold dark matter inspired cosmological models , there would be features in the cmb power spectrum that reflected critical properties of the cosmology .recent detections of the first few of these `` acoustic peaks '' at in the power spectrum have supported the standard inflationary cosmological model with .measurement of the higher- peaks and troughs , as well as the damping tail due to the finite thickness of the last scattering surface , is the next observational step .interferometers are well - suited to the challenge of mapping out features in the cmb power spectrum , with a given antenna pair probing a characteristic proportional to the baseline length in units of the observing wavelength ( a projected baseline corresponds to , see [ sec : basic ] ) .there are many papers in the literature on the analysis of cmb anisotropy measurements , estimation of power spectra , and the use of interferometry for cmb studies .general issues for analysis of cmb datasets are discussed in . present a bayesian method for the analysis of cmb interferometer data , using the 3-element cosmic anisotropy telescope data as a test case .a description of analysis techniques for interferometric observations from the degree angular scale interferometer ( dasi ) were presented in , while report on the power spectrum results from first - season of dasi observations . discusses cmb interferometry with application to the proposed amiba instrument . have recently presented an approach similar to ours , and demonstrate their technique on simulated very small array ( vsa ) data ; a brief comparison of their algorithm with ours is given in appendix [ app : hobson ] . 
in this paper, we describe a fast gridded method for the -plane analysis of large interferometric data sets .the basis of this approach is to grid the visibilities and perform maximum likelihood estimation of the power spectrum on this compressed data .our use of gridded estimators is significantly different from what has been done previously .in addition to power spectrum extraction , this procedure has the ability to form optimally filtered images from the gridded estimators , and may be of use in interferometric observations of radio sources in general .we have applied our method to the analysis of data from the cosmic background imager ( cbi ) .the cbi is a planar interferometer array of 13 individual 90-cm cassegrain antennas on a 6-m pointable platform .it covers the frequency range 2636 ghz in 10 contiguous 1 ghz channels , with a thermal noise level of in 6 hours , and a maximum resolution of limited by the longest baselines .the cbi baselines probe in the range 5003900 .the 90-cm antenna diameters were chosen to maximize sensitivity , but their primary beamwidth of ( fwhm ) at 31 ghz limits the instantaneous field of view , which in turn limits the resolution in .this loss of aperture plane resolution can be overcome by making mosaic observations , i.e. observations in which a number of adjacent pointings are combined . in the cbi observations , mosaicing a field several times larger than the primary beam has resulted in an increase in resolution in by more than a factor of 3 , sufficient to discern features in the power spectrum .the first cbi results were presented in , hereafter paper i , using earlier versions of the software that did not make use of -plane gridding , and were far too slow to be used on larger mosaiced data sets .it was therefore essential to develop a more efficient analysis method that would be fast enough to carry out extensive tests on the cbi mosaic data .the software package described below has been used to process the first year of cbi data . in the companion papers (* hereafter paper ii ) and ( * ? ? ?* hereafter paper iii ) , the results from passing cbi deep - field data and mosaic data respectively through the pipeline are presented .this paper is paper iv in the series . the output from this pipeline is then used to derive constraints on cosmology ( * ? ?* hereafter paper v ) .finally , analysis of the excess of power at high- seen in results shown in paper ii in the context of the sunyaev - zeldovich effect is carried out , again using the method presented here , in ( * ? ? ?* hereafter paper vi ) . 
an introduction to the properties of the cmb power spectrum ,the response of an interferometer to the incoming radiation , and the computation of the primary beam are given in sections [ sec : pscmb ] , [ sec : basic ] , and [ sec : gauss ] respectively .the gridding process is presented in [ sec : gridmethod ] , followed by a description of the likelihood function and construction of the various covariance matrices in [ sec : liklhd ] .details on the maximum likelihood solution and the calculation of window functions and component bandpowers is given in [ sec : relax ] , while [ sec : image ] presents our method for making optimally filtered images from the gridded estimators .finally , a description of the cbi implementation of this method and the performance of the pipeline , including demonstrations using simulated cbi datasets , is given in [ sec : estliklhd ] , followed by a summary and conclusions in [ sec : conclude ] .at small angles , the curvature of the sky is negligible and we can approximate the spherical harmonic transform of the the temperature field in direction as its fourier transform , where is the conjugate variable to .we adopt the fourier convention of , , and . in terms of the multipoles , which we simplify to for the of interest in this paper . for the low levelsanisotropy seen in the cmb on these scales , it is convenient to give in units of and thus will be in units of . because the cmb is assumed to be a statistically homogeneous gaussian random field , the components of its fourier transform are independentgaussian deviates . where . because is real, its transform must be hermitian , with , and therefore note that it is common to write the cmb power spectrum in a form .constant corresponds to equal power in equal intervals of .although the power spectrum is defined in units of brightness temperature , the interferometer measurements carry the units of flux density ( janskys , 1 jy = w m hz ) . in particular , the intensity field on the sky has units of specific intensity ( w m hz sr or jy / sr ) , and thus to convert between and we use with the planck factor where corrects for the blackbody spectrum .note that we have treated the temperature as small fluctuations about the mean cmb temperature k , and thus the appropriate to is used with at ghz .we are not restricted to modeling the cmb .for example , we might wish to determine the power spectrum of fluctuations in a diffuse galactic component such as synchrotron emission or thermal dust emission . in this case, one might wish to express in jy / sr , but take out a power - law spectral shape where is the spectral index , and is the conversion factor that normalizes to the intensity at the fiducial frequency .note that this normalization is particularly useful for fitting out centimeter - wave foreground emission , which tends to have a power - law spectral index in the range that is significantly different from that for the thermal cmb ( ) .in addition , foregrounds will also tend to have a power spectrum shape different from that of cmb , which must be included in the analysis ( see [ sec : foreground ] below ) .a visibility formed from the correlation of an interferometer baseline between two antennas with projected separation ( in the plane perpendicular to the source direction ) meters observed at wavelength meters measures ( in the absence of noise ) the fourier transform of the sky intensity modulated by the response of the antennas where is the primary beam , and is the conjugate variable to . 
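The temperature-to-intensity conversion referred to above is the derivative of the Planck function evaluated at the mean CMB temperature. A quick numerical check is given below, with x = hν/(kT_cmb); the physical constants and the 31 GHz evaluation are standard values and not transcribed from the (garbled) equation in the text.

```python
import numpy as np

h, k, c = 6.62607e-34, 1.38065e-23, 2.99792e8   # SI constants
T_cmb = 2.725                                   # K

def dB_dT(nu_hz, T=T_cmb):
    """dB_nu/dT in W m^-2 Hz^-1 sr^-1 K^-1: converts CMB temperature
    fluctuations (K) to specific intensity; multiply by 1e26 for Jy/sr per K."""
    x = h * nu_hz / (k * T)
    return 2 * k * nu_hz**2 / c**2 * x**2 * np.exp(x) / np.expm1(x)**2

print(dB_dT(31e9) * 1e26)   # roughly 3e7 Jy/sr per K near the CBI band centre
```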
for angular coordinates in radians , has the dimensions of the baseline or aperture in units of the wavelength .the fourier domain is also referred to as the _ uv_-plane or aperture plane in interferometry for this reason .we define the direction cosines between the point at right ascension and declination and the center of the mosaic . for the cbi , data are taken keeping the phase center fixed on the pointing center by shifting the phase with the beam and rotating the platform to maintain constant parallactic angle during a scan , so that the response to a point source at the center of the field is constant , and thus in equation ( [ eq : point1 ] ) , where is the normalized primary beam response at the observing frequency of visibility . then , by application of the fourier shift theorem , it is easy to show that where is the fourier transform of the primary beam , and is the sky brightness field ( expressed in units such as jy / sr ) with transform . the instrumental noise on the complex visibility measurement is represented by .the _ uv_-plane resolution of an interferometer in a single pointing is thus limited by the convolution with .however , these sub - aperture spatial frequencies can be recovered by using the phase gradient in the exponential \delta \delta \delta \delta \delta \delta \delta \delta \delta \delta \delta \delta \delta \delta \delta \delta d d ] evaluated at maximum likelihood is the covariance matrix of the parameters .the diagonals {bb} d d ] are written out . if desired , the bandpower window functions ( [ sec : bwin ] ) can be computed if cbigridr was run to produce narrow - bin .the component bandpowers , and ( [ sec : comp ] ) can also be computed at this time .finally , filtered images using the formalism of [ sec : image ] can be computed from the estimators , the ( at maximum likelihood ) , and the component covariance matrices .results from this are shown below and in paper vi .the timing for cbigridr depends upon the degree of parallelization as well as the processor speed on a given machine , and the number of visibilities gridded , number of foreground sources , and number of bins for the bandpowers . as an example , the processing of the 14-h mosaic field of paper iii ( the largest of the datasets ) involved gridding 228819 visibilities from 65 separate nights of data in 41 fields to 2352 complex estimators .a total of 916 sources were gridded into three source covariance matrices .a total of 7 different binnings for were run at this time from the same gridding .the execution time using the parallel version of cbigridr was running on 22 processors on a 32-processor alpha gs320 workstation at the canadian institute for theoretical astrophysics .it then took on the same computer for mlikely to process 4704 double - precision real estimators in 16 bands , with 3 matrices , one and one .this included the time needed to calculate the component bandpowers , but not the window functions .the speed of this fast gridded method has allowed us to carry out numerous tests on both real and simulated dataset , which would not have been possible carrying out maximum likelihood ( e.g. 
using even the optimized mlikely ) on the 200000-plus visibilities .the performance of the method was assessed by applying it to mock cbi datasets .simulated cbi datasets were obtained by replacing the actual visibilities from the data files containing real cbi observations of the various fields used in paper ii and paper iii with the response expected for a realization of the cmb sky drawn from a representative power spectrum , plus uncorrelated gaussian instrumental noise with the same variance as given by the scatter in the actual cbi visibilities .the differencing of the lead and trail fields used in cbi observations was included ( e.g. [ sec : difference ] ) .this mock dataset had the same _ uv _ distribution as the real data , and gives an accurate demonstration of expected sensitivity levels and the effect of cosmic variance .the power spectrum chosen for these simulations was for a model that fit the cobe and boomerang data .figure [ fig : mockdeep ] shows the power spectrum estimation derived following the procedure detailed above .the mock datasets were drawn as realizations for the 08h cbi deep field from paper ii .the binning of the signal covariance matrix was chosen to be uniform in with bin width . because a single realization of the sky drawn from the model power spectrum will have individual mode powers that deviate from mean given by the power spectrum due to this intrinsic so - called `` cosmic variance '' plus the effect of the thermal instrumental noise , we analyze 387 realizations each taken from a different realization of the sky and a different set of instrumental noise deviates .the mean for each band converge to , which is obtained by integrating the model over the window functions ( e.g. eq.[[eq : cbpred ] ] ) , within the sample uncertainty for the realizations .furthermore , the standard deviation of the from the mean for each band agrees with the value obtained from the diagonals of the inverse of the fisher matrix .the choice of the bin size is driven by the trade off between the desired narrow bands for localizing features in the power spectrum and the correlations between bins introduced by the transform of the primary beam .there is an anti - correlation between adjacent bands seen in {bb'}$ ] at the level of % to % for with a single field .we have found that correlations up to about % give plots of the that are more visually appealing than those made with narrower band and higher correlation levels due to the increasing scatter in the bandpowers about the mean values .bins of this size do not achieve the best possible -resolution , and thus our cosmological parameter runs use finer - binned bands since the correlations are taken into account in the analyses . the band window functions are shown in the lower panel of figure [ fig : mockdeep ] , and were computed using narrow binnings ( e.g. eq.[[eq : wbbf ] ] ) with .the small - scale structures seen in the window functions , particularly visible around the peaks , are due to the differencing which introduces oscillations ( see [ sec : difference ] ) . 
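The mock-visibility construction described above (a Gaussian sky drawn from a model power spectrum, attenuated by the primary beam, Fourier transformed, sampled at the observed uv points and degraded with thermal noise) can be sketched for a single pointing as follows. The Gaussian beam width, grid size and noise level are placeholders rather than CBI values, the phase convention is referenced to the grid origin, and the normalisation follows the paper's Fourier convention only approximately; only the statistics matter for mock power-spectrum work.

```python
import numpy as np

def mock_visibilities(uv_points, Cl, fwhm_deg=0.75, npix=256, fov_deg=3.0,
                      noise_rms=1.0, rng=None):
    """Crude single-pointing simulator: realize a Gaussian sky with spectrum
    Cl(l), apply a Gaussian primary beam, FFT, read off the visibilities at the
    supplied (u, v) points (in wavelengths) and add complex Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    dx = np.radians(fov_deg) / npix                  # pixel size [rad]
    du = 1.0 / (npix * dx)                           # uv-cell size [wavelengths]
    u1d = np.fft.fftfreq(npix, d=dx)
    uu, vv = np.meshgrid(u1d, u1d, indexing='ij')
    ell = 2.0 * np.pi * np.hypot(uu, vv)             # l ~ 2*pi*|u|

    # spectral-synthesis realization: white noise shaped by sqrt(Cl)
    white = rng.standard_normal((npix, npix))
    shape = np.sqrt(Cl(np.maximum(ell, 1.0)))
    sky = np.fft.ifft2(np.fft.fft2(white) * shape).real / dx

    # Gaussian primary beam centred on the pointing
    x1d = (np.arange(npix) - npix / 2) * dx
    xx, yy = np.meshgrid(x1d, x1d, indexing='ij')
    beam = np.exp(-(xx**2 + yy**2) / (2 * (np.radians(fwhm_deg) / 2.355)**2))

    vis_grid = np.fft.fft2(sky * beam) * dx**2       # approximates continuous FT
    iu = np.rint(uv_points[:, 0] / du).astype(int) % npix
    iv = np.rint(uv_points[:, 1] / du).astype(int) % npix
    noise = noise_rms * (rng.standard_normal(len(uv_points))
                         + 1j * rng.standard_normal(len(uv_points)))
    return vis_grid[iu, iv] + noise
```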
as shown in equation ( [ eq : winortho ] ) , a window function is normalized to sum to unity within the given band , and to sum to zero in the other bands , and thus there must be compensatory positive and negative `` sidelobes '' of the window function outside the band .figure [ fig : mockmos ] shows the power spectrum derived for a simulated mosaic of fields separated by using the actual cbi 20h mosaics fields from paper iii as a template .this mosaic field was chosen as it had incomplete mosaic coverage , and thus would be the most difficult test for the method .the binning for shown used , which gave adjacent band anti - correlations of to in the .again the mean of the 117 realizations converges to the value expected within the error bars , showing that there is no bias introduced by the method , even in the presence of substantial holes in the mosaic ( see paper iii for the mosaic weight map ) .furthermore , the rms scatter in the realizations converges to the mean of the inverse fisher error bars , as in the single - field case . as in the previous figures ,the bandpower window functions are shown in the lower panels . in figure[ fig : mockmosruns ] are shown three randomly chosen realizations from the ensemble , plotted along with the input power spectrum .this shows the level of field - to - field variations that we might expect to see in cbi data .there are noticeable deviations from the expected bandpowers in individual realizations , particularly at low where cosmic variance and the highly - correlated bins conspire to increase the scatter .these differences are within the expected scatter when bin - bin correlations and limited sample size is taken into account , but care must be exercised in interpreting single field power spectra . in particular ,the acoustic peak structures are obscured by the sample variations .however , the average bandpowers for the 3 runs ( shown in figure [ fig : mockmosruns ] as open black circles ) are better representations of the underlying power spectrum .although this is not a proper `` joint '' maximum likelihood solution ( e.g. [ sec : comblike ] ) as is done for the real cbi mosaic fields , the improvement seen using the 3-field average leads us to expect that the combination of even 3 mosaic fields damps the single field variations sufficiently to begin to see the oscillatory features in the cmb power spectrum .while we do not show the equivalent plots of the deep fields from figure [ fig : mockdeep ] , the same behavior is seen ( with even larger field - to - field fluctuations in the relatively unconstrained first bin , though still consistent with the error bars ) . 
the effect of adding point sources to the mock fields , and then attempting power spectrum extraction , is shown in figure [ fig : mockdeepsrc ] .a set of 200 realizations were made in the same manner as in the runs in figure [ fig : mockdeep ] , but the list of point source positions , flux densities and uncertainties , and spectral indices from lower frequency used in the analysis in paper ii ( the `` nvss '' sources ) was used to add mock sources to the data .the flux density of the sources actually added to the data were perturbed using the stated uncertainties as 1- standard deviations .the errors used were 33% of the flux density except for a few of the brighter sources which were put in with 100% uncertainties .we then used the methodology described in [ sec : src ] to compute the constraint matrices .the first method of correction used was to subtract the ( unperturbed ) flux densities from the visibilities , and build the from built using the uncertainties ( shown as the red triangles ) .in addition , we also did no subtraction , but built from using the full ( unperturbed ) flux densities ( shown as the blue squares ) .this is equivalent to assuming a 100% error on the source flux densities , and thus canceling the average source power in those modes . in both casesthe factor was used .the simulations show that both methods are effective , with no discernible bias in the reconstructed cmb bandpowers .finally , the production of images using the gridded estimators described in [ sec : image ] is demonstrated in figure [ fig : image ] .the series of plots show the effect of wiener filtering using the noise and various signal components on an image derived from one of the mock 08h cbi deep field realizations with sources from the ensemble shown in figure [ fig : mockdeepsrc ] .the planck factor weighting of equation ( [ eq : qtherm ] ) was used during gridding to optimize for the thermal cmb spectrum , though in practice this makes little difference due to the restricted frequency range of the cbi .the estimators for this realization were computed by subtracting the mean values of the source flux densities and putting the standard deviations into with ( the red triangles in figure [ fig : mockdeepsrc ] ) .the filtering down - weights the high spatial frequency noise seen in the unfiltered image , and effectively separates the cmb and source components as shown by comparing panels ( c ) and ( d ) to the total signal in panel ( b ) .the signal in this realization is dominated by the residuals from two bright point sources that had 100% uncertainties put in for their flux densities and thus escaped subtraction .the effectiveness of in picking out the sources in the image plane underlines its utility as a constraint matrix in the power spectrum estimation .we have outlined a maximum likelihood approach to determining the power spectrum of fluctuations from interferometric cmb data .this fast gridded method is able to handle the large amounts of data produced in large mosaics such as those observed by the cbi .software encoding this algorithm was written , and tested using mock cbi data drawn from a realistic power spectrum .the results of the code were shown to converge as expected to the input power spectrum with no discernible bias . 
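The filtered images shown here follow from applying a Wiener filter, built from the fitted covariances, to the gridded estimators before transforming back to the sky. A generic sketch of the component-separation step is given below; the covariance matrices C_cmb, C_src and C_noise are hypothetical stand-ins for the signal, source and noise covariances of the text, and the final image step is only indicated.

```python
import numpy as np

def wiener_component(delta, C_target, C_other, C_noise):
    """Wiener-filtered estimate of one signal component from the gridded
    estimators: C_target (C_target + C_other + C_noise)^-1 delta."""
    C_tot = C_target + C_other + C_noise
    return C_target @ np.linalg.solve(C_tot, delta)

# e.g. separate a CMB-like component from a point-source component:
# delta_cmb = wiener_component(delta, C_cmb, C_src, C_noise)
# delta_src = wiener_component(delta, C_src, C_cmb, C_noise)
# an image follows by placing the filtered estimators back on their uv cells
# and inverse-FFTing (gridding geometry omitted here).
```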
for small datasets , this code was also tested against independently written software that worked directly on the visibilities .in addition , the pipeline was run with gridding turned off as described in [ sec : liklhd ] , again for small test data sets .no bias or significant loss in sensitivity was seen in these comparisons .this software pipeline was used to analyze the actual cbi data , producing the power spectra presented for the deep fields and mosaics in paper ii and paper iii respectively .the output of the pipeline also was used as the input for the cosmological parameter analysis in paper v and the investigation of the sunyaev - zeldovich effect in paper vi .this method is of interest for carrying out power spectrum estimation for interferometer experiments that produce a large number of visibilities but with a significantly smaller number of independent samples of the fourier plane ( such as close - packed arrays such as vsa or dasi ) .the cbi pipeline analysis is carried out in two parts , the gridding and covariance matrix construction from input uv - fits files in cbigridr and the maximum likelihood estimation of bandpowers using quadratic relaxation in mlikely .the software for the pipeline is available by contacting the authors .we close by noting that our formalism can be extended to deal with polarization data . in the case of cmb polarization ,there are as many as six different signal covariance matrices of interest in each band , with estimators ( or visibilities ) for parallel - hand and cross - hand polarization products , and thus development of a fast method such as this is critical . in september 2001 polarization capable versions of cbigridr and mlikelywere written and tested .we describe the method , the polarization pipeline , and results in the upcoming paper ( myers et al .2002 , in preparation ) .stm was supported during the early years of the cbi by a alfred p.sloan fellowship from 1996 to 1999 while at the university of pennsylvania .genesis of this method by stm greatly benefited by a stay in july 2000 at the itp in santa barbara , supported in part by the national science foundation under grant phy99 - 07949 .the national radio astronomy observatory is a facility of the national science foundation operated under cooperative agreement by associated universities , inc .the cbi was funded under nsf grants ast-9413934 , ast-9802989 and ast-0098734 , with contributions by maxine and ronald linde , and cecil and sally drinkward , and the strong support of the california institute of technology , without which this project would not have been possible .in addition , this project has benefited greatly from the computing facilities available at cita , and from discussions with other members of the group at cita not represented as authors on this paper .suppose we were to construct a simple linear `` dirty '' mosaic on the sky obtained by a linear combination of the dirty ( not deconvolved ) images of the individual fields ( e.g. ) .in the _ uv _ plane , this reduces to summing ( integrating ) up the visibilities from each mosaic `` tile '' with some weighting function , e.g. where for the time being we ignore the contribution from the complex conjugates of the visibilities ( see below ) . 
for illustrative purposes ,let us consider only a single frequency channel and write the estimator as a function , where , which in the absence of instrumental noise is given by with kernel , sky and aperture plane sampling given by , and where is the visibility at pointing position and _ uv _ locus from equation ( [ eq : visi ] ) .in practice , the sampling function is just a series of delta functions over the measured visibilities each with weight . as an ansatz , we let the mosaicing kernel have the form where is the interpolating kernel . furthermore , let us assume that the _ uv_-plane coverage is the same for all mosaic pointings , and thus is separable where and are the sampling and weighting in the two domains . combining these and rearranging terms , we get where in equation ( [ eq : mos2 ] ) we used the fact that the final right - hand side integral in equation ( [ eq : mos2a ] ) is the fourier transform of the mosaic function . for an infinite continuous mosaic , and thus we wish to recover in this limit , then with normalization will fulfill our requirements .we have chosen as the _ uv _ kernel as it reproduces the least - squares estimate of the sky brightness in the linear mosaic .then , equation ( [ eq : mos2 ] ) becomes which has a width controlled by the narrower of the width of or the width of .thus , by widening the mosaic to a larger area than the beam , we will fill in the desired information inside the smeared patches in the _ uv_-plane .thus , a properly sampled mosaic will fill in a sub - grid within each _ uv _ cell you would have normally had for a single pointing , and thus an mosaic consisting of `` images '' each is equivalent to a _uv _ super - grid of size ( e.g. ) .note that for a non - continuous mosaic , there will be `` aliases '' in the _ uv _ plane separated by the inverse of the mosaic spacing in the sky . ideally , we would like the separation between aliased copies to be larger than the extent of the beam transform , which is satisfied for which for cm corresponds to at ghz and only at ghz , the centers of the extremal cbi bands .the spacing used in the cbi mosaics is a compromise between the aliasing limits over the bands and the desire to have a fewer number of pointings on a convenient grid .we chose to observe with pointing centers separated by , which is sub - optimal above ghz .however , the effect of aliasing is small , with the overlap point occurring at the point of at 31 ghz , and the point for the highest frequency channel at ghz .we obtain the gridding kernel of equation ( [ eq : linest ] ) corresponding to equation ( [ eq : qvu ] ) by using the discrete sampling in equation ( [ eq : fsamp ] ) with visibility weights , and normalization factor .the discrete form of the normalization derived in equation ( [ eq : zu ] ) is then , is the weighted sum of visibilities used for the estimators . note that because , there are also visibilities for which lies within the support range around , i.e. .thus , we should add in the complex conjugates to do this , we construct another kernel which will gather the appropriate , giving for the final form of our linear estimator . 
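in discrete form , each gridded estimator is a kernel - weighted sum of the visibilities ( and their complex conjugates at the reflected loci ) that fall within the support of the convolving function around the grid point , divided by a normalization built from the same weights . the toy sketch below grids a set of visibilities in this way ; the kernel , the support radius and all variable names are illustrative assumptions , not the actual cbi primary beam transform or the cbigridr implementation .

```python
import numpy as np

def grid_visibilities(uv, vis, wt, grid_uv, kernel, support):
    """Convolutional gridding of visibilities onto a set of uv estimator points.

    uv      : (m, 2) array of visibility loci (u, v)
    vis, wt : (m,) complex visibilities and their weights
    grid_uv : (n, 2) array of grid (estimator) positions
    kernel  : callable taking (du, dv) arrays, standing in for the beam transform
    support : radius beyond which the kernel is treated as zero
    """
    # include the complex-conjugate visibilities at the reflected loci (-u, -v)
    uv_all = np.vstack([uv, -uv])
    vis_all = np.concatenate([vis, np.conj(vis)])
    wt_all = np.concatenate([wt, wt])

    n = len(grid_uv)
    est = np.zeros(n, dtype=complex)
    norm = np.zeros(n)
    for k in range(n):
        du = uv_all[:, 0] - grid_uv[k, 0]
        dv = uv_all[:, 1] - grid_uv[k, 1]
        close = du**2 + dv**2 <= support**2
        q = kernel(du[close], dv[close]) * wt_all[close]
        est[k] = np.sum(q * vis_all[close])
        norm[k] = np.sum(q)   # one common normalization choice (sum of kernel weights)
    ok = norm > 0
    est[ok] /= norm[ok]
    return est, norm

# example of an (assumed) gaussian-like kernel of width 3 wavelengths:
#   kernel = lambda du, dv: np.exp(-(du**2 + dv**2) / (2.0 * 3.0**2))
```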
for estimated visibility variances , the optimal weighting factor ( in the least - squares estimation sense ) is given by but may also include factors based on position in the mosaic or frequency channel .for example , up until now we have neglected the frequency dependence of the observed visibilities .if we are reconstructing an intensity field with a uniform flux density spectrum , then no changes need be made .if there is a frequency dependence , such as that for a power - law foreground ( e.g. eq.[[eq : alpha ] ] ) or the thermal spectrum of the cmb ( e.g. eq.[[eq : fplanck ] ] ) , then the visibilities should be scaled and weighted by the appropriate factor when gridded in order to properly estimate or respectively .for example , for the cmb using equation ( [ eq : fplanck ] ) for the spectrum , we find in practice for the cbi , the frequency range of the data is not great enough for the spectral weighting factor to matter , and we therefore use the default weighting given in equation ( [ eq : viswt ] ) .this will therefore be slightly non - optimal in the signal - to - noise sense ( it will not be the minimum - variance estimator ) but it will not introduce a bias in the bandpowers .the choice of the normalization is somewhat arbitrary , as it only determines the units of the and not the correlation properties .however , this can be important if we wish to use the estimators to make images using the formalism of [ sec : image ] .for instance , the normalization given in equation ( [ eq : zi ] ) has the drawback of diverging in cells where all the are vanishingly small ( such as the innermost and outermost supported parts of the _ uv_-plane ) , and will produce images with heightened noise on short and long spatial wavelengths .it is therefore more convenient to use the alternate normalization which when inserted into equation ( [ eq : linest2 ] ) will properly normalize the weighted sums of visibilities .this will then produce images with the desired units of janskys per beam ( see [ sec : image ] ) .we therefore use equation ( [ eq : zk1 ] ) for the normalization in our cbi pipeline .we wish to calculate ( cf .eq.[[eq : creskk ] ] ) using equation ( [ eq : resampint ] ) with . if is independent of flux density , then where we have left in the possibility that the upper flux density cutoff will depend on spectral index ( see below ) and set the lower flux density cutoff to zero ( the results for realistic power - law counts with are insensitive to the lower cutoff , but one can easily be included ) . as an example for the calculation of the fluctuation power due to residual sources in the gaussian limit , consider power - law integral source counts where is the mean number density of sources with flux density _ greater _ than at frequency , and a gaussian spectral index distribution at frequency first consider the case where there is a fixed flux density upper cutoff at the frequency where the number counts are defined .the two parts of equation ( [ eq : creskk ] ) separate easily , where the source count part of the integral is for the distribution in equation ( [ eq : gausalpha ] ) , the integral over becomes where , and where is the mean of the extrapolated spectral index distribution which remains a gaussian , and the effective spectral index for the spectral component is shifted from the mean spectral index of the input distribution by the combination of the scatter in the and the lever arm from the frequency extrapolation . 
putting these together ,we get one can also deal with the case where there is an upper flux density cutoff imposed at a frequency other than where the distribution is defined . in this case , the flux density cutoff in equation ( [ eq : resampdef ] ) is where , and is the cutoff extrapolated to using .then , and thus with where gives the modification of the effective spectral index due to the change in the frequency at which the cutoff is done .one often has an upper flux density cutoff at two different frequencies .for example , sources that are extrapolated to be bright at the cmb observing frequency will have been detected and subtracted . if there is a flux density cutoff of imposed at a frequency as before , but an additional upper cutoff of at another frequency , then there is a critical spectral index above which the effective cutoff of equation ( [ eq : snumax ] ) changes from that appropriate to to that at ( assuming ) .thus , the integral over in equation ( [ eq : resampcut ] ) will be broken into two pieces , where . the quantities in are as defined in equations ( [ eq : snumax ] ) through ( [ eq : alphaeffhat ] ) , and the parameters in are defined in the same way but using the higher frequency .the truncated gaussian integrals are just the integrated probabilities for the normal distribution with the error function .then , \label{eq : j2}\end{aligned}\ ] ] where and . as an example , consider the source counts presented in paper ii ( 4.3.2 ) , with above mjy at ghz and , which gives as the raw source power . in the analysis described there , mason et al .find that a gaussian 1.4 ghz to 31 ghz spectral index distribution with and fits the observed data .the cbi and ovro direct measurements have a cutoff of mjy at 31 ghz ( sources brighter than this have been subtracted from the cbi data and have residual uncertainties placed in a source covariance matrix ) , and sources above mjy at ghz have already been accounted for in a second source matrix .therefore , the critical spectral index is from equation ( [ eq : alpcrit ] ) . for , the 31 ghz cutoff holds .since the cutoff and source distribution are at the same frequency as the observations , there is no extrapolation factor and the spectral index distribution is unchanged ( ) .then , and , so = 0.003\,{\rm jy}^2\,{\rm sr}^{-1}\ ] ] for the flat - spectrum tail of the spectral index integral .the rest of the integral uses the 1.4 ghz cutoff , which we extrapolate using the mean spectrum to 31 ghz using equation ( [ eq : snumax ] ) , because , we have to modify the quantities in equation ( [ eq : alphaeffhat ] ) by explicitly expanding the terms in , and canceling remaining terms in , giving which can then be inserted into equation ( [ eq : j1 ] ) , giving for , , and thus we expect for the amplitude of the residual sources in the cbi fields . in paperii , it is noted that there is a 25% uncertainty on , and more importantly the power - law slope of the source counts could conceivably be as steep as . taking the extreme of , we get using the above procedure . we thus conservativelyestimate a 50% uncertainty on the value of derived in this manner .note that in paper ii we actually use the value of derived using a monte - carlo procedure emulating the integrals in equation ( [ eq : intj ] ) but using the actual observed distribution of source flux densities and spectral indices .the agreement between these two estimates shows the efficacy of this procedure in practice .recently , ( * ? ? ? 
* hm ) have independently proposed a binned _ uv_-plane method that is somewhat similar to ours , though it is more directly related to the `` optimal maps '' of .hm use a gathering mapping ( in their notation ) rather than our scattering kernel of equation ( [ eq : linop ] ) . in the hm method, the vector can be thought of as a set of ideal pixels in the _they show that the likelihood depends upon binned visibilities and noise where the hm kernel is chosen to equal 1 if the of visibility lies in cell , though other more complicated kernels could be imagined .the hm method will also give a calculational speedup through the reduction in number of independent gridded estimators , and the use of the method is demonstrated using simulated vsa data in their paper . .note that both and its conjugate have overlapping support for visibility , and this must be taken into account in computing the covariance matrix element.[fig : support],width=624 ] | we describe an algorithm for the extraction of the angular power spectrum of an intensity field , such as the cosmic microwave background ( cmb ) , from interferometer data . this new method , based on the gridding of interferometer visibilities in the aperture plane followed by a maximum likelihood solution for bandpowers , is much faster than direct likelihood analysis of the visibilities , and deals with foreground radio sources , multiple pointings , and differencing . the gridded aperture - plane estimators are also used to construct wiener - filtered images using the signal and noise covariance matrices used in the likelihood analysis . results are shown for simulated data . the method has been used to determine the power spectrum of the cosmic microwave background from observations with the cosmic background imager , and the results are given in companion papers . |
the filamentous fungi are the most diverse of any eukaryotic organisms , thriving as mutualists , decomposers and pathogens the world over .one suspected contributor to their tremendous ecological success is their unusual mode of life .unlike plant and animal cells , the cells of filamentous fungi are generally multinucleate , and can even harbor genetically different nuclei , bathed by a common cytoplasm . as the tube - like hyphae grow , each extending incrementally at its tips , nuclei flow from the colony interior to fill the free space created at the tips .we show that the flow is driven by pressure gradients across the colony , and that nuclei follow complex multi - directional trajectories , reminiscent of cars traveling through a city .we hypothesize that the complexity of nuclear paths is a deliberate effort by the fungus to keep genetically different nuclei well mixed . disrupting the exquisite hydraulic engineering of the cell( e.g. by knocking out the ability of hyphae to fuse , to make the multi - connected network ) causes genetically different nuclei to become un - mixed during growth .our fluid dynamics video includes 11 short segments : 1 .a time lapse sequence , accelerated 7500 fold showing a colony invading a small block of agar .2 . confocal imaging of hh1-gfp transformed nuclei flowing toward a growing tip .as we pan deeper into the colony , we see how these tip hyphae are fed as branches of trunk hyphae , each supplying nuclei to many tips .the flow speed in a tip hyphae is 0.1 m/s , flow speed in trunk hyphae can be 30 or 80 times greater .3 . in the colony interior , nucleiflowing through the complex network of hyphae follow torturous and even multidirectional paths , looking in confocal microscopy , like the headlights of cars navigating a microscopic city .flow speeds in the colony interior can reach 10 - 20 m/s .4 . just like cars in a city , speeds vary between hyphal roads .some roads remain grid - locked while in others nuclei flow at speeds of up to 10 - 20 m/s .5 . under some conditions , in small colonies ,the nuclei form spontaneous traffic jams .6 . what drives the fluid flow ?we measured the variation in nuclear speeds across hyphae .when collapsed by hyphal diameter and axial flow speed , we see a common poiseuille - flow profile in each hypha , indicating that the flows are hydrodynamic : driven by pressure gradients across the colony . 7 .we can manipulate the pressure gradients by applying hyper - osmotic solutions to the colony .these treatments apply a uniform pressure gradient counter to the normal direction of flow .flow in each fungal hyphae is transiently reversed , indicating that the hyphal network architecture far from being random is finely tuned to create mixing flows from spatially coarse pressure gradients .what is the function of these flows ?since real fungi can harbor many genetically different nucleotypes , we hypothesize that physical mixing , associated with the flows , helps to preserve nuclear diversity as the fungus grows .we show a sequence of nuclear mixing in a chimeric colony formed by fusing two _ n. crassa _ colonies ; one with hh1-dsred and one with hh1-gfp transformed nuclei .we can manipulate the hyphal architecture genetically .we show micrographs of conidiophores ( spore - bearing hyphae ) of chimeric wild type and _ so _ mutant colonies .both colonies have dsred ( red ) and gfp ( green ) expressing nucleotypes . in the wild type colony ,the red and green nuclei remain well mixed in the colony . 
in the _ so _ colonythey segregate out , so that conidiophores end up almost exclusively red or exclusively green ( shown ) ._ so _ mutations stop the hyphae from fusing to create an interconnected network . 10 .conidiophores are branched structures ( without interconnections ) so pose particular challenges for mixing .taylor dispersion ( enhanced diffusion due to fluid shear across a hypha ) can keep nuclei well - mixed but only if flow speeds in the conidiophore exceed 200 m/s . to achieve such fast flows each conidiophore must feed into at least 2000 hyphal tips .11 . a time - lapse video showing the growth and collapse of conidiophores , accelerated 7500 fold . 2000 separately growing hyphae places impossible weight upon the conidiophore , causing it to collapse .since spores are thought to be dispersed by air - flows across the colony and collapsed conidiophores are likely to experience reduced wind speeds , rates of spore liberation are reduced .loss of spore dispersal effectiveness represents the high price paid by the colony for maintaining mixing flows . | the syncytial cells of a filamentous fungus consist of a mass of growing , tube - like hyphae . each extending tip is fed by a continuous flow of nuclei from the colony interior , pushed by a gradient in turgor pressure . the myco - fluidic flows of nuclei are complex and multidirectional , like traffic in a city . we map out the flows in a strain of the model filamentous fungus _ n. crassa _ that has been transformed so that nuclei express either hh1-dsred ( a red fluorescent nuclear protein ) or hh1-gfp ( a green - fluorescent protein ) and report our results in a fluid dynamics video . |
orthogonal frequency division multiplexing ( ofdm ) has been widely adopted in wireless communications due to its high - speed data transmission and inherent robustness against inter - symbol interference ( isi ) . however , traditional rectangularly pulsed ofdm exhibits large spectral sidelobes , resulting in severe out - of - band power leakage . in this case , traditional ofdm will introduce high interference to the adjacent channels . for alleviating this problem , various methods have been proposed for sidelobe suppression in ofdm - . more specifically , the windowing technique extends the guard interval at the cost of spectral efficiency reduction . cancellation carriers will consume extra power and incur a signal - to - noise ratio ( snr ) loss . precoding methods - require complicated decoding processing to eliminate the interference . against this background , nc - ofdm is a class of efficient sidelobe suppression techniques that make the ofdm signal and its first _ n _ derivatives continuous . however , a disadvantage of nc - ofdm is the high implementation complexity in the transmitter . for reducing the complexity , some simplified schemes have been proposed in and . in , time - domain nc - ofdm ( td - nc - ofdm ) was proposed for reducing the complexity of nc - ofdm , by transforming traditional frequency - domain processing into the time domain . on the other hand , traditional nc - ofdm will introduce severe interference so as to degrade the bit error rate ( ber ) performance . for alleviating this problem , _ n_-continuous symbol padding ofdm ( ncsp - ofdm ) was recently developed in at the cost of increased complexity in the transmitter . meanwhile , to enable low - complexity signal recovery in nc - ofdm , several techniques - have been proposed . however , those techniques will also result in additional complexity or efficiency reduction . for reducing the interference of nc - ofdm and avoiding complex signal recovery at the receiver , this letter proposes a low - interference nc - ofdm scheme by adding an improved smooth signal in the time domain . firstly , in the proposed scheme , the smooth signal is generated in a novel way as the linear combination of designed basis signals related to the rectangular pulse . secondly , the basis signals are smoothly truncated by a preset window function to obtain a short - duration smooth signal with shorter length than a cyclic - prefixed ofdm symbol and low interference . thirdly , the short - duration smooth signal is overlapped onto only part of the ofdm symbol to reduce the interference . furthermore , we give an asymptotic expression for the spectrum of the low - interference nc - ofdm signal , to show that the proposed scheme can maintain similar sidelobe suppression to traditional nc - ofdm . lastly , the complexity analysis shows that the proposed scheme can significantly reduce the complexity of nc - ofdm . in conventional nc - ofdm , the _ _ i__th transmit symbol follows and where is the precoded symbol on the _ _ r__th subcarrier with the subcarrier index set , is the __ n__th - order derivative of with , the frequency spacing is , is the symbol duration , and is the cyclic prefix ( cp ) duration . for satisfying eq . , the _ n_-continuous processing can be summarized as where ^t ] , is the identity matrix , , , with , and . td - nc - ofdm was recently proposed in for transforming the processing of nc - ofdm into the time domain . in td - nc - ofdm , the smooth signal is added with the time sampling interval where _ m _ is the length of the symbol .
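to see numerically why rectangularly pulsed ofdm needs smoothing at all , one can measure the jump of the baseband signal at the boundaries between cp - prefixed symbols and compare it with the typical sample - to - sample variation inside a symbol . the self - contained numpy sketch below does this for a toy cp - ofdm stream ; the subcarrier count , ifft size , cp length and qpsk mapping are arbitrary illustrative choices , not the parameters used in this letter .

```python
import numpy as np

rng = np.random.default_rng(0)
K, M, Lcp, n_sym = 64, 1024, 72, 50          # illustrative sizes only

def cp_ofdm_symbol():
    # random QPSK data on K subcarriers, IFFT, cyclic prefix
    d = (rng.choice([-1.0, 1.0], K) + 1j * rng.choice([-1.0, 1.0], K)) / np.sqrt(2)
    X = np.zeros(M, dtype=complex)
    X[:K] = d                                # rectangular pulse on the occupied band
    x = np.fft.ifft(X)
    return np.concatenate([x[-Lcp:], x])     # CP + symbol body

s = np.concatenate([cp_ofdm_symbol() for _ in range(n_sym)])
L = M + Lcp
boundaries = np.arange(L, n_sym * L, L)      # first sample of symbols 2, 3, ...

step = np.abs(np.diff(s))
jump_at_boundaries = step[boundaries - 1].mean()
typical_step = np.median(step)
print(jump_at_boundaries / typical_step)     # typically several times larger than 1
```

the printed ratio being well above one is exactly the boundary discontinuity that the _ n_-continuous precoder , or the smooth signal added in the time domain , is designed to remove .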
following the basic idea of td - nc - ofdm, the proposed low - interference scheme also adds a smooth signal onto the ofdm signal in the time domain , given as where and is the length of a cp .thus , to make the ofdm signal _n_-continuous , should satisfy to construct with low interference , is just added in the front part of each cp - prefixed ofdm symbol .thus , the proposed linear - combination design of is described as where indicates the location of with length _l _ , and the basis signals belong to the basis set , defined as ^t , \tilde{n}\in \mathcal{u}_{2n}\bigg\ } , \label{eqn7 } \end{aligned}\ ] ] where . for achieving the low - interference smooth signal in eq . , the design of the basis signals and the linear combination coefficients ^t$ ] will be specified as follows . on the one hand ,the time duration of is truncated by a preset window function , which is considered as a smooth and zero - edged window function , such as triangular , hanning , or blackman window function .then , the truncated basis signals can be given by where denotes the unit - step function , and is calculated by the rectangularly ofdm pulse , given as on the other hand , by substituting eqs .- into eq . , the coefficients can be calculated as where is a symmetric matrix , given as and . finally , is added onto only part of the cp - inserted ofdm symbol to achieve the _n_-continuous symbol , as & 0\leq i\leq m_{\rm s}-1 , \\\mathbf{q}_{\tilde{f}}{\mathbf{b}}_i & i = m_{\rm s } , \end{matrix}\right .\label{eqn11}\ ] ] where , is the number of the ofdm symbols , and is initialized as since the back edge of equals to zero .in general , the proposed low - interference nc - ofdm scheme is illustrated as follows .initialization : , , , , , ; + according to the above processing , the main advantage of the improved smooth signal is that its length has been effectively truncated to _l _ , by a careful design in eq . .furthermore , for , only parts of the ofdm signal are overlapped with the interference .more especially , since the front part of the ofdm symbol is cp , the interference to the real data is further mitigated . in sectionv , we will show that the influence of has been effectively reduced .assume that the first _n_-1 derivatives of the smoothed ofdm signal are continuous , and the _ _ n__th - order derivative has finite amplitude discontinuity . thus , according to the definition of the power spectrum density ( psd ) and the relationship between spectral roll - off and continuity in , the psd of the low - interference nc - ofdm signal is expressed by where .indicates that the spectrum of the low - interference nc - ofdm signal is related to the expectation of multiplied by . in this letter ,the conventional blackman window function is used as an example , given as where . by substituting eqs . and into eq ., the psd of the smoothed ofdm signal is expressed by where , , is the binomial coefficient , with , and .shows that the power spectral roll - off of the smoothed signal , whose first _ n_-1 derivatives are continuous , decays with .moreover , the expression of reveals that the sidelobe is affected by the length of , so that the selection of _ l _ is important to balance ber and sidelobe suppression performance .[ fig2 ] compares the theoretical and simulation results of the low - interference scheme with _ _l__==144 , _ _m__=2048 , where the length of is equal to that of cp . 
considering the ratio of cp is only 7% of the whole symbol , the assumption is reasonable for practical wireless systems such as lte . it is shown in fig . [ fig2 ] that the simulation results match well with the theoretical analyses with varying highest derivative order _ n_. in section v , with the above parameters , we will show the proposed scheme can maintain a similar spectral roll - off to traditional nc - ofdm . the complexity comparison , quantified by the numbers of real additions and multiplications , among nc - ofdm , ncsp - ofdm , and the low - interference scheme is shown in table i. compared to other methods , the proposed low - interference scheme has notable complexity reduction . when _ l _ is equal to or smaller than the length of cp , the complexity is significantly reduced in the transmitter . for example , when _ _l__==144 , _ _k__=256 , _ _n__=2 , and _ _ m__=2048 , at the transmitter side , the complexity of the proposed low - interference scheme is , respectively , 47.4% and 20.3% of those of nc - ofdm and ncsp - ofdm . simulations are performed in a baseband - equivalent ofdm system with _ _k__=256 , , and 16-qam digital modulation . the carrier frequency is 2ghz , the subcarrier spacing is , and the time - domain oversampling factor is 8 . the psd is evaluated by welch s averaged periodogram method with a 2048-sample hanning window and 512-sample overlap . in order to show the system performance in the multi - path fading environment , lte extended vehicular a ( eva ) channel with 9 paths of rayleigh fading channels is considered . fig . [ fig3 ] compares the psds between nc - ofdm and the proposed low - interference scheme with different _n_. the low - interference scheme can obtain sidelobe suppression performance similar to traditional nc - ofdm . moreover , as _ l _ increases , the sidelobe suppression performance of the proposed scheme is improved . for example , _ l _ is increased from 36 to 72 and 144 . in general , we show that _ _l__==144 is a good choice for maintaining the sidelobe suppression performance of traditional nc - ofdm . fig . [ fig4 ] shows the ber performance of nc - ofdm , ncsp - ofdm , and the low - interference scheme when _ _ n__=3 and _ _ l__==144 in the 3gpp lte eva fading channel . it is shown that traditional nc - ofdm will introduce severe interference to the transmit signal and hence degrade the ber performance , while ncsp - ofdm and our proposed low - interference scheme can both effectively mitigate the interference so as to achieve ber performance similar to ofdm , and to save the complexity of signal recovery at the receiver . however , the proposed low - interference scheme has much lower complexity compared to other schemes according to table i. in general , we show that the proposed scheme can make a promising tradeoff among ber performance , sidelobe suppression performance and computational complexity .
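the sidelobe comparison above relies on welch s averaged periodogram estimate of the psd . a minimal example with scipy is given below ; it applies the window and overlap settings quoted in the text ( a 2048 - sample hanning window and 512 - sample overlap ) to a toy cp - ofdm stream , and is not a reproduction of the simulated system of the letter .

```python
import numpy as np
from scipy.signal import welch

def cp_ofdm_stream(n_sym, K=256, M=2048, Lcp=144, seed=1):
    """Toy baseband CP-OFDM stream with QPSK data (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    symbols = []
    for _ in range(n_sym):
        d = (rng.choice([-1.0, 1.0], K) + 1j * rng.choice([-1.0, 1.0], K)) / np.sqrt(2)
        X = np.zeros(M, dtype=complex)
        X[:K] = d
        x = np.fft.ifft(X)
        symbols.append(np.concatenate([x[-Lcp:], x]))
    return np.concatenate(symbols)

s = cp_ofdm_stream(200)
# Welch periodogram: 2048-sample Hanning ("hann") window, 512-sample overlap
f, psd = welch(s, window="hann", nperseg=2048, noverlap=512,
               detrend=False, return_onesided=False)
psd_db = 10.0 * np.log10(psd / psd.max())   # normalized PSD in dB for sidelobe inspection
```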
.in this letter , a low - interference nc - ofdm was proposed to reduce the interference and complexity as opposed to the original nc - ofdm .the main idea is to generate the time - domain low - interference smooth signal in a novel way as the linear combination of carefully designed rectangularly pulsed basis signals .both analyses and simulation results showed that the low - interference scheme was capable of reducing the interference to a negligible extent , while maintaining similar sidelobe suppression to traditional nc - ofdm but with much lower complexity .t. weiss , j. hillenbrand , a. krohn , and f. jondral , mutual interference in ofdm - based spectrum pooling systems , " in _ proc .( vtc ) _ , milan , italy , may 2004 , pp .1873 - 1877 .s. brandes , i. cosovic , and m. schnell , reduction of out - of - band radiation in ofdm systems by insertion of cancellation carriers , " _ ieee commun .10 , no . 6 , pp .420 - 422 , jun . 2006 .m. ma , x. huang , b. jiao , and y. j. guo , optimal orthogonal precoding for power leakage suppression in dft - based systems , " _ ieee trans .844 - 853 , mar .2011 . c. d. chung , spectrally precoded ofdm , " _ ieee trans .2173 - 2185 , dec . 2006 .j. van de beek and f. berggren , _ n_-continuous ofdm , " _ ieee commun ._ , vol . 13 , no. 1 , pp . 1 - 3 ,jan . 2009 .j. van de beek and f. berggren , evm - constrained ofdm precoding for reduction of out - of - band emission , " in _ proc .( vtc ) _ , anchorage , usa , sep . 2009 .j. van de beek , sculpting the multicarrier spectrum : a novel projection precoder , " _ ieee commun .881 - 883 , dec . 2009 .m. ohta , a. iwase , and k. yamashita , improvement of the error characteristics of an _ n_-continuous ofdm system with low data channels by slm , " in _ proc .( icc ) _ , kyoto , japan , jun .2011 , pp . 1 - 5. m. ohta , m. okuno , and k. yamashita , receiver iteration reduction of an _n_-continuous ofdm system with cancellation tones , " in _ proc .ieee global telecommun .( globecom ) _ , kathmandu , nepal , dec .2011 , pp . 1 - 5 . h. kawasaki , m. ohta , and k. yamashita , _ n_-continuous symbol padding ofdm for sidelobe suppression , " in _ proc .( icc ) _ , sydney , australia , jun .2014 , pp .5890 - 5895 .p. wei , l. dan , y. xiao , and s. li , a low - complexity time - domain signal processing algorithm for _ n_-continuous ofdm , " in _ proc .conf . commun .( icc ) _ , budapest , hungary , jun .2013 , pp .5754 - 5758 .r. bracewell , _ the fourier transform and its applications _ , 2nd ed .new york : mcgraw - hill , 1978 , ch .143 - 146 ._ user equipment ( ue ) radio transmission and reception ( release 12 ) _ , 3gpp ts 36.101 , v12.3.0 , mar . 2014 . [ online ] .available : http : //www.3gpp.org/. n. c. beaulieu and m. o. damen , parametric construction of nyquist - i pulses , " _ ieee trans .2134 - 2142 , dec . 2004 .p. d. welch , the use of fast fourier transform for the estimation of power spectra : a method based on time averaging over short , modified periodograms , " _ ieee trans .audio electroacoustics _ , vol .70 - 73 , jun . | _ n_-continuous orthogonal frequency division multiplexing ( nc - ofdm ) was demonstrated to provide significant sidelobe suppression for baseband ofdm signals . however , it will introduce severe interference to the transmit signals . hence in this letter , we specifically design a class of low - interference nc - ofdm schemes for alleviating the introduced interference . meanwhile , we also obtain an asymptotic spectrum analysis by a closed - form expression . 
it is shown that the proposed scheme is capable of reducing the interference to a negligible level , and hence to save the high complexity of signal recovery at the receiver , while maintaining similar sidelobe suppression performance compared to traditional nc - ofdm . _ n_-continuous orthogonal frequency division multiplexing ( nc - ofdm ) ; sidelobe suppression ; time - domain _ n_-continuous ofdm ( td - nc - ofdm ) . |
the equilibrium problem ( ep ) which was considered as the ky fan inequality is very general in the sense that it includes , as special cases , many mathematical models such as : variational inequalities , fixed point problems , optimization problems , nash equilirium point problems , complementarity problems , see and the references therein .many methods have been proposed for solving eps .the most solution approximations to eps are often based on the resolvent of equilibrium bifunction ( see , for instance ) in which a strongly monotone regularization equilibrium problem ( rep ) is solved at each iterative step .it is also called the proximal point method ( ppm ) .this method was first introduced by martinet for variational inequalities , and then it was extended by rockafellar for finding a zero point of a monotone operator . in 2000 , konnov further extended ppm to ky fan inequalities for monotone or weakly monotone bifunctions . a special case of ep is the variational inequality problem ( vip ) .the projection plays an important role in constrained optimization problems .the simplest method for vips is the gradient projection method in which only a projection on the feasible set is computed .however , in order to obtain the convergence , the method requires the restrictive assumption that operators are strongly ( or inverse strongly ) monotone .to overcome this , korpelevich introduced the extragradient method ( double projection method ) where two metric projections onto the feasible set be implemented at each iteration .the convergence of the extragradient method was proved under the weaker assumption that operators are only monotone ( even , pseudomonotone ) and - lipschitz continuous . some extragradient - like algorithms proposed for solving vipscan be found in and the references therein .however , the projection is only found easily if the constrained set has a simple structure , for instance , as balls , hyperplanes or halfspaces .so , several modifications of the extragradient method have been proposed in various ways .for instance , the authors in replaced the second projection onto the feasible set in the extragradient method by one onto a half - space and proposed the subgradient extragradient method for vips in hilbert spaces .in recent years , korpelevich s extragradient method has been naturally extended to eps for monotone ( more general , pseudomonotone ) and lipschitz - type continuous bifunctions and widely studied both theoretically and algorithmically . in the extended extragradient methods to eps, we need to solve two strongly convex optimization programs on a closed convex constrained set ( see , algorithms [ vsh2013a ] , [ vsh2013b ] and [ dhm2014 ] in section [ algor ] ) .they are generalizations of two projections in korpelevich s extragradient method .the advantage of the extragradient method is that two optimization programs are solved at each iteration which seems to be numerically easier than the non - linear inequality ( or rep ) in ppm .however , this might still be costly and affects the efficiency of the used method if the structure of feasible set and equilibrium bifunction are complex. moreover , we are not aware of any modification of the extragradient method for eps . in this paper , motivated by the _hybrid method without the extrapolation step _ for variational inequalities , the _ extragradient method _ and the _ hybrid method _, we have proposed a new hybrid algorithm for solving eps . 
in this algorithm , by constructing a specially cutting - halfspace in the hybrid method , we only need to solve a strongly convex optimization program onto the feasible set at each iteration .the absence of an optimization program in our algorithm ( compare with the extragradient method ) can be considered an improvement of the results in .the remainder of the paper is organized as follows : section introduce our algorithm and some related works . in section , we collect some definitions and preliminary results used in the paper . section deals with proving the convergence of the algorithm .some applications of our algorithm to gato differentiable eps and multivalued variational inequalities are presented in section [ appl ] . finally , in section we provide some numerical examples to illustrate the convergence of the proposed algorithm and compare it with others .let be a real hilbert space , be a nonempty closed convex subset of and be a bifunction with for all .the equilibrium problem ( ep ) for the bifunction on is to find such that the solution set of ep is denoted by . in this paper , we introduce the following hybrid algorithm for solving ep . [ h2015 ] where are two specially constructed half - spaces ( see algorithm in section below ) . in the special case , where is a nonlinear operator then ep becomes the following variational inequality problem ( vip ) : find such that then , our algorithm ( algorithm ) becomes the following _ hybrid algorithm without extrapolation step _ which was introduced in for vips .[ ms2015 ] in 2008 , quoc et al . extended korpelevich s extragradient method to eps in euclidean spaces in which two optimization programs are solved at each iteration .recently , nguyen et al . also have done in that direction and proposed the general extragradient method which consists of solving three optimization programs on the feasible set . in euclidean spaces , the convergence of the sequences generated by the extragradient methods was proved under the assumptions of pseudomonotonicity and lipschitz - type continuity of equilibrium bifunctions .the problem which arises in infinite dimensional hilbert spaces is how to design an algorithm which provides the strong convergence . in 2012 , vuong et al . used the extragradient method in and the hybrid ( outer approximation ) method to obtain the following strong convergence hybrid algorithm [ vsh2013a ] in 2013 , another hybrid algorithm ( * ? ? ?* algorithm 1 ) was also proposed in this direction as [ vsh2013b ] the authors in proved that the sequences generated by algorithms and converges strongly to . note that , the set in algorithm , in general , is not easy to construct . in 2014 , in order to avoid the condition of the lipschitz - type continuity the bifunction , dinh et al . 
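since the cutting sets of the proposed algorithm are half - spaces , the metric projections required at each iteration have closed forms instead of needing a general convex program . the sketch below gives the standard projection onto a single half - space and a small kkt case analysis for the intersection of two half - spaces ; it is a generic illustration of these formulas , not code from the paper , and the variable names are arbitrary .

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Projection of x onto {z : <a, z> <= b}; assumes a is a nonzero vector."""
    viol = a @ x - b
    if viol <= 0.0:
        return x
    return x - (viol / (a @ a)) * a

def proj_two_halfspaces(x, a1, b1, a2, b2, tol=1e-12):
    """Projection of x onto {z : <a1,z> <= b1} intersected with {z : <a2,z> <= b2}.

    KKT case analysis: the answer is x itself, the projection onto one of the
    half-spaces, or the projection onto the intersection of the two bounding
    hyperplanes (both constraints active).
    """
    if a1 @ x - b1 <= tol and a2 @ x - b2 <= tol:
        return x
    p1 = proj_halfspace(x, a1, b1)
    if a2 @ p1 - b2 <= tol:
        return p1
    p2 = proj_halfspace(x, a2, b2)
    if a1 @ p2 - b1 <= tol:
        return p2
    # both constraints active; assumes a1 and a2 are not parallel
    G = np.array([[a1 @ a1, a1 @ a2],
                  [a1 @ a2, a2 @ a2]])
    rhs = np.array([a1 @ x - b1, a2 @ x - b2])
    lam = np.linalg.solve(G, rhs)
    return x - lam[0] * a1 - lam[1] * a2
```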
replaced the second optimization problem in the extragradient method by the armijo linesearch technique and obtained the following hybrid algorithm [ dhm2014 ] where and .arcording to algorithm , we still have to solve an optimization program on for , find an optimization direction for and compute a projection onto for at each step .we emphasize that the projection in algorithms , and deals with the constrained set , while the sets and in algorithm are two half - spaces , and so can be expressed by an explicit formula ( see , for instance ) .in this section , we recall some definitions and results for further use .let be a nonempty closed convex subset of a real hilbert space .we begin with some concepts of the monotonicity of a bifunction ( see for more details ) .a bifunction is said to be * strongly monotone on if there exists a constant such that * monotone on if * pseudomonotone on if * lipschitz - type continuous on if there exist two positive constants such that from the definitions above , it is clear that a strongly monotone bifunction is monotone and a monotone bifunction is pseudomonotone , i.e. , for solving ep , we assume that the bifunction satisfies the following conditions : * is pseudomonotone on and for all ; * is lipschitz - type continuous on with the constants ; * is weakly continuous on ; * is convex and subdifferentiable on for every fixed it is easy to show that under the assumptions , the solution set of ep is closed and convex ( see , for instance ) . in this paper , we assume that the solution set is nonempty. the metric projection is defined by since is nonempty , closed and convex , exists and is unique .it is also known that has the following characteristic properties , see for more details .[ lem.propertypc ] let be the metric projection from onto .then * is firmly nonexpansive , i.e. , * for all , * if and only if let be a function . the subdifferential of at is defined by we recall that the normal cone of at is defined by a function is called weakly lower semicontinuous at if for any sequence in converges weakly to then it is well - known that the functional is convex and weakly lower semicontinuous .any hilbert space has the kadec - klee property ( see , for instance ) , i.e. , if is a sequence in such that and then as .finally , we have the following technical lemma . [lem.technique ] let , , be nonnegative real sequences , and for all the following inequality holds if and then .in this section , we present our algorithm for more details and prove its convergence .[ algor1 ] * initialization . * chose and set .the parameters and satisfy the following conditions * step 1 . *solve a strongly convex program if then stop .+ * step 2 . * compute where and .set and go back * step 1 .* we have the following result . [ lem2 ]if algorithm finishes at the iteration step , then .assume that .from the definition of , thus , from ( * ? ? ? 
* proposition 2.1 ) , one has .the proof of lemma is complete .we need the lemma below which is an infinite version of theorem 27.4 in and is similarly proved by using moreau - rockafellar theorem to find the subdifferential of a sum of a convex function and the indicator function to in a real hilbert space .[ lem.equivalent_minpro ] let be a convex subset of a real hilbert space h and be a convex and subdifferentiable function on .then , is a solution to the following convex optimization problem if and only if , where denotes the subdifferential of and is the normal cone of at .based on lemma , we obtain the following central lemma which is used to prove the convergence of algorithm .[ lem1 ] assume that .let be the sequences generated by algorithm .then , there holds the relation where is defined by step 2 of algorithm . from the definition of and lemma , thus , there exist and such that hence , this together with the definition of implies that by , from the last two inequalities , we obtain similarly , by replacing by , we also have substituting onto and a straightforward computation yield substituting onto we also obtain since and , .thus , from the pseudomonotonicity of one has .this together with implies that by the lipschitz - type continuity of , thus , the relations and lead to this together with the relation implies that thus , we have the following fact by the triangle , cauchy - schwarz and cauchy inequalities , this together with implies that thus , combining and we obtain thus , from the definition of we obtain lemma is proved .[ lem4 ] let be the sequences generated by algorithm .then , there hold the following relations * for all . * ( i ) . from the definitions of and , we see that they are the half - spaces .thus , and are closed and convex for all .lemma and the definition of ensure that for all .it is clear that .assume that for some .from and lemma (iii ) we see that for all .this is also true for all because . from the definition of , or . by the induction , for all .since is nonempty , so is .thus , is well - defined .+ ( ii ) . from the definition of and lemma (iii . ) , . thus , from lemma (ii ) we have onto , one has , therefore , are bounded .substituting onto , one also has this implies that is non - decreasing .hence , there exists the limit of .by , passing the limit in the last inequality as , we obtain thus , from the definition of and , set , , , , and . from the definition of , .thus , from , from the hypothesises of and , we see that and .lemma and imply that , or this together with the relation and the inequality implies that in addition , the sequence is also bounded because of the boundedness of .lemma is proved .+ thanks to lemma , we see that if algorithm terminates at the iterate then a solution of ep can be found .otherwise , if algorithm does not terminate then we have the following main result .[ theo.1 ] let be a nonempty closed convex subset of a real hilbert space .assume that the bifunction satisfies all conditions .in addition the solution set is nonempty .then , the sequences , generated by algorithm converge strongly to . from lemma , the sequence is bounded .assume that is any weak cluster point of .without loss of generality , we can write as . thus , because .now , we show that . from , we get passing the limit in as and using lemma (ii ) , the bounedness of and we obtain for all .thus , . from the inequality , we get where . by the weak lower semicontinuity of the norm and , we have by the definition of , and .thus , . 
by the kadec - klee property of the hilbert space , we have as . from lemma , we also see that converges strongly to .theorem is proved .in this section , we introduce several applications of algorithm [ algor1 ] to gato differentiable eps and multivalued variational inequalities .we consider eps for gato differentiable bifunctions .we denote by the gato derivative of the function at . for solving ep , we assume that the bifunction satisfies the following conditions : * is monotone on and for all ; * is convex and gato differentiable on ; * there exists a constant such that * for all .if ep is reduced to vip for the operator then the condition is equivalent to the lipschitzianity of with the constant .we need the following results .* lemma 2)[lem3 ] suppose that the conditions hold .then , * the operator is monotone on . * .[ lem4 ] assume that is a l - lipschitz continuous operator and the bifunction is defined by for all .then * is lipschitz - type continuous on with . * if and only if , where .\i . from the - lipschitz continuity of , the cauchy - schwarz and cauchy inequalities, we have this implies that is lipschitz - type continuous on with .+ ii . from the definition of , we have in which the third equality is followed from the fact that and the last equality is true because of the definition of the metric projection .lemma [ lem4 ] is proved .thanks to lemma [ lem3 ] , instead of ep ( [ eq : ep ] ) we solve vip ( [ vip ] ) for the operator onto . it is emphasized that and are slightly strong conditions .however , in this case , we can use the existing methods for vips to solve eps .for instance , using the subgradient extragradient method ( * ? ? ?* algorithm 3.6 ) we obtain the following hybrid algorithm for solving ep ( [ eq : ep ] ) where .if the conditions hold for all then generated by ( [ cgr2011 ] ) converges strongly to . in this subsection , we introduce the following strong convergence result .[ theo3 ] let be a nonempty closed convex subset of a real hilbert space .assume that the bifunction satisfies all conditions such that is nonempty .let be the sequence generated by the following manner : and where are defined as in algorithm [ algor1 ] with .then , the sequence converges strongly to .set for all , where .lemma .ii .ensures that .the bifunction satisfies the conditions and automatically . from lemma [ lem3].i ., we see that is monotone , and so it is also pseudomonotone or satisfies the condition .lemma [ lem4].i . and ensure that the condition holds for the bifunction . from step 1 of algorithm and lemma [ lem4].ii ., .thus , theorem [ theo3 ] is directly followed from theorem [ theo.1 ] for . in this subsection, we consider the following multivalued variational inequality problem ( mvip ) where is a multivalued compact operator . for a pair , we put it is easy to show that is a solution of mvip ( [ mvip ] ) if and only if is a solution of ep for the bifunction on .we recall the following definitions . 
a multivalued operator is said to be : * monotone on if * pseudomonotone on if * - lipschitz continuous if there exists a positive constant such that if we denote by the hausdorff distance between two sets and then the definition iii . means that we can easily check that if is pseudomonotone and - lipschitz continuous then is also pseudomonotone and lipschitz - type continuous with two constants . note that , when is single - valued then algorithm becomes the _ hybrid algorithm without the extrapolation step _ for variational inequalities . when is multivalued then algorithm can be applied for the bifunction defined by ( [ eq : f ] ) . a disadvantage of performing algorithm in this case is that it is not easy to choose an approximation of the bifunction . in fact , we can prove the strong convergence of the following algorithm where are defined as in algorithm [ algor1 ] with . in this section , we consider two previously known academic numerical examples in euclidean spaces . the purpose of these experiments is to illustrate the convergence of algorithm and compare its efficiency with algorithms , and . of course , there are many mathematical models for eps in infinite dimensional hilbert spaces , see , for instance , and the norm convergence of algorithms is more necessary than the weak convergence . the ability of the implementation of these algorithms has been discussed in sections and [ algor ] . note that algorithm , in general , is difficult to use in numerical experiments because of the complexity of the sets . however , in the examples below , the feasible set is a polyhedron expressed by , where is a matrix , is a vector . thus , from the definition of in algorithm , we see that it is also a polyhedron and with . after that can be sequentially constructed by adding a linear inequality constraint to the set of constraints of . this is performed in matlab version 7.0 and the number of constraints increases when increases . the sets in algorithms and are similarly constructed by adding more constraints at each step . in contrast , the sets in algorithm are two half - spaces , so we use the explicit formula in to compute . all convex quadratic optimization programs and the projections on polyhedrons can be solved easily by the matlab optimization toolbox , where the projections are equivalently rewritten as distance optimization programs . the algorithms are performed on a pc desktop intel(r ) core(tm ) m cpu @ 2.50 ghz , ram 2.00 gb . for a given tolerance , we compare numbers of iterates ( iter . ) and execution time ( cpu in sec . ) of the mentioned algorithms above when choosing different starting points . _ example 1 . _ we consider the bifunction proposed in ( * ? ? ? * example 3 ) as and the feasible set \times[0,1]$ ] . it is easy to show that is monotone ( so pseudomonotone ) and lipschitz - type continuous with . the solution set of ep for on is . in this example , for a starting point then the sequence generated by algorithms , , and converges strongly to which is easily known because is explicit . the termination criterion in all algorithms is . the parameters are chosen as follows . in algorithm , we chose . the results are shown in table . results for given starting points in _ example 1 _ .
[ cols="^,^,^,^,^,^,^,^,^ " , ] although , the study of the numerical examples here is preliminary and it is clear that ep depends on the structure of the feasible set and the bifunction .however , the results in tables and show the convergence of our proposed algorithm and compare its efficiency with the others .the paper proposes a novel algorithm for solving eps for a class of pseudomonotone and lipschitz - type continuous bifunctions . by constructing the specially cutting halfspaces ,we have designed the algorithm without the extra - steps .this is the reason which explains why our algorithm can be considered as an improvement of some previously known algorithms . the strong convergence of the algorithm is proved and its efficiency is illustrated by some numerical experiments .it is also emphasized that we still have to solve exactly an optimization problem in each step .this , in general , is a disadvantage of the algorithm ( also , of the extragradient methods and the armijo linesearch methods ) when equilibrium bifunctions and feasible sets have complex structures .however , contrary to several previous algorithms , our algorithm does not only avoid using the extra - steps which , in general , are inherently costly but also is numerically easer at its last step because the projection is only performed onto the intersection of two half - spaces .the paper also help us in the design and analysis of more practical algorithms to be seen .finally , it seems to be that the algorithm also has competitive advantage .99 anh , p.k . ,hieu , d.v . :parallel hybrid methods for variational inequalities , equilibrium problems and common fixed point problems .vietnam j. math .( 2015 ) , doi:10.1007/s10013 - 015 - 0129-z blum , e. , oettli , w. : from optimization and variational inequalities to equilibrium problems , math. program . * 63 * , 123 - 145 ( 1994 ) ceng , l. c. , hadjisavvas , n. , wong , n. c. : strong convergence theorem by a hybrid extragradient - like approximation method for variational inequalities and fixed point problems .. optim . * 46 * , 635646 ( 2010 ) ceng , l.c ., yao , j.c . : an extragradient - like approximation method for variational inequality problems and fixed point problems .* 190 * , 205 - 215 ( 2007 ) combettes , p. l. , hirstoaga , s. a. : equilibrium programming in hilbert spaces. j. nonlinear convex anal . *6*(1 ) , 117 - 136 ( 2005 ) censor , y . , gibali , a. , reich , s. : the subgradient extragradient method for solving variational inequalities in hilbert space .theory appl .* 148*(2 ) , 318 - 335 ( 2011 ) censor , y. , gibali , a. , reich , s. : strong convergence of subgradient extragradient methods for the variational inequality problem in hilbert space , optim .methods softw .* 26*(4 - 5 ) , 827 - 845 ( 2011 ) daniele , p. , giannessi , f. , maugeri , a. : equilibrium problems and variational models , kluwer , ( 2003 ) dinh , b.v ., hung , p.g ., muu , l.d .: bilevel optimization as a regularization approach to pseudomonotone equilibrium problems .* 35 * ( 5 ) , 539563 ( 2014 ) goebel , k. , reich , s. : uniform convexity , hyperbolic geometry , and nonexpansive map - pings .marcel dekker , new york ( 1984 ) goebel , k. , kirk , w.a . :topics in metric fixed point theory .cambridge studies in advanced math ., vol . * 28*. cambridge university press , cambridge , ( 1990 ) hieu , d. v. : a parallel hybrid method for equilibrium problems , variational inequalities and nonexpansive mappings in hilbert space .j. 
korean math .* 52 * , 373 - 388 ( 2015 ) hieu , d. v. : the common solutions to pseudomonotone equilibrium problems .iranian math .( 2015 ) ( accepted for publication ) he , b .-s . , yang , z .- h . , yu an , x .-m . : an approximate proximal - extragradient type method for monotone variational inequalities .* 300 * , 362374 ( 2004 ) fan , k. : a minimax inequality and applications . in : shisha ,o. ( ed . ) inequality iii , pp .103 - 113 . academic press , new york ( 1972 ) konnov , i.v . :combined relaxation methods for variational inequalities .springer , berlin ( 2000 ) korpelevich , g. m. : the extragradient method for finding saddle points and other problems , ekonomikai matematicheskie metody . * 12 * , 747 - 756 ( 1976 ) lyashko , s. i. , semenov , v. v. and voitova t. a. : low - cost modification of korpelevich s methods for monotone equilibrium problems , cybernetics and systems analysis , * 47*(4 ) , 146 - 154 ( 2011 ) mastroeni , g. : on auxiliary principle for equilibrium problems . publ .* 3 * , 12441258 ( 2000 ) muu , l.d . , oettli , w. : convergence of an adative penalty scheme for finding constrained equilibria .nonlinear anal .18*(12 ) , 1159 - 1166 ( 1992 ) yu . v. malitsky , v. v. semenov : a hybrid method without extrapolation step for solving variational inequality problems . j. glob . optim . * 61 * , 193 - 202 ( 2015 ) martinet , b. : r d in variationelles par approximations successives .rech . op ._ * 4 * , 154159 ( 1970 ) mordukhovich , b. , panicucci , b. , pappalardo , m. , passacantando , m. : hybrid proximal methods for equilibrium problems .* 6 * , 1535 - 1550 ( 2012 ) nadezhkina , n. , takahashi , w. : strong convergence theorem by a hybrid method for nonexpansive mappings and lipschitz - continuous monotone mappings .siam j. optim .* 16 * , 1230 - 1241 ( 2006 ) nadezhkina , n. , takahashi , w. : weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings , j. optim .theory appl .* 128 * , 191 - 201 ( 2006 ) nguyen , t. t. v. , strodiot , j. j. , nguyen , v. h. : hybrid methods for solving simultaneously an equilibrium problem and countably many fixed point problems in a hilbert space .theory appl .( 2013 ) doi 10.1007/s10957 - 013 - 0400-y nguyen , t. p. d , strodiot , j. j. , nguyen , v. h. , nguyen , t. t. v. : a family of extragradient methods for solving equilibrium problems . journal of industrial and management optimization .11(2 ) , 619 - 630 ( 2015 ) .doi:10.3934/jimo.2015.11.619 quoc , t.d . ,muu , l.d . ,hien , n.v .: extragradient algorithms extended to equilibrium problems .optimization * 57 * , 749 - 776 ( 2008 ) rockafellar , r.t . : convex analysis .princeton , nj : princeton university press , 1970 rockafellar , r.t .: monotone operators and the proximal point algorithm .siam j. control optim .* 14 * , 877898 ( 1976 ) solodov , m. v. , svaiter , b. f.:forcing strong convergence of proximal point iterations in hilbert space .math . program .* 87 * , 189 - 202 ( 2000 ) solodov , m. v , svaiter , b. f. : a new projection method for variational inequality problems .siam j. control .optim . * 37 * , 765776 ( 1999 ) strodiot , j. j. , nguyen , t. t. v , nguyen , v. h. : a new class of hybrid extragradient algorithms for solving quasi - equilibrium problems , j. glob . optim . * 56 * ( 2 ) , 373 - 397 ( 2013 ) takahashi , s. , takahashi , w. : viscosity approximation methods for equilibrium problems and fixed point in hilbert space . j. math .appl . * 331*(1 ) , 506 - 515 ( 2007 ) takahashi , w. 
, toyoda , m. : weak convergence theorems for nonexpansive mappings and monotone mappings .theory appl .* 118 * , 417428 ( 2003 ) vuong , p. t. , strodiot , j. j. , nguyen , v. h. : extragradient methods and linesearch algorithms for solving ky fan inequalities and fixed point problems .theory appl .* 155 * , 605627 ( 2012 ) jaiboon , c. , kumam , p. : strong convergence theorems for solving equilibrium problems and fixed point problems of -strict pseudo - contraction mappings by two hybrid projection methods . j. comput .math . * 234 * , 722 - 732 ( 2010 ) | in this paper , we introduce a new hybrid algorithm for solving equilibrium problems . the algorithm combines the extragradient method and the hybrid ( outer approximation ) method . in this algorithm , only an optimization program is solved at each iteration without the extra - steps like as in the extragradient method and the armijo linesearch method . a specially constructed half - space in the hybrid method is the reason for the absence of an optimization program in our algorithm . the strong convergence theorem is established and several numerical experiments are implemented to illustrate the convergence of the algorithm and compare it with others . van hieu dang ( communicated by the associate editor name ) |
kernel methods have proven to be a highly successful technique for solving many problems in machine learning , ranging from classification and regression to sequence annotation and feature extraction . at their heartlies the idea that inner products in high - dimensional feature spaces can be computed in implicit form via kernel function : here is a feature map transporting elements of the observation space into a possibly infinite - dimensional feature space .this idea was first used by to show nonlinear separation .there exists a rich body of literature on reproducing kernel hilbert spaces ( rkhs ) and one may show that estimators using norms in feature space as penalty are equivalent to estimators using smoothness in an rkhs .furthermore , one may provide a bayesian interpretation via gaussian processes .see e.g. for details . more concretely , to evaluate the decision function on an example , one typically employs the kernel trick as follows this has been viewed as a strength of kernel methods , especially in the days that datasets consisted of ten thousands of examples .this is because the representer theorem of states that such a function expansion in terms of finitely many coefficients must exist under fairly benign conditions even whenever the space is infinite dimensional .hence we can effectively perform optimization in infinite dimensional spaces .this trick that was also exploited by for evaluating pca . frequently the coefficient space is referred to as _dual space_. this arises from the fact that the coefficients are obtained by solving a dual optimization problem .unfortunately , on large amounts of data , this expansion becomes a significant liability for computational efficiency .for instance , show that the number of nonzero ( i.e. , , also known as the number of `` support vectors '' ) in many estimation problems can grow linearly in the size of the training set . as a consequence ,as the dataset grows , the expense of evaluating also grows .this property makes kernel methods expensive in many large scale problems : there the sample size may well exceed billions of instances .the large scale solvers of and work in primal space to sidestep these problems , albeit at the cost of limiting themselves to linear kernels , a significantly less powerful function class .numerous methods have been proposed to mitigate this issue . to compare computational cost of these methodswe make the following assumptions : * we have observations and access to an with algorithm for solving the optimization problem at hand . in other words ,the algorithm is linear or worse .this is a reasonable assumption almost all data analysis algorithm need to inspect the data at least once to draw inference .* data has dimensions . for simplicitywe assume that it is dense with density rate , i.e. on average coordinates are nonzero . *the number of nontrivial basis functions is .this is well motivated by and it also follows from the fact that e.g. in regularized risk minimization the subgradient of the loss function determines the value of the associated dual variable . 
* we denote the number of ( nonlinear ) basis functions by .[ [ reduced - set - expansions ] ] reduced set expansions + + + + + + + + + + + + + + + + + + + + + + focused on compressing function expansions after the problem was solved by means of reduced - set expansions .that is , one first solves the full optimization problem at cost and subsequently one minimizes the discrepancy between the full expansion and an expansion on a subset of basis functions .the exponent of arises from the fact that we need to compute kernels times .evaluation of the reduced function set costs at least operations per instance and storage , since each kernel function requires storage of .[ [ low - rank - expansions ] ] low rank expansions + + + + + + + + + + + + + + + + + + + subsequent work by and aimed to reduce memory footprint and complexity by finding subspaces to expand functions .the key difference is that these algorithms reduce the function space _ before _ seeing labels .while this is suboptimal , experimental evidence shows that for well designed kernels the basis functions extracted in this fashion are essentially as good as reduced set expansions .this is to be expected .after all , the kernel encodes our prior belief in which function space is most likely to capture the relevant dependencies between covariates and labels .these projection - based algorithms generate an -dimensional subspace : * compute the kernel matrix on an -dimensional subspace at cost . *the matrix is inverted at cost .* for all observations one computes an explicit feature map by projecting data in rkhs onto the set of basis vectors via } ] .the approach requires storage both at training and test time .training costs operations and prediction on a new observation costs .this is potentially much cheaper than reduced set kernel expansions .the experiments in showed that performance was very competitive with conventional rbf kernel approaches while providing dramatically simplified code .note that explicit spectral finite - rank expansions offer potentially much faster rates of convergence , since the spectrum decays as fast as the eigenvalues of the associated regularization operator .nonetheless random kitchen sinks are a very attractive alternative due to their simple construction and the flexility in synthesizing kernels with predefined smoothness properties . 
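To make the random kitchen sinks construction above concrete, the following is a minimal sketch (not the original implementation) of random Fourier features for the Gaussian RBF kernel. The bandwidth `sigma`, the number of basis functions and the ridge read-out are illustrative choices, and the dense Gaussian matrix `Z` is exactly the object whose multiplication cost and storage become the bottleneck discussed next.

```python
import numpy as np

def random_kitchen_sinks(X, n_basis=512, sigma=1.0, seed=0):
    """Random Fourier features: phi(x).dot(phi(y)) approximates exp(-||x-y||^2 / (2 sigma^2)).

    Drawing and multiplying by the dense Gaussian matrix Z costs O(n_basis * d)
    per example and O(n_basis * d) storage.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    Z = rng.normal(scale=1.0 / sigma, size=(n_basis, d))   # dense Gaussian projection matrix
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_basis)        # random phases
    return np.sqrt(2.0 / n_basis) * np.cos(X @ Z.T + b)

# usage sketch: linear (ridge) regression on the explicit random features
X = np.random.randn(200, 50)
y = np.random.randn(200)
Phi = random_kitchen_sinks(X, n_basis=512, sigma=2.0)
w = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(Phi.shape[1]), Phi.T @ y)
```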
[ [ fastfood ] ] fastfood + + + + + + + + our approach hews closely to random kitchen sinks .however , it succeeds at overcoming their key obstacle the need to _ store _ and to _ multiply _ by a random matrix .this way , fastfood , accelerates random kitchen sinks from to time while only requiring rather than storage .the speedup is most significant for large input dimensions , a common case in many large - scale applications .for instance , a tiny 32x32x3 image in the cifar-10 already has 3072 dimensions , and non - linear function classes have shown to work well for mnist and cifar-10 .our approach relies on the fact that hadamard matrices , when combined with gaussian scaling matrices , behave very much like gaussian random matrices .that means these two matrices can be used in place of gaussian matrices in random kitchen sinks and thereby speeding up the computation for a large range of kernel functions .the computational gain is achieved because unlike gaussian random matrices , hadamard matrices admit fft - like multiplication and require no storage .we prove that the fastfood approximation is unbiased , has low variance , and concentrates almost at the same rate as random kitchen sinks .moreover , we extend the range of applications from radial basis functions to any kernel that can be written as dot product .extensive experiments with a wide range of datasets show that fastfood achieves similar accuracy to full kernel expansions and random kitchen sinks while being 100x faster with 1000x less memory .these improvements , especially in terms of memory usage , make it possible to use kernel methods even for embedded applications .our experiments also demonstrate that fastfood , thanks to its speedup in training , achieves state - of - the - art accuracy on the cifar-10 dataset among permutation - invariant methods .table [ tab : compcost ] summarizes the computational cost of the above algorithms ..computational cost for reduced rank expansions .efficient algorithms achieve and typical sparsity coefficients are . [ cols="<,<,<,<,<",options="header " , ] in figure [ fig : expansion ] , we show regression performance as a function of the number of basis functions on the cpu dataset .as is evident , it is necessary to have a large in order to learn highly nonlinear functions .interestingly , although the fourier features do not seem to approximate the gaussian rbf kernel , they perform well compared to other variants and improve as increases .this suggests that learning the kernel by direct spectral adjustment might be a useful application of our proposed method . in the previous experiments ,we observe that fastfood is on par with exact kernel computation , the nystrom method , and random kitchen sinks .the key point , however , is to establish whether the algorithm offers computational savings . for this purposewe compare random kitchen sinks using eigen and our method using spiral .both are highly optimized numerical linear algebra libraries in c++ .we are interested in the time it takes to go from raw features of a vector with dimension to the label prediction of that vector . on a small problem with and , performing prediction with random kitchen sinks takes 0.07 seconds .our method is around 24x faster , taking only 0.003 seconds to compute the label for one input vector .the speed gain is even more significant for larger problems , as is evident in table [ tab : improvements ] .this confirms experimentally the vs. runtime and the vs. 
storage of fastfood relative to random kitchen sinks . in other words ,the computational savings are substantial for large input dimensionality . to understand the importance of nonlinear feature expansions for a practical application , we benchmarked fastfood , random kitchen sinks on the cifar-10 dataset which has 50,000 training images and 10,000 test images .each image has 32x32 pixels and 3 channels ( ) . in our experiments ,linear svms achieve 42.3% accuracy on the test set .non - linear expansions improve the classification accuracy significantly .in particular , fastfood fft ( `` fourier features '' ) achieve 63.1% while fastfood ( `` hadamard features '' ) and random kitchen sinks achieve 62.4% with an expansion of .these are also best known classification accuracies using permutation - invariant representations on this dataset . in terms of speed ,random kitchen sinks is 5x slower ( in total training time ) and 20x slower ( in predicting a label given an image ) compared to both fastfood and and fastfood fft .this demonstrates that non - linear expansions are needed even when the raw data is high - dimensional , and that fastfood is more practical for such problems . in particular , in many cases ,linear function classes are used because they provide fast training time , and especially test time , but not because they offer better accuracy .the results on cifar-10 demonstrate that fastfood can overcome this obstacle .we demonstrated that it is possible to compute nonlinear basis functions in time , a significant speedup over the best competitive algorithms .this means that kernel methods become more practical for problems that have large datasets and/or require real - time prediction .in fact , fastfood can be used to run on cellphones because not only it is fast , but it also requires only a small amount of storage . note that our analysis is not limited to translation invariant kernels but it also includes inner product formulations .this means that for most practical kernels our tools offer an easy means of making kernel methods scalable beyond simple subspace decomposition strategies .extending our work to other symmetry groups is subject to future research . also note that fast multiplications with near - gaussian matrices are a key building block of many randomized algorithms .it remains to be seen whether one could use the proposed methods as a substitute and reap significant computational savings .i. bogaert , b. michiels , and j. fostier . computation of legendre polynomials and gauss legendre nodes and weights for parallel computing ._ siam journal on scientific computing _ , 340 ( 3):0 c83c101 , 2012 .b. boser , i. guyon , and v. vapnik .a training algorithm for optimal margin classifiers . in d.haussler , editor , _ proc . annual conf .computational learning theory _, pages 144152 , pittsburgh , pa , july 1992 .acm press .s. boyd , n. parikh , e. chu , b. peleato , and j. eckstein . distributed optimization and statistical learning via the alternating direction method of multipliers . _foundations and trends in machine learning _ , 30 ( 1):0 1123 , 2010 .a. das and d. kempe .submodular meets spectral : greedy algorithms for subset selection , sparse approximation and dictionary selection . in l.getoor and t. scheffer , editors , _ proceedings of the 28th international conference on machine learning , icml _ , pages 10571064 .omnipress , 2011 .a. dasgupta , r. kumar , and t. sarls . fast locality - sensitive hashing . 
in _ proceedings of the 17th acmsigkdd international conference on knowledge discovery and data mining _ , pages 10731081 .acm , 2011 .f. girosi and g. anzellotti .rates of convergence for radial basis functions and neural networks . in r.j. mammone , editor , _ artificial neural networks for speech and vision _ , pages 97113 , london , 1993 .chapman and hall .s. matsushima , s.v.n .vishwanathan , and a.j .linear support vector machines via dual cached loops . in q.yang , d. agarwal , and j. pei , editors , _ the 18th acm sigkdd international conference on knowledge discovery and data mining , kdd _ , pages 177185 .acm , 2012 .url http://dl.acm.org/citation.cfm?id=2339530 .a. rahimi and b. recht .random features for large - scale kernel machines . in j.c .platt , d. koller , y. singer , and s. roweis , editors , _ advances in neural information processing systems 20_. mit press , cambridge , ma , 2008 .n. ratliff , j. bagnell , and m. zinkevich .( online ) subgradient methods for structured prediction . in _ eleventh international conference on artificial intelligence and statistics ( aistats ) _ , march 2007 .a. j. smola and b. schlkopf . sparse greedy matrix approximation for machine learning . in_ proceedings of the international conference on machine learning _ , pages 911918 , san francisco , 2000 .morgan kaufmann publishers .a. j. smola , b. schlkopf , and k .-general cost functions for support vector regression . in t. downs , m. frean , and m. gallagher , editors , _ proc . of the ninth australian conf . on neural networks _ , pages 7983 , brisbane , australia , 1998university of queensland .a. j. smola , z. l. vri , and r. c. williamson .regularization with dot - product kernels . in t.k. leen , t. g. dietterich , and v. tresp , editors , _ advances in neural information processing systems 13 _ , pages 308314 . mit press , 2001 .b. taskar , c. guestrin , and d. koller .max - margin markov networks . in s. thrun , l. saul , and b. schlkopf , editors , _ advances in neural information processing systems 16 _ , pages 2532 , cambridge , ma , 2004 .mit press . v. vapnik , s. golowich , and a. j. smola .support vector method for function approximation , regression estimation , and signal processing . in m. c. mozer , m. i. jordan , and t. petsche , editors , _ advances in neural information processing systems 9 _ , pages 281287 , cambridge , ma , 1997 . mit press . c. k. i. williams .prediction with gaussian processes : from linear regression to linear prediction and beyond . in m.i. jordan , editor , _ learning and inference in graphical models _ , pages 599621 .kluwer academic , 1998 .christoper k. i. williams and matthias seeger .using the nystrom method to speed up kernel machines . in t.k. leen , t. g. dietterich , and v. tresp , editors , _ advances in neural information processing systems 13 _ , pages 682688 , cambridge , ma , 2001 . mit press .r. c. williamson , a. j. smola , and b. schlkopf .generalization bounds for regularization networks and support vector machines via entropy numbers of compact operators ._ ieee trans .inform . theory _, 470 ( 6):0 25162532 , 2001 . | despite their successes , what makes kernel methods difficult to use in many large scale problems is the fact that storing and computing the decision function is typically expensive , especially at prediction time . in this paper , we overcome this difficulty by proposing fastfood , an approximation that accelerates such computation significantly . 
Key to Fastfood is the observation that Hadamard matrices, when combined with diagonal Gaussian matrices, exhibit properties similar to dense Gaussian random matrices. Yet unlike the latter, Hadamard and diagonal matrices are inexpensive to multiply and to store. These two matrices can therefore be used in place of the Gaussian matrices in random kitchen sinks, thereby speeding up the computation for a large range of kernel functions. Specifically, Fastfood requires time and storage to compute non-linear basis functions in dimensions, a significant improvement over computation and storage, without sacrificing accuracy. Our method applies to any translation-invariant kernel and any dot-product kernel, such as the popular RBF kernels and polynomial kernels. We prove that the approximation is unbiased and has low variance. Experiments show that we achieve similar accuracy to full kernel expansions and random kitchen sinks while being 100x faster and using 1000x less memory. These improvements, especially in terms of memory usage, make kernel methods more practical for applications that have large training sets and/or require real-time prediction. |
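As a concrete illustration of the Fastfood construction summarised above, the sketch below (not the authors' implementation) builds one block V = S H G Pi H B for an input dimension that is a power of two, using an in-place fast Walsh-Hadamard transform. The diagonal S rescales rows so that their norms match those of a dense Gaussian matrix; the normalisation and padding details are simplified here and should be treated as illustrative.

```python
import numpy as np

def fwht(a):
    """In-place (unnormalised) fast Walsh-Hadamard transform; len(a) must be a power of two."""
    h, n = 1, len(a)
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

def fastfood_features(X, sigma=1.0, seed=0):
    """One Fastfood block V = S H G Pi H B used in place of a dense Gaussian matrix.

    Only three diagonals and one permutation are stored (O(d) memory), and each
    projection costs two Hadamard transforms instead of a dense matrix multiply.
    Returns the cos/sin feature map approximating the Gaussian RBF kernel.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    assert d > 0 and (d & (d - 1)) == 0, "pad the input so that d is a power of two"
    B = rng.choice([-1.0, 1.0], size=d)                          # random sign diagonal
    Pi = rng.permutation(d)                                      # random permutation
    G = rng.normal(size=d)                                       # diagonal Gaussian
    S = np.sqrt(rng.chisquare(d, size=d)) / np.linalg.norm(G)    # row-norm correction

    V = np.empty((n, d))
    for k in range(n):
        t = fwht(X[k] * B)                                       # H B x
        t = fwht(G * t[Pi])                                      # H G Pi (H B x)
        V[k] = S * t / (sigma * np.sqrt(d))
    return np.hstack([np.cos(V), np.sin(V)]) / np.sqrt(d)
```

Stacking several independent blocks of this form yields more basis functions than input dimensions, which is how larger feature expansions are obtained in practice.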
estimation of discrete structure , such as graphs or clusters , or variable selection is an age - old problem in statistics .it has enjoyed increased attention in recent years due to the massive growth of data across many scientific disciplines .these large datasets often make estimation of discrete structures or variable selection imperative for improved understanding and interpretation .most classical results do not cover the loosely defined case of high - dimensional data , and it is mainly in this area where we motivate the promising properties of our new stability selection . in the context of regression , for example , an active area of research is to study the case , where the number of variables or covariates exceeds the number of observations ; for an early overview see for example . in a similar spirit , graphical modelling with many more nodes than sample sizehas been the focus of recent research , and cluster analysis is another widely used technique to infer a discrete structure from observed data .challenges with estimation of discrete structures include computational aspects , since corresponding optimisation problems are discrete , as well as determining the right amount of regularisation , for example in an asymptotic sense for consistent structure estimation .substantial progress has been made over the last years in developing computationally tractable methods which have provable statistical ( asymptotic ) properties , even for the high - dimensional setting with many more variables than samples .one interesting stream of research has focused on relaxations of some discrete optimisation problems , for example by -penalty approaches or greedy algorithms .the practical usefulness of such procedures has been demonstrated in various applications .however , the general issue of selecting a proper amount of regularisation ( for the procedures mentioned above and for many others ) for getting a right - sized structure or model has largely remained a problem with unsatisfactory solutions .we address the problem of proper regularisation with a very generic subsampling approach ( bootstrapping would behave similarly ) .we show that subsampling can be used to determine the amount of regularisation such that a certain familywise type i error rate in multiple testing can be conservatively controlled for finite sample size . particularly for complex , high - dimensional problems ,a finite sample control is much more valuable than an asymptotic statement with the number of observations tending to infinity . beyond the issue of choosing the amount of regularisation, the subsampling approach yields a new structure estimation or variable selection scheme . for the more specialised case of high - dimensional linear models , we prove what we expect in greater generality : namely that subsampling in conjunction with -penalised estimation requires much weaker assumptions on the design matrix for asymptotically consistent variable selection than what is needed for the ( non - subsampled ) -penalty scheme .furthermore , we show that additional improvements can be achieved by randomising not only via subsampling but also in the selection process for the variables , bearing some resemblance to the successful tree - based random forest algorithm . 
subsampling ( and bootstrapping )has been primarily used so far for asymptotic statistical inference in terms of standard errors , confidence intervals and statistical testing .our work here is of a very different nature : the marriage of subsampling and high - dimensional selection algorithms yields finite sample familywise error control and markedly improved structure estimation or selection methods . in general ,let be a -dimensional vector , where is sparse in the sense that components are non - zero .in other words , . denote the set of non - zero values by and the set of variables with vanishing coefficient by .the goal of structure estimation is to infer the set from noisy observations . as a first supervised example , consider data with univariate response variable and -dimensional covariates .we typically assume that s are i.i . distributed .the vector could be the coefficient vector in a linear model where , is the design matrix and is the random noise whose components are independent , identically distributed .thus , inferring the set from data is the well - studied variable selection problem in linear regression . a main stream of classical methods proceeds to solve this problem by penalising the negative log - likelihood with the -norm which equals the number of non - zero components of .the computational task to solve such an -norm penalised optimisation problem becomes quickly unfeasible if is getting large , even when using efficient branch and bound techniques .alternatively , one can relax the -norm by the -norm penalty .this leads to the lasso estimator , where is a regularisation parameter and we typically assume that the covariates are on the same scale , i.e. .an attractive feature of lasso is its computational feasibility for large since the optimisation problem in ( [ lasso ] ) is convex .furthermore , the lasso is able to select variables by shrinking certain estimated coefficients exactly to 0 .we can then estimate the set of non - zero coefficients by which involves convex optimisation only .substantial understanding has been gained over the last few years about consistency of such lasso variable selection , and we present the details in section [ subsec.randomlasso ] . 
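A minimal sketch of the selection rule above — fit the lasso at a given penalty and read off the non-zero coefficients — using scikit-learn. Note that scikit-learn's `alpha` corresponds to the penalty divided by the sample size, and the data-generating model and penalty value are purely illustrative; with such an ad-hoc choice of penalty some noise variables typically enter as well, which is precisely the difficulty addressed below.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s = 100, 200, 5                        # high-dimensional setting p > n, s active variables
X = rng.normal(size=(n, p))
X /= np.sqrt((X ** 2).mean(axis=0))          # rescale predictors to a common norm
beta = np.zeros(p)
beta[:s] = 1.0
y = X @ beta + 0.1 * rng.normal(size=n)

lam = 0.1                                    # placeholder; choosing the penalty is the hard part
fit = Lasso(alpha=lam, fit_intercept=False, max_iter=5000).fit(X, y)
S_hat = np.flatnonzero(fit.coef_)            # estimated active set: non-zero coefficients
```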
among the challengesare the issue of choosing a proper amount of regularisation for consistent variable selection and the fact that restrictive design conditions are needed for asymptotically recovering the true set of relevant covariates .a second example is on unsupervised gaussian graphical modelling .the data is assumed to be the goal is to infer conditional dependencies among the variables or components in .it is well - known that and are conditionally dependent given all other components if and only if , and we then draw an edge between nodes and in a corresponding graph .the structure estimation is thus on the index set which has cardinality ( and of course , we can represent as a vector ) and the set of relevant conditional dependencies is .similarly to the problem of variable selection in regression , -norm methods are computationally very hard and become very quickly unfeasible for moderate or large values of .a relaxation with -type penalties has also proven to be useful in this context .a recent proposal is the graphical lasso : this amounts to an -penalised estimator of the gaussian log - likelihood , partially maximised over the mean vector , when minimising over all nonnegative definite symmetric matrices .the estimated graph structure is then which involves convex optimisation only and is computationally feasible for large values of .another potential area of application is clustering .choosing the correct number of cluster is a notoriously difficult problem . looking for clusters that are stable under perturbations or subsampling of the data can help to get a better sense of a meaningful number of clusters and to validate results .indeed , there has been some activity in this area , most notably in the context of _ consensus clustering _ . for an early application see .our proposed false discovery control can be applied to consensus clustering , yielding good estimates of the parameters of a suitable base clustering method for consensus clustering .the use of resampling for purposes of validation is certainly not new ; we merely try to put it into a more formal framework and to show certain empirical and theoretical advantages of doing so .it seems difficult to give a complete coverage of all previous work in the area , as notions of stability , resampling and perturbations are very natural in the context of structure estimation and variable selection .we reference and compare with previous work throughout the paper .the structure of the paper is as follows .the generic stability selection approach , its familywise type i multiple testing error control and some representative examples from high - dimensional linear models and gaussian graphical models are presented in section [ sec.stable ] . a detailed asymptotic analysis of lasso and randomised lasso for high - dimensional linear modelsis given in section [ sec.cons ] and more numerical results are described in section [ sec.numeric ] . after a discussion in section [ sec.disc ] ,we collect all the technical proofs in the appendix .stability selection is not a new variable selection technique .its aim is rather to enhance and improve existing methods .first , we give a general description of stability selection and we present specific examples and applications later .we assume throughout this section [ sec.stable ] that the data , denoted here by , are independent and identically distributed ( e.g. with covariate and response ) . 
for a generic structure estimation or variable selection technique , we have a tuning parameter that determines the amount of regularisation .this tuning parameter could be the penalty parameter in -penalised regression , see ( [ lasso ] ) , or in gaussian graphical modelling , see ( [ glasso ] ) ; or it may be number of steps in forward variable selection or orthogonal matching pursuit or the number of iterations in matching pursuit or boosting ; a large number of steps of iterations would have an opposite meaning from a large penalty parameter , but this does not cause conceptual problems . for every value , we obtain a structure estimate .it is then of interest to determine whether there exists a such that is identical to with high probability and how to achieve that right amount of regularisation .we motivate the concept of stability paths in the following , first for regression .stability paths are derived from the concept of regularisation paths .a regularisation path is given by the coefficient value of each variable over all regularisation parameters : .stability paths ( defined below ) are , in contrast , the _ probability _ for each variable to be selected when randomly resampling from the data . for any given regularisation parameter , the selected set is implicitly a function of the samples .we write where necessary to express this dependence .let be a random subsample of of size , drawn without replacement .for every set , the probability of being in the selected set is the probability in ( [ pi ] ) is with respect to both the random subsampling ( and other sources of randomness if is a randomised algorithm , see section [ subsec.randomlasso ] ) . the sample size of is chosen as it resembles most closely the bootstrap while allowing computationally efficient implementation .subsampling has also been advocated in a related context in . for every variable , the stability path is given by the selection probabilities , .it is a complement to the usual path - plots that show the coefficients of all variables as a function of the regularisation parameter .it can be seen in figure [ fig : stabpath ] that this simple path plot is potentially very useful for improved variable selection for high - dimensional data . in the remainder of the manuscript , we look at the selection probabilities of individual variables .the definition above covers also sets of variables .we could monitor the selection probability of a set of functionally related variables , say , by asking how often _ at least one _ variable in this set is chosen or how often _ all _ variables in the set are chosen .left : the lasso path for the vitamin gene - expression dataset .the paths of the 6 non - permuted genes are plotted as solid , red lines , while the paths of the 4082 permuted genes are shown as broken , black lines . selecting a model with all 6 unpermuted genes invariably means selecting a large number of irrelevant noise variables .middle : the stability path of lasso .the first 4 variables chosen with stability selection are truly non - permuted variables .right : the stability path for the ` randomised lasso ' with weakness , introduced in section [ subsec.randomlasso ] .now all 6 non - permuted variables are chosen before any noise variable enters the model ._ , scaledwidth=95.0% ] we apply stability selection to the lasso defined in ( [ lasso ] ) .we work with a gene expression dataset for illustration which is kindly provided by dsm nutritional products ( switzerland ) . 
for samples, there is a continuous response variable measuring the logarithm of riboflavin ( vitamin b2 ) production rate of bacillus subtilis , and we have continuous covariates measuring the logarithm of gene expressions from essentially the whole genome of bacillus subtilis .certain mutations of genes are thought to lead to higher vitamin concentrations and the challenge is to identify those relevant genes via a linear regression analysis .that is , we consider a linear model as in ( [ eq : linear ] ) and want to infer the set .instability of the selected set of genes has been noted before , if either using marginal association or variable selection in a regression or classification model . are close in spirit to our approach by arguing for ` consensus ' gene signatures which assess the stability of selection , while propose to measure stability of so - called ` molecular profiles ' by the jaccard index .to see how lasso and the related stability path cope with noise variables , we randomly permute all but 6 of the 4088 gene expression across the samples , using the same permutation to keep the dependence structure between the permuted gene expressions intact .the set of 6 unpermuted genes has been chosen randomly among the 200 genes with the highest marginal association with the response .the lasso path is shown in the left panel of figure [ fig : stabpath ] , as a function of the regularisation parameter ( rescaled so that is the minimal -value for which the null model is selected and amounts to the basis pursuit solution ) .three of the ` relevant ' ( unpermuted ) genes stand out , but all remaining three variables are hidden within the paths of noise ( permuted ) genes .the middle panel of figure [ fig : stabpath ] shows the stability path .at least four relevant variables stand out much clearer now than they did in the regularisation path plot .the right panel shows the stability plot for randomised lasso which will be introduced in section [ subsec.randomlasso ] : now all 6 unpermuted variables stand above the permuted variables and the separation between ( potentially ) relevant variables and irrelevant variables is even better .choosing the right regularisation parameter is very difficult for the original path .the prediction optimal and cross - validated choice include too many variables and the same effect can be observed in this example , where 14 permuted variables are included in the model chosen by cross - validation . figure [ fig : stabpath ] motivates that choosing the right regularisation parameter is much less critical for the stability path and that we have a better chance to select truly relevant variables . in a traditional setting , variable selection would amount to choosing one element of the set of models where is again the set of considered regularisation parameters , which can be either continuous or discrete .there are typically two problems : first , the correct model might not be a member of ( [ list ] ) .second , even if it is a member , it is typically very hard for high - dimensional data to determine the right amount of regularisation to select exactly , or to select at least a close approximation .with stability selection , we do not simply select one model in the list ( [ list ] ) .instead the data are perturbed ( for example by subsampling ) many times and we choose all structures or variables that occur in a large fraction of the resulting selection sets . 
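The subsampling scheme just described can be written down in a few lines. The sketch below (hypothetical code, not the authors') computes the selection frequencies over subsamples of size floor(n/2) for a grid of penalties and keeps the variables whose maximal frequency exceeds a threshold; the base selector is the lasso fit from the earlier snippet, but any other structure estimator could be plugged in.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_select(X, y, lam):
    """Base selector: indices with non-zero lasso coefficients at penalty lam."""
    coef = Lasso(alpha=lam, fit_intercept=False, max_iter=5000).fit(X, y).coef_
    return np.flatnonzero(coef)

def stability_selection(X, y, lambdas, n_subsamples=100, threshold=0.6, seed=0):
    """Selection frequencies over subsamples of size floor(n/2); stable set via a threshold."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    freq = np.zeros((len(lambdas), p))
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=n // 2, replace=False)       # subsample without replacement
        for i, lam in enumerate(lambdas):
            freq[i, lasso_select(X[idx], y[idx], lam)] += 1.0
    freq /= n_subsamples                                      # estimated selection probabilities
    pi_hat = freq.max(axis=0)                                 # maximise over the penalty grid
    return np.flatnonzero(pi_hat >= threshold), freq

# X, y as in the earlier lasso snippet; the penalty grid is illustrative
stable, freq = stability_selection(X, y, lambdas=np.logspace(-2, 0, 10))
```

The columns of `freq`, viewed as functions of the penalty, are exactly the stability paths plotted in figure [fig:stabpath].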
for a cutoff with and a set of regularisation parameters , the set of stable variablesis defined as we keep variables with a high selection probability and disregard those with low selection probabilities .the exact cutoff with is a tuning parameter but the results vary surprisingly little for sensible choices in a range of the cutoff .neither do results depend strongly on the choice of regularisation or the regularisation region .see figure [ fig : stabpath ] for an example . before we present some guidance on how to choose the cutoff parameter and the regularisation region below , it is worthwhile pointing out that there have been related ideas in the literature on bayesian model selection . show certain predictive optimality results for the so - called _ median probability model _ , consisting of variables which have posterior probability of being in the model of 1/2 or greater ( as opposed to choosing the model with the highest posterior probability ) . or are examples of more applied papers considering bayesian variable selection in this context .when trying to recover the set , a natural goal is to include as few variables of the set of noise variables as possible .the choice of the regularisation parameter is hence crucial .an advantage of our stability selection is that the choice of the initial set of regularisation parameters has typically not a very strong influence on the results , as long as is varied with reason .another advantage , which we focus on below , is the ability to choose this set of regularisation parameters in a way that guarantees , under stronger assumptions , a certain bound on the expected number of false selections .[ def - not ] let be the set of selected structures or variables if varying the regularisation in the set .let be the average number of selected variables , .define to be the number of falsely selected variables with stability selection , in general , it is very hard to control , as the distribution of the underlying estimator depends on many unknown quantities .exact control is only possible under some simplifying assumptions .[ error control ] [ theo : error ] assume that the distribution of is exchangeable for all .also , assume that the original procedure is not worse than random guessing , i.e. for any , the expected number of falsely selected variables is then bounded by we will discuss below how to make constructive use of the value which is in general an unknown quantity .the expected number of falsely selected variables is sometimes called the per - family error rate ( _ pfer _ ) or , if divided by , the per - comparison error rate ( _ fcer _ ) in multiple testing . choosing less variables ( reducing ) or increasing the threshold for selection will , unsurprisingly , reduce the the expected number of falsely selected variables , with a minimal achievable non - trivial value of ( for and ) for the _pfer_. this seems low enough for all practical purposed as long as , say .the involved exchangeability assumption is perhaps stronger than one would wish , but there does not seem to be a way of getting error control in the same generality without making similar assumptions . 
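To see how the bound of theorem [theo:error] is used in practice, here is a small illustrative computation; in the notation used above the bound takes the form E[V] <= q^2 / ((2*pi_thr - 1)*p), and it can be inverted to find the largest admissible average number q of selected variables for a desired error level. The numbers p = 1000, threshold 0.9 and target level 1 are made up for illustration.

```python
def pfer_bound(q, p, pi_thr):
    """Bound of theorem [theo:error]: E[V] <= q**2 / ((2 * pi_thr - 1) * p)."""
    assert 0.5 < pi_thr < 1.0
    return q ** 2 / ((2.0 * pi_thr - 1.0) * p)

def max_q(ev_target, p, pi_thr):
    """Largest average number of selected variables keeping the bound below ev_target."""
    return int((ev_target * (2.0 * pi_thr - 1.0) * p) ** 0.5)

print(pfer_bound(q=28, p=1000, pi_thr=0.9))       # ~0.98 expected false selections
print(max_q(ev_target=1.0, p=1000, pi_thr=0.9))   # 28: tune the penalty region to select ~28 variables on average
```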
for regression in ( [ eq : linear ] ) , the exchangeability assumption is fulfilled for all reasonable procedures if the design is random and the distribution of is exchangeable .independence of all variables in is a special case .more generally , the variables could have a joint normal distribution with for all with and .for real data , we have no guarantee that the assumption is fulfilled but the numerical examples in section [ sec.numeric ] show that the bound holds up very well for real data . note also that the assumption of exchangeability is only needed to prove theorem [ theo : error ] .all other benefits of stability selection shown in this paper do not rely on this assumption . besides exchangeability , we needed another , quite harmless , assumption , namely that the original procedure is not worse than random guessing .one would certainly hope that this assumption is fulfilled .if it is not , the results below are still valid with slightly weaker constants .the assumption seems so weak , however , that we do not pursue this further .the threshold value is a tuning parameter whose influence is very small . for sensible values in the range of , say , , results tend to be very similar . once the threshold is chosen at some default value , the regularisation region is determined by the desired error control .specifically , for a default cutoff value , choosing the regularisation parameters such that say will control ; or choosing such that controls the familywise error rate ( fwer ) at level , i.e. . of course , we can proceed the other way round by fixing the regularisation region and choosing such that is controlled at the desired level . to do this ,we need knowledge about . this can be easily achieved by regularisation of the selection procedure in terms of the number of selected variables .that is , the domain for the regularisation parameter determines the number of selected variables , i.e. . for example , with -norm penalisation as in ( [ lasso ] ) or ( [ glasso ] ) , the number is given by the variables which enter first in the regularisation path when varying from a maximal value to some minimal value .mathematically , is such that . without stability selection ,the regularisation parameter invariably has to depend on the unknown noise level of the observations .the advantage of stability selection is that ( a ) exact error control is possible , and ( b ) the method works fine even though the noise level is unknown . this is a real advantage in high - dimensional problems with , as it is very hard to estimate the noise level in these settings .[ [ pointwise - control . ] ] pointwise control .+ + + + + + + + + + + + + + + + + + for some applications , evaluation of subsampling replicates of are already computationally very demanding for a single value of .if this single value is chosen such that some overfitting occurs and the set is rather too large , in the sense that it contains with high probability , the same approach as above can be used and is in our experience very successful .results typically do not depend strongly on the utilised regularisation .see the example below for graphical modelling . setting , one can immediately transfer all results above to the case of what we call here pointwise control . for methods which select structures incrementally ,i.e. 
for which for all , pointwise control and control with are equivalent since is then monotonically increasing with decreasing for all .vitamin gene - expression dataset .the regularisation path of graphical lasso ( top row ) and the corresponding point - wise stability selected models ( bottom row ) ._ , scaledwidth=90.0% ] the same plot as in figure [ fig : eggs ] but with the variables ( expression values of each gene ) permuted independently .the empty graph is the true model . with stability selection ,only a few errors are made , as guaranteed by the made error control ._ , scaledwidth=90.0% ] stability selection is also promising for graphical modelling . herewe focus on gaussian graphical models as described in section [ subsec.prelim ] around formula ( [ ggm ] ) and ( [ glasso ] ) .the pattern of non - zero entries in the inverse covariance matrix corresponds to the edges between the corresponding pairs of variables in the associated graph and is equivalent to a non - zero partial correlation ( or conditional dependence ) between such pairs of variables .there has been interest recently in using -penalties for model selection in gaussian graphical models due to their computational efficiency for moderate and large graphs .here we work with the graphical lasso , as applied to the data from 160 randomly selected genes from the vitamin gene - expression dataset ( without the response variable ) introduced in section [ subsec.examp1 ] .we want to infer the set of non - zero entries in the inverse covariance matrix .part of the resulting regularisation path of the graphical lasso showing graphs for various values of the regularisation parameter , i.e. where , are shown in the first row of figure [ fig : eggs ] . for reasons of display , variables ( genes ) are ordered first using hierarchical clustering and are symbolised by nodes arranged in a circle .stability selection is shown in the bottom row of figure [ fig : eggs ] .we pursue a pointwise control approach . for each value of , we select the threshold so as to guarantee , that is we expect fewer than 30 wrong edges among the 12720 possible edges in the graph .the set varies remarkably little for the majority of the path and the choice of ( which is implied by ) does not seem to be critical , as already observed for variable selection in regression .next , we permute the variables ( expression values ) randomly , using a different permutation for each variable ( gene ) .the true graph is now the empty graph .as can be seen from figure [ fig : eggsnull ] , stability selection selects now just very few edges or none at all ( as it should ) .the top row shows the corresponding graphs estimated with the graphical lasso which yields a much poorer selection of edges .stability selection demands to re - run multiple times . evaluating selection probabilities over 100 subsamples seems sufficient in practice .the algorithmic complexity of lasso in ( [ lasso ] ) or in ( [ randomisedlasso ] ) below is of the order , see . in the regime , running the full lasso path on subsamples of size is hence a quarter of the cost of running the algorithm on the full dataset and running 100 simulations is 25 times the cost of running a single fit on the full dataset .this cost could be compared with the cost of cross - validation , as this is what one has to resort to often in practice to select the regularisation parameter . 
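The cost comparison above, and continued below, can be reproduced with a line of arithmetic; the quadratic-in-n scaling of the fitting cost is the assumption stated above (p fixed, p larger than n), and the linear-in-n case is included for contrast.

```python
def relative_cost(frac_n, n_fits, exponent=2.0):
    """Cost of n_fits fits on a fraction frac_n of the data, relative to one full fit,
    assuming the fitting cost scales like n**exponent for fixed p."""
    return n_fits * frac_n ** exponent

stability = relative_cost(0.5, 100)     # 100 subsamples of size n/2 -> 25 full-fit equivalents
cv10 = relative_cost(0.9, 10)           # 10-fold cross-validation   -> ~8.1 full-fit equivalents
print(stability / cv10)                 # ~3: stability selection relative to 10-fold CV
print(relative_cost(0.5, 100, 1.0) / relative_cost(0.9, 10, 1.0))   # ~5.6 under linear scaling
```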
running 10-foldcross - validation uses approximately as many computational resources as the single fit on the full dataset .stability selection is thus roughly three times more expensive than 10-fold cv .this analysis is based on the fact that the computational complexity scales like with the number of observations ( assuming ) .if computational costs would scale linearly with sample size ( e.g. for lasso with ) , this factor would increase to roughly 5.5 .stability selection with the lasso ( using 100 subsamples ) for a dataset with and takes about 10 seconds on a 2.2ghz processor , using the implementation of .computational costs of this order would often seem worthwhile , given the potential benefits .stability selection is a general technique , applicable to a wide range of applications , some of which we have discussed above . here , we want to discuss advantages and properties of stability selection for the specific application of variable selection in regression with high - dimensional data which is a well - studied topic nowadays .we consider a linear model as in ( [ eq : linear ] ) with gaussian noise , with fixed design matrix and i.i.d .the predictor variables are normalised with for all .we allow for high - dimensional settings where .stability selection is attractive for two reasons .first , the choice of a proper regularisation parameter for variable selection is crucial and notoriously difficult , especially because the noise level is unknown . with stability selection ,results are much less sensitive to the choice of the regularisation .second , we will show that stability selection makes variable selection consistent in settings where the original methods fail .we give general conditions under which consistent variable selection is achieved with stability selection .consistent variable selection for a procedure is understood to be equivalent to it is clearly of interest to know under which conditions consistent variable selection can be achieved . in the high - dimensional context , this places a restriction on the growth of the number of variables and sparsity , typically of the form . while this assumption is often realistic , there are stronger assumptions on the design matrix that need to be satisfied for consistent variable selection . for lasso, it amounts to the ` neighbourhood stability ' condition which is equivalent to the ` irrepresentable condition ' . for orthogonal matching pursuit ( which is essentially forward variable selection ) , the so - called ` exact recovery criterion ' is sufficient and necessary for consistent variable selection . here, we show that these conditions can be circumvented more directly by using stability selection , also giving guidance on the proper amount of regularisation . 
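In its usual formulation the irrepresentable condition requires || Sigma_NS * Sigma_SS^{-1} * sign(beta_S) ||_inf < 1, where Sigma = X'X/n is the empirical covariance and S, N are the signal and noise index sets; the sketch below checks this numerically on an artificial design in which one noise variable is deliberately built as a near-copy of the signal variables. All names and numbers are illustrative.

```python
import numpy as np

def irrepresentable_violations(X, support, sign_s):
    """Count noise variables k with |e_k' Sigma_NS Sigma_SS^{-1} sign_S| >= 1 (empirical covariance)."""
    n, p = X.shape
    S = np.asarray(support)
    N = np.setdiff1d(np.arange(p), S)
    Sigma = X.T @ X / n
    v = Sigma[np.ix_(N, S)] @ np.linalg.solve(Sigma[np.ix_(S, S)], sign_s)
    return int(np.sum(np.abs(v) >= 1.0)), np.abs(v).max()

rng = np.random.default_rng(1)
n, p = 200, 20
X = rng.normal(size=(n, p))
X[:, 5] = X[:, [0, 1, 2]].sum(axis=1) + 0.1 * rng.normal(size=n)   # variable built to violate the condition
X /= np.linalg.norm(X, axis=0)
violations, worst = irrepresentable_violations(X, support=[0, 1, 2], sign_s=np.ones(3))
print(violations, worst)   # typically reports one violating variable with value around sqrt(3)
```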
due to the restricted length of the paper, we will only discuss in detail the case of lasso whereas the analysis of orthogonal matching pursuit is just indicated .an interesting aspect is that stability selection with the original procedures alone yields often very large improvements already .moreover , when adding some extra sort of randomness in the spirit of random forests weakens considerably the conditions needed for consistent variables selection as discussed next .the lasso estimator is given in ( [ lasso ] ) .for consistent variable selection using , it turns out that the design needs to satisfy the so - called ` neighbourhood stability ' condition which is equivalent to the ` irrepresentable condition ' : the condition in ( [ irc ] ) is sufficient and ( almost ) necessary ( the word ` almost ' refers to the fact that a necessary relation is using ` ' instead of ` ' ) .if this condition is violated , all one can hope for is recovery of the regression vector in an -sense of convergence by achieving for .the main assumption here are bounds on the sparse eigenvalues as discussed below .this type of -convergence can be used to achieve consistent variable selection in a two - stage procedure by thresholding or , preferably , the adaptive lasso .the disadvantage of such a two - step procedure is the need to choose several tuning parameters without proper guidance on how these parameters can be chosen in practice .we propose the randomised lasso as an alternative . despite its simplicity ,it is consistent for variable selection even though the ` irrepresentable condition ' in ( [ irc ] ) is violated .randomised lasso is a new generalisation of the lasso .while the lasso penalises the absolute value of every component with a penalty term proportional to , the randomised lasso changes the penalty to a randomly chosen value in the range ] for all .the size of the active set is varied between 4 and 50 , depending on the dataset . for regression ,the noise vector is chosen i.i.d . , where the rescaling of the variance with is due to the rescaling of the predictor variables to unit norm , i.e. . the noise level is chosen to achieve signal - to - noise ratios ( snr ) of and . for classification , we scale the vector to achieve a given bayes misclassification rate , either or .each of the 64 scenarios is run 100 times , once using the standard procedure ( lasso or omp ) , once using stability selection with subsampling and once using stability selection with subsampling and additional randomisation ( for the randomised lasso and for randomised omp ) .the methods are thus in total evaluated on about 20.000 simulations each .the solution of stability selection can not be reproduced by simply selecting the right penalty with lasso , since stability selection provides a fundamentally new solution . to compare the power of both approaches , we look at the probability that of the relevant variables can be recovered without error , where . 
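The randomised lasso described above has an equivalent reformulation that is convenient to implement (and is the one used in the proofs later on): rescale column k of the design by its random weight W_k and run a standard lasso on the reweighted design; the selected set is unchanged. The sketch below draws the weights from the two-point set {alpha, 1}, as in the analysis, reuses `lasso_select` from the stability-selection snippet, and treats the weakness alpha = 0.5 as an illustrative choice.

```python
import numpy as np

def randomised_lasso_select(X, y, lam, weakness=0.5, seed=None):
    """Randomised lasso: penalising |beta_k| / W_k is equivalent to a standard lasso on the
    design whose k-th column is rescaled by W_k; returns the selected indices."""
    rng = np.random.default_rng(seed)
    W = rng.choice([weakness, 1.0], size=X.shape[1])     # i.i.d. weights, two-point version
    return lasso_select(X * W, y, lam)

def randomised_stability(X, y, lambdas, weakness=0.5, n_subsamples=100, seed=0):
    """Frequency with which each variable is selected for at least one penalty in the grid,
    when subsampling and random column reweighting are combined."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    freq = np.zeros(p)
    for b in range(n_subsamples):
        idx = rng.choice(n, size=n // 2, replace=False)
        selected = set()
        for lam in lambdas:
            selected.update(randomised_lasso_select(X[idx], y[idx], lam, weakness, seed=b))
        freq[list(selected)] += 1.0
    return freq / n_subsamples
```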
a set of variablesis said to be recovered successfully for the lasso or omp selection , if there exists a regularisation parameter such that at least variables in have a non - zero regression coefficient and all variables in have a zero regression coefficient .for stability selection , recovery without error means that the variables with highest selection probability are all in .the value is chosen such that at most variables are selected in the whole path of solutions for .note that this notion neglects the fact that the most advantageous regularisation parameter is selected here automatically for lasso and omp but not for stability selection .results are shown in figure [ fig : lasso ] for lasso applied to regression , and in figure [ fig : class_omp ] for lasso applied to classification and omp applied to regression again . in figure[ fig : lasso ] , we also give the median number of variables violating the irrepresentable condition ( denoted by ` violations ' ) and the average of the maximal correlation between a randomly chosen variable and all other variables ( ` max cor ' ) as two measures of the difficulty of the problem .stability selection identifies as many or more correct variables than the underlying method itself in all cases except for scenario ( a ) , where it is about equivalent . that stability selection is not advantageous for scenario ( a ) is to be expected as the design is nearly orthogonal ( very weak empirical correlations between variables ) , thus almost decomposing into univariate decisions and we would not expect stability selection to help in a univariate framework. often the gain of stability selection under subsampling is substantial , irrespective of the sparsity of the signal and the signal - to - noise - ratio .additional randomisation helps in cases where there are many variables violating the irrepresentable condition ; for example in setting ( e ) .this is in line with our theory .next , we test how well the error control of theorem [ theo : error ] holds up for these datasets . for the motif regression dataset ( f ) and the vitamin gene expression dataset ( g ) ,lasso is applied , with randomisation and without . for both datasets, the signal - to - noise ratio is varied between 0.5 , 1 and 2 .the number of non - zero coefficients is varied in steps of 1 between 1 and 12 , with a standard normal distribution for the randomly chosen non - zero coefficients .each of the 72 settings is run 20 times .we are interested in the comparison between the cross - validated solution and stability selection . 
for stability selection , we chose and thresholds of , corresponding to a control of , where is the number of wrongly selected variables .the control is mathematically derived under the assumption of exchangeability for the distribution of noise variables , see theorem [ theo : error ] .this assumption is most likely not fulfilled for the given dataset and it is of interest to see how well the bound holds up for real data .results are shown in figure [ fig : motifs ] .stability selection reduces the number of falsely selected variables dramatically , while maintaining almost the same power to detect relevant variables .the number of falsely chosen variables is remarkably well controlled at the desired level , giving empirical evidence that the derived error control is useful beyond the discussed setting of exchangeability .stability selection thus helps to select a useful amount of regularisation .stability selection addresses the notoriously difficult problem of structure estimation or variable selection , especially for high - dimensional problems .cross - validation fails often for high - dimensional data , sometimes spectacularly .stability selection is based on subsampling in combination with ( high - dimensional ) selection algorithms .the method is extremely general and we demonstrate its applicability for variable selection in regression and gaussian graphical modelling .stability selection provides finite sample familywise multiple testing error control ( or control of other error rates of false discoveries ) and hence a transparent principle to choose a proper amount of regularisation for structure estimation or variable selection .furthermore , the solution of stability selection depends surprisingly little on the chosen initial regularisation .this is an additional great benefit besides error control .another property of stability selection is the improvement over a pre - specified selection method .it is often the case that computationally efficient algorithms for high - dimensional selection are inconsistent , even in rather simple settings .we prove for randomised lasso that stability selection will be variable selection consistent even if the necessary conditions needed for consistency of the original method are violated . and thus , stability selection will asymptotically select the right model in scenarios where lasso fails .in short , stability selection is the marriage of subsampling and high - dimensional selection algorithms , yielding finite sample familywise error control and markedly improved structure estimation .both of these main properties are demonstrated on simulated and real data .an alternative to subsampling is sample splitting . 
instead of observing if a given variable is selected for a random subsample , one can look at a random split of the data into two non - overlapping samples of equal size and see if the variable is chosen in both sets simultaneously .let and be two random subsets of with for and .define the simultaneously selected set as the intersection of and , and define the simultaneous selection probabilities for any set as where the probability is with respect to the random sample splitting ( and any additional randomness if is a randomised algorithm ) .we work with the selection probabilities based on subsampling but the following lemma lets us convert these probabilities easily into simultaneous selection probabilities based on sample splitting ; the latter is used for the proof of theorem [ theo : error ] .the bound is rather tight for selection probabilities close to 1 .[ lemma : bound ] for any set , a lower bound for the simultaneous selection probabilities is given by , for every , by _ proof ._ let and be the two random subsets in sample splitting of with for and . denote by the probability .note that the two events are not independent as the probability is only with respect to a random split of the fixed samples into and .the probabilities are defined equivalently by , , and .note that and it is obvious that . as , it also follows that .hence which completes the proof . the proof uses mainly lemma [ lemma : markov ] .we first show that for all , using the definitions made above and .define furthermore to be the set of noise variables ( in ) which appear in and analogously .the expected number of falsely selected variables can be written as . using the assumption ( [ btrg ] ) ( which asserts that the method is not worse than random guessing ), it follows that .putting together , and hence . using the exchangeability assumption, we have for all and hence , for , it holds that , as desired .note that this result is independent of the sample size used in the construction of , .now using lemma [ lemma : markov ] below , it follows that for all and . using lemma [ lemma : bound ], it follows that .hence , which completes the proof . [ lemma : markov ] let and be the set of selected variables based on a sample size of . if , then if for some , then _ proof ._ let be , as above , the random split of the samples into two disjoint subsets , where both for .define the binary random variable for all subsets as denote the data ( the samples ) by .the simultaneous selection probability , as defined in ( [ simult ] ) , is then where the expectation is with respect to the random split of the samples into sets and ( and additional randomness if is a randomised algorithm ) . to prove the first part, the inequality ( for a sample size ) , implies that and hence therefore , using a markov - type inequality , thus , completing the proof of the first claim .the proof of the second part follows analogously . instead of working directly with form ( [ randomisedlasso ] ) of the randomised lasso estimator , we consider the equivalent formulation of the standard lasso estimator , where all variables have initially unit norm and are then rescaled by their random weights w.
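as an illustration of this reweighting formulation , a small sketch is given below ( again using scikit - learn's lasso , purely as an assumption ) : each variable receives a random weight , the columns are rescaled , the standard lasso is run , and the coefficients are mapped back to the original variables . the two - point weight distribution used here is one of the sampling schemes discussed in the text, not the only possible choice , and the penalty convention is illustrative .

```python
import numpy as np
from sklearn.linear_model import Lasso

def randomised_lasso(X, y, lam, weakness=0.5, seed=None):
    # Randomised lasso via reweighting: column k of X is rescaled by a random
    # weight W_k in {weakness, 1}; running the standard lasso on X * W is
    # equivalent to penalising variable k with lam / W_k.
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    W = rng.choice([weakness, 1.0], size=p)
    coef_scaled = Lasso(alpha=lam, max_iter=5000).fit(X * W, y).coef_
    return coef_scaled * W   # coefficients of the original (unscaled) variables
```

for stability selection only the support of the returned coefficient vector matters , so the rescaling at the end does not change which variables count as selected .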
[ def : additionala ] for weights as in ( [ randomisedlasso ] ) , let be the matrix of re - scaled variables , with for each .let and be the maximal and minimal eigenvalues analogous to ( [ phimin ] ) for instead of .the proof rests mainly on the two - fold effect a weakness has on the selection properties of the lasso .the first effect is that the singular values of the design can be distorted if working with the reweighted variables instead of itself .a bound on the ratio between largest and smallest eigenvalue is derived in lemma [ lemma : trans ] , effectively yielding a lower bound for useful values of .the following lemma [ lemma : boundedq ] then asserts , for such values of , that the relevant variables in are chosen with high probability under any random sampling of the weights .the next lemma [ lemma:1/2 ] establishes the key advantage of randomised lasso as it shows that the ` irrepresentable condition ' ( [ irc ] ) is sometimes fulfilled under randomly sampled weights , even though its not fulfilled for the original data .variables which are wrongly chosen because condition ( [ irc ] ) is not satisfied for the original unweighted data will thus not be selected by stability selection .the final result is established in lemma [ lemma : nofalse ] after a bound on the noise contribution in lemma [ lemma : boundnoise ] .[ lemma : trans ] define by and assume .let be weights generated randomly in ] , where the last step follows by a change of variable transform and the fact that as well as and thus for all with diagonal entries in ] and randomly sampled weights .suppose that the weakness .under the assumptions of theorem [ theo : randlasso ] , there exists a set in the sample space of with , such that for all realisations , for , if , where is defined as in theorem [ theo : randlasso ] ._ follows mostly from theorem 1 in . to this end ,set in their notation .we also have , as , by definition , , as in lemma [ lemma : trans ] .the quantity in is identical to our notation .it is bounded for all random realisations of , as long as , using lemma [ lemma : trans ] , by hence all assumptions of theorem 1 in are fulfilled , with , for any random realisation . using ( 2.20)-(2.24 ) in , it follows that there exists a set in the sample space of with for all , such that if , from ( 2.21 ) in , and , from ( 2.23 ) in , having used for the first inequality that , in the notation of , .the factor was omitted to account for our different normalisation . for the second inequality, we used .the last inequality implies , by definition of in theorem [ theo : randlasso ] , that , which completes the proof . [ lemma:1/2 ] set .let and let be a set which can depend on the random weight vector .suppose that satisfies and for all realisations .suppose furthermore that for some implies that for all pairs of weights that fulfill for all , with equality for all .then , for , where the probability is with respect to random sampling of the weights and is , as above , the probability of choosing weight for each variable and the probability of choosing weight 1 ._ let be the realisation of for which and for all other .the probability of is clearly under the used sampling scheme for the weights .let be the selected set of variables under these weights .let now be the set of all weights for which and for all , and arbitrary values in for all with .the probability for a random weight being in this set is . by the assumption on , it holds that for all , since for all with equality for . 
for all weights , it follows moreover that using the bound on , it hence only remains to be shown that , if for all , since for any vector , it is sufficient to show , for , as is the projection of into the space spanned by and , it holds that . using , it follows that , which shows ( [ toshowww ] ) and thus completes the proof . [ lemma : boundnoise ]let be the projection into the space spanned by all variables in subset .suppose .then there exists a set with , such that for all , _ proof ._ let be the event that as entries in are i.i . distributed , for all .note that , for all and , .define as it is now sufficient to show that . showing this boundis related to a bound in and we repeat a similar argument .each term has a distribution as long as is of full rank .hence , using the same standard tail bound as in the proof of theorem 3 of , having used for all in the last step and thus , using , which completes the proof by setting and concluding that for all . [ lemma : nofalse]let and be again the probability for variable of being in the selected subset , with respect to random sampling of the weights .then , under the assumptions of theorem [ theo : randlasso ] , for all and , there exists a set with such that for all and , where is defined as in theorem [ theo : randlasso ] ._ we let , where is the event defined in lemma [ lemma : boundedq ] and event is defined in lemma [ lemma : boundnoise ] . since , using these two lemmas , it is sufficient to show ( [ toshowif1 ] ) and ( [ toshowif2 ] ) for all .we begin with ( [ toshowif1 ] ) .a variable is in the selected set only if where is the solution to ( [ randomisedlasso ] ) with the constraint that , comparable to the analysis in .let be the set of non - zero coefficients and be the set of regression coefficients which are either truly non - zero or estimated as non - zero ( or both ) .we will use as a short - hand notation for .let be the projection operator into the space spanned by all variables in the set . for all , this is identical to then , splitting the term in ( [ tmp1 ] ) into the two terms it holds for the right term in ( [ twoterms ] ) that looking at the left term in ( [ twoterms ] ) , since , we know by lemma [ lemma : boundedq ] that and , by definition of above , . thus the left term in ( [ twoterms ] ) is bounded from above by having used lemma [ lemma : boundnoise ] in the last step and .putting together , the two terms in ( [ twoterms ] ) are bounded , for all , by we now apply lemma [ lemma:1/2 ] to the rightmost term .the set is a function of the weight vector and satisfies for every realisation of the observations the conditions in lemma [ lemma:1/2 ] on the set .first , .second , by definition of above , for all weights .third , it follows by the kkt conditions for lasso that the set of non - zero coefficients of and is identical for two weight vectors and , as long for all and for all ( increasing the penalty on zero coefficients will leave them at zero , if the penalty for non - zero coefficients is kept constant ) .hence there exists a set in the sample space of with such that .moreover , for the same set , we have .hence , for all and , for all , the lhs of ( [ tmp1 ] ) is bounded from above by and variable is hence not part of the set .it follows that with for all .this completes the first part ( [ toshowif1 ] ) of the proof . 
for the second part ( [ toshowif2 ] ), we need to show that , for all , all variables in are chosen with probability at least ( with respect to random sampling of the weights ) , except possibly for variables in , defined in theorem [ theo : randlasso ] . for all , however , it follows directly from lemma [ lemma : boundedq ] that .hence , for all , the selection probability satisfies for all , which completes the proof . since the statement in lemma [ lemma : nofalse ] is a reformulation of the assertion of theorem [ theo : randlasso ] , the proof of the latter is complete .both authors would like to thank anonymous referees for many helpful comments and suggestions which greatly helped to improve the manuscript .n.m . would like to thank fim ( forschungsinstitut fr mathematik ) at eth zrich for support and hospitality . | estimation of structure , such as in variable selection , graphical modelling or cluster analysis is notoriously difficult , especially for high - dimensional data . we introduce stability selection . it is based on subsampling in combination with ( high - dimensional ) selection algorithms . as such , the method is extremely general and has a very wide range of applicability . stability selection provides finite sample control for some error rates of false discoveries and hence a transparent principle to choose a proper amount of regularisation for structure estimation . variable selection and structure estimation improve markedly for a range of selection methods if stability selection is applied . we prove for randomised lasso that stability selection will be variable selection consistent even if the necessary conditions needed for consistency of the original lasso method are violated . we demonstrate stability selection for variable selection and gaussian graphical modelling , using real and simulated data . |
quantitative photoacoustic tomography is concerned with recovering quantitatively accurate estimates of chromophore concentration distributions , or related quantities such as optical coefficients or blood oxygenation , from photoacoustic images .the source of contrast in photoacoustic tomography ( pat ) is optical absorption , which is directly related to the tissue constituents . by obtaining pat images at multiple optical wavelengths, it may be possible to recover chemically specific information about the tissue .however , such a spectroscopic use of pat images must consider the effect of the spatially and spectrally varying light fluence distribution . asa photoacoustic image is the product of the optical absorption coefficient distribution , which carries information about the tissue constituents , and the optical fluence , which only acts to distort that information , the challenge in quantitative photoacoustic imaging is to remove the effect of the light fluence .a common approach is to use a model of the unknown fluence and use it to extract the desired optical properties from the measured data .this has been done analytically or numerically , often within a minimisation framework .the majority of this literature uses the diffusion approximation to the radiative transfer equation to model the light distribution , which is accurate in highly scattering media and away from boundaries or sources . in pat ,the region of interest often lies close to the tissue surface where the diffusion approximation is not accurate .the radiative transfer equation ( rte ) , on the other hand , is widely considered to be an accurate model of light transport so long as coherent effects are negligible , which is the case here .finite element discretisations of the rte have been developed and proposed for quantitative pat reconstructions , but due to the need to discretise in angle as well as space they quickly become computationally intensive and their applicability is limited to small and medium scale problems .an alternative is monte carlo ( mc ) modelling , which is a stochastic technique for modelling light transport that converges to the solution to the rte .the significant advantage of the mc approach is that it is highly parallelisable so scales well to the large - scale inversions that will be encountered in practice .monte carlo models of light transport are popular in biomedical optics and have predominantly been applied in the planning of experimental measurements and in dosimetric studies for a range of light based therapies .many of the applications are summarised by zhu et al .one early mc model of light transport , mcml , computes the fluence in 3d slab geometry .this model was later extended to simulate spherical inclusions in the tissue , and later to spheroidal and cylindrical inclusions .mc modelling in 3d heterogeneous media has been shown both for voxelised media , which was later gpu - accelerated , and using a mesh - based geometry . although the rte is an equation for the radiance , which is a function of angle at every point , the quantity usually calculated by mc models is the fluence rate , which is the radiance integrated over all angles .the reasons are practical : most measurable quantities are related to the fluence rate rather than the radiance , storing just the integrated quantity saves on computational memory , and the estimates for the fluence rate will converge sooner than the underlying estimates for the radiance . 
in photoacoustics ,the measurable signal is related to the fluence ( the time - integrated fluence rate ) so current mc models can be used in the simulation of photoacoustic signals . however , as will be discussed in section 4 , the full angle - dependent radiance is required when tackling the inverse problem of estimating the optical coefficients , specifically the optical scattering. in this paper , section [ sec : qpat ] introduces the inverse problem of quantitative pat .sections [ sec : light_transport ] and [ sec : adjoint ] present forward and adjoint monte carlo models of the radiance employing a harmonic angular basis . in section[ sec : gradients ] it is shown that this choice of basis allows the functional gradients for the inverse problem to be calculated straightforwardly .inversions for absorption and scattering coefficient distributions are given in section [ sec : examples ] .the inverse problem in qpat can be stated as the minimisation where the error functional is given by is the absorbed energy density and is the ` data ' for this problem .it is related to the photoacoustic image by the grneisen parameter , which here is set to 1 .additional regularisation terms or terms reflecting prior knowledge may also be added to .gradient - based approaches to solving this problem require estimates of the gradients of the error functional with respect to the parameters of interest .saratoon et al . gives expressions for these gradients in terms of the forward and adjoint fields , and : and monte carlo models to calculate the radiance and adjoint radiance are given in the following two sections .in pat , the optical and acoustic propagation times are so different that the optical propagation can be considered instantaneous and the time - dependence of the light transport can be neglected . the time - independent radiative transfer equation ( rte )is given by where is the radiance , and are the absorption and scattering coefficients , respectively , is position , and are the original and scattered propagation directions , is the scattering phase function , is a source term and is used to indicate integration over angle in dimensions . to obtain approximations to the solutions to this equation , various flavours of mchave been proposed .the approach used here begins with launching a packet of energy , referred to herein as a ` photon ' , from a given position in an initial direction . after travelling a distance ) / { { \mu_{s}}} ] is a real uniform random variable on ] and is the angle of the photon direction relative to the z - direction ( i.e. ) .( the equivalent expansion in 3d would be into spherical harmonics . ) for a given voxel , the weight is deposited into the relevant fourier coefficients according to where is the weight deposited by the photon traversing the ^th^ voxel . the algorithm was implemented in the julia programming language .analytical solutions to the rte are available for the fluence for a range of geometries and source types , however there are few analytical solutions for the radiance , particularly in 2d . 
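the following python sketch illustrates the idea of depositing photon weights into a fourier angular basis per voxel . it is deliberately simplified ( homogeneous medium , isotropic scattering instead of henyey - greenstein , crude weight and boundary handling , no normalisation ) and is not the julia implementation referred to above ; it is only meant to show where the cosine and sine coefficients accumulate .

```python
import numpy as np

def rmc_2d_sketch(mu_a, mu_s, nx, ny, dx, n_photons, n_harm, src_xy, seed=0):
    # Simplified 2D radiance Monte Carlo: each photon performs a random walk
    # and, in every voxel it visits, its current weight is deposited into the
    # Fourier angular coefficients (1, cos(n*theta), sin(n*theta)) of that voxel.
    rng = np.random.default_rng(seed)
    a = np.zeros((n_harm + 1, nx, ny))   # a[0] accumulates the fluence-like term
    b = np.zeros((n_harm + 1, nx, ny))   # sine coefficients (b[0] unused)
    mu_t = mu_a + mu_s
    for _ in range(n_photons):
        x, y = src_xy
        theta = rng.uniform(0.0, 2.0 * np.pi)
        w = 1.0
        while w > 1e-4:
            step = -np.log(1.0 - rng.random()) / mu_t       # free path length
            x += step * np.cos(theta)
            y += step * np.sin(theta)
            i, j = int(np.floor(x / dx)), int(np.floor(y / dx))
            if not (0 <= i < nx and 0 <= j < ny):
                break                                       # photon leaves the grid
            a[0, i, j] += w
            for n in range(1, n_harm + 1):
                a[n, i, j] += w * np.cos(n * theta)
                b[n, i, j] += w * np.sin(n * theta)
            w *= mu_s / mu_t                                # survival (albedo) weighting
            theta = rng.uniform(0.0, 2.0 * np.pi)           # isotropic re-scattering
    return a, b
```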
the rmc model was compared to one such analytic solution for an infinite , homogeneous 2d domain illuminated by an isotropic point source .an isotropic point source was placed at the centre of a domain of size 15mm mm , large compared to the transport mean free path in order to approximate an infinite domain .the absorption and scattering coefficients were 0.01mm^-1^ and 10mm^-1^ respectively and the henyey - greenstein phase function was used with set to 0.9 .the pixel size was 0.05 mm 0.05 mm , and 5 fourier harmonics were used .[ fig : rmc_validation ] shows the good agreement between the analytical and rmc modelled radiance at radial distances of 2 mm and 3 mm from the source along the horizontal axis .the adjoint equation to the rte is given by where is the adjoint radiance and is the adjoint source .this was implemented numerically using the same mc scheme as for the forward rmc model ( section [ sec : radiance ] ) .the principal difference is that the light sources typically used in pat are restricted to the boundary , but the adjoint source will not be , as a consequence of the fact that the ` data ' in qpat - the photoacoustic images - is volumetric .the adjoint model was validated by checking that it satisfied the condition : where and are the operators corresponding to the forward and adjoint rmc models , and and are the angle and position dependent source and detector .three cases were tested : where and are the positions of the source and detector , are the spatial and angular sensitivity of the detector and source . substituting these into yields where and are the forward and adjoint radiances from computing and , respectively . is the fluence , or angle - integrated radiance .it can be seen from that , in the case where a pair of isotropic -functions are used for and , we expect the resulting fluence values at their respective positions , and , to be equal .this is an intuitive result given the reciprocity of the rte and the angular independence of the source - detector combination .simulations were performed using a 40mm mm ( 101 pixel ) domain , and 10 fourier harmonics .each source distribution emitted 10 ^ 6^ photons . was set to be the centre of the domain with moved along the x - direction across the domain .comparisons are shown in fig .[ fig : isoiso_adjoint_test1 ] for case 1 with an isotropic source and detector , fig .[ fig : isoaniso_adjoint_test2 ] for case 2 with an isotropic source and an anisotropic detector with , and fig.[ fig : isoiso_adjoint_test3 ] for case 3 with the same but with the distributed shown in fig .[ fig : isoiso_adjoint_test3](a ) .good agreement was obtained in all cases , showing that the rmc adjoint model is an accurate representation of the rte adjoint . and to validate the adjoint model . and were isotropic point sources with at the centre of the domain and translated across the domain at y = 23.6mm . ( b ) plot of for validation of adjoint model .plot was produced with as an isotropic point source at the centre of the domain . was translated across the domain along a line at y = 23.6mm . ( b ) and to validate the adjoint model . was an anisotropic point source emitting light over angle following .
was translated along a line across the domain at y = 23.6 mm , as shown by the grey line dashed line in ( a).,scaledwidth=100.0% ]both the radiance and the adjoint radiance can be expressed as fourier series as in . by substituting these expressions into eqs .[ absorption_gradient_main ] and [ scattering_gradient_main ] for the functional gradients , simple and easily computed expressions for the gradients can be obtained .the fluence is simply given by the isotropic component of the field .the other terms in the expressions for the functional gradients contain integrals of products of the radiance and its adjoint .if , and are the fourier coefficients of the adjoint radiance , then the gradient with respect to absorption can be written as \theta.\end{aligned}\ ] ] by orthogonality , all terms for which integrate to zero and reduces to \theta,\\ & = -a_{0}(h^{meas}-{{\mu_{a}}}a_{0 } ) + \frac{1}{2\pi}a_{0}a_{0}^{*}+\frac{1}{\pi}\sum_{n=1}^{\infty}a_{n}a_{n}^{*}+\frac{1}{\pi}\sum_{n=1}^{\infty}b_{n}b_{n}^{*}. \label{grad_2d_abs_integral3}\end{aligned}\ ] ] this expression for the absorption gradient is computationally straightforward to evaluate due to the fact that it requires simply summing over products of fourier coefficients already loaded in memory .the second term in is which contains the phase function given in and can be expanded using a fourier series in powers of : where .thus we can write , \notag \\ \left[\frac{1}{2\pi}+\frac{1}{\pi}\sum_{l=0}^{\infty}g^{l}\cos(l(\theta-\theta'))\right ] \notag \\\left[\frac{1}{2\pi}a_{0}^{*}+\frac{1}{\pi}\sum_{m=1}^{\infty}a_{m}^{*}\cos(m\theta)+\frac{1}{\pi}\sum_{m=1}^{\infty}b_{m}^{*}\sin(m\theta)\right ] d\theta d\theta',\end{aligned}\ ] ] where and are the angles between the z - axis and and , respectively . as such , the scattering angle between the previous direction into the new direction is given by .it is possible to expand as which in turn allows us to employ orthogonality relationships to simplify the above integrals and write substituting this expression into , we can write the full expression for the functional gradient with respect to the scattering coefficient : \left(1-g^{n}\right ) . \label{scattering_gradient3}\end{aligned}\ ] ] the ability to calculate these gradients is the first step to finding a computationally efficient way to solve the full qpat inversion using a monte carlo model of light transport .the forward and inverse mc models of radiance described above were used with a gradient - descent ( gd ) scheme to estimate and from simulated pat images by minimising the error functional in .as the adjoint source , , was independent of angle , photons were launched istropically with the launch position being spread out over the range of a source voxel using a randomly distributed number on the interval ] ; it was found that this range yielded sufficiently large steps to ensure reasonably efficient progress in the minimisation .second , the termination condition in was relaxed due to the much slower convergence of the scattering coefficient , and instead required a relative change in the error functional of 10 ^ -5^. 
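the outer loop of such a gradient - descent inversion can be sketched as follows . here `forward` stands for the forward rmc evaluation of the absorbed energy density and `gradient` for the assembly of the functional gradient from the forward and adjoint harmonics ; both are placeholders rather than the actual implementation , as is the simple backtracking linesearch and the non - negativity projection .

```python
import numpy as np

def gd_inversion(h_meas, mu0, forward, gradient, n_iter=50, rel_tol=1e-5):
    # Generic gradient-descent loop with a backtracking linesearch for
    # minimising E(mu) = 0.5 * || h_meas - forward(mu) ||^2.
    mu, err_old = mu0.copy(), None
    for _ in range(n_iter):
        residual = forward(mu) - h_meas
        err = 0.5 * np.sum(residual ** 2)
        if err_old is not None and abs(err_old - err) < rel_tol * err_old:
            break                                    # relative-change termination
        g = gradient(mu, residual)
        step, trial = 1.0, mu
        while step > 1e-8:
            trial = np.maximum(mu - step * g, 0.0)   # keep coefficients non-negative
            if 0.5 * np.sum((forward(trial) - h_meas) ** 2) < err:
                break
            step *= 0.5                              # backtrack
        mu, err_old = trial, err
    return mu
```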
this was satisfied after 35 iterations , and is shown in fig .[ recon - mus](b ) with profiles through the true and reconstructed distributions of shown in fig .[ recon - mus](d ) .it can be seen from fig .[ recon - mus](c ) and ( d ) that the inversion has partly reconstructed the inclusion in the scattering coefficient .the inability to reconstruct edges of the inclusion in the scattering coefficient is expected , given the diffusive nature of the scattering .however , the discrepancy in in the inclusion , evident from fig .[ recon - mus](d ) , suggests premature termination of the optimisation .this is due to the fact that the gradient with respect to scattering is small and prone to noise in the functional gradients .this low snr in the gradients has the impact that the search directions in the optimisation routine are often sub - optimal , which results in little or no progress of the optimisation .the progressive reduction in snr in the gradient means that non - descent steps are likely and can therefore trigger the termination condition .in this paper a novel mc model of the rte was presented .the model computes the radiance in a fourier basis in 2d and is straightforward to extend to 3d using a spherical harmonics basis .the accuracy of the model was demonstrated by comparing the angle - resolved radiance at two positions in the domain to the corresponding analytic solutions .sections [ sec : gradients ] and [ sec : examples ] presented the application of the rmc algorithm to estimating the absorption and scattering coefficients from simulated pat images . in section [ sec : mua - recon ] it was observed that the absorption coefficient was estimated with an average error of 0.2% over the domain relative to the true value , when the scattering coefficient is known , and in the presence of 0.7% average noise in the data .this is encouraging , particularly because noise is present not only in but also in the fourier harmonics computed using the forward and adjoint rmc simulations , which is propagated to the estimates of the functional gradients .consequently the search direction in the gd algorithm will always be sub - optimal .furthermore , noise in , the estimate of the absorbed energy density at the iteration of the linesearch , will also be propagated to the error functional , resulting in a non - smooth search trajectory for the linesearch because at every point , the error function will be corrupted by some different noise : .
in practice , this did not preclude reconstruction of the absorption coefficient since the calculated gradients remained descent directions despite the noise .furthermore , the error functional in is sufficiently convex that the addition of some noise does not prevent the linesearch from yielding a sufficiently large step length to allow rapid convergence .reconstruction of the scattering coefficient correctly located the scattering perturbation in the simulated image , however the peak value in the reconstruction was lower than the true value .this is a direct consequence of the fact that the scattering coefficient is related to the absorbed energy distribution only through the optical fluence distribution .consequently , the snr in is typically much less than that for absorption .this causes termination of the algorithm before the peak magnitude of the parameter has been found in the search space .the authors acknowledge the contribution of andre liemert who kindly provided radiance data used in the validation of rmc in section [ sec : validation ] .the authors acknowledge the use of the ucl legion high performance computing facility ( legion ) , and associated support services , in the completion of this work . j. laufer , b. cox , e. zhang , and p. beard , `` quantitative determination of chromophore concentrations from 2d photoacoustic images using a nonlinear model - based inversion scheme , '' _ applied optics _ * 49 * , pp . 121933 , mar . b. t. cox , s. r. arridge , and p. c. beard , `` estimating chromophore distributions from multiwavelength photoacoustic images , '' _ journal of the optical society of america . a , optics , image science , and vision _ * 26 * , pp . 443455 , feb . t. tarvainen , a. pulkkinen , b. cox , j. kaipio , and s. arridge , `` bayesian image reconstruction in quantitative photoacoustic tomography , '' _ ieee transactions on medical imaging _ * 32 * , pp . 22872298 , aug . 2013 . t. saratoon , t. tarvainen , b. t. cox , and s. r. arridge , `` a gradient - based method for quantitative photoacoustic tomography using the radiative transfer equation , '' _ inverse problems _ * 29 * , p. 075006 , july 2013 . p. surya mohan , t. tarvainen , m. schweiger , a. pulkkinen , and s. r. arridge , `` variable order spherical harmonic expansion scheme for the radiative transport equation using finite elements , '' _ journal of computational physics _ * 230 * , pp . 73647383 , aug . 2011 . d. boas , j. culver , j. stott , and a. dunn , `` three dimensional monte carlo code for photon migration through complex heterogeneous media including the adult human head , '' _ optics express _ * 10 * , pp . 15970 , feb . 2002 . p. c. beard and t. n. mills , `` characterization of post mortem arterial tissue using time - resolved photoacoustic spectroscopy at 436 , 461 and 532 nm , '' _ physics in medicine and biology _ * 42 * , pp . 17798 , jan . antonelli , a. pierangelo , t. novikova , p. validire , a. benali , b. gayet , and a. de martino , `` impact of model parameters on monte carlo simulations of backscattering mueller matrix images of colon tissue , '' _ biomedical optics express _ * 2 * , pp . 183651 , july 2011 . n. manuchehrabadi , y. chen , a. lebrun , r. ma , and l. zhu , `` computational simulation of temperature elevations in tumors using monte carlo method and comparison to experimental measurements in laser photothermal therapy , '' _ journal of biomechanical engineering _ * 135 * , p. 121007 . j. cassidy , v. betz , and l.
lilge , `` treatment plan evaluation for interstitial photodynamic therapy in a mouse model by monte carlo simulation with fullmonte , '' _ frontiers in physics _ * 3 * , pp . 110 , feb .2015 . v. periyasamy and m. pramanik ,`` monte carlo simulation of light transport in tissue for optimizing light delivery in photoacoustic imaging of the sentinel lymph node ., '' _ journal of biomedical optics _ * 18 * , p. 106008, jan . 2013 .v. periyasamy and m. pramanik , `` monte carlo simulation of light transport in turbid medium with embedded object spherical , cylindrical , ellipsoidal , or cuboidal objects embedded within multilayered tissues ., '' _ journal of biomedical optics _ * 19 * , p. 0450032014 .a. sassaroli and f. martelli , `` equivalence of four monte carlo methods for photon migration in turbid media ., '' _ journal of the optical society of america . a , optics , image science , and vision _* 29 * , pp . 21107 , oct . 2012 .r. hochuli , s. powell , s. arridge , and b. cox , `` forward and adjoint radiance monte carlo models for quantitative photoacoustic imaging , '' in _ proc . of spie ,photons plus ultrasound : imaging and sensing _ , a. a. oraevsky and l. v. wang , eds . , * 9323 * , pp . 93231p10 ,2015 .a. liemert and a. kienle ,`` analytical approach for solving the radiative transfer equation in two - dimensional layered media , '' _ journal of quantitative spectroscopy and radiative transfer _ * 113 * , pp .559564 , may 2012 .m. j. c. van gemert , a. j. welch , w. m. star , m. motamedi , and w .- f .cheong , `` tissue optics for a slab geometry in the diffusion approximation , '' _ lasers in medical science _ * 2 * , pp .295302 , dec . 1987 .a. k. jha , m. a. kupinski , h. h. barrett , e. clarkson , and j. h. hartman , `` three - dimensional neumann - series approach to model light transport in nonuniform media . , ''_ journal of the optical society of america .a , optics , image science , and vision _ * 29 * , pp . 188599 , sept . 2012 . | forward and adjoint monte carlo ( mc ) models of radiance are proposed for use in model - based quantitative photoacoustic tomography . a 2d radiance mc model using a harmonic angular basis is introduced and validated against analytic solutions for the radiance in heterogeneous media . a gradient - based optimisation scheme is then used to recover 2d absorption and scattering coefficients distributions from simulated photoacoustic measurements . it is shown that the functional gradients , which are a challenge to compute efficiently using mc models , can be calculated directly from the coefficients of the harmonic angular basis used in the forward and adjoint models . this work establishes a framework for transport - based quantitative photoacoustic tomography that can fully exploit emerging highly parallel computing architectures . |
partitioning of a linear system for its parallel solution typically aims at satisfying the two standard objectives : _ minimizing the communication volume _ and _ maintaining the load balance _ among different processors .both of these requirements are motivated by considerations of the efficiency of the parallel matrix - vector multiplications , which lie in the heart of the iterative solution methods .once the partitioning is performed , the obtained partitions are further used for constructing parallel preconditioners another crucial ingredient , contributing into the overall performance of the computational scheme .however , the quality of the resulting preconditioner may depend significantly on the given partition , which , while targeting the efficiency of the parallel matrix - vector multiplication , ignores the nature of the employed preconditioning strategy .the latter often leads to preconditioners of a poor quality , especially in the cases , where the coefficient matrices have entries with large variations in magnitudes . in the current work ,we suggest to remove the requirement on the communication volume and , instead , consider partitionings , which favor the quality of the resulting preconditioner .in particular , we focus on the additive schwarz ( as ) preconditioners , see , e.g. , , for symmetric positive definite ( spd ) linear systems , and present a partitioning algorithm , which aims at optimizing the quality of the as procedure by attempting to minimize the condition number of the preconditioned matrix , while maintaining the load balance . the problem of partitioning of a linear system is commonly formulated in terms of the adjacency graph of the coefficient matrix . here, is the set of vertices ( nodes ) corresponding to the equations / unknowns of the linear system , and is the set of edges , where iff . throughout, we assume that is spd , i.e. , , which , in particular , implies that the graph is undirected . the _ standard _ goal is to partition into `` nonoverlapping '' subgraphs , where and , such that imposing the additional constraint that the edge cut between is kept to a minimum , while the cardinalities of the vertex sets are approximately the same , i.e. , .equations and unknowns with numbers in are then typically mapped to the same processor .the requirement on the small edge cut between aims at reducing the cost of communications coming from the parallel matrix - vector multiplication .the condition attempts to ensure the load balancing .the solution of the above - described graph partitioning problem is np - complete .however , there exist a variety of heuristics , which have been successfully applied to the problem ; see , e.g. , for an overview .efficient implementations of relevant algorithms are delivered by a number of graph partitioning software packages , e.g. , chaco and metis .we note that alternative approaches for partitioning of linear systems are known , e.g. , based on bipartite graph or hypergraph model , however , we do not consider them in this paper .if the preconditioner quality becomes an objective of the partitioning , then along with the adjacency graph , it is reasonable to consider weights assigned to the edges , where are determined by the coefficients of the matrix .the corresponding algorithm should then be able to take these weights into account and properly use them to perform graph partitioning .an example of such an algorithm has been discussed in . 
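as a small aside , the two standard objectives can be evaluated for any candidate partition with a few lines of code ; the sketch below assumes a symmetric sparse matrix in scipy and an integer label vector assigning each vertex to a part , and is only meant to make the edge - cut and load - balance criteria concrete .

```python
import numpy as np

def cut_and_balance(A, labels):
    # Edge cut and part sizes of a partition of the adjacency graph of A:
    # labels[i] is the part index of vertex i, and an undirected edge {i, j}
    # is counted as cut when its endpoints carry different labels.
    A = A.tocoo()
    mask = A.row < A.col                 # count each off-diagonal edge once
    cut = int(np.sum(labels[A.row[mask]] != labels[A.col[mask]]))
    sizes = np.bincount(labels)          # cardinalities of the vertex sets
    return cut, sizes
```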
indeed ,if one considers partitioning as an `` early phase '' of a preconditioning procedure ( which , in the purely algebraic setting , is based solely on the knowledge of ) , then the use of the coefficients of at the partitioning step , e.g. , through the weights , represents a natural option .this approach , however , faces a number of issues .for example , given a preconditioning strategy , how does one assign the weights ?what are the proper partitioning objectives? how can the partitioning be performed in practice ? in this work ,we address these three question for the case of an spd linear system and a simple domain decomposition ( dd ) type preconditioner the _ nonoverlapping _ as . in particular , for a given , the proposed approach is based on the idea of constructing a ( bi)partition , which attempts to minimize an upper bound on the condition number of the preconditioned coefficient matrix over all possible balanced ( bi)partitions .the resulting algorithm relies on the computation of eigenvectors corresponding to the smallest eigenvalues of generalized eigenvalue problems , which simultaneously involve the weighted and standard graph laplacians .although the formal discussion is focused on the case of the _ nonoverlapping _ as , we show numerically that , in practice , adding several `` layers '' of neighboring nodes to the obtained sets in ( [ eqn : partn ] ) leads to decompositions of , which provide a good quality of the _ overlapping _ as preconditioning . the paper is organized as following . in section [ sec : bdiag ] , we recall several known results concerning the block - diagonal preconditioning for spd matrices .these results motivate the new partitioning scheme , presented in section [ sec : cbs_partn ] .the relation of the introduced approach to the existing graph partitioning schemes is discussed in section [ sec : other_spectral ] .finally , in section [ sec : numer ] , we report on a few numerical examples .let us consider as a block -by- matrix , i.e. , where the diagonal blocks and are square of size and , respectively ; the off - diagonal block is -by- .let be a block - diagonal preconditioner , where , .the dimensions of and are same as those of and , respectively .since both and are spd , the convergence of an iterative method for , such as , e.g. , the the preconditioned conjugate gradient method ( pcg ) , is fully determined by the spectrum of the preconditioned matrix . if no information on the exact location of eigenvalues of is available , then the worst - case convergence behavior of pcg is traditionally described in terms of the condition number of ; with and denoting the largest and the smallest eigenvalues of the preconditioned matrix , respectively. the question which arises is how we can bound for an arbitrary and a block - diagonal .the answer to this question is given , e.g. , in ( * ? ? ?* chapter 9 ) .below , we briefly state the main result . in the subsequent sections , we will need this statement to justify the new partitioning algorithm .[ def : cbs ] let and be finite dimensional spaces , such that in ( [ eqn : bk_sys ] ) is partitioned consistently with and .the constant where and are subspaces of the form is called the cauchy - bunyakowski - schwarz ( cbs ) constant . 
in ( [ eqn : cbsa ] ), denotes the standard inner product .we note that can be interpreted as a cosine of an angle between subspaces and .thus , since , additionally , , it is readily seen that .also we note that is the smallest possible constant satisfying the strengthened cauchy - schwarz - bunyakowski inequality which motivates its name .the cbs constant , defined by ( [ eqn : cbsa ] ) , turns to play an important role in estimating the condition number for the class of spd matrices and block - diagonal preconditioners .[ thm : a1 ] if and in ( [ eqn : bk_prec ] ) , and in ( [ eqn : bk_sys ] ) is spd , then the bound given by theorem [ thm : a1 ] is sharp . in what follows ,we use the cbs constants to construct a new approach for graph partitioning .given decomposition of the set ( possibly with overlapping ) , we consider the as preconditioning for an spd linear system .the preconditioning procedure is given in algorithm [ alg : as ] .by we denote a submatrix of located at the intersection of rows with indices in and columns with indices in .similarly , denotes the subvector of , containing entries from positions .[ alg : as ] input : a , r , .output : . 1 . for , do 2 .set , , and .3 . solve .4 . set .5 . enddo 6 . . in this section ,we focus on the case , where sets ( subdomains ) are nonoverlapping , i.e. , ( [ eqn : partn ] ) holds .algorithm [ alg : as ] then gives a nonoverlapping as preconditioner , which is a form of the block jacobi iteration .indeed , let be a permutation matrix , which corresponds to the reordering of according to the partition , where the elements in are labeled first , in second , etc .then the as preconditioner , given by algorithm [ alg : as ] , can be written in the matrix form as where .thus , algorithm [ alg : as ] results in the block - diagonal , or block jacobi , preconditioner , up to a permutation of its rows and columns . in the following subsectionwe define an optimal _ bipartitioning _ for algorithm [ alg : as ] with two nonoverlapping subdomains .let us assume that and consider a _ bipartition _ of , where and are nonempty , such that the following theorem provides a relation between a given bipartition and the condition number of the preconditioned matrix .let in ( [ eqn : partitij ] ) be a bipartition of , where .let be the as preconditioner for linear system with an spd matrix , given by algorithm [ alg : as ] , with respect to the bipartition .then , where the spaces and are the subspaces of with dimensions and ( ) , respectively , such that according to ( [ eqn : reorder ] ) , for the given bipartition in ( [ eqn : partitij ] ) , the preconditioner , constructed by algorithm [ alg : as ] , is of the form where , , and is a permutation matrix corresponding to the reordering of with respect to the partition . in particular , for any , the vector is such that , i.e. , the entries of with indices in become the first components of , while the entries with indices in get positions from through .we observe that the condition number of the matrix is the same as the condition number of the matrix , where and in ( [ eqn : prec2x2 ] ) . indeed , since a unitary similarity transformation preserves the eigenvalues of , we have , where .the matrix represents a symmetric permutation of with respect to the given bipartition , and , thus , can be written in the -by- block form , where , , and . 
since is spd and the preconditioner in ( [ eqn : prec2x2 ] ) is the block diagonal of , we apply theorem [ thm : a1 ] to get the upper bound on the condition number , and hence bound ( [ eqn : kappa ] ) on , where , according to definition [ def : cbs ] , the cbs constant is given by the matrix defines the permutation that is the `` reverse '' of the one corresponding to .thus , the substitution and leads to expression ( [ eqn : cbsc])([eqn : wij ] ) for , where the and contain vectors , which can have nonzero entries only at positions defined by or , respectively . while no reasonably simple expression for the condition number of is generally available , ( [ eqn : kappa])([eqn : wij ] ) provides a sharp upper bound .this suggests that instead of choosing the bipartition which directly minimizes , we look for , such that the _ upper bound _ in ( [ eqn : kappa ] ) is the smallest . the function is monotonically increasing .therefore , the optimal value of in ( [ eqn : kappa ] ) is attained for a bipartition corresponding to the smallest value of the cbs constant .since the targeted partitions are also required to be balanced , throughout this section , we request to be even , which guarantees the existence of _ fully balanced _bipartitions in ( [ eqn : partitij ] ) , i.e. , such that .the latter , however , will not be a restriction for the practical algorithm described below .thus , we suggest an optimal bipartition for the nonoverlapping as preconditioner to be such that where and are the subspaces defined in ( [ eqn : wij ] ) . in the previous subsection, we have shown that the cbs constant in ( [ eqn : cbsc ] ) provides a reasonable quantification of the quality of the bipartition in terms of the nonoverlapping as preconditioner with respect to the two subdomains . however, finding an optimal bipartition ( possibly not unique ) according to ( [ eqn : opt ] ) , represents a challenging task .therefore , we further construct bipartitions which _ attempt to approximate _ the minimizer of ( [ eqn : opt ] ) , rather than determine it exactly . in particular , we use the following approach . let , with in ( [ eqn : cbsc ] ) , be the objective function defined on all possible fully balanced bipartitions in ( [ eqn : partitij ] ) .let be some other ( simpler ) function , which behaves similarly to , i.e. , the values of and change compatibly with respect to different bipartitions , and then , instead of , we attempt to minimize .the constructed minimizer is used to define the bipartition for under the nonoverlapping as preconditioning , given in algorithm [ alg : as ] .below , we suggest the choice of , and describe the resulting bipartitioning procedures . given a bipartition in ( [ eqn : partitij ] ) , , let us consider a set of pairs where denotes the unit vector with at position and zeros elsewhere . by ( [ eqn : cbsc ] ), the computation of the cbs constant for this bipartition involves finding the maximum in and of the quantity instead , we suggest to evaluate ( [ eqn : qty ] ) on the pairs of the unit vectors in ( [ eqn : sample ] ) , and then find the mean of the values which result from this `` sampling '' , i.e. , define , such that we note that , in terms of the adjacency graph of , is equal to the edge cut between vertex sets and , i.e. 
, .the expression above can be written as where is the weighted cut with denoting the weights assigned to the edges of .thus , instead of the objective function , which according to ( [ eqn : opt ] ) , results in optimal bipartitions , we suggest to minimize in ( [ eqn : subopt ] ) , i.e. , find the minimizer of minimization ( [ eqn : min_subopt ] ) represents the problem of bipartitioning of the graph , which has the prescribed edge weights , with respect to the objective of minimizing the weighted cut normalized by the standard cut . since , by our assumption , is spd , the weights are well - defined . the solution of ( [ eqn : min_subopt ] ) is then expected to approximate the optimal partition , i.e. , the minimizer of problem ( [ eqn : opt ] ) , which leads to the nonoverlapping as preconditioner of the optimal quality , in terms of minimizing the upper bound on the condition number of the preconditioned matrix .let us reformulate optimization problem ( [ eqn : min_subopt ] ) in terms of bilinear forms involving graph laplacians .first , we introduce the -dimensional indicator vector with the components such that then , for the given bipartition , where is the weighted degree of the vertex ; denotes the set of vertices adjacent to the vertex .the weighted cut can then be written as a bilinear form evaluated at the indicator vector , where is the weighted degree matrix , is the weighted adjacency matrix ( , ) of , and is the corresponding ( weighted ) graph laplacian .similarly , for the same bipartition , we repeat the above derivations with to get the expression for the ( unweighted ) cut , i.e. , where is the diagonal degree matrix , is the adjacency matrix ( , iff , and otherwise , ) of , and is the standard graph laplacian . using expressions ( [ eqn : wcut])([eqn : cut ] ) , minimization problem ( [ eqn : min_subopt ] ) can be written as where the minimum is searched over all possible indicator vectors .the condition imposes the requirement that ; is the -vector of ones . in order to find an approximate solution of ( [ eqn : rq_discrete ] ) , we relax the requirement on to be the vectors of , and embed the problem into the real space .thus , instead of ( [ eqn : rq_discrete ] ) , we attempt to find , such that where and are the weighted and the standard graph laplacians of the adjacency graph of , respectively .both and are symmetric positive semi - definite .we assume for simplicity that the adjacency graph is connected , i.e. , the nullspace of is one - dimensional and is spanned by the vector .we also note that is in the nullspace of .problem ( [ eqn : rq ] ) then corresponds to the minimization of the generalized rayleigh quotient on the subspace .since is spd on , the minimum in ( [ eqn : rq ] ) exists and is given by the smallest eigenvalue of the symmetric generalized eigenvalue problem on , the minimizer , i.e. , the eigenvector corresponding to the smallest eigenvalue of ( [ eqn : evp ] ) , can be viewed as an approximation to the minimizer of the discrete problem ( [ eqn : rq_discrete ] ) from the real vector space .the solution of ( [ eqn : evp ] ) is delivered by eigensolvers , which can be applied to the problem , preferably , without factorizing the matrix , and which can be configured to perform iterations on the subspace .
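to illustrate the relaxed problem , a small python sketch of assembling the two laplacians and computing the required eigenvector is given below . the edge weights are taken here simply as the magnitudes of the off - diagonal entries of the ( scaled ) matrix , and the eigenproblem is handed to scipy's lobpcg with the constant vector passed as a constraint ; both choices are assumptions made for the illustration and are not a description of the authors' code .

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg

def weighted_and_standard_laplacians(A):
    # Weighted Laplacian L_w (edge weights |a_ij|) and standard Laplacian L
    # (unit weights) of the adjacency graph of the sparse SPD matrix A.
    W = abs(A - sp.diags(A.diagonal()))        # off-diagonal magnitudes
    C = (W > 0).astype(float)                  # 0/1 adjacency
    L_w = sp.diags(np.ravel(W.sum(axis=1))) - W
    L = sp.diags(np.ravel(C.sum(axis=1))) - C
    return L_w.tocsr(), L.tocsr()

def smallest_eigvec(L_w, L, seed=0):
    # Smallest eigenpair of L_w x = lambda L x on the complement of span{1},
    # with the constant vector supplied to LOBPCG as a constraint.
    rng = np.random.default_rng(seed)
    n = L_w.shape[0]
    X = rng.standard_normal((n, 1))
    ones = np.ones((n, 1))
    vals, vecs = lobpcg(L_w, X, B=L, Y=ones, largest=False, maxiter=500, tol=1e-6)
    return vecs[:, 0]
```

the sign pattern ( or a median split ) of the returned eigenvector then defines the two vertex sets of the bipartition , as discussed next .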
in our numerical tests in section [ sec : numer ] , we use the locally optimal block preconditioned conjugate gradient ( lobpcg ) method ; see .a number of possible approaches can be used to define an indicator vector and , hence , the related partition , based on the given eigenvector corresponding to the smallest eigenvalue of problem ( [ eqn : evp ] ) .for example , if the strict requirement on load balancing is enforced , i.e. , ( or if , in practice , is odd ) , then the set is formed by assigning the indices of smallest components of .the indices of the remaining components form the set . in general, however , the requirement on the full load balancing may be restrictive . for example , a slight imbalance in the cardinalities of and may lead to a significant improvement in the preconditioner quality .in such cases , ignoring the explicit requirement on the load balancing , one can , e.g. , consider all the _ negative _ components of as approximations to , and assign their indices to the set .the _ nonnegative _ components are then considered as approximations to , and their indices are put in .thus , the sets and can be defined as similarly , and can be formed as where is a component of with the median value . a known generalization of this approach is to consider , say , `` candidate '' partitions , such that where the values are some chosen components of the vector , e.g. , ; .the vector is obtained by sorting the eigenvector in the ascending order , and determines a linear search order .all partitions are used to evaluate ( [ eqn : subopt ] ) .the resulting bipartition is chosen to be the one , which delivers the smallest value of ( [ eqn : subopt ] ) , i.e. , corresponds to the minimizer of ( [ eqn : min_subopt ] ) over the `` candidate '' bipartitions in ( [ eqn : linorder_extr ] ) . in the partitioning algorithm below , we introduce a parameter _ loadbalance _ , which controls the sizes of the resulting sets and .in particular , the parameter defines a smallest possible size , say , of the sets and , so that the indices of the smallest and largest components of the eigenvector are moved into the sets and , respectively .the indices of the rest components are distributed among and similarly to ( [ eqn : linorder_extr ] ) , with , where and is a given parameter .let be the bipartition of the set resulting from the approach , based on the eigenvalue problem ( [ eqn : evp ] ) , discussed in the previous subsection .a natural way to construct further partitions is to apply the bipartitioning process separately to and , then to the resulting partitions and so on , until all the computed subpartitions are sufficiently small .the obtained procedure is similar to the well - known recursive spectral bisection ( rsb ) algorithm , see , e.g. , , which is based on computing the fiedler vector , i.e. , the eigenvector corresponding to the smallest eigenvalue of each of the subgraphs associated with the sets and , which are determined by ( [ eqn : evp ] ) , can be , and often is , disconnected .this constitutes an important difference between bipartitions delivered by eigenvalue problems ( [ eqn : evp ] ) and ( [ eqn : fiedler ] ) . at the same time , the assumption on the connectedness is crucial for eigenvalue problem ( [ eqn : evp ] ) to be well - posed for the given graph . indeed ,if a graph has more then one ( trivial ) connected component , then the dimension of the nullspace of the corresponding graph laplacian is larger than one , i.e. 
, the nullspace of is no longer spanned only by the vector . in this case , becomes symmetric positive _ semidefinite _ on , and hence ( [ eqn : evp ] ) does not lead to a correct symmetric eigenvalue problem on .the latter complication can be simply avoided by searching for the connected components in the subgraphs corresponding to and , and further treating the detected components as separate subpartitions in the suggested recursive procedure .therefore , it is important to realize that the bipartitioning step corresponding to problem ( [ eqn : evp ] ) may result in more than two connected components .finally , we note that if the weights take a single nonzero value , i.e. , are the same for all edges , then the weighted and the standard graph laplacians and represent the multiples of each other .this happens for matrices with a very special , `` regular '' , behavior of entries , e.g. , as for the discrete laplace operator with constant ( diffusion ) coefficients . in such cases, the eigenvector , which is used to define the bipartition , corresponds to the only nonzero eigenvalue of ( [ eqn : evp ] ) of multiplicity .this observation implies that _ any _ bipartition can be expected , and , hence , the results of the approach based on ( [ eqn : evp ] ) are highly uncertain . in these situations , we simply replace ( [ eqn : evp ] ) , e.g. , by eigenvalue problem ( [ eqn : fiedler ] ) , or use any other ( bi)partitioning scheme , which targets to satisfy the standard criteria of minimizing the edge cut and maintaining the load balance .let us now summarize the discussion into the following partitioning algorithm .[ alg : partn1 ] input : , .output : partition of . 1 .assign weights to the edges of .if are the same for all edges , then construct the bipartition using a standard approach , e.g. , based on the fiedler vector , and go to step with and .2 . construct graph laplacians and .3 . find the eigenpair corresponding to the smallest eigenvalue of problem ( [ eqn : evp ] ) .4 . define the bipartition based on the computed eigenvector .the sizes of and are controlled by the parameter _loadbalance_. 5 .find connected components in the subgraphs of corresponding to the vertex sets and .6 . for all with , apply cbspartition(, ) .if all , then return .the parameters _ loadbalance _ and _ maxsize _ in algorithm [ alg : partn1 ] are provided by the user .the connected components can be detected by the standard algorithms based on the breadth - first search ( bfs ) or the depth - first search ( dfs ) ; see , e.g. , .note that every weight assigned to the edge is the same at every level of the recursion in algorithm [ alg : partn1 ] , and , in practice , is assigned only once , i.e. , when constructing the adjacency graph of the whole matrix .in the previous section , we have used the idea of minimizing the cbs constant to define the edge weights for the matrix adjacency graph and obtain objective ( [ eqn : min_subopt ] ) for graph partitioning , which aims at increasing the quality of the as preconditioner in algorithm [ alg : as ] .below , we show that similar considerations can lead to the problems of graph partitioning with well - known objectives , such as the ( weighted ) cut minimization and the min - max cut . given a partition in ( [ eqn : partitij ] ) , ,let us consider the set where denotes the unit vector with at position and zeros elsewhere . unlike ( [ eqn : sample ] ) , the set in ( [ eqn : sample_mincut ] ) contains _ all _ pairs with and , i.e. 
, including those , which correspond to . thus , instead of maximizing ( [ eqn : qty ] ) to compute the cbs constant in ( [ eqn : cbsc ] ) for the given bipartition , we sample ( [ eqn : qty ] ) at all pairs in ( [ eqn : sample_mincut ] ) and then find the mean value .in other words , we define the quantity , such that where is the weighted cut , and are the weights assigned to the edges of the adjacency graph of .thus , following the pattern of the previous subsection , instead of the objective function , which gives an optimal bipartition , we suggest to minimize in ( [ eqn : subopt_mincut ] ) , i.e. , find such a bipartition that it is readily seen that ( [ eqn : min_subopt_mincut ] ) represents the well - known problem of graph partitioning , which aims at finding equal - sized vertex sets and with the minimal ( weighted ) edge cut . in particular , repeating the derivations in subsection [ subsec : uv ] for this problem , we can define the bipartition according to the components of the eigenvector corresponding to the smallest eigenvalue of where is the _ weighted _ graph laplacian .the recursive application of this procedure leads to the partitioning scheme , which is similar to the standard rsb algorithm with the difference that now edge weights are encapsulated into the graph laplacian .let us further assume that the matrix is diagonally scaled , i.e. , in this case , the diagonal entries of are all equal to , and the off - diagonal elements are less than one . in particular , we note that the weights defined in the previous sections simply coincide with the entries of the scaled matrix . given a partition in ( [ eqn : partitij ] ) of ,let us consider the set of pairs where and denotes the unit vector with at position and zeros elsewhere .the vectors in are defined to have components , such that similarly , the vectors in are such that now , following the approach exploited throughout this paper , instead of maximizing ( [ eqn : qty ] ) to compute the cbs constant in ( [ eqn : cbsc ] ) for the given bipartition , we sample ( [ eqn : qty ] ) at all pairs in ( [ eqn : sample_mcut ] ) and find the mean value . in particular , recalling that the diagonal entries of are equal to after the diagonal scaling , we get ^{1/2 } } + \displaystyle \sum_{j \in j } \frac{\displaystyle \sum_{i \in i } |a_{ji}|}{\left[\displaystyle \sum_{k , l \in i } a_{kl } \mbox{sign}(a_{jl } ) \mbox{sign}(a_{jk } ) \right]^{1/2 } } \right ) \geq \\ \displaystyle \frac{1}{n } \left(\displaystyle \sum_{i \in i } \frac{\displaystyle \sum_{j \in j } |a_{ij}|}{\left[\displaystyle \sum_{k , l \in j } |a_{kl}| \right]^{1/2 } } + \displaystyle \sum_{j \in j } \frac{\displaystyle\sum_{i \in i } |a_{ji}|}{\left[\displaystyle \sum_{k , l \in i } |a_{kl}| \right]^{1/2 } } \right ) \geq \frac{1}{n } \left ( \frac{\displaystyle \sum_{i \in i , j \in j } |a_{ij}|}{\displaystyle \sum_{k , l \in j } |a_{kl}| } + \frac{\displaystyle \sum_{j \in j , i \in i } |a_{ji}|}{\displaystyle \sum_{k , l \in i } |a_{kl}| } \right ) , \end{array}\ ] ] and define the quantity , such that where is the weighted cut between sets and with edge weights ; and .thus , instead of the objective function , which gives an optimal bipartition , one can attempt to minimize in ( [ eqn : subopt_mcut ] ) , i.e. 
, find the minimizer of minimization ( [ eqn : min_subopt_mcut ] ) represents the problem of finding the so - called min - max cut ( mcut ) ; see .we note that the explicit requirement has been dropped in ( [ eqn : min_subopt_mcut ] ) , however , the mcut is known to target balanced partitions .the corresponding algorithm , which attempts to construct satisfying ( [ eqn : min_subopt_mcut ] ) , is based on finding the eigenpair corresponding to the smallest eigenvalue of the problem where is the weighted graph laplacian and is the diagonal weighted degree matrix .the recursive application of this procedure delivers the actual partitioning scheme .we remark that eigenvalue problem ( [ eqn : evp_mcut ] ) is also used to construct the normalized cuts ( ncuts ) , introduced in .in this section , we present the results of graph partitioning with respect to the new objective ( [ eqn : min_subopt ] ) , and demonstrate the effects of the proposed partitioning strategy on the quality of preconditioning . in our numerical experiments , we apply algorithm [ alg : partn1 ] ( cbspartition ) , introduced in subsection [ subsec : recur ] , to a number of test problems with spd coefficient matrices .the resulting partitions ( subdomains ) are passed as an input to the as preconditioner in algorithm [ alg : as ] , which is used to accelerate the convergence of the pcg method .we refer to this solution scheme as pcg as . in order to assess the quality of the constructed as preconditioners , we consider the iteration counts of pcg as for different partitioning schemes . in particular ,we compare pcg as with partitions resulting from algorithm [ alg : partn1 ] versus pcg as with the standard partitioning based on the rsb algorithm .we also provide the comparisons for pcg as with partitioning schemes based on the ( weighted ) mincut and mcut objectives , discussed in section [ sec : other_spectral ] .although , throughout the paper , the formal discussion has been concerned only with the case of the nonoverlapping as procedure , in some of our numerical examples , we skip this theoretical limitation and expand the obtained partitions with several `` layers '' of neighboring nodes .this allows considering the effects of the partitioning strategies on the quality of the _ overlapping _ as preconditioners . in all the tests ,the right - hand side and the initial guess vectors are randomly chosen .we apply pcg as to the diagonally scaled linear systems , so that the corresponding coefficient matrices have on the diagonal ; see ( [ eqn : dscale ] ) . in this case , the weights , assigned to the edges of the adjacency graph , are equal to the entries of the scaled matrix . for all partitioning schemes ,the parameter _ loadbalance _ , which controls the load balancing , is set to ; _ loadbalance _ .the partitioning algorithms that we consider in the current paper are based on finding the eigenvectors of certain eigenvalue problems , i.e. , represent the _ spectral _ partitioning techniques .we recall that algorithm [ alg : partn1 ] computes an eigenpair of problem ( [ eqn : evp ] ) .the rsb algorithm targets the fiedler vector in ( [ eqn : fiedler ] ) .the approaches based on the ( weighted ) mincut and mcut use the eigenvectors of problems ( [ eqn : evp_mincut ] ) and ( [ eqn : evp_mcut ] ) , respectively . 
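To make the bipartitioning step concrete, the following minimal sketch (Python, using SciPy's LOBPCG) forms a weighted graph Laplacian from the magnitudes of the off-diagonal entries of a sparse SPD matrix and splits the index set at the median of the computed eigenvector. It follows the weighted-Laplacian variant of spectral bisection described above rather than the paper's generalized problem (eqn:evp), whose exact form is not reproduced here; the test matrix, the tolerances, the median split, and all function names are illustrative assumptions of this sketch.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg

def weighted_spectral_bipartition(A, tol=1e-5, maxiter=2000, seed=0):
    """One bipartitioning step driven by the weighted graph Laplacian of A."""
    n = A.shape[0]
    # edge weights taken from the magnitudes of the off-diagonal entries of A
    W = abs(A - sp.diags(A.diagonal())).tocsr()
    d = np.asarray(W.sum(axis=1)).ravel()
    L = sp.diags(d) - W                      # weighted graph Laplacian
    # iterate in the orthogonal complement of the constant vector, where L is SPD
    ones = np.ones((n, 1))
    x0 = np.random.default_rng(seed).standard_normal((n, 1))
    vals, vecs = lobpcg(L, x0, Y=ones, tol=tol, maxiter=maxiter, largest=False)
    v = vecs[:, 0]
    # median split enforces full load balancing; sign or "candidate" splits work too
    part_I = np.flatnonzero(v <= np.median(v))
    part_J = np.flatnonzero(v > np.median(v))
    return part_I, part_J

# example usage on a small SPD test matrix with varying couplings (illustrative only)
off = np.linspace(0.5, 2.0, 99)
A = sp.diags(np.r_[off, 0] + np.r_[0, off] + 0.1) - sp.diags(off, 1) - sp.diags(off, -1)
part_I, part_J = weighted_spectral_bipartition(A.tocsr())
```

The constraint matrix `Y` keeps the iterates orthogonal to the constant nullspace vector, which plays the same role as the subspace projection mentioned for the experiments below.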
in our numerical examples , as an underlying eigensolver , we use the lobpcg method , which allows to handle generalized symmetric eigenvalue problems , such as ( [ eqn : evp ] ) , without any factorizations of the matrices and , and can be easily configured to perform iterations on the subspace .the lobpcg algorithm is a form of a ( block ) three - term recurrence , which performs the local minimization of the rayleigh quotient ; see for more details .the method is known to be practical for large - scale eigenvalue computations , if a good preconditioner is provided to accelerate the convergence to the desired eigenpairs . forall lobpcg runs , we construct preconditioners using incomplete cholesky ( ic ) factors with the drop tolerance of matrices ( for problems ( [ eqn : evp ] ) , ( [ eqn : evp_mincut ] ) , and ( [ eqn : evp_mcut ] ) ) and ( for problem ( [ eqn : fiedler ] ) ) .the parameter is assigned with a small value , in our examples , to ensure that the ic procedure is correctly applied to the spd matrices . for problems ( [ eqn : fiedler ] ) , ( [ eqn : evp_mincut ] ) , and ( [ eqn : evp_mcut ] ) , we remove the orthogonality constraints on , and perform the block iteration , with the block size .the solution is then given by the eigenpair corresponding to the second smallest eigenvalue .for problem ( [ eqn : evp ] ) , however , we run a single vector iteration , choosing the initial guess from and ensure that the residuals are projected back to after the ic preconditioning . in all the tests ,the lobpcg residual norm tolerance is set to . as a model problemwe choose the diffusion equation \times [ 0,1]\ ] ] with zero dirichlet boundary conditions .the functions and are piecewise constants with jumps in the specified subregions . for all tests ,we use the standard -point fd discretization on -by- uniform grid to obtain a ( diagonally scaled ) linear systems of size .we consider two geometries for the location of jumps in coefficients and . in the first case, the jumps occur in the subregion of the domain .the second case is more complex , with jumps located in the `` checkerboard '' fashion ( `` -by- black - white checkerboard '' ) . in the following example, we assume that _ both _coefficients and have jumps in the subregion of the problem s domain , such that figure [ fig : j_xy ] shows the result of a single step , i.e. , the bipartitioning , performed by algorithm [ alg : partn1 ] ( top left ) , the standard rsb approach ( top right ) , as well as the weighted mincut ( bottom left ) and the mcut ( bottom right ) algorithms .+ we observe that , unlike the rest , the partition resulting from algorithm [ alg : partn1 ] does not perform `` cuts '' within the jump region .in fact , this is consistent with the well - established computational experience , which suggests that the subdomains with different physics phenomena should be `` segmented '' out of the original domain .similarly , we apply the bipartitioning step of all four approaches to the model problem , where the jump occurs only in the coefficient , i.e. 
, the resulting bipartitions are illustrated in figure [ fig : j_x ] .we note that the partitions given by algorithm [ alg : partn1 ] ( top left ) , the weighted mincut ( bottom left ) and the mcut ( bottom right ) algorithms do not discard the mesh edges in the -direction within the jump region .the border between the subdomains is of a `` smoother '' shape for the mincut and mcut , which is preferable in terms of minimizing the communication volume .however , as suggested by the numerical experiments below , the partitions based on algorithm [ alg : partn1 ] typically guarantee a smaller number of steps of the iterative scheme as the number of the desired subdomains becomes larger .we also remark that , although independent of the matrix coefficients , the partitions resulting from the standard rsb approach may not be unique , see figures [ fig : j_xy ] and [ fig : j_x ] ( top right ) , if the targeted eigenvalue has multiplicity greater than one .+ in figure [ fig : ev1 ] , we plot the components of the eigenvectors , corresponding to the smallest eigenvalues of ( [ eqn : evp ] ) , at the grid points . according to algorithm [ alg : partn1 ] ,such eigenvectors are used to construct the bipartition . indeed , the components of the eigenvectors are well - separated , which allows to easily determine the partitions and detect the `` weak '' connection between the grid nodes to be discarded in order to obtain the resulting bipartition .figure [ fig : j_x_cv ] demonstrates the effects of partitioning schemes on the quality of the nonoverlapping as preconditioners . in figure[ fig : j_x_cv ] ( left ) , we consider the pcg as runs for the model problem with coefficients in ( [ eqn : j_xy ] ) , where the as preconditioners are constructed with respect to the partitions delivered by different algorithms .the parameter _ maxsize _ , which determines the largest possible size of a single subdomain , is set to in this example . in figure[ fig : j_x_cv ] ( right ) , we apply pcg as for the model problem with coefficients in ( [ eqn : j_x ] ) . here , the parameter _ maxsize _ ,is set to .this corresponds to a more realistic situation , where each subdomain is significantly smaller than the original domain .the results in figure [ fig : j_x_cv ] suggest that pcg as with the partitioning scheme in algorithm [ alg : partn1 ] requires a noticeably smaller number of iterations to get the solution .we remark that the quality of the as preconditioning with respect to the partitions delivered by algorithm [ alg : partn1 ] may often depend on the value of the parameter _maxsize_. for example , in the case of the model problem with coefficients in ( [ eqn : j_xy ] ) , a small value of _ maxsize _ forces algorithm [ alg : partn1 ] to perform the partitioning inside the jump region , i.e. , inside the subdomain marked with `` o '' s in figure [ fig : j_xy ] ( top left ) .this , clearly , can lead to less remarkable gains in the use of algorithm [ alg : partn1 ] compared to the other partitioning approaches . in the following pair of examples, we assume the `` checkerboard '' pattern for the jump regions ( `` -by- black - white checkerboard '' ) .first , we consider jumps in both and in `` black '' positions , i.e. , in `` black '' , and in `` white '' .the corresponding bipartitions resulting from different partitioning schemes are given in figure [ fig : j_xy_checker ] .+ similarly , figure [ fig : j_x_checker ] demonstrates the bipartitions corresponding to the second case , with jumps only in , i.e. 
, in `` black '' , and in `` white '' , .+ we recall that the bipartitioning step of algorithm [ alg : partn1 ] may deliver two _ disconnected _ subdomains , i.e. , two subgraphs which possibly contain more than one connected component .each of these connected components is then processed separately in the recursive partitioning procedure .this explains the presence of more than two subdomains in ( top left ) figures [ fig : j_xy_checker ] and [ fig : j_x_checker]the nodes of each connected component , resulting from a single step of algorithm [ alg : partn1 ] , are plotted as separate regions .we also note that the recursive bipartitioning given by algorithm [ alg : partn1 ] may generate a number of small connected subdomains of sizes much smaller than the value of the _ maxsize _ parameter .such subdomains should be treated with care when being assigned to parallel processors . in figure [ fig :ev2 ] , we plot the components of the eigenvectors corresponding to the smallest eigenvalues of ( [ eqn : evp ] ) at the grid points for the `` checkerboard '' example .it is possible to see that , as in the previous examples , both eigenvectors attempt to capture the discontinuities in the coefficients of the model problem . in figure[ fig : j_x_checker_cv ] , we compare the convergence behavior of pcg as with different partitioning schemes .figure [ fig : j_x_checker_cv ] ( left ) corresponds to the case of the jumps in both and in `` black '' positions .we observe that for this relatively complex geometry of the jump locations , all partitioning schemes which use the information on the matrix coefficients result in the as preconditioners of a better quality . in this example, the number of pcg as iterations is typically similar for the partitioning techniques in algorithm [ alg : partn1 ] , the weighted mincut , and mcut .figure [ fig : j_x_checker_cv ] ( right ) demonstrates the runs of pcg as applied to the model problem with the jump only in in `` black '' positions . in this case, the iterative scheme with partitions resulting from algorithm [ alg : partn1 ] gives the fastest convergence . inboth , figure [ fig : j_x_checker_cv ] ( left ) and figure [ fig : j_x_checker_cv ] ( right ) , the _ maxsize _ parameter has been set to . finally , we apply the partitioning schemes , discussed in this paper , to a set of test problems from the university of florida sparse matrix collection . in particular , we consider ill - conditioned spd matrices arising in structural engineering and computational fluid dynamics . in tables[ tbl : ufl ] and [ tbl : ufl_ovlp ] , we report the numbers of iterations ( averaged after - sample runs ) required by pcg as to reach the tolerance in the residual norm ( relative to the norm of the right - hand side vectors ) for _ nonoverlapping _ and _ overlapping _ as procedures , respectively ..iteration numbers of the * nonoverlapping * pcg as with different partitioning schemes applied to test problems from the university of florida sparse matrix collection . [ cols="^,^,^,^,^,^ " , ]in the present paper , we have shown that using matrix coefficients for graph partitioning allows to achieve a noticeable decrease in the number of iterations performed by an iterative scheme . for a class of spd matrices and as preconditioners , we have suggested an approach for assigning weights to the edges of the adjacency graph and formulated a new partitioning objective , which aims at approximately minimizing the cbs constant . 
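The iteration counts reported above come from PCG with the (nonoverlapping) AS preconditioner assembled from the computed subdomains. As a hedged illustration of how a partition is consumed by the solver, the sketch below implements the nonoverlapping case as independent block solves over the subdomain index sets inside a textbook PCG loop; the factorization choice (a sparse LU per block), the tolerances, and the function names are assumptions of this sketch, not the exact algorithm [alg:as].

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def as_preconditioner(A, subdomains):
    """Nonoverlapping additive Schwarz: independent solves on each subdomain block."""
    A = A.tocsr()
    solvers = [(idx, splu(A[idx][:, idx].tocsc())) for idx in subdomains]
    def apply(r):
        z = np.zeros(len(r))
        for idx, lu in solvers:
            z[idx] = lu.solve(r[idx])
        return z
    return apply

def pcg(A, b, M_apply, tol=1e-6, maxiter=500):
    """Textbook preconditioned conjugate gradients; returns (solution, iterations)."""
    x = np.zeros(A.shape[0])
    r = b - A @ x
    z = M_apply(r)
    p = z.copy()
    rz = r @ z
    for k in range(1, maxiter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k
        z = M_apply(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter

# usage: `subdomains` is a list of integer index arrays produced by the partitioner,
# b a real right-hand side; e.g.  x, iters = pcg(A, b, as_preconditioner(A, subdomains))
```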
the partitioning algorithm resulting from this objective is based on computing the eigenpairs corresponding to the smallest eigenvalues of a sequence of generalized eigenvalue problems , which involve both weighted and standard graph laplacians . in particular , the proposed technique inherits all the characteristic features of spectral partitioning algorithms : high - quality partitions on the one hand , and the computational expense of finding eigenvectors on the other . thus , in order to obtain highly efficient graph partitioning schemes , it is important to study all aspects of the underlying eigencomputations , e.g. , preconditioning , the use of alternative eigenvalue solvers , and possible ways to replace the eigenvalue problems by linear systems . other approaches to satisfying the suggested partitioning objective may be provided , e.g. , by ( multilevel ) combinatorial graph partitioning techniques or by certain extensions of greedy algorithms . we note that methods targeting the new partitioning objective may be combined with communication - minimizing techniques . as one could conclude from this work , it is likely that different choices of iterative methods and preconditioning strategies may require different schemes for graph partitioning with matrix coefficients . in the current paper , we have considered the case of pcg with as preconditioning . exploring partitioning for other forms of parallel preconditioning ( e.g. , incomplete factorizations and multigrid ) is a natural continuation of the research in this direction . constructing partitioning algorithms with matrix coefficients for nonsymmetric problems is also of particular interest . , _ non - standard parallel solution strategies for distributed sparse linear systems _ , in parallel computation : proc . of acpc99 , a. u. p. zinterhof , m. vajtersic , ed . , lecture notes in computer science , berlin , 1999 , springer - verlag . | prior to the parallel solution of a large linear system , it is required to perform a partitioning of its equations / unknowns . standard partitioning algorithms are designed with the efficiency of the parallel matrix - vector multiplication in mind , and typically disregard the information contained in the coefficients of the matrix . this information , however , may have a significant impact on the quality of the preconditioning procedure used within the chosen iterative scheme . in the present paper , we suggest a spectral partitioning algorithm that takes into account the information on the matrix coefficients and constructs partitions with respect to the objective of increasing the quality of the additive schwarz preconditioning for symmetric positive definite linear systems . numerical results for a set of test problems demonstrate a noticeable improvement in the robustness of the resulting solution scheme when the new partitioning approach is used . graph partitioning , iterative linear system solution , preconditioning , cauchy - bunyakowski - schwarz ( cbs ) constant , symmetric positive definite , spectral partitioning 15a06 , 65f08 , 65f10 , 65n22
it has been difficult to read the recent financial news without finding mention of collateralized debt obligations ( cdo s ) . these financial instrumentsprovide ways of aggregating risk from a large number of sources ( viz .bonds ) and reselling it in a number of parts , each part having different risk - reward characteristics . notwithstandingthe role of cdo s in the recent market meltdown , the near future will no doubt see the financial engineering community continuing to develop structured investment vehicles like cdo s .unfortunately , computational challenges in this area are formidable .the main types of these assets have several common problematic features : * they pool a large number of assets * they tranche the losses .the `` problematic '' nature of this combination is that the trancheing procedure is nonlinear ; as usual , the effect of a nonlinear transformation on a high - dimensional system is often difficult to understand .ideally , one would like a theory which gives , if not explicit answers , at least some guidance . in , we formulated a _ large deviations _analysis of a homogeneous pool of names ( i.e. bonds ) .the theory of large deviations is a collection of ideas which are often useful in studying rare events ( see for a more extensive list of references to large deviations analysis of financial problems ) . in ,the rare event was that the notional loss process exceeded the tranche attachment point for an investment - grade tranche .our interest here is heterogeneous pool of names , where the names can have different statistics ( under the risk - neutral probability measure ) .there are several perspectives from which to view this effort .one is that we seek some sort of _ homogenization _ or _ data fusion_. is there an effective macroscopic description of the behavior of the cdo when the underlying instruments are a large number of different types of bonds ? another is an investigation into the _ fine detail _ of the rare events which cause loss in the investment - grade tranches .there may be many ways or `` configurations '' for the investment - grade tranches to suffer losses .which one is most likely to happen ?this is not only of academic interest ; it also is intimately tied to quantities like loss given default and also to numerical simulations .we believe this to be an important component of a larger analysis of cdo s , particularly in cases where correlation comes from only a few sources ( we will pursue a simple form of this idea in subsection [ s : correlated ] ) .we will find a natural generalization of the result of , where the dominant term ( as the number of names becomes large ) was a relative entropy . here, the dominant term will be an integrated entropy , with the integration being against a distribution in `` name '' space .our main result is given in theorem [ t : main ] and .as in , we let ] as note that since ] ( * ? ? 
?thus has at least one cluster point .we actually assume that it is unique ; [ a : limitexists ] we assume that exists .in example [ ex : simpleexample ] , we would have that and in example [ ex : merton ] , we would similarly have that our next assumption reflects our interest in cases where where it is unlikely that the tranched loss process suffers any losses by time .note here that = \frac1n\sum_{n=1}^n \mu^{(n)}_n[0,t)=\int_{p\in [ 0,1]}p{\bar { u}^{(n)}}(dp)\ ] ] for all .also note that by the formula and the fact that the variance of an indicator is less than or equal to , we see that the variance of tends to zero as .[ a : ig ] we assume that }p{\bar { u}}(dp)<\alpha.\ ] ] assumption [ a : limitexists ] implies that }p{\bar { u}}(dp ) = \lim_{n\to \infty}\int_{p\in [ 0,1]}p{\bar { u}^{(n)}}(dp ) = \lim_{n\to \infty}\frac{1}{n}\sum_{n=1}^n \mu^{(n)}_n[0,t).\ ] ] thus assumption [ a : ig ] is equivalent to the requirement that in the case of example [ ex : simpleexample ] , assumption [ a : ig ] is that and in the case of example [ ex : merton ] , assumption [ a : ig ] is that [ l : tail ] thanks to assumption [ a : ig ] , we have that . assumption [ a : ig ] is exactly that for sufficiently large , <\alpha ] .thus if , assumption [ a : ig ] is satisfied .in fact if is large enough , , so this is not a very interesting case .there is simply too much certainty here .note that assumption [ a : ig ] implies a bound on the number of bonds with certain default ; since for all ] and .since the result will require a fair amount of notation , let s verbally understand its structure first .the point of was that the dominant asymptotic of the price was a relative entropy term ; this entropy was that of relative to the risk - neutral probability of default . in ,all bonds were identically distributed , so this amounted to the entropy of a single reference coin flip ( the coin flip encapsulating default ) .here we have a distribution of coins , one for each name .not surprisingly , perhaps , our answer again involves relative entropy , but where we average over `` name''-space , and where we minimize over all configurations whose average loss is .to state our main result , we need some notation . for all and in , define \ln \frac{1}{1-\beta_2 } & \text{for , } \\\infty & \text{else.}\end{cases}\ ] ] for each and } ] and \lambda=\infty \lambda=-\infty ] .if , there is a unique { \bar { v}}\in { \mathcal{g}}_{\alpha'} { \bar { v}}=\mu^\dagger_{\alpha'} ] so the asymptotic behavior of the premium is given by }{n^{3/2}(\beta-\alpha)\sqrt{2\pi\sigma^2(\alpha,{\bar { u}})}{\left\ { } \newcommand{\rb}{\right\}}\sum_{t\in { \mathcal{t } } } e^{-{\textsf{r}}t}\rb}\\ & \qquad \times { \left\ { } \newcommand{\rb}{\right\}}\frac{e^{-\lambda(\alpha,{\bar { u}})}}{(1-e^{-\lambda(\alpha,{\bar { u}})})^2 } + \frac{{\lceil n \alpha \rceil}-n \alpha}{1-e^{-\lambda(\alpha,{\bar { u } } ) } } + { \mathcal{e}}'(n)\rb \exp\left[-n { \mathfrak{i}}(\alpha,{\bar { u}^{(n)}})\right ] \end{aligned}\ ] ] where .[ r : discretization ] although the dominant exponential asymptotics follows from theorem [ t : main ] , we can not replace in theorem [ t : main ] by ; the pre - exponential asymptotics of theorem [ t : main ] are at too fine a resolution to allow that .a careful examination of the calculations of lemma [ l : ficont ] reveals that and should differ by something on the order of the distance ( in the prohorov metric ) between and . 
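Two objects drive theorem [t:main]: the tilting parameter lambda(alpha, u^(N)) and the averaged relative entropy I(alpha, u^(N)) that appears in the exponent. The sketch below computes both for a small, made-up discrete pool; the explicit form of the tilt phi(p, lambda) used here is the standard exponential tilt of a Bernoulli(p) probability, chosen to be consistent with the tilting relations of section [s:measurechange], and the pool itself is an illustrative assumption rather than data from the paper.

```python
import numpy as np
from scipy.optimize import brentq

def hbar(b1, b2):
    """Relative entropy of Bernoulli(b1) with respect to Bernoulli(b2)."""
    xlog = lambda x, y: 0.0 if x == 0.0 else x * np.log(x / y)
    return xlog(b1, b2) + xlog(1.0 - b1, 1.0 - b2)

def phi(p, lam):
    """Exponential tilt of a Bernoulli(p) default probability (assumed form)."""
    return p * np.exp(lam) / (1.0 - p + p * np.exp(lam))

def rate(alpha, probs, weights):
    """lambda(alpha, u_bar) and I(alpha, u_bar) for u_bar = sum_k weights[k] * delta_{probs[k]}."""
    lam = brentq(lambda l: np.dot(weights, phi(probs, l)) - alpha, -50.0, 50.0)
    return lam, np.dot(weights, [hbar(phi(p, lam), p) for p in probs])

# an illustrative pool: three groups of names with distinct default probabilities
probs = np.array([0.01, 0.03, 0.08])
weights = np.array([1.0, 1.0, 1.0]) / 3.0
lam, I = rate(0.20, probs, weights)     # alpha = 20% attachment point
# the dominant factor in theorem [t:main] then behaves like exp(-N * I)
```

The root-finding in lambda is one-dimensional no matter how many distinct default probabilities the pool contains, which is what keeps the rate cheap to evaluate for large heterogeneous pools.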
in general, we should expect that this distance would be of order ; as an example consider approximating a uniform distribution on by point masses at multiples of .then we would have that .this term would contribute to the pre - exponential asymptotics of theorem [ t : main ] . to close this section ,we refer the reader to section [ s : merton ] , where we simulate our results for the merton model of example [ ex : merton ] .we also point out that it would not be hard to combine the calculations of sections [ s : measurechange ] and [ s : asympanal ] to get an asymptotic formula for the loss given default of the cdo .the terms in front of the ] . as in , we want to compute the asymptotic ( for large ) likelihood that .we want to do this via a collection of arguments stemming from the theory of large deviations .the value of the calculations in this section is that they naturally lead to a measure transformation ( cf .section [ s : measurechange ] ) which will lead to the precise asymptotics of theorem [ t : main ] . for the moment , it is sufficient for our arguments to be formal ; it is sufficient to _ guess _ a large deviations rate functional for . in the ensuing parts of this paperwe will show that this guess is correct ( cf .section [ s : asympanal ] ) .define now for our calculations here in this section , we will assume that exists ( as a limit in ) .see example [ ex : walk ] .our approach is similar to that of ; we first identify a large deviations principle for , and then use the contraction principle to find what should be a rate function for .we hopefully can identify the large deviations principle for by looking at the asymptotic moment generating function for and appealing to the grtner - ellis theorem .the following result gets us started .[ l : momgen ] for , \right ] = \int_{\rho\in { { \mathscr{p}}(i)}}{\left\ { } \newcommand{\rb}{\right\}}\ln \int_{t\in i}e^{\varphi(t)}\rho(dt)\rb { u}(d\rho).\ ] ] to make this a bit clearer , let s first carry out these calculations for our test case .for example [ ex : simpleexample ] , \right ] = \lim_{n\to \infty}\frac{1}{n}\ln { \mathbb{e}}_n\left[\exp\left[\sum_{n=1}^n \varphi(\tau_n)\right]\right]\\ & \qquad = \lim_{n\to \infty}\frac{1}{n}\ln \prod_{n=1}^n { \mathbb{e}}_n\left[\exp\left[\varphi(\tau_n)\right]\right]\\ & \qquad = \lim_{n\to \infty}\frac{1}{n}{\left\ { } \newcommand{\rb}{\right\}}\left \lfloor \frac{n}{3}\right\rfloor \ln \int_{t\in i}e^{\varphi(t)}\check \mu_a(dt ) + \left(n-\left \lfloor \frac{n}{3}\right \rfloor\right ) \ln \int_{t\in i}e^{\varphi(t)}\check \mu_b(dt)\rb \\ & \qquad = \frac{1}{3 } \ln \int_{t\in i}e^{\varphi(t)}\check \mu_a(dt ) + \frac{2}{3}\ln \int_{t\in i}e^{\varphi(t)}\check \mu_b(dt ) .\end{aligned}\ ] ] we can now prove the result in full generality . for every , \right ] = \frac{1}{n}\ln { \mathbb{e}}_n\left[\exp\left[\sum_{n=1}^n \varphi(\tau_n)\right]\right]\\ & \qquad = \frac{1}{n}\ln \prod_{n=1}^n { \mathbb{e}}_n\left[\exp\left[\varphi(\tau_n)\right]\right ] = \frac{1}{n}\sum_{n=1}^n \ln \int_{t\in i}e^{\varphi(t)}\mu^{(n)}_n(dt)\\ & \qquad = \frac{1}{n}\sum_{n=1}^n \int_{\rho\in { { \mathscr{p}}(i)}}{\left\ { } \newcommand{\rb}{\right\}}\ln \int_{t\in i}e^{\varphi(t)}\rho(dt)\rb \delta_{\mu^{(n)}_n}(d\rho)= \int_{\rho\in { { \mathscr{p}}(i)}}{\left\ { } \newcommand{\rb}{\right\}}\ln \int_{t\in i}e^{\varphi(t)}\rho(dt)\rb { u}^{(n)}(d\rho ) . 
\end{aligned}\ ] ] now use remark [ r : continuity ] ; the claimed result thus follows .we next appeal to the insights of large deviations theory .we expect . ] that will be governed by a large deviations principle ( in ) with rate function , we treat as a subset of .] by the contraction principle of large deviations , we then expect that should be governed by a large deviations principle ( in { \mathfrak{u}}^{(n)}_n\in ( 0,1) { \mathfrak{u}}^{(n)}_n\in \{0,1\} \lambda\ge 0 \lambda<0 \lambda\ge 0 \lambda<0 ] for all .next , we note that if , then if and only if , and if and only if .we can also take derivatives . for and , and thus , we have that is continuous on ] . in light of these thoughts, we note that and if so , .we can now make several calculations about .first , with }(t)&\text{if } \\ 1 & \text{if }\end{cases}\ ] ] for all .in light of , we have that each is finite and strictly positive .let s also note that }\phi(p,\lambda(\alpha,{\bar { u}^{(n)}})){\bar { u}^{(n)}}(dp ) = \alpha . \end{aligned}\ ] ] ( clearly if ; by , we also have that if ) .[ t : measurechange ] we have that = i_ne^{-n{\mathfrak{i}}(\alpha)}\ ] ] for all positive integers , where \chi_{\{\gamma_n>0\}}\right]\ ] ] where in turn \qquad a\in { \mathscr{f}}\\ \gamma_n&= \sum_{n=1}^n { \left\ { } \newcommand{\rb}{\right\}}\chi_{[0,t)}(\tau_n)-\alpha\rb = n(l_{t-}^{(n)}-\alpha)).\end{aligned}\ ] ] under , are independent and has law for . set }(t ) & \text{if } \\ 0 & \text{if }\end{cases } \qquadt\in i\\ \gamma_n & { \overset{\text{def}}{=}}\sum_{n=1}^n \psi^{(n)}_n(\tau_n)- \sum_{n=1}^n \int_{t\in i}\psi^{(n)}_n(t)\tilde \mu^{(n)}_n(dt)\end{aligned}\ ] ] ( as we pointed out above , each is positive and finite on all of , ensuring that is well - defined ) .then = \frac{{\mathbb{e}}_n\left[{\textbf{p}^{\text{prot}}}_n \exp\left[-\gamma_n\right]\exp\left[\gamma_n\right]\right]}{{\mathbb{e}}_n\left[\exp\left[\gamma_n\right]\right]}{\mathbb{e}}_n\left[\exp\left[\gamma_n\right]\right].\ ] ] some straightforward calculations ( recall ) show that }\hbar(\phi(p,\lambda(\alpha,{\bar { u}^{(n)}})),p){\bar { u}^{(n)}}(dp)\\ & = n{\mathfrak{i}}(\alpha,{\bar { u}^{(n)}})\\ \exp\left[\sum_{n=1}^n \psi^{(n)}_n(\tau_n)\right]&= \exp\left[\sum_{n=1}^n\ln \frac{d\tilde \mu^{(n)}_n}{d\mu^{(n)}_n}(\tau_n)\right ] = \prod_{n=1}^n \frac{d\tilde \mu^{(n)}_n}{d\mu^{(n)}_n}(\tau_n ) \end{aligned}\ ] ] we chose exactly so that the following calculation holds : \right]= e^{-n{\mathfrak{i}}(\alpha,{\bar { u}^{(n)}})}{\mathbb{e}}_n\left[\prod_{n=1}^n \frac{d\tilde \mu^{(n)}_n}{d\mu^{(n)}_n}(\tau_n)\right ] = e^{-n{\mathfrak{i}}(\alpha,{\bar { u}^{(n)}})}.\ ] ] we also clearly have that \right]}{{\mathbb{e}}_n\left[\exp\left[\gamma_n\right]\right]}\\ = \frac{{\mathbb{e}}_n\left[\chi_a\exp\left[\sum_{n=1}^n \psi^{(n)}_n(\tau_n)\right]\right]}{{\mathbb{e}}_n\left[\exp\left[\sum_{n=1}^n \psi^{(n)}_n(\tau_n)\right]\right ] } = \tilde { \mathbb{p}}_n(a)\ ] ] for all .the properties of are clear from the explicit formula .finally , it is easy to check that }(\tau_n)-\tilde \mu^{(n)}_n[t,\infty]\rb \\ & = \sum_{\substack{1\le n\le n \\ { \mathfrak{u}}^{(n)}_n\in ( 0,1 ) } } { \left\ { } \newcommand{\rb}{\right\}}\ln \frac{\tilde { \mathfrak{u}}^{(n)}_n}{{\mathfrak{u}}^{(n)}_n}-\ln \frac{1-\tilde { \mathfrak{u}}^{(n)}_n}{1-{\mathfrak{u}}^{(n)}_n } \rb { \left\ { } \newcommand{\rb}{\right\}}\chi_{[0,t)}(\tau_n)-\tilde \mu^{(n)}_n[0,t)\rb \\ & = \sum_{\substack{1\le n\le n \\ { \mathfrak{u}}^{(n)}_n\in ( 0,1 ) } } \ln 
\left(\frac{\phi({\mathfrak{u}}^{(n)}_n,\lambda(\alpha,{\bar { u}^{(n)}}))}{1-\phi({\mathfrak{u}}^{(n)}_n,\lambda(\alpha,{\bar { u}^{(n ) } } ) ) } \frac{1-{\mathfrak{u}}^{(n)}_n}{{\mathfrak{u}}^{(n)}_n}\right ) { \left\ { } \newcommand{\rb}{\right\}}\chi_{[0,t)}(\tau_n)-\tilde \mu^{(n)}_n[0,t)\rb .\end{aligned}\ ] ] a straightforward calculation shows that for any and , recall now and note that if , then -a.s . , while if then -a.s . . thus -a.s . combining things together , we get that -a.s ., recall now . by and, we see that is nonzero only if ; we have explicitly included this in the expression for .we proceed now as in .define ; then is the nonnegative collection of values which can take . for each ,let ] ). then \right].\ ] ] the behavior of is very nice for large .[ l : hasymp ] for all , we have that where section [ s : has ] is dedicated to the proof of this result .we can also see that the distribution of is nice for large .the proof of this result is qualitatively different than the corresponding proof of lemma 5.2 in .[ l : probasymp ] we have that for all and all , where is as in and where the proof of this is the subject of section [ s : fourier ] ; the result is in some sense a statement of convergence in the `` vague '' topology .we can now set up the proof theorem [ t : main ] .for , define {\left\ { } \newcommand{\rb}{\right\}}\frac{e^{-\lambda}}{(1-e^{-\lambda})^2 } + \frac{{\lceil n \alpha \rceil}-n\alpha}{1-e^{-\lambda}}\rb \end{aligned}\ ] ] then , as in , {\left\ { } \newcommand{\rb}{\right\}}\frac{e^{-\lambda}}{(1-e^{-\lambda})^2 } + \frac{{\lceil n \alpha \rceil}-n \alpha}{1-e^{-\lambda } } + { \mathcal{e}}_3(\lambda , n)\rb\ ] ] where there is a such that }{\lambda(1-e^{-\lambda})^2}\ ] ] for all positive integers and all .we have that where \\ \tilde { \mathcal{e}}_2(n)&{\overset{\text{def}}{=}}\frac{1}{\sqrt{2\pi n\sigma^2(\alpha,{\bar { u}})}}\sum_{\substack{s\in { \mathsf{s}}_n\\s\le n^{1/4}}}h_n(s)e^{-\lambda(\alpha,{\bar { u}^{(n ) } } ) s}{\mathcal{e}}_2(s , n ) \\ \tilde { \mathcal{e}}_3(n)&{\overset{\text{def}}{=}}\frac{e^{-{\textsf{r}}t}}{n^{3/2}(\beta-\alpha)\sqrt{2\pi \sigma^2(\alpha,{\bar { u}})}}\sum_{\substack{s\in { \mathsf{s}}_n\\s\le n^{1/4}}}se^{-\lambda(\alpha,{\bar { u}^{(n ) } } ) s}{\mathcal{e}}_1(s , n ) \\ \tilde { \mathcal{e}}_4(n)&{\overset{\text{def}}{=}}\frac{e^{-{\textsf{r}}t}}{n^{3/2}(\beta-\alpha)\sqrt{2\pi \sigma^2(\alpha,{\bar { u}})}}\exp\left[-\lambda(\alpha,{\bar { u}^{(n)}})\left({\lceil n \alpha \rceil}-n \alpha\right)\right]{\mathcal{e}}_3(\lambda(\alpha,{\bar { u}^{(n)}}),n)\\ \tilde { \mathcal{e}}_5(n)&{\overset{\text{def}}{=}}\frac{e^{-{\textsf{r}}t}}{n^{3/2}(\beta-\alpha)\sqrt{2\pi \sigma^2(\alpha,{\bar { u}})}}{\left\ { } \newcommand{\rb}{\right\}}\tilde i_{2,n}(\lambda(\alpha,{\bar { u}^{(n)}}))-\tilde i_{2,n}(\lambda(\alpha,{\bar { u}}))\rb \end{aligned}\ ] ] keep in mind now the second claim of . we see that there is a such that for sufficiently large furthermore , we can fairly easily see that there is a such that for all sufficiently large ( note from and that is uniformly bounded in as long as is bounded away from zero from below ) . finally , we get that there is a such that for sufficiently large .combine things together within the framework of theorem [ t : measurechange ] to get the stated result .as an example of how the computations of section [ s : model ] work , let s delve a bit more deeply into example [ ex : merton ] . 
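Before spelling out the example in detail, the following sketch shows the typical computation behind it: a default probability per name as a function of its volatility under a Merton-type terminal-value rule, mixed over a gamma law for the volatility to produce the expected notional loss. The default convention (default iff the terminal asset value falls below the barrier) and every parameter value below are illustrative assumptions of this sketch and are not the values used in the example.

```python
import numpy as np
from scipy.stats import norm, gamma
from scipy.integrate import quad

# assumed parameters: risk-neutral drift, initial value, barrier, expiry, gamma law of sigma
r, V0, B, T = 0.05, 1.0, 0.6, 5.0
shape, scale = 2.0, 0.15

def p_default(sigma):
    """P(V_T < B) when V_T = V0 * exp((r - sigma^2/2) * T + sigma * sqrt(T) * Z)."""
    return norm.cdf((np.log(B / V0) - (r - 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T)))

# expected notional loss E[p_default(sigma)] with sigma ~ Gamma(shape, scale);
# assumption [a:ig] asks this number to stay below the attachment point alpha
mean_loss, _ = quad(lambda s: p_default(s) * gamma.pdf(s, a=shape, scale=scale), 1e-9, np.inf)
print(mean_loss)
```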
to be very explicit , let s assume that all the names are governed by the merton model with risk - neutral drift , initial valuation , and bankruptcy barrier .we assume that expiry is .assume that the volatility is distributed according to a gamma distribution with size parameter and shape parameter ; is then given by .numerical integration shows that }p{\bar { u}}(dp)=.0738.\ ] ] to understand how our calculations work , figure [ fig : fa ] is a plot of the function }\phi(p,\lambda){\bar { u}}(dp).\ ] ] thus if the attachment point of the tranche is , we would have .} \phi(p,\lambda){\bar { u}}(dp) ] is nonnegative ( but possibly infinite ) , so is well - defined ( but possibly infinite ) via the theory of lebesgue integration ; we can approximate { \mathfrak{u}}^{(n)}_n\in ( 0,1) { \mathfrak{u}}^{(n)}_n\in \{0,1\} { \mathfrak{u}}^{(n)}_n\in ( 0,1) { \mathfrak{u}}^{(n)}_n\in \{0,1\} ] .recall lemma 6.2 of and its proof .measurability and integrability are clear ( use instead of ( 13 ) of ) .define next : \mu^{(n)}_n[0,t - t)=0\rb \wedge t;\ ] ] then but for all .we can thus use lemma 6.2 of to see that if , then =m^{(n , n)}_s { \mathfrak{u}}^{(n)}_n\in ( 0,1) { \mathfrak{u}}^{(n)}_n\in \{0,1\} { \mathfrak{u}}^{(n)}_n\in ( 0,1) { \mathfrak{u}}^{(n)}_n\in \{0,1\} ] .finally , we claim that for any , =0;\ ] ] if so , we can fairly easily conclude that =m^{(n , n)}_s ] , and so -a.s .( and thus again by absolute continuity -a.s . ) here we calculate that this proves and completes the proof .let s now recombine things .set for .also observe that we next rewrite to be in reverse time .set then , as in section 7 of , we know that .fix now two parameters and .we want to show ( this will occur in ) that it is unlikely that ; we want to do this by exploiting the equation assume now that in fact .firstly , this implies that ( see figure 3 of ) .thus on the other hand , we can combine and the second inequality of and the fact that the s are nonincreasing to see that for , also , \le 1 ] .assume next that .for , thus for ( again using the fact that s are independent under ) we have that -a.s . }{\mu^{(n)}_n[0,t)}{z^{(n)}}_0 .\end{gathered}\ ] ] thus \le \frac{2{z^{(n)}}_0}{\mu^{(n)}_n[0,t)}\int_{r_1\in ( 0,t)}\int_{r_2\in ( 0,r_1]}\frac{1}{\mu^{(n)}_n[0,r_1]}\mu^{(n)}_n(dr_2)\mu^{(n)}_n(dr_1)\\ \le\frac{2{z^{(n)}}_0}{\mu^{(n)}_n[0,t)}\int_{r_1\in(0,t)}\mu^{(n)}_n(dr_1 ) \le 2. \end{gathered}\ ] ] summarizing thus far , we have that since , we have let s finally bound .the above bound will show us that it is unlikely that . on the other hand , if , then in fact .thus \le\delta + \frac{t}{{\varepsilon}\delta_n({\varepsilon},\delta)}{\left\ { } \newcommand{\rb}{\right\}}\frac{\gamma^+_n}{n } + \frac{1}{n}+ \sqrt{\frac{12}{n}}\rb.\ ] ] in other words , \chi_{\{\gamma_n = s\}}\le \delta + \frac{t}{{\varepsilon}\delta_n({\varepsilon},\delta)}{\left\ { } \newcommand{\rb}{\right\}}\frac{1}{n^{3/4 } } + \frac{1}{n}+ \sqrt{\frac{12}{n}}\rb.\ ] ] we now use . take , then , and finally .let s start by representing as a fourier transform ; that will allow us to mimic various arguments from the central limit theorem . 
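The change of measure in theorem [t:measurechange] also suggests a simple importance-sampling check of the rare-event probability itself: simulate the default indicators under the tilted probabilities phi(p, lambda(alpha, u^(N))) and reweight each path by the likelihood ratio, so that the event that the notional loss exceeds alpha is no longer rare under the sampling measure. The sketch below does this for a made-up heterogeneous pool; the form of the tilt, the pool, and the sample sizes are assumptions of the sketch, and it estimates only the loss probability, not the premium formula of theorem [t:main].

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)

def phi(p, lam):
    """Assumed exponential tilt of the per-name default probabilities."""
    return p * np.exp(lam) / (1.0 - p + p * np.exp(lam))

# a made-up heterogeneous pool of N names and an attachment point alpha
probs = rng.uniform(0.01, 0.08, size=200)
alpha = 0.20

# lambda(alpha, u_bar^(N)) solves the tilting equation for the empirical measure
lam = brentq(lambda l: np.mean(phi(probs, l)) - alpha, -50.0, 50.0)
q = phi(probs, lam)

# importance sampling: defaults drawn under the tilted measure, reweighted back
n_paths = 20_000
defaults = rng.random((n_paths, probs.size)) < q
k = defaults.sum(axis=1)
log_lr = (defaults * np.log(probs / q) + (~defaults) * np.log((1 - probs) / (1 - q))).sum(axis=1)
estimate = np.mean(np.exp(log_lr) * (k >= np.ceil(alpha * probs.size)))
print(estimate)     # the tranche-loss probability, tiny but now estimable with few paths
```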
for and , define = \sum_{n=0}^n \exp\left[i\theta(n - n \alpha)\right]\tilde { \mathbb{p}}_n{\left\ { } \newcommand{\rb}{\right\}}\gamma_n = n- n \alpha\rb.\ ] ] thus = \sum_{n=0}^n e^{i\theta n}\tilde { \mathbb{p}}_n{\left\ { } \newcommand{\rb}{\right\}}\gamma_n = n- n \alpha\rb.\ ] ] thus for for some , e^{-i\theta n}d\theta \\= \frac{1}{2\pi}\int_{\theta=-\pi}^{\pi } { \mathcal{p}}_n(\theta ) e^{-i\theta s}d\theta . \end{gathered}\ ] ] and so by a change of variables , \theta.\ ] ] this last representation is the same scaling as for the central limit theorem .the advantage of using is that we can explicitly compute it .we have that \right ] = \prod_{n=1}^n \tilde { \mathbb{e}}_n\left[\exp\left[i\theta { \left\ { } \newcommand{\rb}{\right\}}\chi_{[0,t)}(\tau_n ) - \tilde { \mathfrak{u}}^{(n)}_n\rb\right]\right ] \\ & = \prod_{n=1}^n { \left\ { } \newcommand{\rb}{\right\}}\tilde { \mathbb{e}}_n\left[\exp\left[i\theta \chi_{[0,t)}(\tau_n)\right]\right]\exp\left[-i\theta \tilde { \mathfrak{u}}^{(n)}_n\right]\rb = \prod_{n=1}^n { \left\ { } \newcommand{\rb}{\right\}}\left(e^{i\theta}\tilde { \mathfrak{u}}^{(n)}_n + 1-\tilde { \mathfrak{u}}^{(n)}_n\right)\exp\left[-i\theta \tilde { \mathfrak{u}}^{(n)}_n\right]\rb\end{aligned}\ ] ] ( the part of the last equality due to for which is obvious ; for those for which we use ) we can now start to see the important asymptotic behavior of . before actually launching into the proof , we need to study of for a moment .[ l : sigmalim ] we have that for each , the map is continuous and positive on .we first observe that }\phi(p,\lambda(\alpha,{\bar { u}^{(n)}})){\left\ { } \newcommand{\rb}{\right\}}1-\phi(p,\lambda(\alpha,{\bar { u}^{(n)}})\rb{\bar { u}^{(n)}}(dp ) = \sigma^2(\alpha,{\bar { u}^{(n ) } } ) . \end{gathered}\ ] ] if and in are such that , then we can write }\left|\phi(p,\lambda(\alpha,{\bar { v}}_n)){\left\ { } \newcommand{\rb}{\right\}}1-\phi(p,\lambda(\alpha,{\bar { v}}_n))\rb -\phi(p,\lambda(\alpha,{\bar { v}})){\left\ { } \newcommand{\rb}{\right\}}1-\phi(p,\lambda(\alpha,{\bar { v}}))\rb\right|{\bar { v}}_n(dp)\\ + \left|\int_{p\in [ 0,1]}\phi(p,\lambda(\alpha,{\bar { v}})){\left\ { } \newcommand{\rb}{\right\}}1-\phi(p,\lambda(\alpha,{\bar { v}}))\rb{\bar { v}}_n(dp)-\int_{p\in [ 0,1]}\phi(p,\lambda(\alpha,{\bar { v}})){\left\ { } \newcommand{\rb}{\right\}}1-\phi(p,\lambda(\alpha,{\bar { v}}))\rb{\bar { v}}(dp)\right| \end{gathered}\ ] ] from remark [ r : phiprops ] and in a way similar to arguments in the proofs of lemmas [ l : lambdacont ] and [ l : ficont ] , we have that }\left|\phi(p,\lambda(\alpha,{\bar { v}}_n)){\left\ { } \newcommand{\rb}{\right\}}1-\phi(p,\lambda(\alpha,{\bar { v}}_n))\rb -\phi(p,\lambda(\alpha,{\bar { v}})){\left\ { } \newcommand{\rb}{\right\}}1-\phi(p,\lambda(\alpha,{\bar { v}}))\rb\right|{\bar { v}}_n(dp)\\ \le \left|\lambda(\alpha,{\bar { v}}_n)-\lambda(\alpha,{\bar { v}})\right| \end{gathered}\ ] ] and we then use the continuity of lemma [ l : lambdacont ] in appendix b , and by weak convergence }\phi(p,\lambda(\alpha,{\bar { v}})){\left\ { } \newcommand{\rb}{\right\}}1-\phi(p,\lambda(\alpha,{\bar { v}}))\rb{\bar { v}}_n(dp)=\int_{p\in [ 0,1]}\phi(p,\lambda(\alpha,{\bar { v}})){\left\ { } \newcommand{\rb}{\right\}}1-\phi(p,\lambda(\alpha,{\bar { v}}))\rb{\bar { v}}(dp).\ ] ] this proves the stated continuity . finally , if , then if and only if the integrand ( which is nonnegative ) in is -a.s .this occurs if and only if for -a.e . 
] , and .a direct computation in particular thus shows that for all .we will need two bounds in the proof of lemma [ l : probasymp ] .the first bound is that is close to ] , thus \ } \subset { \mathbb{c}}\setminus { \mathbb{r}}_- ] , implies that fix .if and ] .recall next the standard fact that for all , where there is a such that for all . combining things together ,we conclude that for all and ] .collecting our calculations , we thus have that - 1\right)\right)- i\tilde { \mathfrak{u}}^{(n)}_n\frac{\theta}{\sqrt{n}}\rb = -\frac12 \sigma^2(\alpha,{\bar { u}^{(n)}})\theta^2 + \tilde { \mathcal{e}}_n(\theta)\ ] ] for all such that , where there is a such that for all and such that . the claimed result now easily follows .we next prove the uniform bound on .[ l : ubound ] there is a such that \ ] ] for all and . for ] . to do so, it suffices by continuity to check ; this can easily be done via lhpital s rule . ]that there is an such that for all ] ) we have that \\ \left|{\textsc{\tiny e}}_4(n)\right|&\le \sqrt{\frac{\sigma^2(\alpha,{\bar { u}})}{\sigma^2(\alpha,{\bar { u}^{(n)}})}}\exp\left[-\frac12\sigma^2(\alpha,{\bar { u}^{(n)}})n^{1/4}\right ] .\end{aligned}\ ] ] recalling , , and , we have that to finally bound , define which is fairly easily seen to be finite . again using, we have that for sufficiently large }{n^{1/8}}\sqrt{\frac{\sigma^2(\alpha,{\bar { u}})}{\sigma^2(\alpha,{\bar { u}^{(n)}})}}.\ ] ] combining things , the stated claim follows .we have intentionally formulated our assumptions to reflect their usage . for a large , we can readily check in a given situation if furthermore , we can construct the measure of . for a finite but large , this would suggest that we use theorem [ t : main ] and to price the cdo .our goal here is to take a slightly different tack and restructure our assumptions in the framework that the s are , in a sense , samples from an underlying distribution .we would like to reframe our assumptions in terms of this underlying distribution .our setup here is as follows .we define as in , and we assume that holds .[ ex : walk ] for example [ ex : simpleexample ] , we would have that and for example [ ex : merton ] , we would have that [ r : bho ] we also note that the relation between the s and can allow some complexities . for example , let }{\int_{t\in [ 0,\infty)}\exp\left[-\frac{n(t-1)^2}{2}\right]dt}. \qquad a\in { \mathscr{b}}(i)\ ] ] for every and , is very nice .however , it is fairly easy to see that , where the measure ( as an element of ) does not have a density with respect to lebesgue measure .this suggests that in certain situations , there is value in stating regularity assumptions on the limiting measure , rather than on the approximating sequence of the s .let s next define {u}(d\rho ) \qquad t\in i\ ] ] by lemma [ l : distmeas ] , we know that is a well - defined cdf on ; informally , is the expected notional loss distribution ( see ) .for example [ ex : simpleexample ] , we would have that + \frac23 \check \mu_b[0,t]\ ] ] and for example [ ex : merton ] , we would have that \frac{\sigma^{{\varsigma}-1}e^{-\sigma/\sigma_\circ}}{\sigma_\circ^{\varsigma}\gamma({\varsigma})}d\sigma\ ] ] for each , define . by lemma [ l : meas ] , we know that is a measurable map from to ] as \ ] ] for all .let s now turn to our assumptions . if , then assumption [ a : limitexists ] holds and .we first note that .fix ] is compact , .fix now . 
then ( using the notation of section [ s : proofs ] ) }\psi(p){\bar { u}^{(n)}}(dp)-\int_{p\in [ 0,1]}\psi(p)(p_*{u})(dp)\right|=\left|\int_{\rho\in { { \mathscr{p}}(i)}}\psi(\rho[0,t)){u}^{(n)}(d\rho)-\int_{\rho\in { { \mathscr{p}}(i)}}\psi(\rho[0,t)){u}(d\rho)\right|\\ & \qquad \le \left|\int_{\rho\in { { \mathscr{p}}(i)}}{\left\ { } \newcommand{\rb}{\right\}}\psi(\rho[0,t))-\psi({\mathbf{i}}_{\psi^-_{t , m}}(\rho))\rb { u}^{(n)}(d\rho)\right|\\ & \qquad \qquad + \left|\int_{\rho\in { { \mathscr{p}}(i)}}\psi({\mathbf{i}}_{\psi^-_{t , m}}(\rho)){u}^{(n)}(d\rho)-\int_{\rho\in { { \mathscr{p}}(i)}}\psi({\mathbf{i}}_{\psi^-_{t , m}}(\rho)){u}(d\rho)\right| \\ & \qquad \qquad + \left|\int_{\rho\in { { \mathscr{p}}(i)}}{\left\ { } \newcommand{\rb}{\right\}}\psi({\mathbf{i}}_{\psi^-_{t , m}}(\rho))-\psi(\rho[0,t))\rb { u}(d\rho)\right| .\end{aligned}\ ] ] by weak convergence , we have that for each .by dominated convergence , we also have that thirdly , we calculate that for each }{u}^{(n)}{\left\ { } \newcommand{\rb}{\right\}}\rho\in { { \mathscr{p}}(i ) } : \left|\rho[0,t)-{\mathbf{i}}_{\psi^-_{t , m}}(\rho)\right|\ge \delta\rb . \end{gathered}\ ] ] for every , , so by markov s inequality thus combine things together , take , them , and finally . if , then assumption [ a : ig ] holds .we will use the equivalent characterization of assumption [ a : ig ] given in .for each and in , we have that let to get that now let and use dominated convergence to see that this gives the desired claim .we can also check assumption [ a : nondegen ] in our two favorite examples .for example [ ex : simpleexample ] , we have that which is zero if and . for example [ ex : merton ] , we similarly have that we finally turn our attention to assumption [ a : notflat ] .[ l : notflatlemma ] if then assumption [ a : notflat ] holds . for all and , ] for each . secondly , for all , and , , let be such that , is decreasing , if , and if . for each and , let be such that , if , and if .we note that for all , , and , and that for all and .fix , , and and in .then take first .we get that now let and then , and use dominated convergence in both calculations .we get that now let to get the claim .for example [ ex : simpleexample ] , we have that which is zero if either or is not flat at . for example [ ex : merton ] , we have that this section we look more deeply into the variational problems which have appeared in our arguments .most of this section is motivational ; the only results we need in the body of the paper are the regularity results of lemmas [ l : lambdacont ] , [ l : ficont ] , and [ l : sopen ] , and the proof of lemma [ l : increasing ] .the remainder of the section is devoted to proving lemmas [ l : finalitmin ] and [ l : variational ] .looking carefully at our arguments , we see that we could in fact _ define _ as in and proceed with the rest of our paper . nevertheless , we prove both lemma [ l : finalitmin ] and lemma [ l : variational ] so that we can have a fairly complete understanding of the calculations involved in identifying how the rare events are most likely to form . 
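As a numerical companion to the variational discussion that follows, the sketch below checks, for a small discrete measure, that the exponentially tilted profile attains the constrained minimum of the averaged relative entropy over profiles with prescribed mean, by comparing it against a direct constrained optimization. The discrete measure, the target mean, and the explicit form of the tilt are illustrative assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import minimize, brentq

# an illustrative discrete measure v_bar and target mean alpha'
probs = np.array([0.02, 0.05, 0.10, 0.20])
w = np.array([0.4, 0.3, 0.2, 0.1])
alpha_p = 0.35

def hbar(b1, b2):
    return b1 * np.log(b1 / b2) + (1.0 - b1) * np.log((1.0 - b1) / (1.0 - b2))

def tilt(p, lam):
    return p * np.exp(lam) / (1.0 - p + p * np.exp(lam))

# candidate 1: the tilted profile, with lambda chosen so that the mean constraint holds
lam = brentq(lambda l: np.dot(w, tilt(probs, l)) - alpha_p, -50.0, 50.0)
entropy_tilt = np.dot(w, hbar(tilt(probs, lam), probs))

# candidate 2: brute-force constrained minimization over all profiles in (0,1)^4
res = minimize(lambda x: np.dot(w, hbar(x, probs)),
               x0=np.full(4, alpha_p),
               bounds=[(1e-6, 1.0 - 1e-6)] * 4,
               constraints=[{"type": "eq", "fun": lambda x: np.dot(w, x) - alpha_p}])
print(entropy_tilt, res.fun)   # the two values should agree to optimizer tolerance
```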
to begin our calculations , we first explore some regularity of the objects described in lemma [ l : finalitmin ] .define } : { \bar { v}}\in { \mathcal{g}}_{\alpha'}\rb \\ { \mathcal{s}}^{\text{strict}}&{\overset{\text{def}}{=}}{\left\ { } \newcommand{\rb}{\right\}}(\alpha',{\bar { v}})\in ( 0,1)\times { { \mathscr{p}}[0,1 ] } : { \bar { v}}\in { \mathcal{g}}^{\text{strict}}_{\alpha'}\rb .\end{aligned}\ ] ] also define }\phi(p,\lambda){\bar { v}}(dp)\ ] ] for all ]. then we have [ l : lambdacont ] for each , the solution of exists and is unique . if , then .thirdly , the map is continuous on as a map from } ] .remark [ r : phiprops ] ensures that is strictly increasing on ] , the continuity of ( again using remark [ r : phiprops ] ) and dominated convergence imply that is continuous on ] , then .otherwise , .let s next address continuity .we begin with some general comments which we will at the end organize in several ways .fix and in such that ( in the product topology ) .assume also that ] , so by portmanteau s theorem , \le { \bar { v}}[1-\delta,1]<\alpha'-\delta ] , so by portmanteau s theorem , \le { \bar { v}}[0,\delta]<1-\alpha'-\delta ] .note that for and all .[ r : hprops ] we have that for all and , and for all ] and and in .finally , remark [ r : phiprops ] implies that for and ] such that in the product topology . by definition of , we have that there is a such that since and are closed subsets of ] such that .then , and .since is open , we thus have that for all sufficiently large ; i.e. , for sufficiently large .hence is indeed open .we use lemma [ l : sopen ] to see that if is sufficiently large .we use lemmas [ l : lambdacont ] and [ l : ficont ] to get the convergence claims of . by assumption[ a : ig ] and [ a : nondegen ] , we get that there is an such that }p{\bar { u}^{(n)}}(dp)<\alpha\ ] ] for all .thus for , we have that ( use a calculation similar to ) }p{\bar { u}^{(n)}}(dp)<1-\alpha+\alpha<1;\ ] ] thus for , , so in fact we have the following string of inequalities : }p{\bar { u}^{(n)}}(dp)>{\bar { u}^{(n)}}\{1\}.\ ] ] thus for , .lemma [ l : ficont ] thus ensures that is continuous on for . remark [ r : phiprops ] implies that is nondecreasing in its second argument , so must also be nondecreasing on . remark [ r : hprops ] ensures that is also nondecreasing in its second argument , so we can now conclude that is nondecreasing on . 
to finally understand the sign of , note that }p{\bar { u}^{(n)}}(dp),{\bar { u}^{(n)}}\right),{\bar { u}^{(n)}}\right ) = \int_{p\in [ 0,1]}p{\bar { u}^{(n)}}(dp ) = { \mathbf{\phi}}(0,{\bar { u}^{(n)}});\ ] ] thus }p{\bar { u}^{(n)}}(dp),{\bar { u}^{(n)}}\right)=0 ] ( both of which are polish spaces ; see also lemma [ l : meas ] ) , there is a measurable map from ] .fix now such that for each ] .clearly } \phi(p){\bar { u}}(dp ) = \int_{\rho\in { { \mathscr{p}}(i ) } } m\tilde ( \rho)[0,t){u}(d\rho)=\alpha'.\ ] ] convexity of in the first argument thus implies that }{\left\ { } \newcommand{\rb}{\right\}}\int_{\rho\in { { \mathscr{p}}(i)}}\hbar(\tilde m(\rho)[0,t),p)\check { u}_p(d\rho)\rb { \bar { u}}(dp)\\ \ge \int_{p\in [ 0,1]}\hbar\left(\int_{\rho\in { { \mathscr{p}}(i)}}\hbar(\tilde m(\rho)[0,t),p)\check { u}_p(d\rho)\rb { \bar { u}}(dp ) = \int_{p\in [ 0,1]}\hbar(\phi(p),p){\bar { u}}(dp ) .\end{gathered}\ ] ] this directly leads to let s now prove the reverse inequality ; i.e , that fix ;[0,1]) ] .we can of course also assume that }\hbar(\phi(p),p){\bar { u}}(dp)<\infty.\ ] ] for every , define if , and define if .we first claim that for all . if , a direct calculation shows that this is in fact an equality . if , then .thus }\hbar(\phi(p),p){\bar { u}}(dp ) = \int_{\rho\in { { \mathscr{p}}(i)}}\hbar(\phi(\rho[0,t)),\rho[0,t)){\bar { u}}(dp ) \ge \int_{\rho\in { { \mathscr{p}}(i)}}h(m(\rho)|\rho){\bar { u}}(dp).\ ] ] fix next . by, we thus have that }\hbar(\phi(p),p){\bar { u}}(dp ) \ge \int_{\rho\in { { \mathscr{p}}(i)}}{\left\ { } \newcommand{\rb}{\right\}}\int_{t\in i}\psi(t)m(\rho)(dt ) - \ln \int_{t\in i}e^{\psi(t)}\rho(dt)\rb { u}(d\rho)\\ = \int_{t\in i}\psi(t)df_{{u}m^{-1}}(dt ) - \int_{\rho\in { { \mathscr{p}}(i)}}{\left\ { } \newcommand{\rb}{\right\}}\ln \int_{t\in i}e^{\psi(t)}\rho(dt)\rb { u}(d\rho ) . \end{gathered}\ ] ] note now that if , then . also , andimply that if , then , and if , then .thus thus }\phi(p){\bar { u}}(dp)=\alpha'.\ ] ] thus }\hbar(\phi(p),p){\bar { u}}(dp ) \ge\sup_{\psi\in c_b(i)}{\left\ { } \newcommand{\rb}{\right\}}\int_{t\in i}\psi(t)df_{{u}m^{-1}}(dt ) - \int_{\rho\in { { \mathscr{p}}(i)}}{\left\ { } \newcommand{\rb}{\right\}}\ln \int_{t\ini}e^{\psi(t)}\rho(dt)\rb { u}(d\rho)\rb\ ] ] and holds .let s now turn to showing that the minimization problem is indeed solved by as stated in lemma [ l : finalitmin ] .this will be a fairly involved proof .again , this is not essential to the paper .however , it is essential to understanding that does indeed give the optimal distribution of rare events leading to loss in the tranche ; i.e. , it explicitly solves .we note before starting that for and in , observe that has singularities at and .our first step is to solve when the singularities are more controlled .fix now } ] ( the support of is a compact subset of , and is continuous on \times ( 0,1) ] .clearly }|\phi^{({\varepsilon})}_n(p)|^2{\bar { v}}(dp)\le 1,\ ] ] so is in the unit ball in ] is reflexive , we know that there is a subsequence and a ] . for any ] . 
hence }\frac{\partial \hbar}{\partial \beta_1}(\phi^{({\varepsilon})}(p),p){\left\ { } \newcommand{\rb}{\right\}}\phi^{({\varepsilon})}_{n_k}(p)-\phi^{({\varepsilon})}(p)\rb { \bar { v}}(dp)=0,\ ] ] and so }\hbar(\phi^{({\varepsilon})}(p),p){\bar { v}}(dp).\ ] ] in combination with , this gives us the desired claim .note here that the minimizer may not be unique ; in particular , we can change any way we want outside of the support of and we will still have a minimizer .let s next study a bit more .define the ( ] and that -a.s . thus by dominated convergence , holds .in fact , we have now proved that holds for all such that holds .we finish the proof by arguments standard from the theory of lagrange multipliers .we see that there is a such that }{\left\ { } \newcommand{\rb}{\right\}}\frac{\partial \hbar}{\partial \beta_1}(\phi^{({\varepsilon})}(p),p)-\lambda^{\varepsilon}\rb \eta(p){\bar { v}}(dp)=0\ ] ] for all . from this an explicit computationcompletes the proof .let s now understand what happens at points where is either or . for convenience , define we have that and .we use an argument by contradiction to show that .assume that there is a sequence in such that for all and such that .for all , and so }\phi(p,\lambda_{{\varepsilon}_n}){\bar { v}}(dp ) \ge \varliminf_{n\to \infty}\{\alpha_{{\varepsilon}_n}-{\varepsilon}_n\ } \ge \inf_{{\varepsilon}\in ( 0,{\bar { \varepsilon}}_1)}\{\alpha_{\varepsilon}-{\varepsilon}\}>0.\ ] ] since , dominated convergence implies that }\phi(p,\lambda_{{\varepsilon}_n}){\bar { v}}(dp ) = { \bar { v}}\{1\}=0,\ ] ] which is a contradiction . thus . similarly , to show that , assume that there is a sequence such that for all and such that .then , so for all and so }{\left\ { } \newcommand{\rb}{\right\}}1-\phi(p,\lambda_{{\varepsilon}_n})\rb{\bar { v}}(dp ) \ge\varliminf_{n\to \infty}\{1-{\varepsilon}_n-\alpha_{{\varepsilon}_n}\ } \ge \inf_{{\varepsilon}\in ( 0,{\bar { \varepsilon}}_1)}\{1-\alpha_{\varepsilon}-{\varepsilon}\}>0.\ ] ] since here , we now have that }{\left\ { } \newcommand{\rb}{\right\}}1-\phi(p,\lambda_{{\varepsilon}_n})\rb{\bar { v}}(dp ) = { \bar { v}}\{0\}=0.\ ] ] again we have a contradiction , implying that indeed .we next disallow some degeneracies .[ l : nosmallsets ] there is an such that and if .we start with the fact that }\phi^{({\varepsilon})}(p){\bar { v}}(dp ) = { \varepsilon}{\bar { v}}(a_{\varepsilon } ) + ( 1-{\varepsilon } ) { \bar { v}}(c_{\varepsilon } ) + \int_{p\in b_{\varepsilon}}\phi(p,\lambda_{\varepsilon}){\bar { v}}(dp).\ ] ] since , we have thus for , which gives us what we want .let now be such that ] such that , , and . 
set then for , , and positive and sufficiently small , , so }\hbar(\phi^{({\varepsilon})}(p)+\nu_1 \eta_1(p)+\nu_2\eta_2(p)+\nu_3\eta_3(p),p){\bar { v}}(dp)\ge \int_{p\in [ 0,1]}\hbar(\phi^{({\varepsilon})}(p),p){\bar { v}}(dp).\ ] ] differentiating with respect to , and , we conclude that }\frac{\partial \hbar}{\partial \beta_1}(\phi^{({\varepsilon})}(p),p)\eta_1(p){\bar { v}}(dp)\ge 0 , \quad \int_{p\in [ 0,1]}\frac{\partial \hbar}{\partial \beta_1}(\phi^{({\varepsilon})}(p),p)\eta_2(p){\bar { v}}(dp)\ge 0\\ \int_{p\in [ 0,1]}\frac{\partial \hbar}{\partial \beta_1}(\phi^{({\varepsilon})}(p),p)\eta_3(p){\bar { v}}(dp)\ge 0.\end{gathered}\ ] ] in other words , letting , we see that these inequalities hold for any sets , , and in ] such that }\phi(p){\bar { v}}(dp)=\alpha ' \qquad \text{and}\qquad \int_{p\in [ 0,1]}\hbar(\phi(p),p){\bar { v}}(dp)<{\mathfrak{i}}(\alpha',{\bar { v}})+\delta.\ ] ] for each , define }\phi_{\varepsilon}(p){\bar { v}}(dp ) .\end{aligned}\ ] ] note that .thus , by dominated convergence }\hbar(\phi_{\varepsilon}(p),p){\bar { v}}(dp)=\int_{p\in [ 0,1]}\hbar(\phi(p),p){\bar { v}}(dp ) \qquad \text{and}\qquad \lim_{{\varepsilon}\to 0}\alpha'_{\varepsilon}= \alpha'.\ ] ] by the first of these equalities , we see that there is an such that }\hbar(\phi_{\varepsilon}(p),p){\bar { v}}(dp ) < { \mathfrak{i}}(\alpha',{\bar { v}})+2\delta\ ] ] for all .thus for , }\hbar(\phi_{\varepsilon}(p),p){\bar { v}}(dp ) \ge { \mathfrak{i}}_{\varepsilon}= { \mathfrak{i}}^*(\alpha'_{\varepsilon},{\bar { v}}).\ ] ] we have of course used here corollary [ c : correctmin ] to get the last equality , and we use to define the approximation sequence for .take now and use the continuity result of lemma [ l : ficont ] ( note that and the s are all in ) .then let and conclude that .summarizing thus far our work since , we now know that if .we now want to relax the restriction that .[ l : extremalsnoboundary ] we have that for all and } ] such that }\phi(p){\bar { v}}(dp)=\alpha ' \qquad \text{and}\qquad \int_{p\in [ 0,1]}\hbar(\phi(p),p){\bar { v}}(dp)<{\mathfrak{i}}(\alpha',{\bar { v}})+\delta.\ ] ] since , there is a such that >0 ] ; as in the proof of lemma [ l : extremalscompactsupport ] , and the s are also all in .we have that .then let .thirdly , we want to allow to assign nonzero measure to . before proceeding with this calculation ,let s next simplify a bit .namely , we remove from the admissible set of ] is obviously infinite .thus if , we can restrict the admissible ) ] to those with , and for such , we have that again , both of these equations also hold if . combining our thoughts , we have that ) , \int_{p\in(0,1)}\phi(p){\bar { v}}(dp)=\alpha'-{\bar { v}}\{1\}\rb\ ] ] [ l : edgesupport ] we have that for all and .assume first that .then , and we define } ] is such that then thus and , so in fact . in other words ,if is not in , then the admissible set of s in is empty , implying that .the continuity of and follows directly from lemmas [ l : lambdacont ] and [ l : ficont ] .we here prove some of the really technical measurability results which we have used .this is essentially for the sake of completeness .we start with an obvious comment . for future reference , let s next define for all and .then and are in , and }\le \psi_{t , m}^+\ ] ] and pointwise on we have ( as ) and } ] ( by mapping to $ ] , it is sufficient by carathodory s extension theorem to see that this defines a measure on a semialgebra which generates ; see ( * ? ? 
?* section 12.2 ) ) .standard approximation results ( viz. , approximate by indicators ) then imply .the right - hand side of uniquely defines .finally , by remark [ r : continuity ] , we can easily see that if in , then for any , thus the map is continuous ( and thus measurable ) .moody s public finance credit committee . the u.s .municipal bond rating scale : mapping to the global rating scale and assigning global scale ratings to municipal obligations .technical report , moody s investor services , 2007 .huyn pham .some applications and methods of large deviations in finance and insurance . in _paris - princeton lectures on mathematical finance 2004 _ , volume 1919 of _ lecture notes in math ._ , pages 191244 .springer , berlin , 2007 . | we use the theory of large deviations to study the pricing of investment - grade tranches of synthetic cdo s . in this paper , we consider a heterogeneous pool of names . our main tool is a large - deviations analysis which allows us to precisely study the behavior of a large amount of idiosyncratic randomness . our calculations allow a fairly general treatment of correlation . |
in this paper, we consider a simple problem, namely the solution of the scalar wave equation subject to homogeneous initial conditions in the exterior of the unit sphere. here, $(r,\theta,\phi)$ denote the spherical coordinates of a point in $\mathbb{R}^3$, with $r \ge 1$. standard textbooks on mathematical physics (such as ) present exact solutions for the time-harmonic cases governed by the helmholtz equation, but generally fail to discuss the difficulties associated with the fully time-dependent case. as we shall see, it is a nontrivial matter to develop closed-form solutions, and a surprisingly subtle matter to develop solutions that can be computed without catastrophic cancellation. in this paper, we restrict our attention to boundary value problems with dirichlet or robin conditions. we consider the dirichlet problem first, and assume we are given data on the boundary of the unit sphere of the form . it is natural to begin by expanding both and in terms of spherical harmonics , where is the standard legendre polynomial of degree , and the associated legendre functions are defined by the rodrigues formula . we let and denote the laplace transforms of and . it is straightforward to see that satisfies the linear second-order ordinary differential equation (ode) $\left[ \frac{d^2}{dr^2} + \frac{2}{r}\frac{d}{dr} - s^2 - \frac{n(n+1)}{r^2} \right] \hat{u}_{nm}(r,s) = 0$, for which the decaying solution as is the modified spherical hankel function . it follows that . matching boundary data on the unit sphere, we have , and the remaining difficulty is that we have an explicit solution in the laplace transform domain, but we seek the solution in the time domain. for this, we write the right-hand side of in a form for which the inverse laplace transform can be carried out analytically. first, from , we have , where ( ) are the simple roots of lying in the open left half of the complex plane. thus , where the second equality follows from an expansion using partial fractions and the coefficients are given, via the residue theorem, by the formula . substituting into , we obtain ; taking the inverse laplace transform of both sides, we have . this involves the use of the convolution theorem and the formulas and , where is the heaviside function. wilcox studied the solution of the scalar wave equation and derived formula in 1959. in that short note, wilcox stated that the coefficients given by grew slowly, based on the claim that as . unfortunately, this estimate is incorrect. in fact, even after multiplication by the exponentially decaying factor , the coefficients ( ) grow exponentially fast as . in the next section, we explain this growth in detail.
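to make the poles concrete before discussing that growth: up to the factor $e^{-s}/s$, the modified spherical hankel function of order n is a polynomial of degree n in $1/s$ (this is the standard closed form for the modified spherical bessel function of the second kind), so its n zeros can be obtained as ordinary polynomial roots. the sketch below does exactly that; the helper names are ours, and for large n one would instead use the asymptotic initial guesses plus newton refinement described in section [sec:num].

```python
import numpy as np
from math import factorial

def kn_pole_polynomial(n):
    # coefficients, highest power of s first, obtained by multiplying the closed form
    #   k_n(s) = (pi / (2 s)) e^{-s} sum_{m=0}^{n} (n+m)! / (m! (n-m)! (2 s)^m)
    # through by (2 s)^n; the n roots of this polynomial are the poles s_j
    return [factorial(n + m) * 2 ** (n - m) // (factorial(m) * factorial(n - m))
            for m in range(n + 1)]

def kn_poles(n):
    # the n simple zeros; they should all satisfy re(s) < 0, with |s| growing
    # roughly in proportion to n
    c = np.array(kn_pole_polynomial(n), dtype=float)
    return np.roots(c / c.max())      # rescaling only keeps the numbers tame

for n in (5, 10, 20):
    s = kn_poles(n)
    print(n, s.real.max(), np.abs(s).max() / n)
```

for modest n the companion-matrix root finder used by numpy.roots is adequate; its accuracy degrades as n grows, which is precisely where the asymptotic expansions of lemma [lem6.1] become useful.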
as a result, even though is very convenient for the purpose of theoretical studies, it cannot be used for numerical calculation because of catastrophic cancellation in carrying out the summation. benedict, field and lau have recently developed algorithms for compressing the kernel, which they call the teleportation kernel, arising in sphere-to-sphere propagation of data, both for the standard wave equation and for wave equations arising in linearized gravitational theories. for the wave equation their compressed kernels can be used to perform the same function as our solution of the dirichlet problem. the largest value of considered in is . it is as yet unclear whether useful compressions for much larger values of can be constructed using their methods. we first show that the coefficients ( ) defined in grow exponentially as , for fixed large . indeed, lemma [lem6.1] in section [sec:num] shows that the zeros of satisfy the estimates for all and . thus, when is large, we have , where the last line follows from stirling's formula. we have computed for using , and plotted them in figure [fig2.1] for , clearly exhibiting the exponential growth of . we also plot as a function of for a fixed value of in fig. [fig2.2]. from the preceding analysis, it is clear that one cannot use as stated, since the desired solution is and catastrophic cancellation will occur in computing from exponentially large intermediate quantities. fortunately, even though grows exponentially as increases, we can rewrite in the form of a convolution, which involves much more benign growth: , where the convolution kernel is defined by the formula . if we write , then from we have , where is the modified bessel function of the second kind. the last expression follows from the fact that . the convolution kernel and its laplace transform are plotted in figs. [fig2.3] and [fig2.4], respectively (the left-hand plot shows the kernel as a function of ; the right-hand plot shows its transform, with the red, lower curve its real part and the blue, upper curve its imaginary part). the following lemma shows that the convolution kernel grows only quadratically as a function of at . numerical experiments (see fig. [fig2.3]) suggest that is maximal in magnitude at . thus, while the sum-of-exponentials expression ([eq2.14]) involves catastrophic cancellation, the function is, itself, well behaved, and we may seek an alternative method for the evaluation of the convolution integral. let . then, by the initial value theorem for the laplace transform, . the first equality in follows from . from (formula 9.7.2 on page 378), we have the asymptotic expansion , where . substituting and into , we obtain . the result now follows from the fact that . despite the fact that grows exponentially with , shows that the _sum of weights_ is only for fixed . still, however, the formula cannot be used in practice because of catastrophic cancellation in the summation; thus, we will need a different representation for the convolution operator which is suitable for numerical computation.
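the failure mode just described, a modest and well-behaved quantity written as a sum of exponentially large terms of both signs, is the same one seen in the textbook example of evaluating $e^{-x}$ from its taylor series for large x. the toy sketch below (which does not involve the wave-equation kernel itself) makes the loss of all significant digits explicit, together with the trivial rearrangement that cures it in this simple setting.

```python
import math

def exp_by_series(x, terms=200):
    # naive partial sum of sum_k x^k / k!  -- fine for x >= 0, disastrous for
    # large negative x because huge terms of alternating sign must cancel
    s, t = 0.0, 1.0
    for k in range(1, terms + 1):
        s += t
        t *= x / k
    return s

x = -30.0
naive = exp_by_series(x)           # ruined by cancellation in double precision
stable = 1.0 / exp_by_series(-x)   # rearranged so every summand is positive
print(naive, stable, math.exp(x))
```

for the convolution kernel above no such elementary rearrangement is available, which is what motivates the stable recursive formulation developed next.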
to obtain a stable formula , we note first that we may rewrite in the form : we then use to express as we can , therefore , compute recursively : and , finally , numerical experiments indicate that the above recursion is stable if the zeros of are arranged in ascending order according to their real parts , i.e. , is closest to the negative real axis and is closest to the imaginary axis .alternatively , it is easy to show that the functions ( ) are the solutions to the following first order system of ordinary differential equations ( odes ) with zero initial conditions . where is a column vector of length with the entry being , , are constant matrices defined by the formulas and is a column vector of length whose only nonzero entry is .the ode system can actually be solved analytically .that is , one may multiply both sides of by to obtain where .it is clear that is a constant lower triangular matrix .one could then diagonalize the system using the eigen - decomposition .this , however , is numerically unstable since is a highly _ nonnormal _ matrix .thus , even though the condition number of is not very high ( numerical evidence shows that ) , is extremely ill - conditioned .in fact , more detailed analysis shows that this approach leads exactly to the formula .nevertherless , the ode system itself can be solved numerically using standard ode packages , albeit less efficiently than the explicit recursive approach we present in section [ sec : num ] , especially for high precision .in this section , we consider the robin problem for the scalar wave equation on the unit sphere : with homogeneous initial data and the boundary condition it should be noted that tokita extended wilcox s analysis of the dirichlet problem to the case of robin boundary conditions of the form , although he assumed that in his discussion .we are primarily concerned with the case since it arises in the solution of the full maxwell equations . as in the analysis of the dirichlet problem , we first expand and in terms of spherical harmonics , perform the laplace transform in , match the boundary data and obtain and we turn now to a study the properties of the kernel in , letting and recalling from [ eq2.8 ] that , we have hence , in particular , for , we have obviously , the poles of are simply the zeros of .those zeros have been characterized by tokita in the following lemma .[ lem3.1 ] _ [ adapted from . ]_ for , has simple roots denoted by .all the roots lie in the open left half of the complex plane symmetrically with respect to the real axis .furthermore , they satisfy the following estimates .hence , there exists a positive number such that for all and . 
from the preceding lemma , for we have could carry out a partial fraction expansion for the right hand side of to obtain where the coefficients are given by the formula this would yield unfortunately , the coefficients ( ) behave as badly as the coefficients defined in for the dirichlet problem .that is , catastrophic cancellation in makes it ill - suited for numerical computation .fortunately , as in section [ sec : recurrence ] , we can compute without catastrophic cancellation using the following recurrence ( ) : with we leave the derivation of the recurrence to the reader .it is possible to write down a system of odes that is equivalent to .we omit details since the derivation is straightforward and we prefer the recurrence for numerical purposes in any case .in order to carry out the recurrences or , we first need to compute to compute the zeros of and .the following lemma provides asymptotic approximations of the zeros of these two functions , which we will use as initial guesses followed by a simple newton iteration . in practice, we have found that six newton steps are sufficient to achieve double precision accuracy for .[ lem6.1 ] _( asymptotic distribution of the zeros of and , adapted from ) ; see also the appendix . _ 1 .the zeros of have the following asymptotic expansion uniformly in , where is defined by the formula is the negative zero of the airy function whose asymptotic expansion is given by the formula and is obtained from inverting the equation where the branch is chosen so that is real when is positive imaginary . in other words , lies on the curve whose parametric equation is where ] . for this , we interpolate by a polynomial of degree with the shifted and scaled legendre nodes as interpolation nodes. that is , where ( ) are the standard legendre nodes on ] into equispaced subintervals ( yielding a total of discretization points in time ) . in tables[ tab8.2 ] and [ tab8.4 ] , we use terms in the spherical harmonic expansions .these tables show that numerical solution converges spectrally fast to the exact solution ..relative error of the numerical solution of the dirichlet problem with increasing spherical harmonic expansion order . is the total number of discretization on the unit sphere .since the discretization error is usually greater than the truncation error , is chosen to be .thus .the total number of discretization points in time is . [cols="^,^,^,^,^,^,^",options="header " , ] .the exact solution is of the same form as ( [ uexact ] ) - that is , induced by two sources in the interior of the unit sphere . [ fig8.2],height=192 ] for a true " scattering problem .dirichlet boundary conditions are generated by two exterior sources placed on the -axis , at and .the left - hand plot shows the value of the boundary data at the north pole of the unit sphere as a function of time , and the right - hand plot shows the solution at the north pole of the outer sphere of radius .[ fig8.3],height=192 ] for a true " scattering problem .robin boundary conditions are generated by two exterior sources placed on the -axis , at and .the left - hand plot shows the value of the boundary data at the north pole of the unit sphere as a function of time , and the right - hand plot shows the solution at the north pole of the outer sphere of radius .[ fig8.4],height=192 ] -plane within the annular region at . 
with boundary data as in fig .[ fig8.3 ] .note that the domain is approximately 50 wavelengths in size .[ fig8.5],height=192 ]we have presented an analytic solution for the scalar wave equation in the exterior of a sphere in a form that is numerically tractable and permits high order accuracy even for objects many wavelengths in size . aside from its intrinsic interest in single or multiple scattering from a collection of spheres , our algorithm provides a useful reference solution for any numerical method designed to solve problems of exterior scattering . at the present time , such codes are typically tested by fourier transformation after a long - time simulation and comparison with a set of single frequency solutions computed by separation of variables applied to the helmholtz equation . an exception is the work of sauter and veit , who make use of a formulation equivalent to that of wilcox to develop a benchmark solution for a time - domain integral equation solver which can be applied to scattering from general geometries .exponential ill - conditioning is avoided by considering only low - order spherical harmonic expansions .recently , grote and sim have also used an approach based on the local exact radiation boundary conditions proposed in to develop a new hybrid asymptotic / finite difference formalism for multiple scattering in the time domain .the advantage of the grote - sim method is that spherical harmonic transformations are unnecessary and the evaluation formulas can be localized in angle .however , they also restrict their attention to low - order expansions , and our preliminary experiments using their formulas indicate a loss of conditioning for large .( the loss of conditioning presumably also applies to the radiation boundary conditions in . )the method developed here should be of immediate use in both contexts as implemented above , our algorithm has complexity .it is possible , however , to reduce the cost to .this requires the use of a fast spherical harmonic transform ( see , for example , and references therein ) . with this fast algorithm ,the cost of each spherical harmonic transform is reduced from to .second , we believe that the convolution kernels can be compressed as in , so that they involve only modes for each for a given precision . we note that compressions for and various radii are reported in , both for the scalar wave equation considered here ( which they call the flat - space wave equation ) and for wave equations with zerilli and regge - wheeler potentials . in the latter cases ,compressed kernels are also constructed for smaller values of , as the exact kernels do not have rational transforms .tabulated coefficients required for implementing the compressed kernels may be found online .for the extension of the present approach to the full maxwell equations , see .software implementing the algorithm of the present paper will be made available upon request .an alternative analysis of the instability phenomenon can be carried out using the uniform asymptotic expansions of the bessel functons due to olver .we first recall the relationship between and the hankel function , : thus the residues we wish to estimate are given by to approximate these for we use ( see ) : which hold uniformly in ; thus in particular they hold in where we will be using them . here is given by ( [ eq6.4 ] ) with the replacement .i. 
: : has infinitely many zeros which lie on the negative real axis .for large the jth zero , , of satisfies ( [ eq6.3 ] ) and the derivative satisfies ii .: : for the function satisfies the asymptotic formula using ( i. ) we deduce that the poles , , are asymptotically given by ( [ eq6.1 ] ) and approximately lie on the curve where is defined in ( [ eq6.5 ] ) .this is the curve for which is real and negative . to evaluate the residues we must calculate using ( [ ash]),([asdh ] ) where we have introduced obviously the scaling moves off the curve where the argument of the airy function is real and negative . thus using ( [ aiasy ] ) and ( [ eq6.4 ] ) we deduce that the asymptotic formula for contains an exponential term \nonumber\end{aligned}\ ] ] where here we have introduced . finally , we consider the real part of the expression in parentheses on the second line of ( [ resexp ] ) .in particular we replace by a continuous variable traversing the scaled curve , , containing the approximate zeros .then the function depends only on and the coordinate describing the curve ; in particular it is independent of and . in fig .[ resasy2 ] we plot the real part of scaled by for .this can be compared with fig .[ fig2.2 ] by scaling both axes by and recognizing the vertical axis as the base ten logarithm .we then observe good agreement with the numerical results .the maximum value plotted in figure [ resasy2 ] is approximately , which is the predicted slope of the straight line plotted in fig .[ fig2.1 ] .again the agreement is good .we note that increasing makes the problem somewhat worse ; the scaled maximum real part is approximately for and for . | we derive new , explicit representations for the solution to the scalar wave equation in the exterior of a sphere , subject to either dirichlet or robin boundary conditions . our formula leads to a stable and high - order numerical scheme that permits the evaluation of the solution at an arbitrary target , without the use of a spatial grid and without numerical dispersion error . in the process , we correct some errors in the analytic literature concerning the asymptotic behavior of the logarithmic derivative of the spherical modified hankel function . we illustrate the performance of the method with several numerical examples . 65m70 , 78a40 , 78m16 |
the authors of the latest variant of the known method for determining the level density and radiative strength functions from the total gamma-spectra of reactions (and , respectively) state that it is practically impossible to determine the systematical errors of the parameter values they obtain. one can assume that this statement concerns not only the axel-brink hypothesis but also quite ordinary systematical errors in the determination of the absolute intensities of the total gamma-spectra. that is, the reliability of all the data obtained by them remains undetermined. consequently, all the conclusions drawn by the authors about the parameters of cascade gamma-decay may be entirely mistaken, owing to the rather large coefficients with which the experimental errors of the measured total gamma-spectrum intensity are transferred onto the required functions, i.e., the level density and the radiative strength function. this increase in the transferred error is caused mainly by the fact that any "first generation spectrum" in the region of low gamma-quanta energies is a small difference of two large values (one of which, moreover, cannot be determined experimentally even in principle). the errors of the fitted function and of its parameters in the vicinity of their most probable values are connected by the approximate matrix equation (1), in which is the matrix of derivatives of the nonlinear function with respect to its parameters. matrix equation (1) can be solved on any modern computer. a program for calculating the jacobian (the matrix of derivatives) in analytic form for the system of nonlinear equations solved in was prepared and tested in . the only practical difficulty with this program is caused by a single circumstance: over the interval up to the neutron binding energy the level density changes by 4-5 orders of magnitude, so the specifics of computer arithmetic noticeably influence the results of the calculation. the presence of very significant off-diagonal elements in the characteristic matrix of the likelihood function for the system of equations solved in causes its degeneration for any possible weight matrix, even when additional data on the density of neutron resonances and on low-lying levels are used and the values of the total radiative widths of the neutron resonances are fixed. therefore, the desired level-density and strength-function parameters and their errors have a multitude of equally probable values. this circumstance was not pointed out in any of the papers performed with the use of the method . the authors of postulated that the error in the level density derived from the spectra of evaporated nucleons does not depend on the error in the level density obtained with the help of ; they did not take into account the possibility of a strong correlation between the systematical errors of these determinations. such a correlation arises, for example, from the use of the same type of hypothesis (axel-brink for predicting the partial widths of gamma-quanta emission, and bohr-mottelson for the nucleon products of nuclear reactions). consequently, a multitude of equally probable vectors must be found from the solution of equation (1) for the matrices corresponding to the different equally probable parameter vectors. the width of the interval of possible values of the vector elements can then be considered as the measure of the experimental errors of the level density and radiative strength functions. the vector needed for this operation (or an upper estimate of it) is determined in any experiment only by an analysis of the factors that introduce systematical distortions into the measured spectra. its correct estimation can be done only by the authors of the "oslo method" themselves, by means of corresponding calculations and additional experiments.
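for orientation, the generic first-order error propagation that equation (1) expresses can be sketched as follows; the jacobian f, the weight matrix w and the toy numbers are our own illustration and not the notation or data of the works discussed here. the point of the example is only that a nearly degenerate normal matrix, of the kind described above, makes the propagated parameter errors arbitrarily large.

```python
import numpy as np

def parameter_covariance(F, W):
    # first-order propagation for a least-squares fit: if the model spectrum is
    # linearized as S(p0) + F dp, with data weight matrix W (inverse covariance
    # of the measured spectrum), then cov(p) ~= (F^T W F)^{-1}; a tiny eigenvalue
    # of F^T W F signals the near-degeneracy discussed above
    A = F.T @ W @ F
    return np.linalg.pinv(A), np.linalg.eigvalsh(A)

# toy example: two nearly collinear columns, i.e. strongly correlated parameters
rng = np.random.default_rng(0)
x = rng.normal(size=50)
F = np.column_stack([x, x + 1e-3 * rng.normal(size=50)])
W = np.eye(50)                         # unit weights, for illustration only
cov, evals = parameter_covariance(F, W)
print(np.sqrt(np.diag(cov)))           # parameter errors blow up
print(evals)                           # one eigenvalue is close to zero
```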
for the present, since no such estimate of the spectrum-error vector is available, equation (1) cannot be used to determine the region of possible values of the parameter vector. therefore, a qualitative estimate of the expected systematical errors of the level density and radiative strength functions can be made only in another way: by comparing the difference of the total gamma-spectra intensities at the decay of levels from any spin window at an arbitrary excitation energy with the difference of the functional dependencies of the level density and strength functions for pairs of the different model or experimental data sets used in the calculation. naturally, this entails a loss of sensitivity to all the distortions introduced at intermediate steps. the latter concerns not only the procedure for determining the "first generation spectra" but also the use of the mistaken (as was first pointed out in ) axel-brink hypothesis and, to a very high degree, the method used for normalizing the total spectra at an arbitrary energy to the same number of decays. therefore, all the conclusions obtained below give a notion only of the largest permissible value (when using ) of the systematical errors in . the considerations presented above allow one to formulate the main conditions and criteria for solving the problem of a partial evaluation of the coefficients with which the errors of the measured total gamma-spectra are transferred onto the values of the level density and radiative strength functions derived from them. first of all, there must be a nucleus whose parameters and were determined by two independent and genuinely different methods in an excitation energy interval maximally close to the neutron binding energy. it is also desirable that the level-density and strength-function values in one of the analyzed sets have the minimum possible uncertainty, estimated by independent and traditional methods of error determination; they must be extracted from experiment without the use of untested hypotheses (such as ). it is also desirable to have the maximum variety of types of experimental data for the functional dependencies of and on the excitation and gamma-quantum energies. these requirements are satisfied, for example, in the compound nuclei and . the total gamma-spectrum at the maximum excitation energy is (up to the different population of initial levels with different ) a superposition of the "first generation spectra" for this and all lower excitation energies. that is why one can expect that the effect, estimated below, of the total gamma-spectra systematic errors on the uncertainties of the desired level-density and strength-function values has no principal differences relative to the case . the total gamma-spectrum following the decay of levels from a narrow interval at any excitation energy bin of the nucleus under study can be normalized either to a given number of decays of the initial levels or to the total cascade energy. in the first case, the systematical error is an algebraic sum of the errors in determining the form of the measured spectra at different gamma-quanta energies and the inevitable error of the absolute normalization.
in the second case, the normalization is to the total cascade energy (eq. (2)); all the systematical errors are then minimal, have a sign-changing dependence on , and in sum are equal to the error . the use in the calculation of different functional dependences for the level density and the radiative strength functions leads (under the normalization (2)) to an analogous effect. its value can be characterized by the parameter for any pair of "standard" and "tested" total gamma-spectra and , respectively; a schematic implementation of this comparison is sketched below. it is postulated here that any sets of and can be obtained with equal probability from an analysis like that in , provided that the mean-square difference of the total gamma-spectra calculated with them is approximately the same and equal to the root-mean-square error of the experimental spectrum. correspondingly, a reliable determination of the experimental values of the desired parameters and requires that the mean-square error of the experimental spectra be much smaller than the mean-square difference of the two corresponding calculated spectra. the existing differences between the and values give a notion of the real magnitudes of their systematical errors. in practice, the extraction of the and values from is performed in several steps, and each step brings an additional increase in the systematical errors of the determined values. this circumstance must be taken into account when estimating the precision required in the determination of . a comparison between the data on and obtained in oslo within the method and the dubna data allows us to reveal their characteristic peculiarities and to determine the minimal permissible systematical error which guarantees reliable identification of the minimal difference between the tested parameters. this difference is determined, first of all, by the presence in the dubna data of the step-like structure in the level density and of the peak, correlated with it in position, in the radiative strength functions. the oslo data on level densities are very close to the existing primitive models (for example, ); unlike the dubna data, they do not show an abrupt change in nuclear properties below the neutron binding energy. in practice, the calculations used the models of radiative strength functions for -transitions, the model for the level density, the results of the approximation of the dubna parameters of cascade gamma-decay, and the experimental data on and for obtained in oslo for reactions induced by . the comparison between the results of the calculation for seven combinations of these level-density and strength-function sets (numbered 1-7 and referred to as such in the table below) was performed for the energy of the initial level and for its two possible parities and the spins excited at thermal neutron capture.
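as a schematic of the comparison just described: both spectra are first brought to the normalization (2), i.e., rescaled to carry the same total cascade energy, and a single number then summarizes their deviation between the low and high thresholds. since the exact definition of the comparison parameter is not reproduced in the text above, the root-mean-square relative deviation used below should be read as an assumption made for illustration only.

```python
import numpy as np

def normalize_to_cascade_energy(e_gamma, spectrum, e_total):
    # rescale a spectrum so that sum_i E_i * S(E_i) equals the total cascade
    # energy, i.e. the normalization labelled (2) in the text
    return spectrum * e_total / np.sum(e_gamma * spectrum)

def rms_relative_difference(e_gamma, s_ref, s_test, e_lo, e_hi):
    # ASSUMED form of the comparison parameter: root-mean-square of the relative
    # deviation between two normalized spectra, restricted to [e_lo, e_hi] to
    # suppress the low-intensity tails
    mask = (e_gamma >= e_lo) & (e_gamma <= e_hi)
    rel = (s_test[mask] - s_ref[mask]) / s_ref[mask]
    return np.sqrt(np.mean(rel ** 2))

# usage sketch with synthetic spectra on a 10 keV grid up to 7 MeV
e = np.arange(0.01, 7.0, 0.01)
s1 = np.exp(-e) * e                    # stand-ins for two calculated total spectra
s2 = np.exp(-1.05 * e) * e
s1 = normalize_to_cascade_energy(e, s1, e_total=7.0)
s2 = normalize_to_cascade_energy(e, s2, e_total=7.0)
print(rms_relative_difference(e, s1, s2, e_lo=0.5, e_hi=7.0))
```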
below mev for and 1.85 mev for , the calculation used experimental information on the decay modes and parameters of the known levels of these nuclei. the parameters of the model were chosen from those obtained in so as to have the smallest discrepancy with . information on the energy resolution of the scintillation detectors was included in the calculation as well. it was assumed in all calculations that the level densities of different parity above the energy are equal; only dipole transitions were taken into account. in the calculation of the parameter for the different strength functions, the sum of was re-normalized so that the total radiative width of the decaying level was the same in all variants of the calculation. all the total gamma-spectra were normalized to the energy (in correspondence with (2)). the total area of the calculated spectrum under the normalization (2) to the cascade energy is obviously the same for any set of and ; therefore, all the calculated variants of the spectra differ from each other only in the shape of the dependence . it is precisely this circumstance that decreases the sensitivity of a method like compared with the extraction of and from the two-step cascade intensities. this conclusion is true only for the data on and obtained in dubna. the results of the cascade intensity analysis performed in prague (and by other groups in the same manner) are not reliable owing to three principal mistakes in the corresponding method, as was shown in . naturally, the forms of the calculated spectra also depend on the ratio between the excitation probabilities of initial levels with different , on the difference between the level densities of different parity, on the ratio between the strength functions of dipole gamma-transitions of different type, and so on. but, judging by the data listed below, limiting the variation of the calculation parameters to the variants enumerated above cannot radically change the conclusions drawn below on the precision required in the determination of . in the calculation of the concrete values of the parameter , low and high thresholds for were introduced; they were equal to 0.5 and 7.0 mev for molybdenum and to 0.5 and 6.0 mev for ytterbium. this was done to reduce the contribution of the low-intensity parts of the spectra to its value. the seven obtained variants of the calculated total gamma-spectra for these isotopes are shown in fig. 1, and their parts corresponding to the spectra of only the primary gamma-transitions are presented in fig. 2 on the same scale. it is natural that the ratio between the intensities of the spectra presented in figures 1 and 2 practically does not depend on the method of normalization used. the level densities used in the calculation (those excited by the primary dipole gamma-transitions following thermal neutron capture in the nuclei under study) are shown in fig. 3, and the strength functions in fig. 4. a schematic illustration of how such total spectra follow from a given level density and strength function is given below.
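the sketch below is a deliberately crude monte carlo caricature of how a total gamma-spectrum follows from a chosen pair of level density and radiative strength function: dipole transitions only, no spin or parity bookkeeping, no discrete-level information, and toy functional forms throughout. it is not the calculation actually performed above; it only illustrates how the spectrum shape depends on the assumed rho and k.

```python
import numpy as np

rng = np.random.default_rng(1)

def rho(e_x):
    # toy level density (illustration only)
    return np.exp(2.0 * np.sqrt(np.maximum(e_x, 0.0) + 0.1))

def k_strength(e_g):
    # toy dipole radiative strength function (illustration only)
    return 1.0 / (1.0 + ((e_g - 7.0) / 3.0) ** 2)

def simulate_total_spectrum(b_n=6.0, n_cascades=20_000, de=0.05):
    # starting from the capture state at b_n, repeatedly draw a dipole gamma
    # with statistical-model weight k(Eg) * Eg^3 * rho(E - Eg) until the
    # excitation energy is exhausted; histogram all emitted quanta
    grid = np.arange(de, b_n + de, de)
    hist = np.zeros_like(grid)
    for _ in range(n_cascades):
        e = b_n
        while e > de:
            eg = np.arange(de, e + 1e-9, de)
            w = k_strength(eg) * eg ** 3 * rho(e - eg)
            eg_pick = rng.choice(eg, p=w / w.sum())
            hist[int(round(eg_pick / de)) - 1] += 1
            e -= eg_pick
    return grid, hist / n_cascades     # mean gamma multiplicity per energy bin

grid, spectrum = simulate_total_spectrum()
```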
the ratio of the intensity of the calculated total gamma-spectra to the intensity of only their primary transitions is presented in fig. 5. modern nuclear models take into account the influence of the structure of the levels connected by a gamma-transition on its probability; that is why there is no need to use the axel-brink hypothesis in modern theory. however, the precision of any theoretical calculation is limited by the obviously insufficient accuracy of the experimental data on which the development and parameterization of models like the qpnm were based. an unambiguous conclusion about the existence of a strong dependence was first obtained from the experiment . unfortunately, for methodical reasons, information on this function is limited to the region . the degree to which the axel-brink hypothesis is mistaken at high excitation energy can be estimated only by the experimental study of cascades of three and more successively emitted gamma-quanta; a corresponding experiment was realized by the authors of . in spite of the limited character of the data, they allow one to expect a very strong or even complete compensation of the decrease of the level density (relative to its general trend) at any excitation energy by an increase of the widths of the gamma-transitions proceeding directly to the corresponding levels. it was shown in that an analogous effect should also be taken into account in the analysis of the spectra of evaporated nucleons. there is no principal difficulty in calculating the total gamma-spectrum under the assumption of a complete compensation of the decrease in level density by a simultaneous increase in the strength functions. in the limit of such a complete compensation of the deviations of the gamma-decay parameters from the general trend, the calculation of the total gamma-spectrum with or without the hypothesis must give the same total gamma-spectrum. in other words, a method of the type is weakly sensitive, or completely insensitive, to the shape of the function . however, the cascade population of any level at any excitation energy depends on the cross-section of the gamma-quantum interaction with the excited nucleus and is not compensated by a change in the level density. therefore, the analysis of cascade intensities allows one to study the real dependence of the strength functions on both the gamma-quantum energy and the structure of the nucleus; a method like has no such possibility. the results of the comparison of the mean-square relative differences of the functions under consideration are listed in the table. the parameters for all combinations were determined with the and values of the first variant taken as the "base"; this combination is the one most often used in different calculations. _table. the mean-square differences of the intensities of the calculated total gamma-spectra, of the intensities of only the primary gamma-transitions, and of the level densities and radiative strength functions used, together with the moduli of their maximal relative differences._ _the level density of , given in , corresponds exactly to the model ._ these data allow one to get some notion of the extent of the increase of the error of the spectrum of primary gamma-transitions used in for the determination of the and values.
to a precision of a coefficient of about (or somewhat more), the data in fig. 5 give the extent of the increase of the error of the total gamma-spectrum intensity upon its transformation into the spectrum of only the primary gamma-transitions. the coefficient of the increase of the systematical errors of and owing to the use of the axel-brink hypothesis, instead of a radiative strength function depending on the energy of the excited level (i.e., on the structure of its wave function), cannot be less than 2 (as obtained from a comparison of for the same nuclei in and ); in general it can have an arbitrary value. the way to observe it is to vary the forms of the initial and functions, as in the method , using only library programs for multidimensional fitting. the maximal value of for the parameter sets used in this test does not exceed 0.25-0.28; correspondingly, the maximal error in the measurement of the total spectra must be 3-5 times smaller. the procedure for determining the "first generation spectra" brings an additional increase of the experimental errors, in the region above the threshold of the experiment, by a factor of 3-10 or more (fig. 5). in practice, the violation of the axel-brink hypothesis shows itself as a simultaneous increase in with a decrease in in the region of the step-like structure; this effect was revealed for both primary and secondary gamma-transitions. however, the observation of a compensation of a decrease in by an increase in , as was obtained in , requires both an increase of the precision of the determination of the total-spectrum intensity and a decrease of the value; the corresponding coefficient can, in our estimation, be adopted equal to that presented above. as a result, one can assume in a first approximation that a certain and unambiguous identification of different values of the level densities and strength functions requires that the total relative error of the determination of the absolute intensity of the total gamma-spectra in a method of the type be, most probably, less than 0.01, at least for the low-energy bins of the measured spectra. this conclusion also follows quite unambiguously from the attempts made in , and to reproduce the intensity of two-step cascades by means of the level density and strength functions derived by the "oslo method". those publications attempt to reproduce the sum of unknown two-step cascade intensities with close values of the energies of the primary and secondary gamma-transitions by calculations with the corresponding pairs of and . the impossibility of proving an unambiguous reproduction of each of these items is obvious even if it was achieved for the sum and for all energies of the cascade gamma-transitions; otherwise the data simply cannot correspond to the and values which reproduce the cascade intensity. an analysis of these data, performed in correspondence with the requirements of mathematical statistics, completely excludes the reliability of the determination of and by the method , at least for the gamma-decay of compound states excited at thermal neutron capture; this question was discussed in detail in . so, a method of the type for the determination of the and values can give reliable information on nuclear parameters only in an experiment which provides rather high precision of the measurement of the total gamma-spectra. it is probable that the spectrometer used for this purpose in oslo cannot provide the required precision even in principle.

a. schiller et al., nucl. instrum. methods phys. res. a 447 (2000) 498. bartholomew et al., advances in nuclear physics 7 (1973) 229. voinov et al., phys. rev. c 77 (2008) 034613. p.
axel , phys .* 126(2 ) * , ( 1962 ) 671 .d. m. brink , ph .d. thesis , oxford university ( 1955 ) .v. a. khitrov et al . , in : _ proceedings of the xi international seminar on interactions of neutrons with nuclei , dubna , may 2003 _ , e3 - 2004 - 9 ( dubna , 2004 ) , p. 107; nucl - ex/0305006 . o. bohr , b.r . mottelson,_nuclear structure _ , vol .1 ( benjamin , ny , amsterdam , 1969 ) .a.m. sukhovoj , v.a .khitrov , phys . particl . and nuclei , * 36(4 ) * ( 2005 ) 359 .+ http://www1.jinr.ru/pepan/pepan-index.html ( in russian ) w. dilg , w. schantl , h. vonach , m. uhl , nucl* a217 * ( 1973 ) 269 .kadmenskij , v.p .markushev , v.i .furman , sov .* 37 * ( 1983 ) 165 .strutinsky , in proc . of int .( paris , 1958 ) , p. 617 .a.m. sukhovoj , v.a .khitrov , physics of paricl . and nuclei , * 37(6 ) * ( 2006 ) 899 .http://www1.jinr.ru/pepan/pepan-index.html ( in russian ) .a.m. sukhovoj , w.i .furman , v.a .khitrov , physics of atomic nuclei , * 71(6 ) * ( 2008 ) 982 .r. chankova et al ., phys.rev .c * 73 * ( 2006 ) 034311 , + m. guttormsenet et al . ,c * 71 * ( 2005 ) 044307 .u. agvaanluvsan et al ., phys.rev .c * 70 * ( 2004 ) 054611 . http://www.nndc.bnl.gov/nndc/ensdf .a.m. sukhovoj , v.a .khitrov , preprint no .e3 - 2008 - 134 , jinr ( dubna , 2008 ) .sukhovoj a. m. , physics of atomic nuclei , * 71 * ( 2008 ) 1907 .soloviev , theory of atomic nuclei : quasiparticles and phonons , institute of physics publishing , bristol and philadelphia , 1992 .k. furutaka et al . , in international conference on nuclear data for science and technology 2007 , nice , 2007 , p. 517 , + m. oshima et all , 13-th international symposium on capture gamma - ray spectroscopy , book of apstracts , 2008 , p. 83 .a.m. sukhovoj , v.a .khitrov , in : xvii international seminar on interaction of neutrons with nuclei , dubna , may 2009 , e3 - 2010 - 36 , dubna , 2010 , p. 282 .vasilieva , a.m. sukhovoj , v.a .khitrov , phys . at* 64(2 ) * ( 2001 ) 153 , ( nucl - ex/0110017 ) a.v. voinov et al . , phys.rev.lett .* 93 * ( 2004 ) 142504 .m. krticka , f. becvar , i. tomandl et al ., phys.rev .c * 77 * , ( 2008 ) 054319 .a. schiller et al . , phys.lett .b * 633 * ( 2006 ) 225 .a.m. sukhovoj , v.a .khitrov , li chol , pham dinh khang , vuong huu tan , + nguyen xuan hai , in : xiii international seminar on interaction of neutrons with nuclei , dubna , may 2006 , e3 - 2006 - 7 , dubna , 2006 , p. 72 .khitrov , li chol , a.m. sukhovoj , xii international seminar on interaction of neutrons with nuclei , dubna , may 2004 , e3 - 2004 - 169 , dubna , 2004 , p. 438 .+ http://arxiv.org/abs/nucl-ex/0409016 .the accepted in calculation level density excited by the dipole primary gamma - transitions at the thermal neutron capture .data , lines - , histogram level density from reaction ) , obtained by analogy with . | from a comparison of the total gamma - spectra calculated for different functional dependencies of level density and radiative strength functions , there were obtained both their square root relative differences and analogous data for the used parameters . the analysis of these data showed that the total uncertainty in determination of gamma - spectra intensities which is necessary to obtain reliable values of parameters of cascade gamma - decay , most probably , must not exceed one percent . * estimation of maximum permissible errors in the total gamma - spectra intensities at determination from them of level density and radiative strength functions * * a.m. sukhovoj , v.a . 
khitrov * + _ joint institute for nuclear research , 141980 , dubna , russia _ + |
many measures including total number of passengers , total number of flights , or total amount of cargo quantifying the importance of the world airports are compiled and publicized .we study here the _ oag max _ database , which comprises flight schedule data of more than 800 of the world s airlines for the period november 1 , 2000 to october 31 , 2001 .this database is compiled by oag , a division of reed business information group , and includes all scheduled flights and scheduled charter flights , both for big aircrafts air carriers and small aircrafts air taxis .we focus our analysis on a network of cities , not of airports for example , the newark , jfk and la guardia airports are all assigned to new york city .we further restrict our analysis to _ passenger flights _ operating in the time period november 1 , 2000 to november 7 , 2000 . even though this data is more than four years old, the resulting world - wide airport network is virtually indistinguishable from the network one would obtain if using data collected today .the reason is that air traffic patterns are strongly correlated with : ( i ) socio - economic factors , such as population density and economic development ; and ( ii ) geo - political factors such as the distribution of the continents over the surface of the earth and the locations of borders between states .clearly , the time scales associated to changes in these factors are much longer than the lag in the data we analyze here . during the period considered ,there are 531,574 unique non - stop passenger flights , or flight segments , operating between 3883 distinct cities .we identify 27,051 distinct city pairs having non - stop connections . the fact that the database is highly redundant that is , that most connections between pairs of cities are represented by more than one flight adds reliability to our analysis .specifically , the fact that unscheduled flights are not considered does not mean , in general , that the corresponding link between a certain pair of cities is missing in the network , since analogous scheduled flights may still operate between them .similarly , even if some airlines have canceled their flights between a pair of cities since november 2000 , it is highly unlikely that all of them have .we create the corresponding adjacency matrix for this network , which turns out to be almost symmetrical .the very minor asymmetry stems from the fact that a small number of flights follow a `` circular '' pattern , i.e , a flight might go from a to b to c , and back to a. to simplify the analysis , we symmetrize the adjacency matrix . further , we build regional networks for different geographic regions ( table [ t - regions ] ) . specifically ,we generate twenty - one regional networks at different aggregation levels . at the highest - aggregation level , we generate six networks ; one each for africa , asia and middle east , europe , latin america , north america , and oceania . for each of these regions , except for north america and oceania, we generate between two and five sub - networks .for instance , the asia and middle east network is further subdivided into south asia , central asia , southeast asia , northeast asia , and middle east .a ubiquitous characteristic of complex networks is the so - called `` small - world '' property . 
in a small - world network ,pairs of nodes are connected by short paths as one expects for a random graph .crucially , nodes in small - world networks also have a high - degree of cliquishness , as one finds in low - dimensional lattices but not in random graphs . in the air transportation network ,the average shortest path length is the average minimum number of flights that one needs to take to get from any city to any other city in the world .we find that for the cities in the asia and middle east network , and that the average shortest path length between the cities in the giant component of the world - wide network is only about one step greater , . actually , most pairs of cities ( 56% ) are connected by four steps or less .more generally , we find that grows logarithmically with the number of cities in the network , .this behavior is consistent with both random graphs and small - world networks , but not with low - dimensional networks , for which grows more rapidly with . still , some pairs of cities are considerably further away from each other than the average .the farthest cities in the network are mount pleasant , in the falkland islands , and wasu , in papua new guinea : to get from one city to the other , one needs to take fifteen different flights . from mountpleasant , one can fly to punta arenas , in chile , and from there to some hubs in latin america . at the other end of the path , from wasuone needs to fly to port moresby , which requires a unique sequence of eight flights . in the center of the path , between punta arenas and port moresby ,six different flights are needed .in contrast with what happens the ends of the path , in the central region of the path there are hundreds of different flight combinations , all of them connecting punta arenas and port moresby in six steps .the clustering coefficient , which quantifies the local cliquishness of a network , is defined as the probability that two cities that are directly connected to a third city are also directly connected to each other .we find that is typically larger for the air transportation network than for a random graph and that it increases with size .these results are consistent with the expectations for a small - world network but not those for a random graph . for the world - wide network ,we find while its randomization yields .therefore , we conclude that the air transportation network is , as expected , a small - world network . another fundamental aspect in which real - world networks often deviate from the random graphs typically considered in mathematical analysis is the degree distribution , that is , the distribution of the number of links of the nodes . in binomial random graphs ,all nodes have similar degrees , while many real world networks have some nodes that are significantly more connected than others .specifically , many complex networks , termed scale - free , have degree distributions that decay as a power law . a plausible mechanism for such a phenomenon is preferential attachment , that is , the tendency to connect preferentially to nodes with already high degrees .to gain greater insight into the structure and evolution of the air transportation network , we calculate the degree distribution of the cities .the degree of a city is the number of other cities to which it is connected by a non - stop flight . 
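the quantities discussed in this and the preceding paragraphs are straightforward to reproduce on any edge list with standard tools. in the sketch below the toy edge list only stands in for the symmetrized city-to-city segments of the oag data, and the degree-preserving randomization stands in for the random-graph comparison.

```python
import networkx as nx
import numpy as np

# placeholder edges; in practice these would be the (origin_city, destination_city)
# pairs of the symmetrized segment list
segment_city_pairs = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d"),
                      ("d", "e"), ("e", "f"), ("f", "d"), ("b", "e")]
G = nx.Graph()
G.add_edges_from(segment_city_pairs)            # repeated flights collapse to one link

giant = G.subgraph(max(nx.connected_components(G), key=len))
ell = nx.average_shortest_path_length(giant)    # mean number of flights between cities
C = nx.average_clustering(G)                    # local cliquishness

# degree-preserving randomization for the null comparison
R = nx.Graph(G)
nx.double_edge_swap(R, nswap=10 * R.number_of_edges(), max_tries=10_000_000)
C_rand = nx.average_clustering(R)

# cumulative degree distribution p(>= k)
deg = np.array([d for _, d in G.degree()])
ks = np.arange(1, deg.max() + 1)
p_cum = np.array([(deg >= k).mean() for k in ks])
print(ell, C, C_rand)
```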
in fig .[ fig1]a , we show the cumulative degree distribution gives the probability that a city has or more connections to other cities , and is defined as , where is the probability density function . ] for the world - wide air transportation network .the data suggest that has a truncated power - law decaying tail where is the power law exponent , is a truncation function , and is a crossover value that depends on the size of the network . the measured value of the exponent would imply that , as one increases the size of the network , the average degree of the cities are also expected to increase .the degree of a node is a source of information on its importance .however , the degree does not provide complete information on the _ role _ of the node in the network . to start to address this issue, we consider the `` betweenness centrality '' of the cities comprising the world - wide air transportation network .the betweenness of city is defined as the number of shortest paths connecting any two cities that involve a transfer at city .we define the normalized betweenness as , where represents the average betweenness for the network .we plot , in fig .[ fig1]b , the cumulative distribution of the normalized betweenness for the world - wide air transportation network .our results suggest that the distribution of betweennesses for the air transportation network obeys the functional form where is the power law exponent , is a truncation function , and is a crossover value that depends on the size of the network .a question prompted by the previous results regarding the degree and the centrality of cities is : `` are the _ most connected _ cities also the _ most central _ ? '' to answer this question , we analyze first the network obtained by randomizing the world - wide air transportation network ( fig .[ fig1]b ) .we find that the distribution of betweennesses still decays as a power law but , in this case , with a much larger exponent value .this finding indicates the existence of anomalously large betweenness centralities in the air transportation network .for the randomized network , the degree of a node and its betweenness centrality are strongly correlated , highly connected nodes are also the most central ( fig .[ fig3]a ) .in contrast , for the world - wide air transportation network it turns out that there are cities that are not hubs , have small degrees but that nonetheless have very large betweennesses ( fig . [ fig3]a ) .to better illustrate this finding , we plot the 25 most connected cities and contrast such a plot with another of the 25 most `` central '' cities according to their betweenness ( figs .[ fig3]b and c ) .while the most connected cities are located mostly in western europe and north america , the most central cities are distributed uniformly across all the continents .significantly , each continent has at least one central city , which is typically highly - connected when compared to other cities in the continent johannesburg in africa or buenos aires and so paulo in south america .interestingly , besides these cities with relatively large degree , there are others such as anchorage ( alaska , u.s . 
) and port moresby ( papua new guinea)that , despite having small degrees are among the most central in the network ( table 1 ) .nodes with small degree and large centrality can be regarded as anomalies .other complex networks that have been described in the literature , like the internet , do not display such a behavior , and nodes with the highest degree are also those with the highest betweenness .it is , in principle , easy to construct a network in which a node has small degree and large centrality think , for example , of a network formed by two communities that are connected to one another through a single node with only two links .the relevant question is , however , `` what general and plausible mechanism would give rise to scale - free networks with the obtained anomalous distribution of betweenness centralities ? '' to answer this question it is useful to consider a region such as alaska .alaska is a sparsely populated , isolated region with a disproportionately large for its population size number of airports .most alaskan airports have connections only to other alaskan airports .this fact makes sense geographically . however , distance - wise it would also make sense for some alaskan airports to be connected to airports in canada s northern territories .these connections are , however , absent .instead , a few alaskan airports , singularly anchorage , are connected to the continental us .the reason is clear , the alaskan population needs to be connected to the political centers , which are located in the continental us , while there are political constraints making it difficult to have connections to cities in canada , even to ones that are close geographically .it is now obvious why anchorage s centrality is so large .indeed , the existence of nodes with anomalous centrality is related to the existence of regions with a high density of airports but few connections to the outside .the degree - betweenness anomaly is therefore ultimately related to the existence of `` communities '' in the network .the unexpected finding of central nodes with low degree is a very important one because central nodes play a key role in phenomena such as diffusion and congestion , and in the cohesiveness of complex networks .therefore , our finding of anomalous centralities points to the need to ( i ) identify the communities in the air transportation network and ( ii ) establish new ways to characterize the role of each city based on its pattern of intra- and inter - community connections and not merely on its degree ._ community structure _ to identify communities in the air transportation network , we use the definition of modularity introduced in refs .the modularity of a given partition of the nodes into groups is maximum when nodes that are densely connected among them are grouped together and separated from the other nodes in the network . to find the partition that maximizes the modularity , we use simulated annealing .we display in fig .[ modules ] the communities identified by our algorithm in the world - wide air transportation network . 
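both observations, cities whose betweenness is far larger than their degree alone would suggest and the community structure behind them, can be checked directly. in the sketch below a built-in toy graph stands in for the city network, greedy modularity maximization is used as a convenient stand-in for the simulated-annealing optimization of the same modularity function, and the anomaly threshold is arbitrary and purely illustrative.

```python
import networkx as nx
import numpy as np
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()            # stand-in here for the city network

bet = nx.betweenness_centrality(G)                    # betweenness b_i
mean_b = np.mean(list(bet.values()))
norm_bet = {v: b / mean_b for v, b in bet.items()}    # normalized betweenness b_i / <b>

# low-degree nodes with anomalously high centrality; the factor 5 is an
# arbitrary illustrative cut-off, not a value taken from the paper
deg = dict(G.degree())
median_k = np.median(list(deg.values()))
anomalies = [v for v in G if norm_bet[v] > 5 and deg[v] <= median_k]

# community structure: greedy modularity maximization as a stand-in for the
# simulated-annealing optimization of the same quantity
communities = list(greedy_modularity_communities(G))
membership = {v: i for i, c in enumerate(communities) for v in c}
print(len(communities), anomalies)
```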
as we surmised , both alaska and papua new guinea form separate communities .this fact explains the large betweenness centrality of anchorage and port moresby , as they provide the main links to the outside world for the other cities in their communities .another significant result is that even though geographical distance plays a clear role in the definition of the communities , the composition of some of the communities can not be explained by purely geographical considerations .for example , the community that contains most cities in europe also contains most airports in asian russia .similarly , chinese and japanese cities are mostly grouped with cities in the other countries in southeast asia , but india is mostly grouped with the arabic peninsula countries and with countries in northeastern africa .these facts are consistent with the important role of political factors in determining community structure . _global role of cities _ we characterize the role of each city in the air transportation network based on its pattern of intra- and inter - community connections .we first distinguish nodes that play the role of hubs in their communities from those that are non - hubs .note that cities like anchorage are hubs in their communities but they are not hubs if one considers all the nodes in the network .thus , we define the within - community degree of a node .if is the number of links of node to other nodes in its community , is the average of over all the nodes in , and is the standard deviation of in , then is the so - called -score .the within - community degree -score measures how `` well - connected '' node is to other nodes in the community .we then distinguish nodes based on their connections to nodes in communities other than their own .for example , two nodes with the same -score will play different roles if one of them is connected to several nodes in other communities while the other is not .we define the participation coefficient of node as where is the number of links of node to nodes in community , and is the total degree of node .the participation coefficient of a node is therefore close to one if its links are uniformly distributed among all the communities , and zero if all its links are within its own community .we hypothesize that the role of a node can be determined , to a great extent , by its within - module degree and its participation coefficient .we define heuristically seven different `` universal roles , '' each one corresponding to a different region in the phase - space . according to the within - module degree ,we classify nodes with as module hubs and nodes as non - hubs .both hub and non - hub nodes are then more finely characterized by using the values of the participation coefficient .we divide non - hub nodes into four different roles : ( r1 ) _ ultra - peripheral nodes _ ,i.e. , nodes with all their links within their module ( ) ; ( r2 ) _ peripheral nodes _ , i.e. , nodes with most links within their module ( ) ; ( r3 ) _ non - hub connector nodes _ ,i.e. , nodes with many links to other modules ( ) ; and ( r4 ) _ non - hub kinless nodes _ , i.e. , nodes with links homogeneously distributed among all modules ( ) .we divide hub nodes into three different roles : ( r5 ) _ provincial hubs_. i.e. , hub nodes with the vast majority of links within their module ( ) ; ( r6 ) _ connector hubs _ , i.e. , hubs with many links to most of the other modules ( ) ; and ( r7 ) _ kinless hubs _ , i.e. , hubs with links homogeneously distributed among all modules ( ) . 
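A compact sketch of the role cartography just defined, computing the within-community degree z-score and the participation coefficient from an adjacency list and a node-to-community map. The numeric cutoffs (z >= 2.5 for hubs; participation breakpoints 0.05/0.62/0.80 for non-hubs and 0.30/0.75 for hubs) are not legible in this text, so the commonly quoted values are used below as assumptions.

```python
# Role assignment from within-community degree z-score and participation
# coefficient.  Thresholds are assumed values, not read from this text.
from collections import defaultdict
from statistics import mean, pstdev

def roles(adj, community):
    nodes = list(adj)
    # within-community degree kappa_i (links of i inside its own community)
    kappa = {u: sum(1 for v in adj[u] if community[v] == community[u]) for u in nodes}
    by_comm = defaultdict(list)
    for u in nodes:
        by_comm[community[u]].append(kappa[u])
    z = {}
    for u in nodes:
        vals = by_comm[community[u]]
        sd = pstdev(vals)
        z[u] = 0.0 if sd == 0 else (kappa[u] - mean(vals)) / sd
    # participation coefficient P_i = 1 - sum_s (k_is / k_i)^2
    p = {}
    for u in nodes:
        k_i = len(adj[u])
        per_comm = defaultdict(int)
        for v in adj[u]:
            per_comm[community[v]] += 1
        p[u] = (1.0 - sum((k_is / k_i) ** 2 for k_is in per_comm.values())) if k_i else 0.0

    def label(u):
        if z[u] < 2.5:                       # non-hub roles R1-R4 (assumed cutoff)
            if p[u] <= 0.05: return "R1 ultra-peripheral"
            if p[u] <= 0.62: return "R2 peripheral"
            if p[u] <= 0.80: return "R3 non-hub connector"
            return "R4 non-hub kinless"
        if p[u] <= 0.30: return "R5 provincial hub"   # hub roles R5-R7
        if p[u] <= 0.75: return "R6 connector hub"
        return "R7 kinless hub"

    return {u: (round(z[u], 2), round(p[u], 2), label(u)) for u in nodes}
```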
for each city in the world - wide air transportation network, we calculate its within - community degree and its participation coefficient .then , we assign each city a role according to the definitions above ( figs .[ f - roles]a and c ) .significantly , 95.4% of the cities in the world - wide air transportation network are classified as either peripheral or ultra - peripheral .additionally , there is a small fraction of non - hub connectors ( 0.5% ) .this result suggests that cities which are not hubs in their respective communities rarely have links to many other communities in the air transportation network .this situation is in stark contrast to what happens in some biological networks , in which non - hub connectors seem to be relatively frequent and to play an important role .the remaining 4.1% of the nodes are hubs .we find approximately equal fractions of provincial and connector hubs .the former include cities that , for historical , political , or geographical reasons , are comparatively not well - connected to other communities .examples are denver , philadelphia , and detroit , in north america ; stuttgart , copenhagen , istanbul , and barcelona , in the community formed by europe , north africa and the former soviet union ; adelaide and christchurch in oceania ; brasilia in south america ; fairbanks and juneau in alaska ; and the already discussed case of port moresby .connector hubs include the most recognizable airport hubs in the word : chicago , new york , los angeles , and mexico city in north america ; frankfurt , london , paris , and rome in europe ; beijing , tokyo , and seoul in the south - eastern asian community ; delhi , abu dhabi , and kuwait in the community comprising india , the arabic peninsula and north - eastern africa ; buenos aires , santiago , and so paulo in south america ; melbourne , auckland , and sydney in oceania ; and anchorage in alaska .the fractions of cities with each role in the world - wide air transportation network contrast with the corresponding fractions in a randomization of the network ( fig .[ f - roles]b ) . in this case , the community identification algorithm still yields certain communities , but the network lacks `` real '' community structure .the identification of roles enables one to realize that these communities are somehow artificial .indeed , many cities are either kinless hubs or kinless non - hubs due to the absence of a real community structure , and the network contains essentially no provincial or connector hubs .we carried out a `` systems '' analysis of the structure of the world - wide air transportation network .the study enables us to unveil a number of significant results .the world - wide air transportation network is a small - word network in which : ( i ) the number of non - stop connections from a given city , and ( ii ) the number of shortest paths going through a given city have distributions that are scale - free . 
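The degree-preserving randomization used as a null model above can be generated, for example, by repeated double edge swaps. Whether this exact rewiring scheme is the one used by the authors is not stated here, so treat the sketch as one standard construction.

```python
# Degree-preserving randomization by double edge swaps (a-b, c-d) -> (a-d, c-b).
# Assumes each undirected edge appears once in the input list.
import random

def degree_preserving_randomization(edges, n_swaps=None, seed=0):
    rng = random.Random(seed)
    edge_list = [tuple(e) for e in edges]
    edge_set = {frozenset(e) for e in edge_list}
    n_swaps = n_swaps or 10 * len(edge_list)
    done, attempts = 0, 0
    while done < n_swaps and attempts < 100 * n_swaps:
        attempts += 1
        (a, b), (c, d) = rng.sample(edge_list, 2)
        # reject swaps that would create self-loops or duplicate edges
        if len({a, b, c, d}) < 4:
            continue
        if frozenset((a, d)) in edge_set or frozenset((c, b)) in edge_set:
            continue
        i, j = edge_list.index((a, b)), edge_list.index((c, d))
        edge_list[i], edge_list[j] = (a, d), (c, b)
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {frozenset((a, d)), frozenset((c, b))}
        done += 1
    return edge_list
```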
surprisingly , the nodes with more connections are not always the most central in the network .we hypothesize that the origin of such a behavior is the multi - community structure of the network .we find the communities in the network and demonstrate that their structure can only be understood in terms of both geographical and political considerations .our analysis of the community structure of the air transportation network is important for two additional reasons .first , it allow us to identify the most efficient ways in which to engineer the structure of the network .specifically , having identified the communities , one can identify which ones are poorly connected and the ways to minimize that problem .second , cities that connect different communities play a disproportionate role in important dynamic processes such as the propagation of infections such as sars . as , we show , finding the communities is the first step toward identifying these cities . the existence of communities and the understanding that different cities may have very different impacts on the global behavior of the air transportation system , call for the definition of the role of each city .we address this issue by classifying cities into seven roles , according to their patterns of inter- and intra - community connections .we find that most of the nodes ( 95% ) are peripheral , that is , the vast majority of their connections are within their own communities .we also find that nodes that connect different communities are typically hubs within their own community , although not necessarily global hubs .this finding is in stark contrast with the behavior observed in certain biological networks , in which non - hub connectors are more frequent .the fact that different networks seem to be formed by nodes with network - specific roles points to the more general question of what evolutionary constraints and pressures determine the topology of complex networks , and how the presence or absence of specific roles affects the performance of these networks .we thank a. arenas , a. barrat , m. barthlmy , a. daz - guilera , a. a. moreira , r. pastor - satorras , m. sales - pardo , d. stouffer , and a. vespignani for stimulating discussions and helpful suggestions .we also thank oag for making their electronic database of airline flights available to us , and landings.com for providing us with the geographical coordinates of the world airports .is scaled by the average degree of the network .the distribution displays a truncated power law behavior with exponent .( b ) cumulative distribution of normalized betweennesses plotted in double logarithmic scale .the distribution displays a truncated power law behavior with exponent . 
for a randomized network with exactly the same degree distribution as the original air transportation network, the betweenness distribution decays with an exponent .a comparison of the two cases clearly shows the existence of an excessive number of large betweenness values in the air transportation network ., scaledwidth=40.0% ] phase - space corresponds to a city , and different colors indicate different roles .most cities are classified as ultra - peripheral ( black ) or peripheral ( red ) nodes .a small number of non - hub nodes play the role of connectors ( green ) .we find approximately equal fractions of provincial ( yellow ) and connector ( brown ) hubs .( b ) same as ( a ) but for a randomization of the air transportation network .the absence of communities manifests itself in that most hubs become kinless hubs ( gray ) and in the appearance of kinless non - hubs ( blue ) .( c ) non - hub connectors ( green ) , provincial hubs ( yellow ) , and connector hubs ( brown ) in the world - wide air transportation network . ] | we analyze the global structure of the world - wide air transportation network , a critical infrastructure with an enormous impact on local , national , and international economies . we find that the world - wide air transportation network is a scale - free small - world network . in contrast to the prediction of scale - free network models , however , we find that the most connected cities are not necessarily the most central , resulting in anomalous values of the centrality . we demonstrate that these anomalies arise because of the multi - community structure of the network . we identify the communities in the air transportation network and show that the community structure can not be explained solely based on geographical constraints , and that geo - political considerations have to be taken into account . we identify each city s global role based on its pattern of inter- and intra - community connections , which enables us to obtain scale - specific representations of the network . like other critical infrastructures , the air transportation network has enormous impact on local , national , and international economies . it is thus natural that airports and national airline companies are often times associated with the image a country or region wants to project . the air transportation system is also responsible , indirectly , for the propagation of diseases such as influenza and , recently , sars . the air transportation network thus plays for certain diseases a role that is analogous to that of the web of human sexual contacts for the propagation of aids and other sexually - transmitted infections . the world - wide air transportation network is responsible for the mobility of millions of people every day . almost 700 million passengers fly each year , maintaining the air transportation system ever so close to the brink of failure . for example , us and foreign airlines schedule about 2,700 daily flights in and out of ohare alone , more than 10% of the total commercial flights in the continental us , and more than the airport could handle even during a perfect `` blue - sky '' day . low clouds , for example , can lower landing rates at ohare from 100 an hour to just 72 an hour , resulting in delays and flight cancellations across the country . the failures and inefficiencies of the air transportation system have large economic costs ; flight delays cost european countries 150 to 200 billion euro in 1999 alone . 
these facts prompt several questions : what has led the system to this point ? why ca nt we design a better system ? in order to answer these questions , it is crucial to characterize the structure of the world - wide air transportation network and the mechanisms responsible for its evolution . the solution to this problem is , however , far from simple . the structure of the air transportation network is mostly determined by the concurrent actions of airline companies both private and national that try , in principle , to maximize their immediate profit . however , the structure of the network is also the outcome of numerous historical `` accidents '' arising from geographical , political , and economic factors . much research has been done on the definition of models and algorithms that enable one to solve problems of optimal network design . however , a world - wide , `` system '' level , analysis of the structure of the air transportation network is still lacking . this is an important caveat . just as one can not fully understand the complex dynamics of ecosystems by looking at simple food chains or the complex behavior in cells by studying isolated biochemical pathways , one can not fully understand the dynamics of the air transportation system without a `` holistic '' perspective . modern `` network analysis '' provides an ideal framework within which to pursue such a study . we analyze here the world - wide air transportation network . we build a network of 3883 locales , villages , towns , and cities with at least one airport and establish links between pairs of locales that are connected by non - stop passenger flights . we find that the world - wide air transportation network is a small - world network for which ( i ) the number of non - stop connections from a given city , and ( ii ) the number of shortest paths going through a given city have distributions that are scale - free . in contrast to the prediction of scale - free network models , we find that the most connected cities are not necessarily the most `` central , '' that is , the cities through which most shortest paths go . we show that this surprising result can be explained by the existence of several distinct `` communities '' within the air transportation network . we identify these communities using algorithms recently developed for the study of complex networks , and show that the structure of the communities can not be explained solely based on geographical constraints , and that geo - political considerations must also be taken into account . the existence of communities leads us to the definition of each city s global role , based on its pattern of inter- and intra - community connections . |
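As a concrete, deliberately tiny illustration of the two measurements summarized above (non-stop connections per city and shortest paths through a city), the sketch below builds a toy network with a dense core and an isolated region reachable only through one gateway city, reproducing the small-degree / large-betweenness anomaly. The city names and topology are invented; note that networkx reports betweenness as a fraction of shortest paths rather than a raw count, which only rescales the quantity used in the text.

```python
# Toy network with a dense core and a sparsely connected region behind a gateway.
import itertools
import networkx as nx
import numpy as np

G = nx.Graph()
core = [f"hub{i}" for i in range(8)]
G.add_edges_from(itertools.combinations(core, 2))            # densely connected core
remote = [f"remote{i}" for i in range(3)]
G.add_edges_from(("gateway", c) for c in remote)             # isolated region ...
G.add_edge("gateway", "hub0")                                # ... with one link to the core

degree = dict(G.degree())
betweenness = nx.betweenness_centrality(G)
b_mean = np.mean(list(betweenness.values()))

def cumulative(values):
    """Empirical P(X >= x) over the sorted sample."""
    v = np.sort(np.asarray(list(values), dtype=float))
    return v, 1.0 - np.arange(len(v)) / len(v)

k, p_ge_k = cumulative(degree.values())
print(list(zip(k.astype(int).tolist(), p_ge_k.round(2).tolist()))[:4])

# the gateway has a small degree but the largest normalized betweenness
for city in sorted(G, key=betweenness.get, reverse=True)[:3]:
    print(city, degree[city], round(betweenness[city] / b_mean, 2))
```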
surface resistance is very important in the operation of superconducting cavity .as one can see from equation ( 1 ) , surface resistance is the function of a temperature dependent term and a temperature independent one .temperature dependent resistance simply decreases as temperature goes down in a metal since the number of phonons decreases according to boltzmann factor . practically , the operating temperature of a superconducting cavity is usually determined in such a way to achieve possibly low surface resistance as long as an economical issue allows . in other words, the operating temperature can not be infinitely low since lowering temperature always demands substantial cost .for the raon accelerator , a quarter wave resonator ( qwr ) cavity will be operated at 4k while the rest cavities will be operated at 2k .unlike the temperature dependent resistance , the temperature independent resistance , which is called a residual resistance , is due to various factors .it has been reported that defects acting scattering centers contribute to the residual resistance such as inclusions , voids , dislocations with a crystal lattice itself .thus , lowering residual resistance is essential to achieve low surface resistance .basically , the more pure material is , the lower residual resistance one can obtain . since the residual resistance ratio ( rrr ) is defined as the ratio of a resistance at 300k to a residual resistance ( resistance at or just above critical temperature , of nb 9.2k ) , one can have high rrr with the more pure material .fabrication process of superconducting cavities is very complicated since it requires mechanical ( pressing ) , electrical ( electron beam welding ) , and chemical ( polishing ) process . among these steps ,the electrical process ( e - beam welding ) is of our interest in this paper .generally , the rrr degrades during e - beam welding due to the introduction of impurities from surroundings such as oxygen , nitrogen , and hydrogen . although previous studies reported that a vacuum level of a chamber is the most critical , but still e - beam welding conditions , such as e - beam power and welding speed , are considered important as much as the vacuum level because they make a great effect on the heat affected zone ( haz ) in a welding part . in this paper , we report rrr degradation occurred during e - beam welding of niobium samples .we measured rrr with different conditions ; vacuum level , e - beam power , and welding speed .not only we compared rrr degradation , but also we analyzed a heat affected zone ( haz ) .thus , we finally report how rrr degradation and heat affected zone were affected by different e - beam welding conditions .a rrr 300 grade niobium sheet was from ati , wah chang inc .( albany , usa . ) with the size of mm ( ) .the chemical composition and the grain size of nb sheet satisfies each astm and astm .samples for experiments were prepared from two different companies ( a & b ) and the sequences are as follows .first , two companies cut the nb sheet of 3 mm in thickness into two different sets of width and length for butt welding : two nb pieces of mm by company a and two nb pieces of mm by company b. thus , the company a prepared the welded nb sheet of mm , and the company b prepared the welded nb sheet of mm . 
then , rrr samples were cut out of these welded nb sheets perpendicular to a welding line .each nb sample was cut by electrical discharge machining ( edm ) with the size of mm .[ p1 ] shows how the pieces of nb sheet were welded , and samples were cut out from the welded nb sheet by the company b. also , fig .[ p2 ] shows the layout of cut samples , all samples were perpendicular to the welding line , and the distance from the weld seam increased from left to right ( 0 , 2 , 4 , 6 , 8 , 10 , and 15 , 20 , 25 mm ) . as shown in fig .[ p2 ] , in order to compare differences in rrr between the welding part and the bead part , two types of samples were cut from the welding side and the bead side . since nine samples were cut out from each side , eighteen samples in total were prepared from both sides of one welded nb sheet .welding conditions of two companies are summarized in table [ t1 ] . in this study, we defined the rrr value as the ratio of the nb resistance at 300k to the nb resistance at the temperature just above critical temperature ( 9.2k ) where nb still is in the normal conducting state . that temperature ranges from 9.5k to 10k .the input current was 5 ma , and rrr measurement was performed with no magnetic field application . since the generated power in the sample ( 10 watt ) due to this level of input current is negligible compared to the cooling power of sample stage in ppms ( 10 watt ) , the 5ma input current level dose not vary the resistance of the sample while measurement . the time for stabilizing each temperature step was between 100 200 secthe temperature increments were 0.1k up to 15k , 1k from 16k to 30k , and 5k from 35k to 300k .rrr experiments were carried out with physical property measurement system ( ppms ) operated by the korea basic science institute ( kbsi ) .mm was butt - welded by joining two nb sheet of mm .weld side ( left ) , bead side ( right ) ., width=604 ] mm for rrr measurement . ,width=604 ] .welding conditions for sample preparation [ cols="^,^,^,^,^,^ " , ] [ t3 ] from the fig .[ p6 ] , rrr degradation of the samples from company a occurred less than company b. this is in good agreement with previous studies since rrr degradation is greatly affected by a vacuum level .as shown in table [ t1 ] , the vacuum levels of two companies each are torr for company a , and torr for company b. therefore , we could confirm that samples from company a showed higher rrr values ( less degradation ) than company b as the average rrr ( 292 vs. 264 ) , and the lowest rrr ( 253 vs. 234 ) .the worst degradation rates based on the lowest points were each 20% and 44 % corresponding to company a and b. in fact , risp set the minimum rrr value as 275 , so only less than 8% of degradation is allowable in raon project .although the degradation rate of samples from company a satisfied this criteria as 3% and 8% in the weld side ( these was based on the average rrr ) , the rest degradation rates did not satisfy the criteria .we might think this was due to the vacuum level , that is to say , the vacuum level gauge did not read precisely the real vacuum level in the e - beam chamber , thus the real vacuum level during the welding did not reach torr .so we should perform more experiments to confirm this result .interesting results were that the rrr degradation in the weld side where the e - beam welding directly occurred was less than in the bead side for all samples regardless of companies . 
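A short sketch of the RRR extraction just described: RRR = R(300 K) / R(T just above Tc), with the normal-state reference taken in the 9.5-10 K window mentioned in the text. The data file name and its two-column layout are assumptions made for illustration.

```python
# RRR = R(300 K) / R(just above Tc); file name and column layout are assumed.
import numpy as np

def rrr_from_curve(filename):
    data = np.loadtxt(filename)          # columns: T [K], R [ohm]; T sorted ascending
    T, R = data[:, 0], data[:, 1]
    r_300 = np.interp(300.0, T, R)
    window = (T >= 9.5) & (T <= 10.0)    # normal state just above Tc = 9.2 K
    r_residual = R[window].mean()
    return r_300 / r_residual

def degradation(rrr_sample, rrr_reference):
    """Relative RRR loss with respect to an unwelded reference value."""
    return 1.0 - rrr_sample / rrr_reference

if __name__ == "__main__":
    rrr = rrr_from_curve("nb_sample_weld_0mm.dat")   # hypothetical file
    print("RRR =", round(rrr, 1), " degradation =",
          round(100 * degradation(rrr, rrr_reference=300.0), 1), "%")
```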
according to previous studies , the part where the e - beam welding directly occurred showed high rrr value since this part experienced purification due to the `` melting '' just like the zone melting method used in such as a nb purification and a semiconductor purification .this result can be explained from the fact that a liquid phase can hold more solute atoms ( impurities ) than a solid phase , and the melting the solid into the liquid phase drives impurities from solid to liquid .then , impurities in the liquid are driven into the solid quickly on cooling since the diffusivity of impurities in the liquid can be assumed evenly faster than in the solid .thus , it makes adjacent solid part has more impurities .therefore , the driven effect of impurities caused the weld side to be more purified than the bead side although these two parts experienced the melting event during the e - beam welding .another result to be discussed here is a heat affected zone ( haz ) .the rrr degradation occurs seriously in this region because grains in this region are greatly altered by a heat introduction depending on the e - beam power and the welding speed while rrr recovers as much as bulk material outside this region .therefore , controlling ( narrowing ) the haz is an important issue for optimizing e - beam welding to keep rrr degradation above the target value . from the w. singer s work , the haz was around 10 mm from the weld seam . by looking at fig .[ p3 ] and fig .[ p4 ] , the haz was around 15 mm where rrr started to recover as high as bulk s value for both companies .in particular , the rrr of the bead side from company b was still far lower than the reference sample up to 25 mm , which means the haz expanded into the bulk level . one possible explanation is because too much heat was introduced in the welding part , and this heat did not spread out quickly through the whole sample during e - beam welding . in fact , nb has a low thermal conductivity , for example , the thermal conductivity of copper is larger than that of niobium by one order. therefore , the welded part of nb did not have enough time to dissipate overheat quickly into the whole sample due to the low thermal conductivity .consequently , the development of the optimized welding condition including the welding power and the welding speed should be established in order to achieve good haz ( narrow haz ) . 
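To make the HAZ estimate above explicit, the following sketch reads the HAZ extent off an RRR-versus-distance profile as the first distance at which RRR recovers to within a chosen tolerance of the bulk value. The profile values below are placeholders, not the measured data of this study.

```python
# Estimate the heat-affected-zone extent from an RRR vs. distance-from-weld-seam
# profile.  Numbers are illustrative placeholders.
import numpy as np

def haz_extent(distance_mm, rrr, rrr_bulk, tolerance=0.95):
    rrr = np.asarray(rrr, dtype=float)
    recovered = rrr >= tolerance * rrr_bulk
    for d, ok in zip(distance_mm, recovered):
        if ok:
            return d
    return None   # no recovery inside the scanned range (HAZ reaches bulk depth)

distance = [0, 2, 4, 6, 8, 10, 15, 20, 25]                      # mm from the weld seam
rrr_weld_side = [253, 258, 262, 266, 270, 275, 284, 290, 292]   # placeholder values
print("HAZ extends to about", haz_extent(distance, rrr_weld_side, rrr_bulk=300), "mm")
```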
as a next step ,we need more experiments to analyze quantitatively how much heat was generated in the welding zone and how much grains were affected with the function of welding power and speed by using high quality optical microscope or sem .the rrr measurements with the 300 rrr grade nb samples supplied from two companies of different welding conditions were carried out .we confirmed that the vacuum level was critical factor to avoid rrr degradation since the degradation rate of company a having torr was lower than that of company b having torr .also , we found that the degradation of the weld side where the melting directly occurred showed lower degradation than the bead side .in addition , we found that the haz could be varied by controlling the welding condition , which was the heat introduction .the haz expanded deeper into the bulk when the heat was introduced too much in the welding zone .we thank kbsi for performing rrr tests for this study .this work was supported by the rare isotope science project which is funded by the ministry of science , ict and future planning ( msip ) and the national research foundation ( nrf ) of the republic of korea under contract 2011 - 0032011 .h. padamsee , j. knobloch , t. hays , _ rf superconductivity for accelerators _( wiley - vch verlag gmbh co. kgaa , germany , 2008 ) .h. padamsee , _ rf superconductivity , science , technolgoy , and applications _( wiley - vch verlag gmbh co. kgaa , germany , 2009 ) . c. kittel , _ introduction to solid state physics _ ( john wiley son , inc . , united states , 1996 ) .d. jeon _ et al ._ , j. kor .65 * , 1010 ( 2014 ) . c. r. barrett , _ the principles of engineering materials _( prentice hall , united states , 1973 ) .j. d. splett , d. f. vecchia , l. f. goodrich , j. research national inst .standards and tech , * 116 * , 489 ( 2011 ) . c. xu , h. tian , c. e. reece , m. j. kelly , phys .special topics - accelerators and beams , * 15 * , 043502 ( 2012 ) .y. jung , h. j. kim , h. h. lee , h. c. yang , _ proc .particle accel .( dresden , germany , 2014 ) , p. 2549 . w. singer , x. singer , j. tiessen , h. wen , _ proc ._ ( ny , usa , 2003 ) , p. 671 .m. s. champion , _ et al ._ , ieee trans .superconductivity * 19 * , 1384 ( 2009 ) .h. jiang , t. r. bieler , c. compton , t. l. grimm , _ proc .ieee particle accel .( oregon , usa , 2003 ) , p. 1359 . n. m. abbas , d. g. solomon , md .f. bahari , int .j. machine tools manufacturer , * 47 * , 1214 ( 2007 ) .p. bauer , t. berenc , c. boffo , m. foley , m. kuchnir , y. tereshikin , t. wokas , _ proc .11th workshop on rf superconductivity _ , ( lubeck , germany , 2003 ) , p. 588 . g. choi , j. lim , n. r. munirathnam , i. kim , metals and materials international , * 15 * , 385 ( 2009 ) . a. koethe , j. i. moench , materials trans , * 41 * , 7 ( 2000 ) . w. g. pfann , _ zone melting _ ( wiley , new york , 1966 ) . | the first heavy ion accelerator is being constructed by the rare isotope science project ( risp ) launched by the institute of basic science ( ibs ) in south korea . four different types of superconducting cavities were designed , and prototypes were fabricated such as a quarter wave resonator ( qwr ) , a half wave resonator ( hwr ) and a single spoke resonator ( ssr ) . one of the critical factors determining performances of the superconducting cavities is a residual resistance ratio ( rrr ) . the rrr values essentially represent how much niobium is pure and how fast niobium can transmit heat as well . 
in general, rrr degrades during electron beam welding because impurities are incorporated into the niobium. it is therefore important to keep rrr above the value at which a niobium cavity reaches its target performance. in this study, we discuss how the rrr degradation depends on the electron beam welding conditions, namely the welding power, the welding speed, and the vacuum level. |
during the last half - century a great attention is being paid to studies of different quantum models of nonlinear optics since they enable to reveal new physical effects and phenomena ( see , e.g. , and references therein ) . in view of the hamiltonian nonlinearity these modelsare mainly analyzed with the help of the numerical calculations or some linearization procedures which are not adapted to reveal many peculiarities of model dynamics .however , recently a new universal lie - algebraic approach , essentially improving both analytical and numerical solutions of physical problems , has been suggested in and developed in for the class of nonlinear quantum models whose hamiltonians have invariance groups =0 ] is the entire part of ) and -invariant dynamic variables : they obey the commutation relations = \pm v_{\pm},\quad [ v_{\alpha } , r_1 ] = 0 = [ v_{\alpha } , k ] , \nonumber \\\ ; & & [ v_- , v_+ ] = \phi ( v_0 ; r_1 ) \equiv \psi ( v_0 + 1 ; r_1 ) -\psi ( v_0 ; r_1 ) , \nonumber \\ & & \psi ( v_{0};r_1 ) = ( r_1 + 2 v_{0})(r_1 + 2 v_{0}-1 ) ( r_1 + 1 - v_{0 } ) , \label{2.2a}\end{aligned}\ ] ] that identifies as generators of pla having the casimir operator =0 , \label{2.2b}\end{aligned}\ ] ] acting on complementarily to ( in view of the relationship ) and , hence , forming dynamic symmetry algebra in the dual algebraic pair . in terms of these collective operators the hamiltonian ( [ 1.1a ] ) is expressed in the form ,\;\nonumber \\ & & c = ( \omega_1 + \omega_0 ) r_1,\quad \delta= 2 \omega_1- \omega_0,\quad [ v_{\alpha } , c ] = 0 , \label{2.3a}\end{aligned}\ ] ] and the hilbert space is decomposed into the infinite direct sum of -irreducible - dimensional subspaces specified by eigenvalues of the invariant operators . herewith are expressed through the numbers determining , respectively , a maximal population of the fundamental ( pump ) mode and a minimal population of the harmonic within a fixed `` optical atom '' as it follows from the structure of the new ( collective ) basis in : ^{1/2 } , \nonumber \\ & & v_0 |k , s;f\rangle = ( l_0+f ) |k , s;f\rangle,\nonumber \\ & & |k , s\rangle= |n_1= k , n_0= s\rangle , \quad v_-\,|k , s\rangle = 0,\quad k=0,1,\,\,s\geq 0 . \label{2.4b}\end{aligned}\ ] ] evidently , eqs .( [ 2.4b ] ) explicitly manifest the cluster structure of the fock states and specify as the `` lowest '' weight operator and as the `` lowest '' weight state .this cluster reformulation of the model enables one to use the formalism for getting representations of the model evolution operator which facilitate analysis of the model dynamics and calculations of temporal dependences ] the eigenvalue problem ( [ 2.6 ] ) is solved exactly with the help of the displacement operators in terms of simple analytical expressions . 
however , it is not the case for the model under study in view of the absence of explicit expressions for matrix elements .nevertheless , the formalism enables one get convenient ( for physical applications ) calculation schemes , algorithms and analytical expressions for exact and approximate solutions of this problem .a lie - algebraic scheme for finding exact solutions of the eigenproblem ( [ 2.6 ] ) is based on looking for eigenfunctions on each subspace in the form where amplitudes satisfy the orthonormalization and completeness conditions : then , inserting eq .( [ 3.1a ] ) for and eq .( [ 2.3a ] ) for in eq .( [ 2.6 ] ) and using eqs .( [ 2.2a ] ) , ( [ 2.4b ] ) , one gets a set of recurrence relations at fixed \tilde q_f^{v}(k , s ) ) - g\tilde q_{f-1}^{v}(k , s),\nonumber\\ & & f , v=0,\ldots , s,\ ; \nonumber \\ & & \psi ( l_0+f+1 ; l_1)= ( k + 2f+2)(k + 1 + 2f)(s - f ) .\label{3.2}\end{aligned}\ ] ] these relations along with the boundary conditions determine amplitudes and eigenenergies from solutions of the sturm - liouville spectral problem p_{f}(\lambda ) - f=0,\ldots , s ; \qquad p_{0}(\lambda ) = 1 , \nonumber\\ & & p_{-1}(\lambda ) = 0 = [ \lambda - \delta ( s + l_0 ) ] p_{s}(\lambda ) - |g|^{2 } \psi ( l_0+s ; l_1 ) p_{s-1}(\lambda ) , \label{3.3b}\end{aligned}\ ] ] for finding non - classical orthogonal ( in view of ( [ 3.1b ] ) ) polynomials of the discrete variable on the non - uniform lattice . indeed , eqs .( [ 3.3a ] ) , ( [ 3.3b]),([3.4 ] ) provide the following algorithm for solving the eigenproblem ( [ 2.6 ] ) .+ i ) using the recursive formula ( [ 3.3a ] ) with the boundary values from eq .( [ 3.3b ] ) one calculates the polynomial sequence .+ ii ) inserting in the last equality in ( [ 3.3b ] ) one gets the algebraic equation with respect to ; its solution yields the sequence of admissible values of the spectral parameter and the appropriate energy spectrum .+ iii ) for each value using and eq . ( [ 3.4 ] ) one finds the sequence of all amplitudes as functions of the only undetermined quantity which , in turn , is found from the normalization condition of eqs . ( [ 3.1b ] ) . 
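The three-step algorithm above is equivalent to diagonalizing, on each (k, s) subspace, a real symmetric tridiagonal (Jacobi) matrix whose diagonal carries the detuning term and whose off-diagonal couplings are |g| sqrt((k+2f+2)(k+2f+1)(s-f)), following eq. (3.2). The sketch below is our reading of those partly garbled formulas; in particular the constant term c is dropped and l0 is left as a free parameter, so treat it as illustrative rather than as the authors' code.

```python
# Jacobi-matrix form of the Sturm-Liouville problem (3.3a)-(3.3b): an assumption-
# laden sketch, not the routine package described in the text.
import numpy as np

def multiplet(k, s, delta, g, l0=0.0):
    """Eigenenergies lam[v] and amplitude vectors q[:, v] on the (k, s) subspace."""
    f = np.arange(s + 1)
    diag = delta * (l0 + f)                      # detuning term; constant c omitted
    fm = f[:-1]
    off = abs(g) * np.sqrt((k + 2 * fm + 2) * (k + 2 * fm + 1) * (s - fm))
    H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigh(H)                     # ascending eigenvalues

lam, q = multiplet(k=0, s=100, delta=0.0, g=1.0)  # resonance case of section 4
print(lam[0], lam[-1], lam[50])                   # symmetric spectrum, middle level near 0
```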
+ this algorithm has been realized in with the help of the reduce procedure solve for and the conventional fortran subroutines of eispark package ( with applying the multiprecision package ) for .the routine package developed enables us to implement numerical calculations of model dynamics for but it is unsuitable for practical calculations with larger in view of multiprecision computer limitations .therefore in an approximate analytical solution of the problem ( [ 2.6 ] ) has been suggested .it is given by the -quasiclassical eigenfunctions ^{-\frac{1}{2}}=(y_-)^+ , \quad\!\!\ !2j = s,\nonumber \\ & & y_{\pm , 0}\ , \in\ , su ( 2 ) , \quad s^j_{f v}(\xi)\equiv q_f^{v;\ , ap}(k , s)\,=\,(\frac{g}{|g|})^{f - v}\ , d^{j}_{-j + f,- j + v}(2r ) , \label{3.5b}\end{aligned}\ ] ] and eigenenergies ,\nonumber \\ & & \lambda_v^{qc } ( k , s;r)\ , = \ , \delta [ j + l_0 - ( j - v)\cos 2r]+ \nonumber \\ & & + 2|g| \sum_{f=0}^s \sqrt{(s - f)(f+1)2(2k+1 + 2f ) } \,d^{j } _ { -j+f , -j+v } ( 2r ) \ , d^{j}_{-j + f+1 , -j + v } ( 2r ) \nonumber \\ & & \approx \delta [ j + l_0 - ( j - v)\cos 2r]- \nonumber \\ & & - 2|g| ( j - v ) \sin 2r \sqrt{2 [ s+2k+1+(- s+2v)\cos 2r ] } = \lambda_v^{cmf}(k , s;r),\qquad \label{3.6}\end{aligned}\ ] ] where are the - functions expressed in terms of the gauss hypergeometric function .approximate values in ( [ 3.6 ] ) are calculated in the cluster mean - field approximation : , and values of the parameter in ( [ 3.5a])-([3.6 ] ) are found from energy - stationarity - conditions and/or from minimizing a proximity measure between exact hamiltonian and its - quasiclassical approximation a standard measure for such estimates on the subspaces is defined with the help of the unitarily invariant euclidean operator norm as follows }{\sum_{v}(\lambda_v ( k , s))^2}. \label{3.8}\ ] ] this approximation has been used in for calculating approximate expressions of the temporal dependences determining , in accordance with eqs .( [ 2.1 ] ) , ( [ 3.5b ] ) , the dynamics of the field - mode populations : for different types of initial states ; here ,\ , \bar k= tr [ \rho k ] $ ] and the quantity is calculated with the help of eq .( [ 2.7 ] ) for .specifically , in the case of the -quasiclassical cluster initial state of the form ( [ 3.5a ] ) , ( [ 3.5b ] ) with , belonging to a fixed `` optical atom '' with , eq . ( [ 2.7 ] ) yields an approximate analytical expression \right\ } \label{3.10a}\end{aligned}\ ] ] with ^{\frac{s-1}{2}}\!\!\!\!\!\ ! , \quad \!\tan \phi_{k , s } ( t)= \frac{\tan\omega_{l}(k , s ) t}{\sqrt s } , \label{3.10b}\end{aligned}\ ] ] which exhibits a high - frequency ( ) periodic dynamics with a slow ( ) periodic modulation in the phase and amplitude , i.e. an occurrence of a specific temporal coherent structure ( described in terms of elliptic functions too ) .at the same time for general initial states and having non - zero projections on all subspaces , e.g. 
, for glauber coherent states , analogous calculations lead to series containing weighted sums of terms like those given by eqs .( [ 3.10a]),([3.10b ] ) that corresponds to occurrences of coherence - decoherence phenomena like `` collapse - revivals '' revealed in by means of other methods .according to the general quasiclassicality theory all approximations ( [ 3.5a])-([3.6 ] ) ( and , hence , ( [ 3.10a ] ) , ( [ 3.10b ] ) ) are valid only for large values of , and , besides , the measure ( [ 3.8 ] ) gives only a global rather than local characteristic of the approximate energy spectra that does not allow to feel their important symmetry properties and local peculiarities related to `` energy errors '' \equiv \deltae_v(k , s ) \cdot e_v(k , s ) .\label{3.11}\ ] ] therefore , we implemented numerical comparisons of both exact and approximate results in order to estimate the applicability range of the quasiclassical approximation ( [ 3.5a])-([3.6 ] ) .in order to examine the efficiency of calculation schemes and the algorithm given above we tested them by means of computer experiments for the resonance case determined by from eq .( [ 3.1a ] ) with .first of all we calculated exact values according to the algorithm of section 3 and their approximations according to eqs .( [ 3.5a])-([3.6 ] ) for .values of the fitting parameter were determined from energy - stationarity - conditions : ( optimizing only the upper part of spectra ) , ( quasi - linear approximation ) and from minimizing the proximity measure ( [ 3.8 ] ) : ( `` smooth '' cluster mean - field approximation ) ; herewith means that we take in the first half of spectra and in the second one . to estimate the accuracy of approximations we also used non - invariant measures ^ 2 } { \sum_{v=0}^s ( \lambda_v ( k , s))^2},\nonumber\\ & & \delta^2_{e_{up}}(k , s ) = \frac{\sum_{v = s/2}^s [ ( \lambda_v ( k , s ) - \lambda_v^{cmf}(k , s;r)]^2 } { \sum_{v = s/2}^s ( \lambda_v ( k , s))^2 } , \label{4.1}\end{aligned}\ ] ] to characterize more precisely ( in comparison with eq . ([ 3.8 ] ) ) energy spectra and standard ( related to the fubini - study metric in ) measures ;v}\equiv \sum_{f}\ , s_{fv}^j\ , q_f^v ( k , s ) , \quad \delta^2_{ef}(k , s;v)= 1-|\cos ( { \bf s},{\bf q})_{k , s;v}|^2 \label{4.2}\ ] ] ( or associated graphic representations via `` overlap areas '' ) to estimate an `` approximation quality '' for eigenfunctions . some of typical results of these numerical calculations are presented in table 1 and figs.1,2 .+ table 1 .multiplets with the level step for . 
= = = = = + 0 -1536.9 -1096.7 -1545.3 -1482.4 -1421.2 + 10 -1151.7 -919.6 -1205.2 -1175.2 -1137.0 + 20 -798.1 -720.0-880.0 -873.3 -852.7 + 30 -480.3 -499.3 -570.2 -576.7 -568.5 + 40 -205.5 -259.0 -276.7 -285.6 -284.2 + 50 0.0 0.0 0.0 0.0 0.0 + 60 205.5 276.7 276.7 280.0 284.2 + 70 480.3 570.2 570.2 554.3 568.5 + 80 798.1 880.0 880.0 822.8 852.7 + 90 1151.7 1205.2 1205.2 1085.5 1137.0 + 100 1536.9 1545.3 1545.3 1342.3 1421.2 + 10.222 -12.220 0.010 -1.000 + 2.563 0.670 0.806 0.657 + 0.670 0.670 0.944 0.657 + as is seen from data given in table 1 and fig .1 we have an acceptable consent of exact eigenenergies and their approximations ( [ 3.6 ] ) at almost everywhere for and .discrepancies between exact and approximate results in the middle parts of spectra , probably , are due to the availability of the square - root singularities in the model hamiltonian ( [ 2.3a ] ) re - written ( with the help of eqs .( [ 3.5b ] ) ) in terms of that is , actually , ignored in the `` smooth '' the - quasiclassical approximation ( [ 3.5a])-([3.6 ] ) .( note also that some negative values of are due to using instead of in eq .( [ 3.8 ] ) and because of calculation errors ) .however , the approximation with breaks the orthogonality of eigenfunctions belonging to opposite ends of spectra whereas the quasi - linear approximation with leads to equidistant spectra within fixed subspaces .therefore , in spite of the spectrum symmetry breaking , the most satisfactory quasiclassical approximation is given by eqs .( [ 3.5a])-([3.6 ] ) with that minimizes .note that the spectrum asymmetry at and related shifts between amplitude values and ( see fig .2 ) are due to using `` smooth '' - quasiclassical eigenfunctions ( [ 3.5a ] ) . besides the verifications above we also performed calculations of temporal dependences of the quantity and related dynamics of the normalized average photon numbers .herewith exact dependencies were calculated with the help of routine package above , whereas approximate calculations were implemented using approximate expressions ( [ 3.5a ] ) , ( [ 3.5b ] ) , ( [ 3.6 ] ) for eigenvalues and eigenfunctions .results of such calculations for against the dimensionless time are plotted in fig .3 where we compare exact results with the quasiclassical approximations obtained with the help of eqs .( [ 3.5a])-([3.6 ] ) with and ( [ 3.10a])-([3.10b ] ) .+ evidently , the graphic representations of fig.3 enable us to reveal transparently a double - periodic component in the exact multi - frequency dynamics of and that is rather well described by eqs .( [ 3.10a ] ) , ( [ 3.10b ] ) or ( [ 3.5a])-([3.6 ] ) at ( and at ) . note that an availability of this important dynamic feature is displayed clearer when the characteristic parameter increases ( in accordance with the general quasiclassical theory ) .so , our numerical calculations given in section 4 show a rather good qualitative consent of exact and approximate results at and at relevant choices of the fitting parameter in ( [ 3.5a])-([3.6 ] ) .however , partial quantitative discrepancies of them require further improvements of the quasiclassical approximations used . in particular , the approximate solutions of the eigenproblem ( [ 2.6 ] )can be improved by means of : 1 ) using less smooth ( in comparison with ( [ 3.5a ] ) , ( [ 3.5b ] ) ) generalized coherent states of the algebra as quasiclassical eigenfunctions ( cf . ) and 2 ) exploiting the standard or special ( e.g. 
, developed in ) algebraic perturbative and iterative algorithms or modifications of the algebraic `` dressing '' schemes . then these improvements ( along with the exact calculation schemes developed above ) can be used for a more detail analysis ( like those implemented in ) of the model under consideration in all ranges of the parameter and for arbitrary initial states .it is also of interest to compare results obtained ( and their improvements ) with those of based on an alternative form of as well as with calculations performed in using the formalism of the - deformed lie algebra .the work along these lines is in progress .armstrong , n. bloembergen , j. ducuing , and p.s .pershan , phys ., * 127 * 1918 ( 1962 ) ; n. bloembergen , _ nonlinear optics _ , w. a. benjamin , new york ( 1965 ) .eberly , n.b .narozhny , j.j .sanchez - mondragon , phys .lett . , * 44 * , 1329 ( 1980 ) .shen , the principles of nonlinear optics ( wiley , new york , 1984 ) j. perina , quantum statistics of linear and nonlinear optical phenomena .( reidel , dordrecht 1984 ) .karassiov , l.a .shelepin , trudy fian [ p.n .lebedev inst .proc ] , * 144 * , 124 ( 1984 ) ; v.p .karassiov , j. sov .laser res . ,* 12 * , 147 ( 1991 ) .v.p . karassiov and a.b .klimov , phys .lett . , * a 189 * , 43 ( 1994 ) .ou , phys . rev . , * a 49 * 2106 ( 1994 ) .li , p. kumar , phys ., * a 49 * 2157 ( 1994 ) .s.m . chumakov and kozierowski , quantum semiclass* 8 * , 775 ( 1996 ) ; a. bandilla , g. drobny and i. jex , phys . rev ., * a 53 * 507 ( 1996 ) .n. debergh , j. phys . , * a 30 * , 5239 ( 1997 ) ; * a 31 * , 4013 ( 1998 ) .karassiov , phys .lett . , * a 238 * , 19 ( 1998 ) ; j. rus .laser res . ,* 20 * , 239 ( 1999 ) .karassiov , j. rus .laser res . , * 21 * , 370 ( 2000 ) ; phys . atom .* 63 * , 648 ( 2000 ) ; optika i spektr . ,* 91 * , 543 ( 2001 ) .a.b . klimov , l.l .sanchez - soto , phys ., * a 61 * , 063802 ( 2000 ) .karassiov , teor . mat .fiz . , * 95 * , 3 ( 1993 ) ; j. phys . , * a 27 * , 153 ( 1994 ) .karassiov , rep . math . phys . , * 40*,235 ( 1997 ) ; czech .j. phys.,*48 * , 1381 ( 1998 ) .m. kozierowski , r. tanas , opt . commun . * 21 * 229 ( 1977 ) l. mandel , opt* 42 * 437 ( 1982 ) .averbukh , n.f .perelman , sov .phys. jetp * 96 * 818 ( 1989 ) ; phys .lett . , * a 139 * , 449 ( 1989 ) .nikitin , a.v .masalov , quantum opt . * 3 * 105 ( 1991 ) .olsen , et al ., * a 61 * 021803 ( 2000 ) .a.m. perelomov , generalized coherent states and their applications .( nauka , moscow , 1987 ) .karassiov , a.a .gusev , s.i .vinitsky , e - archive : quant - ph/ 0105152 ( 2001 ) ; in : proc .xxiii inter .icgtm-23 ( dubna , july 28-august 5 , 2000 ) .( jinr , dubna , in press ) .bailey , acm trans .* 19 * , 288 ( 1993 ) ; * 21 * , 379 ( 1995 ) .t. kato , perturbation theory for linear operators .( springer , berlin e.a . , 1965 ) ; p. lankaster , theory of matrices .( academic , new york - london , 1969 ) .jaffe , rev .phys . , * 54 * 407 ( 1982 ) ; a. chatterjee , phys . rep . , * 186 * , 249 ( 1990 ) .d.n . page , phys ., * a 36 * 3479 ( 1987 ) ; s. kobayashi and k. nomizu , differential geometry , vol . 2 ( interscience , new york,1969 ). a. gusev , v. samoilov , v. rostovtsev and s. vinitsky , in : computing algebra in scientific computing : proc .workshop casc 2000 , v.g .ganzha , e.w .mayr , e.v .vorozhtsov ( eds . ) ( springer , berlin e.a . , 2000 ) , p. 219 .grebenikov , yu . a. mitropolsky , yu .a. ryabov , introduction into the resonance analytic dynamics .( janus - k , moscow,1999 ) .a. 
ballesteros and s.m .chumakov , j. phys . , * a32 * 6261 ( 1999 ) . | we compare exact and -cluster approximate calculation schemes to determine dynamics of the second - harmonic generation model using its reformulation in terms of a polynomial lie algebra and related spectral representations of the model evolution operator realized in algorithmic forms . it enabled us to implement computer experiments exhibiting a satisfactory accuracy of the cluster approximations in a large range of characteristic model parameters . second - harmonic generation model , polynomial lie algebra methods 42.50 , 42.65 , 02.20 , 03.65 |
in the theory of nonlinear systems , the logistic map with , ] , and by using the trigonometric identity in order to obtain .this is equivalent to the map , which has the explicit solution hence , the complete solution in terms of the coordinate must correspond to eq.([clo ] ) . following these steps, one can imagine to construct a map having the solution by expressing in terms of with the identity in this case , one gets as a possible dynamical map on ] , and note that the intermediate functions are expressed as such in order to verify eq.([rec ] ) with the variable change .c + + + + + + table 1 .first five members of . using these definitions, we calculate the first five functions of listed in table 1 . obviously , by construction of , is the logistic equation itself with parameter . from a more general perspective, it can also be seen that the maps are degree polynomials whose leading coefficient , i.e. , the coefficient of the highest degree term , is equal to in absolute value .these two results , satisfied by any function , is proven more formally in ref. .note that the latter property allows us to extend the similarity with the logistic map by parameterizing the functions of in the following manner with .we call the set of functions the _ family of _ , which can be characterized numerically by bifurcation diagrams and lyapunov spectrums such as the ones shown in figure 1 .many of the interesting properties of the logistic map at can be investigated more intuitively by making explicit the fact that eq.([smap ] ) is equivalent to a shift map on the binary expression of .indeed , if we express as a binary number then applying eq.([smap ] ) to is equivalent to shifting all the bits of to the left and dropping the integer part . in other words , where for , and .not surprisingly , the same is true for the maps , since eq.([smap ] ) was the guideline in defining the family .however , in the case of , the shift map to consider takes effect on written in base .this follows from the following result which generalizes effectively the solution of eqs.([clo ] ) and ( [ smap ] ) ._ theorem 1 ._ let be the orbit of under .if we write , then we have that where , as usual , .we omit the proof of this theorem as it follows directly from the next lemma ._ lemma 1_. consider as defined previously .we have that _ proof : _ the result is obvious for and .suppose eq.([sin1 ] ) true for and , that is to say (\sin ^2\theta ) \\& = & \sin ^2[(n-1)\theta ] , \end{aligned}\ ] ] and ] , and contrary to the s , the functions have the interesting property that they are polynomials of degree .in fact , the set coincides with the set of tschebysheff polynomials on the unit interval , the latter set satisfying the exact same recurrence formula as eq.([rec1 ] ) .we thus have that must constitute a set of orthogonal polynomials , i.e. , for all integers and , where is the delta - kronecker function .this fact can be further proved using the property , well - known to be satisfied by the tschebysheff functions .note that is also a set of orthogonal functions ; its members satisfy indeed the relation . 
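Assuming the closed form used throughout this construction, s_n(x) = sin^2(n arcsin sqrt(x)) on [0, 1], the short check below verifies numerically that n = 2 reproduces the logistic map 4x(1-x), that each s_n coincides with the degree-n polynomial (1 - T_n(1 - 2x))/2 built from a Tschebysheff polynomial, and that the family obeys a three-term recurrence. The recurrence written in the code is our inference from the Tschebysheff recurrence, since the one in the text is not legible.

```python
# Numerical checks of the closed form, the Tschebysheff relation, and an assumed
# three-term recurrence for the family s_n.
import numpy as np
from numpy.polynomial import chebyshev as C

def s(n, x):
    return np.sin(n * np.arcsin(np.sqrt(x))) ** 2

x = np.linspace(0.0, 1.0, 1001)

# n = 2 reproduces the logistic map at full parameter: 4 x (1 - x)
assert np.allclose(s(2, x), 4 * x * (1 - x))

# s_n is a degree-n polynomial: (1 - T_n(1 - 2x)) / 2
for n in range(1, 6):
    Tn = C.chebval(1 - 2 * x, [0] * n + [1])
    assert np.allclose(s(n, x), (1 - Tn) / 2)

# assumed recurrence s_{n+1} = (2 - 4x) s_n - s_{n-1} + 2x, inferred from
# cos(2(n+1)t) = 2 cos(2t) cos(2nt) - cos(2(n-1)t)
for n in range(2, 6):
    assert np.allclose(s(n + 1, x), (2 - 4 * x) * s(n, x) - s(n - 1, x) + 2 * x)

print("closed form, Tschebysheff relation and recurrence all check out numerically")
```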
in the remaining of this work, we shall restrain our study to the set , since the maps are directly related to by the expression hence , as far as their dynamics are concerned , the functions and are totally equivalent .the analysis of the chaoticity properties of a map is greatly simplified by studying conjugate maps of which are obtained by applying a global change of variables .recall that two maps and are _ conjugate _ if there exists a homeomorphism , i.e. , a bijective and continuous map such that .the function is called a _conjugacy_. in the context of , a possible conjugate function of can be constructed as follows .let be a piecewise linear function ( a generalized tent map ) defined on subintervals ] by setting with ._ theorem 2_. is conjugate to with conjugacy ._ proof _ : first note that is both continuous and bijective on the interval ] to infer that almost everywhere in the case of .accordingly , since lyapunov exponents are invariant under smooth and differentiable coordinate transformations , we have the following theorem .( a more extensive proof of this result , which takes care of the pathological points where is not defined , is contained in ref. . ) _ theorem 3 . _the lyapunov exponent of is almost everywhere ( with respect to the invariant measure ) .the above theorem shows that the members of are non - conjugate to each other simply because they possess different lyapunov exponents .it also shows that , and consequently the set of tschebysheff polynomials , are sets of _ chaotic _ maps .indeed , for , and by using the shift property of we can choose , with irrational , to build an orbit that is not asymptotically periodic .another way to convince ourselves that all the polynomials in have chaotic orbits is to use the celebrated result `` period-3 implies chaos '' , and find an initial point of period 3 for each .for instance , for a let where again , using the shift map property , we must have we thus extended the chaoticity properties of the logistic map to an infinite family of polynomials .to complete the study of the properties of , we now deduce that it is an abelian monoid with respect to the composition of functions ( ) .a monoid , precisely , is a non - empty set together with a binary associative operation , say , such that for .there must also be an element , called the identity element , for which for all .moreover , a monoid is called abelian if the binary operation is commutative . in ,the identity element is . also ,the composition of function is clearly associative .now , to prove that is indeed abelian monoid , we verify that it is closed under composition and that this composition is commutative , a condition that is not verified in the case of composition of general functions .however , before we do so , we present next a new expression of on the unit interval ._ lemma 2 ._ for all and ] .there exists a ] , we obtain from lemma 2 \nonumber \\ & = & \sin ^2[n_1\arcsin ( \sin ( n_2\arcsin \sqrt{x } ) ) ] \nonumber \\ & = & \sin ^2(n_1n_2\arcsin \sqrt{x } ) \nonumber \\ & = & s[n_1n_2](x).\end{aligned}\ ] ] obviously , (x)=s[n_2n_1](x)$ ] , so the composition is commutative. as a direct consequence of the monoid property , -periodic points of a certain polynomial can be looked at as fixed points of the function where .furthermore , a polynomial of very high degree can be computed easily by decomposing its expression using lower degree polynomial of the family .explicitly , consider .we say that is a _ prime element _ of if is a prime number . 
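A numerical illustration of Theorems 3 and 4. The exponent value in Theorem 3 is not legible in this text; for a map conjugate to an n-branch piecewise-linear (tent-like) map the standard value is ln n, and the orbit average below is compared against that assumed value. The composition check at the end illustrates the monoid property s_a o s_b = s_{ab}.

```python
# Orbit-average estimate of the Lyapunov exponent of s_n, compared with ln(n)
# (an assumed reference value), plus a check that composition multiplies indices.
import math
import numpy as np

def s(n, x):
    return np.sin(n * np.arcsin(np.sqrt(x))) ** 2

def log_abs_deriv(n, x):
    # |s_n'(x)| = n |sin(2 n arcsin(sqrt(x)))| / (2 sqrt(x (1 - x)))
    theta = math.asin(math.sqrt(x))
    return math.log(n * abs(math.sin(2 * n * theta)) / (2 * math.sqrt(x * (1 - x))))

def lyapunov(n, x0=0.3141592653, n_iter=100000):
    x, acc, used = x0, 0.0, 0
    for _ in range(n_iter):
        if 1e-12 < x < 1 - 1e-12:          # skip the measure-zero endpoints
            acc += log_abs_deriv(n, x)
            used += 1
        x = min(max(float(s(n, x)), 0.0), 1.0)
    return acc / used

for n in (2, 3, 5):
    print(n, round(lyapunov(n), 3), "expected ~", round(math.log(n), 3))

# Theorem 4: composition multiplies the indices and commutes, s_a o s_b = s_{ab}
x = np.linspace(0.0, 1.0, 501)
assert np.allclose(s(3, s(5, x)), s(15, x))
assert np.allclose(s(3, s(5, x)), s(5, s(3, x)))
```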
using this definition, we have as a result of theorem 4 and the fundamental theorem of arithmetic that any polynomial must possess a unique decomposition in prime elements of .to conclude , note that our study of the sine functions , written in the form , have been restricted to positive integers . in a similar manner , it could be interesting to investigate functions of the type with real .one observation about this extra generalization is that , as for , the function does not exhibit chaotic properties for .the function , for example , is conjugate to , and has all of its orbits attracted to . yet ,this is not surprising since the lyapunov exponent of this map must be .this brings us to conjecture that must admit chaotic behavior if and only if , considering that the lyapunov exponents of should be . a complete proof of this result ,however , can not be given here using the same symbolic dynamic approach used for , for the simple reason that the expression of a point in `` base '' makes sense only if is an integer greater than .v.p . would like to thank a. mingarelli for helpful discussions .this work was supported in part by the national sciences and engineering research council of canada ( nserc ) through the es a scolarship program .this definition of chaos is not necessarily equivalent to the more widely held definition based on topological transitivity and sensitivity to initial conditions ( see , e.g. , refs. ) .however , the conditions of chaoticity given in the text , which are used throughout this work , are more easy to verify in practice , and nevertheless capture most of the essense of what is understood as chaos . | a family of non - conjugate chaotic maps generalizing the well - known logistic function is defined , and some of its basic properties studied . a simple formula for the lyapunov exponents of all the maps contained in this family is given based on the construction of conjugacies . moreover , it is shown that , despite the dissimilarity of their polynomial expressions , all the maps possess the same invariant density . other algebraic properties of the family , which shows some relationship with the set of tschebysheff polynomials , are also investigated . ' '' '' * simple iterated maps , such as the baker map and the logistic map , are the subject of constant fascination . part of the interest for these systems is linked to the fact that they provide an easy and pedagogical way to understand how complex and chaotic behavior can arise from simple dynamical models . even more remarkable , yet , is the fact that studies of low - dimensional maps have proven to be fruitful in understanding the basic mechanisms responsible for the appearance of chaos in a large class of dynamical systems ( e.g. , differential flows , high - dimensional maps ) . one paradigm example of such mechanisms is the so - called period - doubling cascades of fixed points , encountered qualitatively in many physical systems of interest . in this paper , we enlarge the set of maps known to be chaotic by presenting a generalization of the logistic map . the generalization , more precisely , enables us to construct an infinite number of one - dimensional maps which are chaotic in the sense that they all have positive lyapunov exponents , and possess at least one orbit that is not asymptotically periodic . * ' '' '' |
* ( fop ) * optimal and sub - optimal algorithms for the special case of solving * ospmp * for simo mac / miso bc have been proposed in .the optimal algorithm is much more complex than algorithm o as it involves several inner and outer iterations .the difference between algorithm o and the sub - optimal algorithm are as follows .1 ) the sub - optimal algorithm works for simo mac , avoiding the calculation of beamforming matrices , while algorithm o works for mimo cases ; 2 ) to find the encoding / decoding order , after obtain the optimal solution of * spmp * with fixed in step 1 , the sub - optimal algorithm in needs to solve an equation to obtain a weight vector {,l=1, ... ,l} ] , where is the total required rate , with still achieves near - optimal performance , while with the meb order performs worse than that .not showing is thatthe target rates are set as /16 $ ] .the encoding / decoding order is partially fixed and is the same as that in example [ exa : pseudobm ] of section [ sub : orderoptimization ] .for the pseudo mac formed by link 2 and link 3 and the pseudo bc formed by link 4 and link 5 , the fixed order that is decoded after and is decoded after , and its improved order obtained by algorithm o are applied .improved order obtained by algorithm o we illustrate the convergence behavior of the distributed optimization with local csi .[ fig : fig6 ] plots the total transmit power and the minimum rate of the users achieved by algorithm prd versus the number of training rounds for a 3-user interference channel with .whenwhen the general mimo one - hop interference networks named b - mac networks with gaussian input and any valid coupling matrices are considered .we design algorithms for maximizing the minimum of weighted rates under sum power constraints and for minimizing sum power under rate constraints .they can be used in in two kinds of algorithms are designed .the first kind takes advantage of existing sinr optimization algorithms by finding simple and optimal mappings between the achievable rate region and the sinr region .the mappings can be used for many other optimization problems .the second kind takes advantage of the .both centralized and distributed algorithms are designed .a. liu , y. liu , h. xiang , and w. luo , `` duality , polite water - filling , and optimization for mimo b - mac interference networks and itree networks , '' _ submitted to ieee trans .info . theory _ ,apr . 2010 .[ online ] .available : http://arxiv4.library.cornell.edu/abs/1004.2484 m. maddah - ali , a. motahari , and a. khandani , `` communication over mimo x channels : interference alignment , decomposition , and performance analysis , '' _ ieee transactions on information theory _, vol . 54 , no . 8 , pp .34573470 , aug .2008 .chang , l. tassiulas , and f. rashid - farrokhi , `` joint transmitter receiver diversity for efficient space division multiaccess , '' _ ieee transactions on wireless communications _, vol . 1 , no . 1 ,1627 , jan 2002 .m. schubert and h. boche , `` solution of the multiuser downlink beamforming problem with individual sinr constraints , '' _ ieee transactions on vehicular technology _ , vol .53 , no . 1 ,1828 , jan . 2004 .f. fung , w. yu , and t. j. lim , `` precoding for the multiantenna downlink : multiuser snr gap and optimal user ordering , '' _ ieee transactions on communications _ , vol .55 , no . 1 ,188 197 , jan .2007 .n. jindal , w. rhee , s. vishwanath , s. jafar , and a. 
goldsmith , `` sum power iterative water - filling for multi - antenna gaussian broadcast channels , '' _ ieee trans .inform . theory _51 , no . 4 , pp .15701580 , april 2005 .m. varanasi and t. guess , `` optimum decision feedback multiuser equalization with successive decoding achieves the total capacity of the gaussian multiple - access channel , '' in _ proc .thirty - first asilomar conference on signals , systems and computers _ , vol . 2 , 1997 ,. 14051409 .a. liu , a. sabharwal , y. liu , h. xiang , and w. luo , `` distributed mimo network optimization based on local message passing and duality , '' _ in proc .47th annu .allerton conf .monticello , illinois , usa _ , 2009 . | general mimo interference networks , named b - mac networks , which is a combination of multiple interfering broadcast channels ( bc ) and multiaccess channels ( mac ) . two related optimization problems , maximizing the minimum of weighted rates under a sum - power constraint and minimizing the sum - power under rate constraints , are considered . the first approach takes advantage of existing efficient algorithms for sinr problems by building a bridge between rate and sinr through the design of optimal mappings between them so that the problems can be converted to sinr constraint problems . the approach can be applied to other optimization problems as well . the second approach employs polite water - filling , which is the optimal network version of water - filling that we recently found . it replaces almost all generic optimization algorithms currently used for networks and reduces the complexity while demonstrating superior performance even in non - convex cases . both centralized and distributed algorithms are designed and the performance is analyzed in addition to numeric examples . , mimo , interference network , [ [ section ] ] he optimization under rate constraints for general multiple - input multiple - output ( mimo ) interference networks , where each transmitter may send data to multiple receivers and each receiver may collect data from multiple transmitters . consequently , the network is a combination of multiple interfering broadcast channels ( bc ) and multiaccess channels ( mac). as special cases we assume gaussian input and that each interference is either completely cancelled or treated as noise . a wide range of interference cancellation is allowed , from no cancellation to any cancellation specified by a valid binary _ coupling matrix _ of the data links . for example , simple linear receivers , dirty paper coding ( dpc ) at transmitters , and/or successive interference cancellation ( sic ) at receivers may be employed . two optimization problems are considered for . * fop * which maximizes the minimum of scaled rates of all links , where the scale factors are the inverse of the target rates . a****solving the ( * spmp * ) under the rate constraints to the design of the in some cases such as cooperative cellular networks , it is possible to obtain global csi if the base stations are allowed to exchange csi , making the relevant . in ad hoc or large networks , we have to design distributed optimization algorithms with local csi . * * * * means that if a set of sinrs is achievable in the forward links , then the same sinrs can be achieved in the reverse links when the set of transmit and receive beamforming vectors are fixed . thus , optimizing the transmit vectors of the forward links is equivalent to the much simpler problem of optimizing the receive vectors in the reverse links . 
* * considering interference cancellation and encoding / decoding order , the * fop * and * spmp * for mimo bc / mac have been completely solved in by converting them to convex weighted sum - rate maximization problems for mac . steepest ascent algorithm for the weighted sum - rate maximization a high complexity algorithm that can find the optimal encoding / decoding order for miso bc / simo mac is proposed in that needs several inner and outer iterations . a heuristic low - complexity algorithm in finds the near - optimal encoding / decoding order for * spmp * by observing that the optimal solution of * spmp * must be the optimal solution of some weighted sum - rate maximization problem , in which the weight vector can be found and used to determine the decoding order . in summary , the * fop * and * spmp * for mimo b - mac networks have been open problems . the contribution of the paper is as follows . * _ rate - sinr _ _ conversion _ : one of the difficulties of solving the problems is the joint optimization of beamforming matrices of all links . one approach is to decompose a link to multiple single - input single - output ( siso ) streams and optimize the beamforming vectors through sinr duality , if a bridge between rate and sinr can be built to determine the optimal number of streams and rate / power allocation among the streams . in section [ sec : rate - sinr ] , we show that any pareto rate point of an achievable rate region can be mapped to a pareto sinr point of the achievable sinr region through two optimal and simple mappings that produce equal rate and equal power streams respectively . the significance of this result is that it offers a method to convert the * _ sinr based algorithms _ : using the above result , we take advantage of existing algorithms for sinr problems to solve * * * * * another approach is to directly solve for the beamforming matrices . for the convex problem of mimo mac , steepest ascent algorithm is used except for the special case of sum - rate optimal points , where iterative water - filling can be employed . the b - mac network problems are non - convex in general and thus , better algorithms , like water - filling , than the steepest ascent algorithm is highly desirable . however , directly applying traditional water - filling is far from optimal . in , we recently found the long sought optimal network version of water - filling , polite water - filling , which is the optimal input structure of any pareto rate point , not only the sum - rate optimal point , of the achievable region of a mimo b - mac network . this network version of water - filling is polite because it optimally balances between reducing interference to others and maximizing a link s own rate . the superiority of the polite water - filling is demonstrated for weighted sum - rate maximization in and the superiority is because it is hard not to obtain good results when the optimal input structure is imposed to the solution at each iteration . in section [ sub : itree ] , using polite water - filling , we design an algorithm to monotonically improve the output of the sinr based algorithms for itree networks defined later , if the output does not satisfy the kkt condition . furthermore , in section [ sub : pr - pr1 ] , purely polite water - filling based algorithms are designed that have faster convergence speed . * _ distributed algorithm _ : in a network , it is highly desirable to use distributed algorithms . 
the polite water - filling based algorithm is well suited for distributed implementation , which is shown in section [ sub : distributed - implementation ] , where each node only needs to estimate / exchange the local csi but the performance of each iteration is the same as that of the centralized algorithm . * _ _ optimization of encoding and decoding order__s : another difficulty is to find the optimal encoding / decoding order when interference cancellation techniques like dpc / sic are employed . again , polite water - filling proves useful in section [ sub : orderoptimization ] because the water - filling levels of the links can be used to identify the optimal encoding / decoding order for bc / mac and pseudo - bc / mac defined later . the rest of the paper is organized as follows . section [ sec : system model ] defines the achievable rate region and formulates the problems . section [ sec : preliminary ] summarizes the preliminaries on sinr duality and polite water - filling . section [ sec : algorithms ] presents the efficient centralized and distributed algorithms . the performance of the algorithms is verified by simulation in section [ sec : simulation - results ] . the conclusion is given in section [ sec : conclusion ] . [ [ section-1 ] ] let and denote the virtual transmitter and receiver of link equipped with transmit antennas and receive antennas respectively . the received signal at is where is the transmit signal of link and is assumed to be circularly symmetric complex gaussian ; is the channel matrix between and ; and is a circularly symmetric complex gaussian noise vector with zero mean and identity covariance matrix . to handle a wide range of interference cancellation possibilities , we define a coupling matrix as a function of the interference cancellation scheme . it specifies whether interference is completely cancelled or treated as noise : if , after interference cancellation , still causes interference to , and otherwise , . for example , if the virtual transmitters ( receivers ) of several links are associated with the same physical transmitter ( receiver ) , interference cancellation techniques such as dirty paper coding ( successive decoding and cancellation ) can be applied at this physical transmitter ( receiver ) to improve the performance . the coupling matrices valid for the results of this paper are those for which there exists a transmission and receiving scheme such that each signal is decoded and possibly cancelled by no more than one receiver . possible extension to the han - kobayashi scheme , where a common message is decoded by more than one receiver , is discussed in . we give some examples of valid coupling matrices . for a bc ( mac ) employing dpc ( sic ) where the link is the one to be encoded ( decoded ) , the coupling matrix is given by and . in fig . [ fig : sysfig1 ] , we give an example of a b - mac network employing dpc and sic . when no data is transmitted over link 4 and 5 , the following are valid coupling matrices for link under the corresponding encoding and decoding orders : _ is encoded after and is decoded after ; _ b_. is encoded after and is decoded after ; _ c_. is encoded after and is decoded after ; _ d_. there is no interference cancellation. 
, & \:\mathbf{\phi}^{b}=\left[\begin{array}{ccc } 0 & 1 & 1\\ 0 & 0 & 0\\ 1 & 1 & 0\end{array}\right],\\ \mathbf{\phi}^{c}=\left[\begin{array}{ccc } 0 & 0 & 1\\ 1 & 0 & 1\\ 1 & 0 & 0\end{array}\right ] , & \:\mathbf{\phi}^{d}=\left[\begin{array}{ccc } 0 & 1 & 1\\ 1 & 0 & 1\\ 1 & 1 & 0\end{array}\right].\end{aligned}\ ] ] note that when dpc and sic are combined , an interference may not be fully cancelled under a specific encoding and decoding order . such case can not be described by the coupling matrix of 0 s and 1 s defined above . but a valid coupling matrix can serve for an upper or lower bound . see more discussion in . if not explicitly stated otherwise , achievable regions in this paper refer to the following . note that by definition . the interference - plus - noise covariance matrix of the link is where is the covariance matrix of . we denote all the covariance matrices as then the achievable mutual information ( rate ) of link is given by a function of and the _ achievable rate region _ with _ _ a fixed coupling matrix _ _ and sum power constraint is defined as a bigger achievable rate region can be defined by the convex closure of , where is a set of valid coupling matrices . for example , if dpc and/or sic are employed , can be a set of valid coupling matrices corresponding to various valid encoding and/or decoding orders . the algorithms rely on the duality between the forward and reverse links of a b - mac network . the reverse links are obtained by reversing the transmission direction and replacing the channel matrices by their conjugate transposes . . we use the notation to denote the corresponding terms in the reverse links . for example , in the reverse links of the b - mac network in fig . [ fig : sysfig1 ] , ( ) becomes the receiver ( transmitter ) , and is decoded after and is encoded after , if in the forward links , is encoded after and is decoded after . the interference - plus - noise covariance matrix of reverse link is and the rate of reverse link is given by * ( fop ) * coupling matrix * * * * for the special case of dpc and sic , the optimal _ _ coupling matrix _ _ , or equivalently , the optimal encoding and/or decoding order of * * is partially solved in section * * * * although we focus on the sum power and white noise in this paper for simplicity , the results can be directly applied to a much larger class of problems with a single linear constraint in * fop * ( or objective function in * spmp * ) and/or colored noise with covariance =\mathbf{w}_{l} ] . the covariance transformation for this case is also calculated from the mmse receive beams and power allocation that makes sinrs of the forward and reverse links equalis that the identity noise covariance in is replaced by and the all - one vector in ( [ eq : qpower ] ) is replaced by the vector {m=1, ... ,m_{l},l=1, ... ,l} ] ; the input covariance matrices must satisfy the linear constraint ; and the covariance matrix of the noise at the receiver of link is . 
[ thm : linear - color - dual]the dual of the network ( [ eq : net - color - linear - constraint ] ) is,\sum_{l=1}^{l}\textrm{tr}\left(\hat{\mathbf{\sigma}}_{l}\mathbf{w}_{l}\right)\leq p_{t},\left[\hat{\mathbf{w}}_{l}\right]\right)\label{eq : net - forward - color - dual}\ ] ] in the sense that 1 ) they have the same achievable rate region ; 2 ) if achieves certain rates and satisfies the linear constraint in network ( [ eq : net - color - linear - constraint ] ) , its covariance transformation achieves better rates in network ( [ eq : net - forward - color - dual ] ) under the linear constraint . _ _ its covariance transformation as . the input covariance matrix is said to satisfies the structure of water - filling over , i.e.,____ s [ thm : wfst]the input covariance matrices of a pareto rate point of the achievable region and its covariance transformation the polite water - filling structure . the following theorem proved in states that having the polite water - filling structure suffices for to have the polite water - filling structure even at a non - pareto rate point . [ thm : fequrgwf]if one input covariance matrix has the polite water - filling structure while other are fixed , so does its covariance transformation , i.e. , satisfies the structure of water - filling over the reverse equivalent channel . further more , can be expressed as where is the polite water - filling level in ( [ eq : wffar ] ) . |
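to make the role of the coupling matrix in the rate expressions of the system model concrete, the following sketch evaluates the interference-plus-noise covariances and the per-link mutual informations of a small b-mac network for one fixed binary coupling matrix. the number of links, the antenna counts, the randomly drawn channel matrices, the white transmit covariances and the particular coupling matrix are all illustrative assumptions of ours; the only structure taken from the text is the generic rate form r_l = log det( i + h_{l,l} sigma_l h_{l,l}^dagger omega_l^{-1} ) with omega_l = i + sum_{k != l} phi_{l,k} h_{l,k} sigma_k h_{l,k}^dagger, which is one natural reading of the rate and interference-plus-noise definitions given there.

import numpy as np

rng = np.random.default_rng(0)

L = 3                       # number of links (illustrative)
nt, nr = 2, 2               # transmit / receive antennas per link (illustrative)

# h[l, k]: channel from the transmitter of link k to the receiver of link l
h = rng.standard_normal((L, L, nr, nt)) + 1j * rng.standard_normal((L, L, nr, nt))

# white transmit covariances with equal power per link (illustrative choice)
sigma = [np.eye(nt) / nt for _ in range(L)]

# assumed valid coupling matrix: phi[l, k] = 1 if the transmitter of link k still
# interferes with the receiver of link l after interference cancellation
phi = np.array([[0, 1, 1],
                [0, 0, 1],
                [0, 0, 0]])

def link_rates(h, sigma, phi):
    rates = []
    for l in range(L):
        omega = np.eye(nr, dtype=complex)   # interference-plus-noise covariance of link l
        for k in range(L):
            if k != l and phi[l, k]:
                omega += h[l, k] @ sigma[k] @ h[l, k].conj().T
        signal = h[l, l] @ sigma[l] @ h[l, l].conj().T
        rates.append(np.log2(np.linalg.det(np.eye(nr) + signal @ np.linalg.inv(omega))).real)
    return rates

for l, r in enumerate(link_rates(h, sigma, phi), start=1):
    print(f"link {l}: {r:.3f} bits per channel use")

evaluating the same function for each of the candidate coupling matrices corresponding to different encoding/decoding orders gives a direct numerical comparison between those orders.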
the hamiltonian formulation of classical mechanics assigns simultaneous , arbitrarily accurate , values for the canonically conjugate position and momentum to distinguishable particles .indeed in classical mechanics these simultaneous values are regarded as properties of the particles themselves ; measurement simply reveals these values and need not , in principle , add uncertainty to their determination . in quantum mechanics the conjugate position and momentum are represented by noncommuting operators , and , with =i\hbar ] is where and are the traces over the system and detector states , respectively , is the probability density , and the _ resolution operator _p_1p_2|d_1d_2\rangle \\ & = & \hat{d}(sx_1,\textstyle\frac{1}{s}x_2)\hat{\upsilon}_{\!\sigma}(0,0)\hat{d}(sx_1,\textstyle\frac{1}{s}x_2)^\dag.\end{aligned}\ ] ] the displacement operator \ ] ] and we now define the annihilation operator which satisfies =1\ ] ] and rewrite the displacement operator as where .it can be easily shown that the coherent states are now defined by applying the displacement operator onto the vacuum state they satisfy the following relations |\alpha+\beta\rangle \label{a4}\\ \langle\alpha|\beta\rangle & & = \exp\left(-\textstyle\frac{1}{2}|\alpha|^2-\textstyle\frac{1}{2}|\beta|^2+\alpha^*\beta\right ) \label{a5}\\ \pi^{-1}\int d^2\alpha&&|\alpha\rangle\langle\alpha|=1 . \label{a6}\end{aligned}\ ] ] now defining we can rewrite ( [ up00 ] ) as using the above relations and integrating we obtain where denotes normal ordering .hence , if we now define then the resolution operator becomes : \\ & \equiv & ( 2\hbar)^{-\frac{1}{2}}\hat{\upsilon}_{\!\sigma}(\chi ) .\label{res}\end{aligned}\ ] ] the probability of finding the detector positions in the small area \times[\chi_2,\chi_2+d\chi_2]$ ] is now where and :\end{aligned}\ ] ] is an _ effect density _ .it can be easily shown that hence , defining the notion of a mean for this measurement process we find that or where is the quantum expectation . thus the readout variables and give , respectively , the position and momentum of the system with additional noise dependent on .we obtain maximal information from the system when the variances are at a minimum .that is , when . in this case : \\ & = & \pi^{-1}|\chi\rangle\langle\chi|\end{aligned}\ ] ] and the probability density reduces to the husimi density or function suppose we take a measurement and obtain the outcome . as a consequence of the strength of this measurementthe system state collapses to however when the resolution operator has the asymptotic expansion +o(\textstyle\frac{1}{\sigma^5 } ) .\label{asympx}\end{aligned}\ ] ] using this result one can show that the system state conditioned on the measurement outcome has the expansion -(\hat{a}^\dag-\chi'^*)(\hat{a}-\chi'),\hat{\rho}\bigr\}+o(\textstyle\frac{1}{\sigma^4})\ ] ] and thus , if is large enough , the process of measurement will have negligible effect on the system .consider a sequence of phase space measurements governed by the hamiltonian where after each measurement the detectors are reset into the initial states given by equation ( [ dxi ] ) or ( [ dpi ] ) .the assumption that the detectors are reset is equivalent to making a markov assumption for a single apparatus coupled to the system . by resetting the detector at each time step we ensure that no coherent memory of the system state survives in the states of the apparatus . 
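before the continuum limit is taken in the derivation that follows, the repeated-measurement protocol just described can be illustrated with a short simulation. the sketch below treats the strong-measurement case in which each readout chi is drawn from the husimi density q(chi) = <chi|psi><psi|chi>/pi and the state then collapses to the coherent state |chi>, with free harmonic evolution between measurements; the oscillator hamiltonian, the fock-space truncation, the sampling grid and the time step are illustrative assumptions of ours rather than parameters appearing in the text.

import numpy as np

rng = np.random.default_rng(1)

d = 40                                        # fock-space truncation (assumption)
n = np.arange(d)
fact = np.cumprod(np.concatenate(([1.0], n[1:].astype(float))))   # n!

# grid of candidate readouts chi used to discretise the husimi density (assumption)
x = np.linspace(-6.0, 6.0, 81)
grid = (x[:, None] + 1j * x[None, :]).ravel()
darea = (x[1] - x[0]) ** 2

# coherent states |chi> on the grid; rows indexed by chi, columns by fock number
c = np.exp(-0.5 * np.abs(grid)[:, None] ** 2) * grid[:, None] ** n / np.sqrt(fact)
c /= np.linalg.norm(c, axis=1, keepdims=True)          # renormalise truncation loss

u = np.exp(-1j * n * 0.2)     # free evolution exp(-i omega a^dag a dt) with omega = 1, dt = 0.2

def coherent(alpha):
    v = np.exp(-0.5 * abs(alpha) ** 2) * alpha ** n / np.sqrt(fact)
    return v / np.linalg.norm(v)

psi = coherent(2.0 + 0.0j)                    # initial coherent state (assumption)
for step in range(20):
    psi = u * psi                             # unitary evolution between measurements
    q = np.abs(c.conj() @ psi) ** 2 / np.pi   # discretised husimi density q(chi)
    prob = q * darea
    prob /= prob.sum()
    idx = rng.choice(len(grid), p=prob)       # readout sampled from the husimi density
    chi = grid[idx]
    psi = coherent(chi)                       # strong-measurement collapse to |chi>
    print(f"step {step + 1:2d}: chi = {chi.real:+.3f} {chi.imag:+.3f}i")

the printed readouts wander around the rotating phase-space position of the initial coherent state; replacing the diagonal hamiltonian by a driven, nonlinear one, while keeping the detector reset at every step, turns this into a crude discrete-time version of the continuously monitored dynamics derived below.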
following we will first derive the master equation for unconditional ( or nonselective ) evolution of the system density operator in the continuous limit , . by unconditional evolutionwe mean that no account is taken of the measured results .thus after each measurement occurs we ignore the result and average over all possible measurement outcomes . if we denote the system density operator immediately before the -th measurement by then where for any operator it is possible to show that \right]+o(\textstyle\frac{1}{\sigma^4}).\ ] ] hence we obtain -\textstyle\frac{1}{{\delta t}\sigma^2}\left[\hat{a},\left[\hat{a}^\dag,\hat{\rho}(n{\delta t})\right]\right]+o({\deltat})+o(\textstyle\frac{1}{\sigma^2})+o(\textstyle\frac{1}{{\delta t}\sigma^4}).\ ] ] by setting and taking the continuous limit , , with held constant , we obtain the master equation for unconditional evolution : -\gamma\left[\hat{a},\left[\hat{a}^\dag,\hat{\rho}\right]\right ] \label{uncon}\\ & = & -\textstyle\frac{i}{\hbar}\left[\hat{h},\hat{\rho}\right]-\textstyle\frac{1}{2\hbar}\gamma_1\big[\hat{x},\big[\hat{x},\hat{\rho}\big]\big]-\textstyle\frac{1}{2\hbar}\gamma_2\big[\hat{p},\big[\hat{p},\hat{\rho}\big]\big ] \label{xpuncondequ}\end{aligned}\ ] ] where and . this equation has already been derived by barchielli et al . , but their approach is somewhat different . by setting in ( [ xpuncondequ ] )we obtain the unconditional master equation for continuous position measurements previously derived in .we now wish to derive the conditional ( or selective ) master equation for the system density operator . in this casethe evolution of the system is conditioned on a history of measurement readouts where is the detector position for the -th measurement .hence , if is the system density operator immediately before the -th measurement , then ^{-1}\hat{\upsilon}_{\!\sigma}(\chi(n{\delta t}))\hat{\rho}(n{\delta t}){\hat{\upsilon}_{\!\sigma}(\chi(n{\delta t}))}^\dag{\hat{u}}^\dag.\ ] ] to proceed we extend the definition of the readout variable by setting and introduce the new variable hence and using ( [ m1 ] ) and ( [ m2 ] ) we obtain where the subscript has been added to emphasize that the mean and variance are conditioned through on the entire history of measurement readouts . now letting we find that where we have set to be constant in anticipation of the continuous limit . hence for small enough we can approximate increments in the variable by where it is understood that the complex it increment is of order and satisfies , . in the continuous limit with constantwe have where is a complex wiener process and the it differential satisfies the algebra to simplify the following we will always replace by and set in anticipation of the above algebra in the continuous limit . using ( [ xincrement1 ] ) together with ( [ xincrement2 ] ) we obtain the following asymptotic expansion for the resolution operator ( [ res ] ) where and it is understood that , and . 
hence we find that \right]\right)+o({\delta t}^3)\ ] ] and thus , using ( [ asympu ] ) we obtain +\gamma^\frac{1}{2}\left\{\hat{{\cal a}}^\dag\delta \xi + \hat{{\cal a}}\delta\xi^*,\hat{\rho}\right\}-\gamma{\delta t}\left[\hat{{\cal a}},\left[\hat{{\cal a}}^\dag,\hat{\rho}\right]\right]+o({\delta t}^\frac{3}{2 } ) \\ & = & \hat{\rho}-\textstyle\frac{i{\delta t}}{\hbar}\left[\hat{h},\hat{\rho}\right]-\gamma{\delta t}\left[\hat{a},\left[\hat{a}^\dag,\hat{\rho}\right]\right]+\gamma^\frac{1}{2}{\cal h}[\hat{a}^\dag]\hat{\rho}\delta \xi+\gamma^\frac{1}{2}{\cal h}[\hat{a}]\hat{\rho}\delta\xi^*+o({\delta t}^\frac{3}{2})\end{aligned}\ ] ] where we have defined the superoperator \hat{\rho}\equiv\left\{\hat{a}-{\,\text{tr}}(\hat{a}\hat{\rho}),\hat{\rho}\right\}.\ ] ] in the limit we obtain the master equation for conditional evolution : -\gamma\left[\hat { a},\left[\hat{a}^\dag,\hat{\rho}(t)\right]\right]dt+\gamma^\frac{1}{2}{\cal h}[\hat{a}^\dag]\hat{\rho}(t)d\xi(t)+\gamma^\frac{1}{2}{\cal h}[\hat{a}]\hat{\rho}(t)d\xi(t)^*.\end{aligned}\ ] ] it is easy to see that upon averaging this stochastic differential equation we reproduce our original master equation for unconditional evolution ( [ uncon ] ) .however , note that unlike the unconditional equation , this equation preserves the pure state property of .one can easily prove this by showing under the assumption that , where . as a consequence of this fact , the above master equation has an analogue for pure state evolution in terms of a stochastic schrdinger equation : where . in terms of position and momentum variablesthe master equation for conditional evolution reads as -\textstyle\frac{1}{2\hbar}\gamma_1\big[\hat{x},\big[\hat{x},\hat{\rho}\big]\big]dt-\textstyle\frac{1}{2\hbar}\gamma_2\big[\hat{p},\big[\hat{p},\hat{\rho}\big]\big]dt+\hbar^{-\frac{1}{2}}{\gamma_1}^{\frac{1}{2}}{\cal h}[\hat{x}]\hat{\rho}dw_1+\hbar^{-\frac{1}{2}}{\gamma_2}^{\frac{1}{2}}{\cal h}[\hat{p}]\hat{\rho}dw_2 \label{xpcondequ}\ ] ] where and the readout variables and obey the following stochastic processes with by setting in ( [ xpcondequ ] ) we obtain the conditional master equation for continuous position measurements previously derived in .note that and are charged by stationary white noise making their graph highly irregular .it is thus better to represent the measured trajectory by and .we will now investigate the effect of measurement on the system state by setting and defining where and are the expected variances in position and momentum , and is the expected covariance between position and momentum .the average is taken over all possible measurement histories .one can then derive the following set of coupled differential equations : the solutions of which are where , and .hence the process of measurement induces the system state to collapse into a coherent state . if the measurement retrieves no information on momentum , i.e. 
, then and the growth in the momentum variance is unbounded .similarly , if the measurement retrieves no information on position then the position variance grows unbounded .however , when both position and momentum are measured simultaneously the system state is forced into a coherent state .when we expect that for a suitable choice of , any spreading of the quantum wavepacket caused by nonlinearities in the hamiltonian will be counteracted by the measurement induced localization .one might naively assume that if the measurement only retrieves information on position ( or momentum ) then the state will not localize .however this is not always the case .note that when in equation ( [ vp ] ) the momentum variance will initially decrease .thus if the system dynamics is such that it increases the covariance between position and momentum , then the continuous measurement of position may also localize momentum .for example , consider the hamiltonian describing free particle motion , .when the variances and covariance satisfy although we could not solve these equations analytically , it is easy to see that all physical solutions asymptotically attract to the stable fixed point hence , measurement of position does not introduce a diffusion in momentum , and the state localizes ( this result has been derived previously in ) .however , for free particle motion , if we only measure momentum the state does not localize .the system dynamics accelerates the growth in the position variance .see for more on localization .we will now numerically investigate the solution of the stochastic schrdinger equation ( [ stochschro ] ) for the driven system the numerical method to solve this equation is simple . to take advantage ofthe measurement induced localization we use a local moving number basis truncated at some finite value .the stochastic terms are integrated using the first - order euler method while other terms are integrated by diagonalizing the position and momentum operators and using the split - operator formula .we will first consider an integrable case when , , and .the initial state was chosen to be a coherent state ( ) centered at when .the husimi density of the initial state together with the contours of the hamiltonian are plotted in fig .the husimi density of the evolved state ( ) together with the trajectories for different measurement schemes is plotted in figures 1(b)-(d ) .the evolved state in fig .1(b ) is the result when no measurements occur ( ) . in this case , nonlinearities in the hamiltonian cause the state to shear as it evolves , spreading it along the contours .the trajectory has little meaning when . in fig .1(c ) the evolved state is the result of continuous simultaneous measurement of position and momentum with and . 
in this casethe state has remained localized as it follows the contours .the continuous measurement of position only ( ) when has also kept the state localized .this is shown in fig .the combined variance of position and momentum is plotted in fig .we must emphasize that only when both position and momentum are measured together does the trajectory correspond via ( [ xa],[xb ] ) to the outcome of an actual measurement .if only position is measured , only observed while is simply the result of a mathematical calculation .now consider the chaotic case when , , , , and .a poincar stroboscopic map with unit strobing frequency is plotted in fig .for an initial state the same as above , the evolved state ( ) together with the trajectories for different measurement schemes is plotted in figures 2(b)-(d ) . when no measurement occurs ( fig .2(b ) ) the chaotic action of stretching and folding spreads the state across the phase space .however when we continuously measure position and momentum ( fig .2(c ) ) the state remains localized and the trajectory resembles classical motion with noise .this noise will vanish as we approach the classical limit . in fig .3(a ) we have plotted the quantum trajectory for the same parameter values as above except with . in this casethe total variance remains below ( fig .3(b ) ) and the noise is not visible .the corresponding classical trajectory is plotted in grey and is only visible when it deviates from the quantum at .the evolution of this system under the continuous measurement of position has already been studied by bhattacharya et al .they also find that the measurement keeps the system state localized .it is not surprising that this is also the case when only momentum is measured ( fig .we have derived an it stochastic schrdinger equation ( [ stochschro ] ) describing the evolution of a quantum system under the continuous simultaneous measurement of position and momentum .the outcome of this measurement is a classical stochastic record obeying ( [ calx ] ) . as a consequence of the measurement ,the system state is forced to remain localized allowing a classical interpretation of the quantum mean of the phase space variables as the trajectory of the system .this trajectory corresponds to the actual measured trajectory minus noise ( [ xa],[xb ] ) .furthermore , the localization property allows a well - defined classical limit via ehrenfest s theorem .indeed , for small , numerical results show that the quantum system approximately follows classical trajectories .however a more complete theoretical understanding of the classical limit under continuous measurement is needed . 9 p. busch , int . j. theor* 24 * , 63 ( 1985 ) .t. bhattacharya , s. habib and k. jacobs , phys .( to be published ) .n. gisin and i.c .percival , j. phys .a : math . gen . * 25 * , 5677 ( 1992 ) .n. gisin and i.c .percival , j. phys . a : math* 26 * , 2233 ( 1993 ) .n. gisin and i.c .percival , j. phys .a : math . gen . * 26 * , 2245 ( 1993 ) .spiller and j.f .ralph , phys .a * 194 * , 235 ( 1994 ) .r. schack , t.a .brun and i.c .percival , j. phys .a : math . gen .* 28 * , 5401 ( 1995 ) .brun , i.c .percival and r. schack , j. phys .a : math . gen . * 29 * , 2077 ( 1996 ) .e. arthurs and j.l .kelly , jr . , bell .j. * 44 * , 725 ( 1965 ) .conway , _ a course in functional analysis _( springer - verlag , berlin , 1985 ) .k. kraus , _ states , effects , and operations : fundamental notions of quantum theory _( springer - verlag , berlin , 1983 ) . 
m.a .nielsen and i.l .chuang , _ quantum computation and quantum information _( cambridge , 2000 ) .a. barchielli , l. lanz and g.m .prosperi , nuovo cimento b * 72 * , 79 ( 1982 ) .a. barchielli , nuovo cimento b * 74 * , 113 ( 1983 ) .a. barchielli , l. lanz and g.m .prosperi , found .* 13 * , 779 ( 1983 ) .l. disi , phys .a * 129 * , 419 ( 1988 ) .belavkin and p. staszewski , phys .a * 140 * , 359 ( 1989 ) .caves and g.j .milburn , phys .rev . a * 36 * , 5543 ( 1987 ) . g.j .milburn , quantum semiclass . opt . * 8 * , 269 ( 1996 ) .braunstein , c.m .caves and g.j .milburn , phys .a * 43 * , 1153 ( 1991 ) .a. perelomov , _ generalized coherent states and their applications _( springer - verlag , berlin , 1986 ) .gardiner , _ handbook of stochastic methods _ ( springer - verlag , berlin , 1983 ) .l. disi , phys .a * 132 * , 233 ( 1988 ) .percival , j. phys .a : math . gen .* 27 * , 1003 ( 1994 ) .strunz , i.c .percival , j. phys .a : math . gen . * 31 * , 1801 ( 1998 ) .percival , w.t .strunz , j. phys .a : math . gen . * 31 * , 1815 ( 1998 ) .doherty , s.m .tan , a.s .parkins and d.f .walls , phys .a * 60 * , 2380 ( 1999 ) .* fig . 1 .( a ) * contours of and the initial state ( ) . *( b - d ) * the trajectory and final state at when ( b ) , ( c ) and , and , ( d ) and . *( e ) * the combined variance of position and momentum .all quantities are dimensionless . + * fig .2 . ( a ) * poincar map for and the initial state ( ) . *( b - d ) * the trajectory and final state at when ( b ) , ( c ) and , and , ( d ) and . *( e ) * the combined variance of position and momentum . + * fig .* the quantum ( black ) and classical ( grey ) trajectories for the same hamiltonian as in fig .2 except with . *( b)*the combined variance of position and momentum . | classical dynamics is formulated as a hamiltonian flow on phase space , while quantum mechanics is formulated as a unitary dynamics in hilbert space . these different formulations have made it difficult to directly compare quantum and classical nonlinear dynamics . previous solutions have focussed on computing quantities associated with a statistical ensemble such as variance or entropy . however a more direct comparison would compare classical predictions to the quantum for continuous simultaneous measurement of position and momentum of a single system . in this paper we give a theory of such measurement and show that chaotic behaviour in classical systems can be reproduced by continuously measured quantum systems . |
the study of the consumption of cultural goods in general, and that of films in particular, has traditionally been restricted to empirical studies of total demand. most of these studies have followed the original guidelines set by the earliest authors, such as baumol & bowen or moore. in these studies, variations in the quality of the good have been consigned to a residual status, the focus being on the effect of prices and income as the main explanatory factors. other studies, such as the one performed by blanco & baños-pino, have considered the availability of alternative leisure activities; while on the other side, models such as the one presented by throsby have considered the impact of quality on final attendance, incorporating it either as an expected value at the individual level or as a macro variable obtained ex post from critics' review indexes or from online reviews. although we acknowledge their efforts, we must say that in the latter works the authors were not concerned with the structure of the consumption cycle as an indicator of film quality. their work is close to ours only in the sense that reviews act as a form of social interaction, which turns out to be a key ingredient of our model. however useful, demand studies such as the ones surveyed above do not account for the dynamics of the consumption life cycle of the cultural good and, more importantly, they do not discuss the ways in which the structure of this cycle is affected by the transmission of information from members who have had access to the cultural good to potential consumers. besides economic analysis, the sociologist lipovetsky has focused his work on the macro structure of the life cycle, mainly in terms of its duration. the basic principles which underlie this process, however, are not identified in his work. in this paper we propose a model based on first principles, which agrees with all the observed behaviors exhibited by the empirical data of the movie industry. the model is concerned with the primary life cycle of the good, but it differs from the others in the sense that the whole structure of the life cycle is recovered, rather than only the characteristic decay times or the total consumption, as in previous studies. although the consumption life cycle of some cultural goods is potentially unbounded (we still buy copies of don quixote), the aim of this work is to understand the dynamics of the primary life cycle, which finishes when per-period consumption decreases below a certain threshold relative to its premiere level. a less ambiguous case is that of performing arts like theater, where producers are forced to cancel presentations when box-office revenues go below fixed costs. the alternative cost associated with the availability of new options, an effect which also applies to the film industry or the best-seller book industry, strengthens the limited-duration life cycle that characterizes aggregated consumption in these environments. we present a model that reproduces the life cycle dynamics and is determined by three basic factors: * (i) the size of the targeted group; * (ii) the prior conception about the quality of the good in question; and * (iii) the effect of social interactions, in the form of information about the quality of the good, between agents who have effectively experienced the good and potential consumers. it is the latter effect that determines the particular shape of the consumption life cycle.
in relation to the literature studying the diffusion of technological innovations, our model relates to the works of mansfield and bass, with the difference that the dynamical processes that underlie the shape of the cycle are identified and made accountable for it, whereas bass uses a parameterized curve that he fits to the empirical behavior, without incorporating any dynamical justification for the curves he chooses. word of mouth can be justified as a valid mechanism of social influence by following works like the one presented by moore. he argues that, given the scale economies that characterize certain technologies, e.g., database software, it is impractical (or too expensive) to do small-scale experiments and, thus, word of mouth becomes an important part of the diffusion process of these investment goods. going to the movies, however, poses a somewhat different problem for potential consumers. first, `` early adopters '' do not face the risks normally associated with the adoption of a novel technology, and thus an important fraction of them will be willing to consume the good before peers' opinions become available. they will do so based solely on the pre-opening expectations generated by the media and the producers themselves. this leads the life-cycle consumption process to diverge from the standard s-shaped behavior associated with the adoption of a novel technology. on the other hand, new movies which become available to the general public tend to have an initial attendance which is relatively high and usually decreases all along the consumption cycle, the sole exception being small productions or independent films, which show a consumption life cycle that resembles that of novel technology adoption. our model captures both forms of behavior. another family of models which takes social interaction effects into account in the dynamics of consumption is that of `` fashion cycles ''. in our case, however, the demand for a good does not decrease due to its consumption by other agents; the movie does not become worn out like a fashion design. in fact, what the model attempts to capture is that agents who have seen the movie have an impact on the expected value of the good for agents who have not seen it. a different approach to the diffusion of innovations is to consider site percolation on regular lattices. these models are based on the assumption that agents occupy the vertices of a regular lattice in dimensions and are represented by a random number. innovations diffuse by percolating the lattice according to a certain quality value which, when above the percolation threshold, generates a giant consumption cluster. although we acknowledge these efforts, we believe that spatially realistic models should consider a network substrate instead of a regular lattice. social networks exhibit topological properties which are completely different from those of the substrate defined by a regular lattice, such as the scaling properties of their degree distribution, the community structure and the short average path length, to name a few. in this paper we present a model that does not take this substrate into account; it could be considered a mean-field solution or jelly model. regardless of this, we are able to reproduce all the types of aggregated behavior. we now present a model of cultural consumption based on the following assumptions.
*\(i ) each agent goes to see a movie at the theater only once .( we neglect the probability of going twice assuming that the probability of going to the theater to see the same movie times is a rapidly decaying function of ) . *\(ii ) the probability that an agent goes to the movies is affected by the interactions with agents who have already seen it .the different quantities associated with the model will be expressed using the following notation . will represent the number of agents that have not watched the movie at discrete time , while will represent the probability that an agent who has not watched the movie decides to attend it at discrete time .the observable quantity that can be measured is the number of agents who have seen the movie at discrete time .we will call this quantity . in our model , it will be given by the product , which is nothing more than the expected number of agents attending the good . before the movie is available for its potential consumers , agents have a prior conception about its quality , which comes from the information of pre - observable features such as its budget , the cast , and advertisement .the availability of this information will depend on the marketing strategies of the producers and distributors .irrespectively of the way in which these strategies are conceptualized , i.e. , whether publicity is taken as an information provider or as a persuasive device , its effects on our model are equivalent , affecting the prior conception about the movie in question and , therefore , the agents likelihood of actually purchasing it at the beginning of the process .we will denote this likelihood by , and we will call the initial target population ; which represents the total number of potential attendants of the good before it becomes available . it can be argued that the length of the consumption cycle that the paper takes into consideration is unrealistically short , since the energy of film producers is directed to finding , successfully , new ways of lengthening it .sedgwick claims that more than 70 per cent of film revenue is derived from non - theatrical resources .however , given the dramatic effect that the primary cycle of consumption has in the subsequent ones , we believe that this primary cycle is worth of investigation on its own .the simplest case is the one of an atomized society where agents decisions are independent from each other , both in terms of information the opinion of agents who have watched the movie do not influence the decision of potential consumers and in terms of consumption there are no network externalities associated with timely coordinated consumption in this case , we will also assume that restrictions on the supply side do not apply .the whole targeted population could simultaneously go to the theater and the horizon of its exhibition has no time limit . 
in this casethe probability of attending the theater does not depend on time , and it is equal to its initial value .the system dynamics can be described by considering that the expected attendance at a given time is given by the product between and ; and that at the beginning of the process no agents have seen the movie .after the first time step some agents will have attended the theater , and we will therefore have to subtract them from the ones who have not .thus the temporal variation of will be given by if we approximate the discrete variables by continuous ones and notice that in the first time step attendance is given by , we can conclude that the general solution of equation ( 1 ) has the traditional form in the next section we will see how social interactions alter this behavior .once the effect of social interactions are considered , becomes a dynamical variable and its change in time is then due to two main contributions .the first one is the one associated with the transmission of information about the observed value of the movie , and it represents the change in the probability of attending the theater induced by the information about it transmitted between agents . after the first period of attendance , residual potential consumers have access to the opinion of the first period audience .this effect is cumulative as the residual consumers have potential access to the opinions of audiences from the previous time steps order where opinions are transmitted to residual consumers also via agents who have not yet participated in the consumption of the cultural good .qualitative properties of an order process in which the effect s strength diminishes over time , would be equivalent to the order process investigated here .wu et al. have shown that information flow remains bounded as long as the likelihood of transmitting information remains sufficiently small ] .we assume that the change induced in by this effect will be proportional to the probability that agents who have not seen the movie meet with agents who have .the proportionality constant associated with this term will be the one representing the strength of it .the second contribution we considered is the one associated with coordinated consumption . in the case of performing arts , agents usually attend in groups ; therefore agents who have not seen the movie are less likely to go if they are not able to find other agents to keep them company .the change induced by this effect will always reduce attendance likelihood , and it is also proportional to the probability that agents who have attended the good meet with ones who have not .therefore , when we consider the effect of social interactions , we can write the change in the probability of attending the good as where represents the proportionality constant associated with the effects of information flow while is the one associated with the social coordination effects . from now on we will focus on the case in which can take positive and negative values representing the fact that information transmitted between agents can stimulate or inhibit the future attendance of other agents . 
whereas will always contribute negatively .this is because coordinated consumption can only reduce the likelihood of going to the theater for un - coordinated agents .both terms , it is worth noticing , act directly on the population , altering the likelihood that agents will attend or purchase the good .although agents are not explicit rational optimizers who make their decision over the basis of a bayesian update of their beliefs about the movie s quality , one could understand this model as the outcome of rational searching behavior in an incomplete information environment .the continuous version of the system can be solved analytically in the case in which , do not depend on time and the effects are not cumulative . in this case ( 1 ) and ( 4 ) reduce to which in the continuous case can be represented by where to find a solution we notice that equation ( 7 ) can be substituted into equation ( 8) giving us a third differential equation relating and which can be solved and used to introduce the initial conditions of the system .this relation can be expressed as replacing [ 9 ] back on equation ( 7 ) and integrating it , we find that the solution of the system is given by where the constant is defined as ( a ) fraction of agents attending the theater a a function of time according to the model described by eqns .( [ 7 ] ) and ( [ 8 ] ). the segmented line represents the atomized behavior described by eqn .( [ 3 ] ) while the lines on top of it and below of it represent the behavior of positive and negative values respectively .( b ) shows the accumulated attendance , or in other words , the integral in time of ( a).,scaledwidth=100.0% ] figure [ fig1 ] shows the attendance as a function of time as well as the accumulated attendance normalized by the total target population ( and ) .the segmented line represents the atomized behavior occurring when remains constant throughout the process , it also divides the behavior of the system in two , the solutions that lie above it are examples of cases in which is a positive quantity , these are examples of systems which are dominated by a strongly favorable flow of information stimulating the consumption of the good . on the contrary ,the lines that lie below the dashed one are the ones for which is a negative quantity and are dominated by coordination effects .summarizing , we can see that two classes of behavior are predicted .the first class is characterized by a monotonic decay with an exponential tail .whereas , when the strength of social interactions is large enough , a second class of behavior emerges . in this case a period of increasing attendance exists until a maximum is reached . 
after this, the first class of behavior develops. the latter case has an accumulated behavior that resembles the standard ogive s-shape which characterizes technological diffusion. the model represented by equations (5) and (6) was validated by comparing it with the u.s. box-office data available on the internet movie database (imdb) web site. we considered the 44 movies with the highest budgets of 2003 as our sample set and performed an estimation procedure on all movies in order to find the parameter set which most accurately fitted the empirical results. the model has three free parameters: the two initial conditions and , and the constant representing the strength of social interactions. the parameters used in the minimization process were and , while was determined by matching the first data point of the data set with the first data point of the model. this reduced the number of free parameters to just two. the data set considered did not contain weekly attendance as a function of time but the weekly gross collected by the movie. we consider that the amount of money collected at the box office by a movie is a linear function of the number of people who attended it. this allows us to base our empirical analysis on the weekly amount of money collected by a particular movie, which is equivalent to basing the analysis on the number of agents attending it. three examples of the fitting procedure are shown for (a) lord of the rings i, (b) blade ii, (c) kissing jessica stein. figure ([fig2]) shows three plots comparing the model with the empirical data. both the lord of the rings: fellowship of the ring and blade ii represent examples of the first class of behavior identified in the previous section. it is worth noticing that the slope followed by the data changes in time. the first points have a negative slope which is considerably more pronounced than the one in the tail. a simple exponential fit would be accurate in only one section of the curve, but would fail to fit both behaviors, which were expected from the model and appear to be present in nature. in order to interpret the relative values of , and therefore to compare different motion pictures, it is important to consider the behavioral class in which the particular films are located. blockbuster movies have a high initial attendance, which causes population finite-size and social coordination effects to be very strong. this tells us that movies in this category will usually have a negative value of , and its magnitude will represent the observed value of the good. small negative values are associated with movies in which the decay is not accelerated due to the observed quality, but which exhibit some acceleration due to social coordination effects. on the other hand, large negative values map into accelerated decays that we believe are due to social coordination effects plus a poor observed quality of the good. in the second class of behavior, the premiere level of attendance is low; thus coordination effects do not act strongly on the system, because of the initially slow depletion of potential customers. in this case, small values of represent a movie in which agents attended in a random fashion and the decay is neither accelerated nor damped due to social effects.
on the other hand when relatively large values of are observed the attendance increases during the first time steps .this increase in consumption is due to the fact that the movie was well evaluated and that the pool of possible attendants was not depleted during the first couple of weeks . summarizing , in order to interpret the parameter associated with the observed quality of the good , it is important to consider two things .first we consider the class of behavior to which the movie actually pertains , this can be done by observing if the premiere attendance represents a large fraction of the total one .then , once the movie has been associated with a certain class of behavior , the actual value of sigma should be interpreted relative to the distribution of values associated with movies in that particular class from the empirical point of view and after observing the whole data set it becomes apparent that the values of sigma that we found tend to lie in a well defined distribution . figure ( [ fig4 ] )shows the distribution of values for obtained using this data set and the procedure explained above .given the selection criterion used , movies with the highest budgets , it is natural to see that this curve shows values belonging to the blockbuster class of behavior .a negative bias is observed when we look at the distribution of presented in figure ( [ fig4 ] ) .this negative bias tells us that in most cases , the coordination effect , which could also be associated with the underlying effect of cultural competition , dominates the dynamical properties of the system . for the 44 studied movies .[ fig4],scaledwidth=50.0% ] on the basis of the conjecture that coordinated consumption has the same effect on all motion pictures inside the same behavioral class , we can interpret that the value extracted for represents the actual effect that the flow of the information associated with the observed value has on the film , and therefore on its consumption life cycle .this would indicate that coordinated consumption only introduces a shift in the value of and that the deviation from this well - defined mean represents the actual value of the movie as given by the targeted audience .for instance , the differences in the estimated parameter for the lord of the rings and blade ii , would indicate that the former film was much better received than the latter one . *what is interesting about this result is that we do not claim an actual value based on the opinion of a film critic or an audience survey , we infer it from the structure of attendance behavior we observe*. 
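the qualitative picture described above can be checked with a short numerical integration of the model. since some symbols in the equations above did not survive extraction, the sketch below adopts one explicit reading of them: a discrete-time update in which n denotes the number of agents who have not yet attended, the weekly attendance is p times n, and the change of the attendance probability p is proportional, with constant sigma, to the probability that an agent who has not seen the movie meets one who has. this specific functional form and the parameter values are assumptions for illustration only; the point of the sketch is simply that sufficiently positive sigma produces an initial rise of the weekly attendance followed by decay, while negative sigma accelerates the exponential decay of the atomized case sigma = 0.

import numpy as np

def simulate(sigma, p0=0.05, n_weeks=30, n0=1.0):
    """discrete-time integration of the assumed model (one step = one week)."""
    N, p = n0, p0                    # N: agents who have not yet seen the movie
    attendance = []
    for _ in range(n_weeks):
        a = p * N                    # expected attendance this week, n(t) = p N
        attendance.append(a)
        N = N - a                    # assumption (i): nobody attends twice
        p = p + sigma * (N / n0) * ((n0 - N) / n0)   # assumed word-of-mouth update
        p = max(p, 0.0)              # a probability cannot become negative
    return np.array(attendance)

for sigma in (-0.05, 0.0, 0.05, 0.2):
    n_t = simulate(sigma)
    peak_week = int(np.argmax(n_t))
    print(f"sigma = {sigma:+.2f}   peak attendance in week {peak_week:2d}   "
          f"total = {n_t.sum():.3f}")

in this toy run the peak week moves away from the opening week only for sufficiently positive sigma, which is the second class of behavior, while negative sigma makes the decay faster than the atomized exponential, as discussed for the blockbuster class.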
this reversed engineering , definition of the observed value of subjective _ film quality _ presented here can be used to characterize consumers response to specific genres and thus help target film publicity and distribution in a more effective way .a dynamical model representing the life cycle of motion pictures was introduced .the assumptions of the model were that * \i ) agents do not go to the theater to see a particular movie more than once , and that * \ii ) the probability that agents go to the theater changes in a way which is proportional to the number of agents who have attended the theater times the ones who have not .the first of these assumptions is the one that gives rise to the exponential decay that characterize the tail of this process , while the second one is the one that allows the system to adjust its decay according to the social interactions present in cultural consumption , and consequently allows the model to accurately fit the different classes of observed behavior , namely , * \i ) a monotonic decay with an exponential tail , and * \ii ) an exponential adoption followed by an exponential tail which is traduced into an ogive s - shaped behavior when the accumulated theater attendance is observed .in addition , under certain assumptions , our model can be used to infer a quantitative estimator of the subjective reception of a particular film .an estimate which is independent of critics review and is inferred only from the shape of the consumption life cycle .further research in this area can be directed in a variety of ways .a natural extension of the model analyzed here would be to consider the more general setting in which the agents face a number of cultural options and a limited budget for a given period of time .it is also an area interest to investigate the link between the primary life of consumption and subsequent phases of consumption .another route for more empirically oriented research could be associated with the investigation of longitudinal processes in order to explore how the structure of the consumption life cycle has evolved along the last decades , and transversal processes to see the differences and correlations between the reception of particular films in different geographical regions .a more complete analysis of the movie industry database should be carried out in order to understand the nature of these structures .we thank the comments of samuel bowles , peter richerson , marcos singer and the participants of the workshop on evolution and the social sciences at the ceu , budapest .we also acknowledge financial support from fundacion andes grant c-13960 . | we model the consumption life cycle of theater attendance for single movies by taking into account the size of the targeted group and the effect of social interactions . we provide an analytical solution of such model , which we contrast with empirical data from the film industry obtaining good agreement with the diverse types of behaviors empirically found . the model grants a quantitative measure of the valorization of this cultural good based on the relative values of the coupling between agents who have watched the movie and those who have not . this represents a measurement of the observed quality of the good that is extracted solely from its dynamics , independently of critics reviews . _ department of physics and center for complex network research , university of notre dame , notre dame , in 46556 . + kellogg institute , 130 hesburgh center , notre dame , in , 46556 . 
+ department of physics , university of michigan , ann arbor 48104 . + department of sociology , pontificia universidad católica de chile , vicuña mackenna 4860 , macul , santiago , chile . + santa fe institute , 1399 hyde park road , santa fe , nm 87501 , usa _ |
a record is an entry in a discrete time series that is larger ( _ upper record _ ) or smaller ( _ lower record _ ) than all previous entries .thus , records are extreme values that are defined not relative to a fixed threshold , but relative to all preceding events that have occurred since the beginning of the process .statistical data in areas like meteorology , hydrology and athletics are naturally represented in terms of records .records play an important role in the public perception of issues like anthropogenic climate change and natural disasters such as floods and earthquakes , and they are an integral part of popular culture . indeed , the _ guinness book of records _ , first published in 1955 , is the world s most sold copyrighted book .the mathematical theory of records was initiated more than 50 years ago , and it is now a mature subfield of probability theory and statistics ; see for reviews and for an elementary introduction .most of this work has been devoted to the case when the time series under consideration consists of independent , identically distributed ( i.i.d . )random variables ( rv s ) . for the following discussion, it will be useful to distinguish between the _ record times _ at which the current record is broken and replaced by a new one , and the associated _ record values_. one of the key results of record theory is that the statistical properties of record times for real - valued i.i.d .rv s are completely independent of the underlying distribution .to illustrate the origin of this universality , we recall the basic observation that the probability for a record to occur in the time step ( the _ record rate _ ) is given by for i.i.d .rv s , because each of the first entries , including the last , is equally likely to be the largest or smallest . the mean number of records up to time , , is therefore given by the harmonic series with .further considerations along the same lines lead to a remarkably complete characterization of record times , which will be briefly reviewed below in section [ sec : iid ] .the universality of record times can be exploited in statistical tests of the i.i.d .property of a given sequence of variables , without the need for any hypothesis about the underlying distribution .by contrast , distributions of record values fall into three distinct universality classes , which are largely analogous to the well - known asymptotic laws of extreme value statistics for distributions with exponential - like tails ( _ gumbel _ ) , bounded support ( _ weibull _ ) and power law tails ( _ frchet _ ) , respectively .the decay of the record rate ( [ rate ] ) with increasing implies that the record breaking events form a non - stationary time series with unusual statistical properties , which will be further discussed below in section [ sec : iid ] . 
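the universal record rate and the harmonic growth of the mean record number are easy to check numerically ; the following python sketch is a minimal monte carlo verification ( the sequence length , the number of trials and the gaussian choice of the underlying distribution are arbitrary , since the record - time statistics of i.i.d . sequences do not depend on them ) .

```python
import numpy as np

rng = np.random.default_rng(0)

N, TRIALS = 100, 20000          # sequence length and number of Monte Carlo runs
record_hits = np.zeros(N)        # how often entry n sets a new upper record

for _ in range(TRIALS):
    x = rng.standard_normal(N)   # any continuous i.i.d. distribution gives the same record-time statistics
    running_max = np.maximum.accumulate(x)
    is_record = x >= running_max  # True where a new upper record is set
    record_hits += is_record

rate = record_hits / TRIALS
n = np.arange(1, N + 1)
print("empirical record rate at n = 1, 2, 5, 10, 100:", np.round(rate[[0, 1, 4, 9, 99]], 3))
print("theoretical 1/n at the same n:               ", np.round(1.0 / n[[0, 1, 4, 9, 99]], 3))
print("mean number of records:", rate.sum(), "  harmonic sum:", (1.0 / n).sum())
```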
_ record dynamics _has therefore been proposed as a paradigm for the non - stationary temporal behaviour of diverse complex systems ranging from the low - temperature relaxation of spin glasses to the co - evolution of biological populations .in fact , records appear naturally in the theory of biological adaptation , because any evolutionary innovation that successfully spreads in a population must be a record , in the sense that it accomplishes some task encountered by the organism in a way that is superior to all previously existing solutions .consequently the statistics of records and extremes has been invoked to understand the distribution of fitness increments in adaptive processes as well as the timing of adaptive events . in the biological contextthe universality of record time statistics is particularly attractive , because genotypical fitness is a somewhat elusive notion that is hard to quantify in terms of explicit probability distributions .surprisingly few result on record statistics are known that go beyond the standard setting of i.i.d .rv s , and thus consider correlated and/or non - identically distributed rv s . in the present article we focus exclusively on the latter issue , while maintaining the independence among the entries in the sequence .a simple example of this type was introduced by yang in an attempt to explain the frequency of occurrence of olympic records , which is much higher than would be expected on the basis of the i.i.d .theory . in his modela specified number of i.i.d .rv s become available simultaneously in each time step , corresponding , in the athletic context , to a variable ( growing ) population from which the contenders are drawn .much of the standard theory can be extended to this case ( see section [ sec : growing ] for a brief review ) .in particular , one finds that the record rate becomes asymptotically constant for exponentially growing populations .an application of yang s model to evolutionary searches in the space of genotypic sequences can be found in . a second line of research has addressed the case of sequences with a linear trend , in which the entry is of the form with i.i.d .rv s and . also in this case the record rate becomes asymptotically constant , see section [ sec : growing ] for details .the effect of trends on the occurrence rate of records is a key issue in the ongoing debate about the observable consequences of global warming . in this contextit has been pointed out that climate _ variability _ is presumably a more important factor in determining the frequency of extreme events than averages .it is therefore of considerable interest to investigate the record statistics of sequences of uncorrelated rv s in which the _ shape _ of the underlying probability distribution changes systematically with time .to initiate such an investigation is the goal of the present paper . 
throughoutwe assume that the probability density of the entry is of the form where is a fixed normalized distribution and the usually have a power - law time dependence so that ( ) corresponds to a broadening ( sharpening ) distribution .after a brief review of a few important classic results of the theory of records in section [ sec : classic ] , our new results for non - indentically distributed random variables will be presented in section [ sec : record ] .we focus on the asymptotic behaviour of the record rate and the mean number of records .preliminary numerical results for the variance of the number of records are reported in section [ sec : numerics ] , but a more complete characterization of record times and record values is left for future work . finally , some concluding remarks are offered in section [ sec : discussion ] .given the distributions of the entries in a sequence of independent rv s , the probability that the entry is an upper record is equal to the probability that for all .hence where is the cumulative distribution of .similarly the probability that is a lower record reads .\ ] ] equations ( [ upper ] ) and ( [ lower ] ) form the basis for most of what follows . for i.i.d .rv s the integral ( [ upper ] ) can be performed by noting that and , which yields the universal result ( [ rate ] ) .to arrive at a characterization of the record time process beyond the mean number of records we introduce the _ record indicator variables _ , which take the value iff is a record , and else .it turns out that the are independent , and hence they form a bernoulli process with success probability . to see why this is so , consider the two - point correlation function and assume that .then the key idea is that the right hand side of \ ] ] can be split into independent events according to \times { \mathrm{prob}}[x_j = \max(x_{i+1}, ... ,x_j ) ] \times \nonumber \\ \times { \mathrm{prob}}[\max(x_1, ... ,x_i ) < \max(x_{i+1}, ... ,x_j)].\end{aligned}\ ] ] following the symmetry argument used to derive ( [ rate ] ) , the first two factors are and , respectively , and the third factor can be written as = \frac{j - i}{j}.\ ] ] we conclude that higher order correlations can be shown to factorize in the same way .the number of records up to time can then be expressed in terms of the indicator variables as and the variance of is for .the _ index of dispersion _ of the record time process , defined as the ratio of the variance to the mean thus tends to unity , and the distribution of the becomes poissonian with mean for large .the record times form a _ log - poisson _process .a second useful observation concerns the ratios between consecutive record times .let denote the time of the record , with by convention . repeating the symmetry argument used to derive ( [ rate ] ), we expect that given , the preceding record occurs with equal probability anywhere in the interval ] for large .moreover the become independent in this limit .this allows us to highlight a peculiar property of the sequence of record breaking events : the expected value of , given , is but the reverse conditioning yields an _ infinite _ expectation , because in this sense , the occurrence of records can be predicted only with hindsight , but not forward in time . 
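a hedged numerical check of these two properties , the factorization of the joint record probability and the slow approach of the index of dispersion to its poisson value , can be run with a few lines of python ; the probe times , the sequence length and the trial count below are arbitrary choices .

```python
import numpy as np

rng = np.random.default_rng(1)
N, TRIALS = 200, 20000
i, j = 10, 40                              # two fixed probe times (1-based), chosen arbitrarily

ind_i = np.empty(TRIALS)
ind_j = np.empty(TRIALS)
R = np.empty(TRIALS)
for t in range(TRIALS):
    x = rng.exponential(size=N)
    rec = x >= np.maximum.accumulate(x)     # record indicator variables
    ind_i[t], ind_j[t] = rec[i - 1], rec[j - 1]
    R[t] = rec.sum()

# independence of the indicators: the joint record probability should factorize into (1/i)*(1/j)
print("joint record frequency:", (ind_i * ind_j).mean(), "   1/(i*j):", 1.0 / (i * j))
# number of records: mean close to the harmonic sum, index of dispersion drifting towards 1
harmonic = (1.0 / np.arange(1, N + 1)).sum()
print("mean R:", R.mean(), "   harmonic sum:", harmonic)
print("index of dispersion var/mean:", R.var() / R.mean(), " (approaches 1 only slowly in N)")
```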
in the model for growing populations introduced by yang and elaborated by nevzorov , a number of of i.i.d .rv s becomes available simultaneously at time .the symmetry argument in section [ sec : intro ] is easily extended to this case : because of the i.i.d .property , the probability that there is a record among the newly generated rv s is equal to the ratio of to the total number of rv s that have appeared up to time , and hence the independence of the record indicator variables introduced above in section [ sec : iid ] continues to hold , so again the sequence of record breaking events is a bernouilli process with success probability . to give a simple example for the consequences of ( [ rate_yang ] ) , suppose the grow exponentially as as with .this could model a sequence of athletic competitions in an exponentially growing population , where each athlete is assumed to be able to participate only in one event .then the evaluation of ( [ rate_yang ] ) yields and the distribution of inter - record times is geometric . in his analysis of olympic records yang estimated a growth factor of for the four - year period between two games , and concluded that this growth rate was insufficient to explain the observed high frequency of records . motivated by this outcome , ballerini and resnick considered a model of _ improving _ populations , where the sequence of rv s displays a linear drift according to ( [ trend ] ) .they showed that the record rate tends to an asymptotic limit given by where is the probability density of the i.i.d .rv s in ( [ trend ] ) and = \lim_{n \to \infty } \prod_{k=1}^{n-1 } q(y + ck),\ ] ] with .the function has the obvious limits and , but the explicit evaluation is generally difficult .a simple expression is obtained when is of gumbel form , $ ] , which yields . for further details on the model ( [ trend ] ) and applications to athletic datawe refer to .results for specific distributions and an application to global warming can be found in .in this section we want to evaluate the record rates ( [ upper ] ) and ( [ lower ] ) for distributions of the general form ( [ shape ] ) .introducing the cumulative distribution corresponding to , the record rates of interest can be written as and ,\ ] ] which makes clear the obvious fact that the overall scale of the s is without importance . in some special casesthe record rates can be evaluated exactly for arbitrary choices of the s .for example , for the exponential distribution we have , and the evaluation of the lower record rate ( [ lower1 ] ) yields inserting the power law behaviour ( [ power ] ) we see that the denominator converges to the riemann zeta function for , so that for large , and the expected number of lower records remains finite for . for we have instead that for large , and hence asymptotically .as would be intuitively expected , the occurrence of lower records is enhanced for sharpening distributions ( ) and suppressed for broadening distributions ( ) .finally , in the borderline case we find which is our first example of a nontrivial asymptotic law that differs qualitatively from the i.i.d . result ( [ harmonic ] ) .a simple explicit expression for the upper record rate can be obtained for the uniform distribution characterized by when the are increasing , in the sense that for all , i.e. 
for the case of a sharpening uniform distribution .then the arguments of on the right hand side of ( [ upper1 ] ) are all less than unity , and direct integration yields inserting the power law form ( [ power ] ) with one finds that the record rate decays exponentially as , and hence the asymptotic number of records is finite for all . in this sectionwe focus on broadening distributions , , and evaluate the upper record rate ( [ upper1 ] ) asymptotically for representatives of all three universality classes of extreme value statistics .the starting point is to replace the product on the right hand side of ( [ upper1 ] ) by the exponential of a sum of logarithms , and to replace the latter by an integral .it then follows that the asymptotic behaviour of the record rate is given by the second representation will prove to be useful in the final evaluation of . note that can always be expressed in terms of because .the function is given by where in the second step it has been used that the integral in ( [ asym ] ) is dominated for large by the region where and .it is therefore clear that the asymptotic behaviour of the record rate depends only on the tail of , and hence universality in the sense of standard extreme value statistics should apply .the evaluation of ( [ galpha ] ) is straightforward for the frchet class of distributions with power law tails .we set and obtain inserting this into ( [ asym ] ) yields for large , and hence this result remains valid for negative as long as .when the evaluation of shows that and thus the asymptotic number of records remains finite .the gumbel class comprises unbounded distributions whose tail decays faster than a power law .a typical representative is the exponentical distribution ( [ exponential ] ) with .evaluation of ( [ galpha ] ) yields where denotes the incomplete gamma function . for large have so that which yields = \int_0 ^ 1 dv \ ; \exp[-n v/(\alpha \ln(1/v))].\ ] ] to further evaluate the integral we substitute and obtain \to \nonumber \\\to \frac{\ln n}{n } \int_0^\infty dw \ ; e^{-w/\alpha } = \frac{\alpha \ln n}{n}\end{aligned}\ ] ] for .correspondingly the mean number of records grows as a second important representative of the gumbel class is the gaussian ( normal ) distribution , for which proceeding as before , we find which becomes identical to ( [ galpha_exp ] ) upon replacing by .we conclude that and for the gaussian case .although this does not constitute a strict proof , it strongly indicates that the behaviour is _ universal _ within this class of probability distributions . as a representative of the weibull class of distributions with finite support we first consider the uniform distribution ( [ uniform ] ) .the integral on the right hand side of ( [ galpha ] ) can then be evaluated without approximating by , and one obtains this is a negative monotonically increasing function which vanishes quadratically in near , the evaluation of the record rate ( [ asym ] ) then yields \approx \sqrt { \frac{\alpha \pi}{2 n}}\ ] ] for large , and the number of records grows asymptotically as . the specific power is clearly related to the quadratic behaviour of near , which in turn reflects the behaviour of near the upper boundary .more generally we may consider bounded distributions of the form with and the uniform case corresponding to . 
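the record rate of independent but non - identically distributed entries can also be evaluated directly from its defining expectation , the probability that the current entry exceeds all earlier ones . the sketch below does this by averaging the product of the earlier cumulative distributions over draws of the current entry , for exponential entries whose scale grows as a power of the index ; the broadening exponent and the exponential base distribution are illustrative assumptions , not the exact cases tabulated in the paper .

```python
import numpy as np

rng = np.random.default_rng(2)
ALPHA = 0.5            # broadening exponent, an illustrative choice
SAMPLES = 20000        # draws of the current entry used to estimate the expectation

def upper_record_rate(n):
    # P(entry n is an upper record) = E[ prod_{k<n} F_k(X_n) ] for independent entries;
    # here the entries are exponential with scale k**ALPHA
    x = rng.exponential(scale=n ** ALPHA, size=SAMPLES)        # draws of the n-th entry
    k = np.arange(1, n)
    cdf_earlier = 1.0 - np.exp(-x[:, None] / (k[None, :] ** ALPHA))
    return cdf_earlier.prod(axis=1).mean()

for n in (2, 10, 50, 200):
    print(f"n = {n:4d}: record rate {upper_record_rate(n):.4f}   (i.i.d. value 1/n = {1.0 / n:.4f})")
```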
to extract the leading order behaviour of for we write for .hence the record rate decays as and the mean number of records grows as the asymptotic laws ( [ r_frechet ] , [ mean_exp ] , [ weibull_mean ] ) were first discovered in simulations , and they have subsequently been numerically verified for a variety of parameter values . as an example , we show in figure [ figure1 ] numerical data for the mean number of records obtained for distributions in the gumbel class .there are significant corrections to the asymptotic behaviour for the gaussian distribution as well as for the exponentical distribution with .this is not surprising in view of the approximations used in the derivation of ( [ mean_exp ] ) ; for example , the last step in ( [ pn_exp2 ] ) requires that which is true only for enormously large values of . , and .the dashed line shows data obtained for the gaussian distribution and .the thin dotted line is the harmonic series ( [ harmonic ] ) which applies universally for .the short bold dotted lines show the predicted slope for the exponential case and in the gaussian case .all data were obtained from realizations of time series of length .,scaledwidth=70.0% ] simulations have also been used to investigate the occurrence of correlations in the record time process for .we have seen in section [ sec : iid ] that the poisson statistics of is a consequence of the fact that the record indicator variables are independent in the i.i.d . case . in particular , ( [ rn_variance ] ) shows that the variance of is asympotically equal to the mean whenever the are uncorrelated and the record rate tends to zero for in such a way that diverges .as this is true for in all cases that we have considered , the index of dispersion ( [ rho ] ) can be used as a probe for correlations .the data displayed in figure [ figure2 ] clearly show that the asymptotic value of is less than unity and independent of for the uniform distribution .similar results have been obtained for the exponential distribution , whereas we find that for the power law case .we conclude that , at least in certain cases , the record time process becomes more regular than the log - poisson process when the underlying distribution broadens with time ., 1/2 , 1 and 2 . while the data for approach the asymptotic poisson limit of unity according to ( [ rn_variance ] ) , the data for converge to a universal sub - poissonian value .the data were obtained from realizations of time series of length .,scaledwidth=70.0% ]the main results of this paper are the asymptotic laws ( [ r_frechet ] , [ mean_exp ] , [ weibull_mean ] ) for the mean number of records in sequences of random variables drawn from broadening distributions . in all three casesthe exponent governing the time dependence of the width of the distribution enters only in the prefactors and does not affect the functional form of the result . comparing the three cases , we see that the effect of the broadening on is stronger the faster the underlying distribution decays for large arguments : for fat - tailed power law distributions the number of records remains logarithmic , for exponential - like distributions it changes from to , while for distributions with bounded support the logarithm speeds up to a power law in time . 
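a monte carlo harness in the spirit of the simulations reported above ( mean record number and index of dispersion for broadening distributions ) can be sketched as follows ; the sequence length , trial count , broadening exponent and the three base distributions are illustrative choices and are not meant to reproduce the exact parameter values of figures 1 and 2 .

```python
import numpy as np

rng = np.random.default_rng(3)
N, TRIALS, ALPHA = 1000, 2000, 1.0            # illustrative parameter choices
scale = np.arange(1, N + 1) ** ALPHA           # width of the n-th entry grows as n**ALPHA
harmonic = (1.0 / np.arange(1, N + 1)).sum()   # i.i.d. reference value for the mean record number

def record_statistics(draw):
    # mean and variance of the number of upper records for entries scale[n] * Y_n
    R = np.empty(TRIALS)
    for t in range(TRIALS):
        x = scale * draw(N)
        R[t] = (x >= np.maximum.accumulate(x)).sum()
    return R.mean(), R.var()

print(f"i.i.d. reference: mean record number would be {harmonic:.2f}")
for name, draw in [("exponential", lambda n: rng.exponential(size=n)),
                   ("uniform",     lambda n: rng.random(n)),
                   ("power law",   lambda n: rng.pareto(3.0, size=n) + 1.0)]:
    m, v = record_statistics(draw)
    print(f"{name:12s}: mean R = {m:6.2f}, variance = {v:6.2f}, index of dispersion = {v / m:.2f}")
```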
apart from the presentation of new results , a secondary purpose of this paper has been to advertise record dynamics as a paradigm of non - stationary point processes with interesting mathematical properties and wide - spread applications ranging from fundamental issues in the dynamics of complex systems to the consequences of climatic change . in the present work we have combined the intrinsic non - stationarity of record dynamics with an explicit non - stationarity of the underlying sequence of random variables .this turns out to be a relevant modification which may alter the basic logarithmic time - dependence of the mean number of records , and it can induce correlations among the record times , as detected in deviations of the index of dispersion ( [ rho ] ) from unity . it is worth noting that evidence for such correlations can also be found in recent applications of record dynamics in simulations of complex systems .an analytic understanding of the origin of correlations in the models presented here is clearly an important goal for the near future .i am grateful to kavita jain for her contributions in the early stages of this project , and to sid redner for useful correspondence and discussions .this work was supported by dfg within sfb - tr 12 _ symmetries and universality in mesoscopic systems_.10 hoyt d v 1981 _ climatic change _ * 3 * 243 bassett jr .gw 1992 _ climatic change _ * 21 * 303 benestad re 2003 , _ climate research _ * 25 * 3 redner s and petersen mr 2006 _ phys .e _ * 74 * 061114 matalas nc 1997 _ climatic change _ * 37 * 89 vogel rm , zafirakou - koulouris a and matalas nc 2001 _ water res .research _ * 37 * 1723 gembris d , taylor jg and suter d ( 2002 ) _ nature _ * 417 * 506 chandler kn 1952 _ j. roy . statb _ * 14 * 220 glick n 1978 _ amer . math . monthly _ * 85 * 2 nevzorov vb 1987 _ theory probab . appl . _* 32 * 201 arnold bc , balakrishnan n and nagaraja hn 1998 _ records _( new york : wiley ) nevzorov vb 2001 _ records : mathematical theory _( providence : american mathematical society ) schmittmann b and zia rkp 1999 _ am . j. phys . _* 67 * 1269 galambos j 1987 _ the asymptotic theory of extreme order statistics _ ( malabar : r.e .krieger ) sornette d 2000 _ critical phenomena in natural sciences _( berlin : springer ) sibani p and littlewood p 1993 _ phys .lett . _ * 71 * 1482 sibani p and dall j 2003 _ europhys .lett . _ * 64 * 8 anderson pe , jensen hj , oliveira lp and sibani p 2004 _ complexity _ * 10 * 49 gillespie jh 1991 _ the causes of molecular evolution _ ( new york : oxford university press ) orr ha 2005 _ nature rev* 6 * 119 kauffman sa and levin s 1987 _ j. theor .biol . _ * 128 * 11 sibani p , brandt m and alstrm p 1991 _ int . j. modb _ * 12 * 361 krug j and karl c 2003 _ physica a _ * 318 * 137 krug j and jain k 2005 _ physica a _ * 358 * 1 jain k and krug j 2005 _ j. stat .mech . : theory and experiment _p04008 sire c , majumdar sn and dean ds 2006 _ j. stat .mech . : theory and experiment _l07001 yang mck 1975 _ j. appl .* 12 * , 148 ballerini r and resnick s 1985 _ j. appl .* 22 * 487 ballerini r and resnick s 1987 _ adv ._ * 19 * 801 borovkov k 1999 _ j. appl .prob . _ * 36 * 669 katz rw and brown bg 1992 _ climatic change _ * 21 * 289 cox dr and isham v 1980 _ point processes _ ( london : chapman and hall ) tata mn 1969 _ z. warsch .* 12 * 9 shorrock rw 1972 _ j. appl .prob . 
_ * 9 * 316 gradshteyn is and ryzhik i m 2000 _ table of integrals , series and products _( san diego : academic press ) | in the context of this paper , a record is an entry in a sequence of random variables ( rv s ) that is larger or smaller than all previous entries . after a brief review of the classic theory of records , which is largely restricted to sequences of independent and identically distributed ( i.i.d . ) rv s , new results for sequences of independent rv s with distributions that broaden or sharpen with time are presented . in particular , we show that when the width of the distribution grows as a power law in time , the mean number of records is asymptotically of order for distributions with a power law tail ( the _ frchet class _ of extremal value statistics ) , of order for distributions of exponential type ( _ gumbel class _ ) , and of order for distributions of bounded support ( _ weibull class _ ) , where the exponent describes the behaviour of the distribution at the upper ( or lower ) boundary . simulations are presented which indicate that , in contrast to the i.i.d . case , the sequence of record breaking events is correlated in such a way that the variance of the number of records is asymptotically smaller than the mean . |
compressive sensing ( cs ) signal acquisition paradigm asserts that one can successfully recover certain signals sampled far below their nyquist frequencies given they are sparse in some dictionary .the fourier dictionary for frequency sparse signals is an example of this .encouraged by this assertion , the usual sample and then compress setup can be combined into a single efficient step .signals acquired in this fashion do , however , have to be reconstructed which , in the noiseless case , entails a non - convex optimisation problem of the form : where is the reconstructed signal , is a known measurement matrix , and is the measured signal with . in the cs context, is the number of samples sensed while is the number of samples in the original signal .we take to denote the pseudo norm from , i.e. the number of non - zero entries in .solving the combinatorial problem in ( [ eq : compressive_sensing - general_problem ] ) by an exhaustive search is generally infeasible .one feasible approach in reconstructing the signal is to relax the problem in ( [ eq : compressive_sensing - general_problem ] ) by substituting the norm for the making the problem a linear program ( lp ) .another feasible approach is taken by the family of so - called iterative greedy algorithms . in these , the problem in ( [ eq : compressive_sensing - general_problem ] ) is reversed by minimising the residual of the energy of subject to some sparsity enforcing constraint .abstractly , the greedy algorithms can be separated into two classes : 1 ) simple one stage algorithms which use a single greedy step in each iteration . examples are matching pursuit ( mp ) and iterative hard thresholding ( iht ) .2 ) composite two stage algorithms which combine a greedy step with a refinement step in each iteration .examples are orthogonal matching pursuit ( omp ) and cosamp .the main advantage of the greedy algorithms over the approach is that they are computationally less complex and require less computation time than state - of - the - art lp solvers .in addition to the computation time , a measure of the reconstruction quality must be considered .recently , the measure of phase transition has become a standard way to specify reconstruction capabilities , see e.g. , , , .phase transitions evaluate the probability of successful reconstruction versus the indeterminacy of the constraints and the true sparsity of . in general, the main advantage of the approach over the greedy algorithms is that it is superior in terms of phase transition . in search of a fast algorithm with a phase transition similar to that of the approach, it has been proposed to solve ( [ eq : compressive_sensing - general_problem ] ) by approximating the norm with a continuous function .the resulting smoothed norm ( sl0 ) algorithm has a better phase transition than the greedy algorithms while requiring considerably less computation time than the state - of - the - art lp solvers . in this paper , we show that a few key parameters must be carefully selected and knowledge of the indeterminacy exploited to fully unleash the potential of sl0 .we provide a set of empirically determined recommended parameters for a modified sl0 algorithm that may dramatically improve its phase transition . through extensive simulations , the claim of superiority of the recommended parameters is supported .finally , we discuss implementation strategies that speed up the algorithm by exploiting knowledge of the indeterminacy .the paper is organised as follows . 
in section [ sec :sl0 ] , we restate the sl0 algorithm and present the proposed algorithm .implementations of sl0 that yield reduced computation time are discussed in section [ sec : fast_implementations ] .section [ sec : simulation_framework ] describes the setup used for simulations while section [ sec : results ] provides the simulation results .a discussion of the results is given in section [ sec : discussion ] .finally , conclusions are stated in section [ sec : conclusions ] .sl0 attempts to solve the problem in ( [ eq : compressive_sensing - general_problem ] ) by approximating the norm with a continuous function . consider the continuous gaussian function with the parameter : the parameter may be used to control the accuracy with which approximates the kronecker delta . in mathematical terms, we have : define the continuous multivariate function as : since the number of entries in is and the function is an indicator of the number of zero - entries in , the norm of the reconstructed vector is approximated by : substituting this approximation into ( [ eq : compressive_sensing - general_problem ] ) yields the problem : the approach is then to solve the problem in ( [ eq : compressive_sensing - sl0_problem ] ) for a decreasing sequence of s .the underlying thought is to select a which ensures that the initial solution is in the subset of over which the approximation is convex and gradually increase the accuracy of the approximation . by careful selection of the sequence of s , ( hopefully ) non - convexity and thereby local minima are avoided . in the sl0 algorithm stated below ,we let denote the moore - penrose pseudo - inverse of the matrix and let denote the hadamard product ( entry wise multiplication ) of the vectors and . furthermore, we let ^t ] where the number of entries equals the number of s used .furthermore , an inversely proportional relation between and the initial value of yielded the most promising phase transition .specifically , we choose an initial and combine this choice with a . finally , a gradually increasing for decreasing still provides an improvement for the updated parameter choices . here, we choose a geometric sequence starting with and increasing by a factor of for each update of . with an increased value of and gradually increasing values of , the computation time is bound to increase .this effect can , however , be counteracted by introducing a stopping criterion in the inner loop of the sl0 algorithm .therefore , we choose to terminate the inner loop when the relative change falls below where .generally , this measure has proved to be a good indicator of convergence and significantly reduced the average number of iterations taken in the inner loop .we now propose the smoothed norm algorithm with modified step - size ( sl0 mss ) which incorporates all of the above findings .* sl0 mss algorithm : * * initialise : * , , , + , , [ alg : sl0_mod - ls_init ] {^{\mathrm{t}}} ] . in general , reconstruction is easier for larger and smaller and becomes increasingly difficult when decreasing and increasing .somewhere in - between , the phase transition curve separates the phase space into a phase where reconstruction is likely and a phase where it is unlikely .this phase transition curve is continuous in for fixed . obviously , it is desirable to have a phase transition curve which is as close to as possible .different suites of problems , i.e. 
different combinations of ensembles of and generally result in different phase transitions .choosing from the uniform spherical ensemble ( use ) and the non - zero entries in from the rademacher distributed generally yields the most difficult problem suite in terms of obtaining good phase transitions . in the simulations , we consider this problem suite along with a the problem suite where is chosen from the use and the non - zero entries in are chosen from the zero mean , unit variance gaussian ensemble . in , it is shown that the probability of reconstruction versus for fixed and can be modelled accurately by logistic regression for a specific set of algorithms .logistic regression is used more generally in to determine the location of the phase transition curve .we adopt the logistic regression approach to estimate the location of the phase transition curve and fix as proposed in .we then attempt reconstruction on a uniform grid in the phase space specified by : for each point in the grid , we do 10 monte carlo simulations where each simulation features a new draw of and . from we adopt that an attempted reconstruction is considered successful when : where and are the reconstructed and true signal , respectively . if the criterion is not met , the attempted reconstruction is considered unsuccessful , i.e. , the attempted reconstruction can not be considered indeterminate . to evaluate the required computation time for the different algorithms, we measure the absolute time spent on reconstruction when it succeeds .the problem suite formed by choosing from the use combined with rademacher distributed non - zero entries in is used in this test . a uniform grid in the phase space is formed by : an algorithm is tested on all points in the grid that are at least 0.025 below its empirically determined phase transition ( measured on the -axis ) . the time spent on reconstruction for the successful part of 10 monte carlo simulations in each pointis then averaged .considered problem sizes are : the simulations have been conducted on an intel core i7 970 6-core based pc with ddr3 ram .the os used is 64-bit ubuntu 12.04 lts linux and the enthought python distribution ( epd ) 7.2 - 2 ( 64-bit ) .all simulations are carried out in double precision . to validate the results obtained from our simulation framework, we have simulated the iterative hard thresholding algorithm presented in .the phase transition obtained in our simulation framework has then been compared with the phase transition obtained in the simulation framework of . due to the non - deterministic nature of the monte carlo simulations used in both simulation frameworks , the two phase transitions will inevitably differ slightly .however , we have observed that they are almost identical and therefore concluded , that our simulation framework works as intended .four algorithms have been simulated : 1 ) sl0 std which is the sl0 algorithm presented in section [ sec : sl0 ] .2 ) sl0 min which is the same algorithm except it is modified such that is multiplied by each time is decreased .3 ) sl0 mss which is the algorithm presented in section [ sec : improving_phase_transition ] .4 ) iht which is the iterative hard thresholding algorithm described in . 
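before turning to the results , the core sl0 iteration recapped in section [ sec : sl0 ] can be summarized in a short numpy sketch . this is the standard sl0 of mohimani et al . ( projected gradient steps on the gaussian surrogate of the norm for a decreasing sequence of the smoothing parameter ) , not the tuned sl0 mss variant ; the step size , the decrease factor , the number of inner iterations and the toy problem sizes below are illustrative assumptions .

```python
import numpy as np

def sl0(A, y, sigma_decrease=0.7, mu=2.0, inner_iters=3, sigma_min=1e-4):
    # minimal smoothed-l0 sketch: gradient steps on the gaussian surrogate of the l0 norm,
    # each followed by a projection back onto the constraint set { x : A x = y }
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                                      # minimum-l2-norm feasible starting point
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            d = x * np.exp(-x**2 / (2.0 * sigma**2))    # ascent direction of the smoothed objective
            x = x - mu * d
            x = x - A_pinv @ (A @ x - y)                # project back onto A x = y
        sigma *= sigma_decrease
    return x

# toy problem: k-sparse signal with rademacher non-zero entries, gaussian measurement matrix
rng = np.random.default_rng(4)
n, m, k = 200, 80, 10
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], size=k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

x_hat = sl0(A, y)
print("relative l2 reconstruction error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```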
in the case of the sl0 mss algorithm ,two implementations have been simulated : the sl0 mssslowromancap1@ implementation based on ( [ eq : sl0_mod_mssi_1 ] ) and ( [ eq : sl0_mod_mssi_2 ] ) and the sl0 mssslowromancap2@ implementation based on ( [ eq : qr_methods - split1 ] ) and ( [ eq : qr_methods - split2 ] ) .the experimental results are presented in figure [ fig : results - rademacher - phase ] , [ fig : results - gaussian - phase ] , [ fig : results - computation_times ] , and [ fig : results - scaling ] .figure [ fig : results - rademacher - phase ] shows the phase transitions for rademacher distributed non - zero entries in while figure [ fig : results - gaussian - phase ] shows the phase transitions for zero mean , unit variance gaussian non - zero entries in . in both figures , the theoretical curve from is included for reference .figure [ fig : results - computation_times ] shows the measured average computation times versus indeterminacy and figure [ fig : results - scaling ] shows the measured average computation times versus problem size . note the abrupt ending of the sl0 std curve in figure [ fig : results - computation_times ] which is due to failure of reconstruction in the tested grid for .also , note that the measured computation times of the sl0 min implementation have been divided by 20 . in summary ,sl0 mss shows the best phase transition among the tested algorithms for both rademacher and gaussian non - zero entries in . in large portions of the phase spaceit even surpasses the theoretical curve .the only exception is in the gaussian case for where iht shows better phase transition . regarding computation time , iht is faster than the sl0 approaches among which sl0 min is consistently more than 20 times slower than sl0 mss. furthermore , iht scales slightly better with problem size than the sl0 approaches .for rademacher non - zero entries in , figure [ fig : results - rademacher - phase ] reveals that sl0 std , sl0 min , and iht are by far outperformed by sl0 mss in terms of phase transistion .even the theoretical curve is surpassed by sl0 mss at around .for , sl0 mss shows the same phase transition as the theoretical curve . the curve for sl0 min is a clear example of the improvement in phase transition obtainable using more iterations in the inner loop of the sl0 algorithm . however , sl0 mss further improves on this , especially for and . 
the results in figure [ fig : results - computation_times ]settle that sl0 does indeed require more computation time than iht .iht is around two to four times faster ( depending on ) than the fastest sl0 implementation for a problem size of .the important thing to note though , is that sl0 provides a trade - off between phase transition and computation time .the price paid in computation time for using a lot of iterations to get better phase transition is clear from the sl0 min curve .this is , however , not the case for sl0 mss , which requires less than or about the same computation time as sl0 std depending on .thus , a much better phase transition is obtained using largely the same computation time in going from sl0 std to sl0 mss .to obtain such a result , it is necessary to switch from sl0 mssslowromancap1@ to sl0 mssslowromancap2@ at around .an assessment of the scaling of average computation time with problem size reveals that all three sl0 algorithms seem to scale in an equivalent way .iht scales better than sl0 and hence requires relatively less computation time as the problem size increases .the scaling depicted in figure [ fig : results - scaling ] is for which provides a rough average computation time across all values of as can be seen from figure [ fig : results - computation_times ] .the parameters for sl0 mss stated in section [ sec : improving_phase_transition ] are ( locally ) optimal in terms of phase transition for rademacher non - zero entries in .gaussian non - zero entries in are known to be in favour of greedy algorithms , which is also the case for iht in our simulations , especially for where the iht curve surpasses the theoretical curve . for , sl0 min and sl0 mss demonstrate the best phase transition among the shown algorithms .min and sl0 mss phase transitions are about the same , though .comparing our results for sl0 mss in figure [ fig : results - gaussian - phase ] with the ones given for sl0 in figure 6 in shows about the same phase transition .the slightly better phase transition for in may be due to the sl0 mss parameters not necessarily being optimal for gaussian non - zero entries in .although the above simulations are quite encouraging , they are based on an empirically tuned algorithm . thus , to reach a final verdict of the success of sl0 mss , the validity of the simulation results must be exhaustively studied for a broader set of problem suites .alternatively , more sound mathematical proofs must be presented .we have proposed a new compressive sensing reconstruction algorithm named sl0 mss based on the smoothed norm .it turns out that sl0 phase transitions heavily depend on parameter selection .sl0 mss attempts to improve on phase transition by exploiting the known indeterminacy combined with carefully selected parameters .a trade - off between phase transition and computation time is provided by sl0 .improved phase transition has been measured for sl0 mss compared to standard sl0 while maintaining the same computation time .10 [ 1]#1 url [ 2]#2 [ 2 ] l@#1=l@#1#2 e. j. cands and m. b. wakin , `` an introduction to compressive sampling , '' _ ieee signal processing magazine _ , vol .25 , no . 2 , pp .2130 , mar .d. l. donoho , `` compressed sensing , '' _ ieee transactions on signal processing _ ,52 , no . 4 , pp .12891306 , apr . , `` for most large underdetermined systems of linear equations the minimal 1-norm solution is also the sparsest solution , '' department of statistics , stanford university , tech . rep .2004 - 9 , sep . 
2004 .[ online ] .available : http://statistics.stanford.edu/~ckirby/techreports/gen/2004/2004-09.pdf a. maleki and d. l. donoho , `` optimally tuned iterative reconstruction algorithms for compressed sensing , '' _ ieee journal of selected topics in signal processing _ , vol . 4 , no . 2 , pp .330341 , apr .s. g. mallat and z. zhang , `` matching pursuits with time - frequency dictionaries , '' _ ieee transactions on signal processing _ , vol .41 , no . 12 , pp . 33973415 , mar .t. blumensath and m. e. davies , `` iterative hard thresholding for compressed sensing , '' _ applied and computational harmonic analysis _27 , no . 3 , pp . 265274 , 2009 . j. a. tropp and a. c. gilbert , `` signal recovery from random measurements via orthogonal matching pursuit , '' _ ieee transactions on information theory _ , vol .53 , no . 12 , pp .46554666 , dec . 2007 .d. needell and j. tropp , `` cosamp : iterative signal recovery from incomplete and inaccurate samples , '' _ applied and computational harmonic analysis _ ,26 , no . 3 , pp . 301321 , 2009 . w. dai and o. milenkovic , `` subspace pursuit for compressive sensing signal reconstruction , '' _ ieee transactions on information theory _55 , no . 5 , pp .22302249 , may 2009 .t. blumensath and m. e. davies , `` normalized iterative hard thresholding : guaranteed stability and performance , '' _ ieee journal of selected topics in signal processing _ , vol . 4 , no . 2 , pp .298309 , apr .d. l. donoho and j. tanner , `` precise undersampling theorems , '' _ proceedings of the ieee _ , vol .98 , no . 6 , pp . 913924 , jun .d. l. donoho , a. maleki , and a. montanari , `` message passing algorithms for compressed sensing : ii .analysis and validation , '' in _ ieee information theory workshop ( itw ) _ , cairo , egypt , jan . 68 , 2010 , pp . 15 .p. jain , a. tewari , and i. s. dhillon , `` orthogonal matching pursuit with replacement , '' in _ twenty - fifth annual conference on neural information processing systems _ , granada , spain , dec .1215 , 2011 , pp . 16721680. b. l. sturm , m. g. christensen , and r. gribonval , `` cyclic pure greedy algorithms for recovering compressively sampled sparse signals , '' in _45th ieee asilomar conference on signals , systems , and computers _ , pacific grove ( ca ) , usa , nov . 69 , 2011 , pp . 11431147 .g. h. mohimani , m. babaie - zadeh , and c. jutten , `` a fast approach for overcomplete sparse decomposition based on smoothed l0 norm , '' _ ieee transactions on signal processing _ ,57 , no . 1 ,pp . 289301 , jan .z. cui , h. zhang , and w. lu , `` an improved smoothed l0-norm algorithm based on multiparameter approximation function , '' in _12th ieee international conference on communication technology ( icct ) _ , nanjing , china , nov . 1114 , 2010 , pp .. h. mohimani , m. babaie - zadeh , i. gorodnitsky , and c. jutten , `` sparse recovery using smoothed l0 ( sl0 ) : convergence analysis , '' _ arxiv _ , 2010 , submitted ( on 24 january 2010 ) to ieee transactions on information theory .[ online ] .available : http://arxiv.org/abs/1001.5073 s. boyd and l. vandenberghe , _convex optimization_.1em plus 0.5em minus 0.4emcambridge university press , 2004 , 9th printing .d. donoho and j. tanner , `` observed universality of phase transitions in high - dimensional geometry , with implications for modern data analysis and signal processing , '' _ phil .a _ , vol .1906 , pp . 42734293 ,j. tanner .phase transitions of the regular polytopes and cone : tabulated values .the university of edinburgh . 
accessed : 22 - 05 - 2012 .[ online ] .available : http://ecos.maths.ed.ac.uk/polytopes.shtml | signal reconstruction in compressive sensing involves finding a sparse solution that satisfies a set of linear constraints . several approaches to this problem have been considered in existing reconstruction algorithms . they each provide a trade - off between reconstruction capabilities and required computation time . in an attempt to push the limits for this trade - off , we consider a smoothed norm ( sl0 ) algorithm in a noiseless setup . we argue that using a set of carefully chosen parameters in our proposed adaptive sl0 algorithm may result in significantly better reconstruction capabilities in terms of phase transition while retaining the same required computation time as existing sl0 algorithms . a large set of simulations further support this claim . simulations even reveal that the theoretical curve may be surpassed in major parts of the phase space . |
aggregated tax income , italy , benford s lawin the present information age the collection , processing and dissemination of data from different financial activities , of utmost importance for human welfare , is quite easy and convenient .however , for policy makers , the extraction of meaningful information , critical for plausible strategic decisions , from the sea of available data is a formidable challenge .the far reaching , adverse , consequences of using flawed data have been exemplified by the 2007 financial crisis in the initiation of which data of questionable quality being used in corporates and governments played a significant role . one statistical tool which can serve as a first check on the quality of ( large ) numerical data and thereby , to a great extent , simplifies the deciphering of anomalies present in ( large ) data sets is the so - called benford s law .the law describes the counter - intuitive uneven distribution of numbers in large data sets .usually , it appears that the occurrence of the first digits of numbers has nothing to do with their abundance within the data .in fact , in a large data set , the appearance of each digit from 1 to 9 as first digit is equally likely with a proportion of about 11% .however , according to benford s law , the appearance of digits is such that the distribution of the first digits tends to be logarithmic with numbers having smaller first digits appearing more frequently than those having larger first digits .thus , the distribution is heavily skewed in favor of smaller digits with digits 1 , 2 and 3 taking about 60% of the total occurrences as first digits and the remaining six digits i.e. 4 to 9 left with only 40% of the occurrences .benford s law was first reported by newcomb following his observation that the pages of logarithmic table books get progressively cleaner as one moves from initial to latter pages , the first page being the dirtiest .about four decades latter , benford rediscovered the phenomenon through a similar observation and established it on a more solid footing by testing its accuracy on a large volume of data he collected from diverse fields , e.g. physical constants , atomic and molecular masses , street addresses , length of rivers etc . and concluded that the occurrence of first significant digits follow a logarithmic distribution where p(d ) is the probability of a number having the first non - zero digit d and is logarithmic to base 10 .+ the theoretical proportions for each of the digits from 1 to 9 to be first significant digit are as shown in table 1 .the development of research on benford s law into a full fledged field is as fascinating as the discovery of the law itself .firstly , though the mathematical form of the law is very simple a complete mathematical understanding is yet to be achieved .furthermore , the law represents the only distribution of leading digits invariant under scale and base transformations .secondly , numerous data sets from diverse fields conform to the law . for an exhaustive listrefer to .yet , there is _ a priori _ no set of criteria to predict what type of data should conform to the law .necessary and sufficient conditions are much in debate .nevertheless , ( i ) the presence of sufficient number of entries in the data , for digits to manifest themselves , ( ii ) spanning of several orders of magnitude , and ( iii ) the absence of any human restrictions , i.e. 
there are no built in minima or maxima , on the appearance of numbers in data are some properties that data under investigation must posses for conformity to the law .after the seminal work of benford , the first digit phenomenon again came to prominence following the efforts of nigrini who reported its frequent emergence in financial data .furthermore , nigrini provided the first practical application of benford s law in the detection of tax evasion by hypothesizing that to save on their tax liabilities individuals might understate income items and overstate deduction items in their tax return files leading to overall distortion of digital frequencies .a benford s law test successfully captured the manipulation of digital frequencies in the submitted data .furthermore , falsification of financial documents , manipulated trade invoices and tax returns submitted by companies have been clearly unraveled .today the law is routinely used by forensic analysts to detect error , incompleteness and wilfull manipulation of the financial data .the basic premise of the test is that first digits in real data , in general , have a tendency to approach the benford distribution whereas people intending to play with the numbers , when unaware of the law , try to place the digits uniformly .thus any departure from the law raises suspicion about the quality of the data and/or in the process involved in its generation .benford s law has been successfully utilized to expose the intention of cheating in both corporates and the governments . in order to attract investment, firms must posses a strong financial basis .moreover , to project a healthy , though sometimes superficial , picture they report enhanced profits and reduced losses which are not always the actual values .such a manipulation of financial statements , known as cosmetic earning management , is achieved through the rounding of numbers .this was detected first for firms in new zealand where the frequency of zeros and nines as second digit of reported earnings was respectively more and less than could be expected from benford s law .subsequently this unethical phenomenon has been reported to be the practice of the day for the firms worldwide .a recent example of corporate data manipulation , on a truly global scale , is the 2011 libor scandal in which a cartel of banks distorted the interest pricing process of the inter - bank loans .countries , like firms , also falsify economic data when it is strategically advantageous .thus , questions have been raised about the data submitted by greece to the eurostat to meet the strict deficit criteria set by the european union ( eu ) . among all the member states of eu, the data submitted by greece has been found to have the greatest deviation from benford s law .similarly , the macroeconomic data of china is a subject of much debate , as it has been alleged to be overstating its gdp numbers to mislead investors .furthermore , a benford s law based assessment of the macroeconomic data submitted by countries to the world bank hints at deliberate falsification of the data from the developing countries .the manipulative behavior percolates down to the local governments as recent studies using benford s law have uncovered deficiencies in the data of municipalities and states of several countries .the cases in point are the valejo city , orange county and jefferson county in u.s . 
, whose local authorities have filed for bankruptcy .the digit distributions of the financial statements of all the three municipalities have been shown to have significant departures from that expected on the basis of benford s law , thereby raising questions about the credibility of the statements on revenues and spending .further indications of data tampering by local governments have been uncovered through a benford analysis of the official financial reports of the fifty states of u.s .another study from brazil analyzing the digit distribution of 134,281 contracts issued by 20 management units in two states found significant deviations from benford s law and concluded that there is a tendency to avoid conducting the bidding process and the rounding in determining the value of contracts . in italy ,the municipalities represent the lowest level of the government responsible for providing services to the local residents .some of these activities are property issues , such as building permits , street lump , garbage collection , public transport , etc . , and social activities , such as child and elderly care services , etc . .tax collection is a fundamental source of revenues for local governments enabling the efficient delivery of services . on the other hand , tax evasionis known to be widespread across italy with studies estimating between quarter and half of the country s gdp to be hidden from authorities in the form of underground economy .the evasion of taxes is detrimental to the financial health of municipalities . to contain any financial distress ,the municipalities resort to scaling down of expenditure like cutting on the number of employees , reducing the salaries of those that continue to be employed and even complete stopping of some of the services .thus , any financial distress of municipalities has severe repercussions on the lives of the taxpayers and municipal employees .it is important to have better oversight of the quality of financial statements and accountability over the use of funds .the concerns on data quality and the poor auditing procedures being used have resurfaced more vigorously following the bankruptcy of number of local government bodies across several industrialized countries during recent financial crisis . in the present study, we analyze the yearly aggregated tax data of all the italian municipalities ( cities ) for a period of five years from 2007 to 2011 to see if there are any deviations from benford s law .beforehand , given the scale of tax evasion in italy , one expects that the digit distribution of tax data would be in complete disagreement of the predictions of benford s law .however , we find that the tax data of all the italian cities shows an impressive compliance to the law . 
furthermore , we also analyze the yearly ( city cumulated ) data from three italian regions of calabria , campania and sicily .the municipalities of these regions have , from common knowledge or assumption , a low level of governance , poor delivery of services , and substantial presence of mafia and organized crime .thus given the poor tax administration in the municipalities of these regions , we again anticipated to find some hints of tax evasion through deviations from benford s law .surprisingly , the data from these three regions also satisfy the law , except the years 2007 and 2008 data for campania which show large deviations from benford s law .the italian state is organized in four levels of government ( i ) a central government ( ii ) 20 regional governments ( iii ) 110 provincial governments and ( iv ) more than 8000 municipalities , at the time of this writing .the municipalities represent the lowest level of the government in the administrative structure of italy .each municipality belongs to one and only one province , and each province is contained in one and only one region .the total number of municipalities has slightly varied over the years .this is due to occasional administrative reorganization , through the acts of the italian parliament , leading to the creation of new municipalities and sometimes also the merger of two or more municipalities into one .thus we have a total of 8101 , 8094 , 8094 , 8092 and 8092 municipalities respectively for year 2007 , 2008 , 2009 , 2010 and 2011 , respectively . during this time interval ,7 municipalities have changed from a province to another one , - in so doing also changing from a region ( marche ) to another ( emilia - romagna ) , in 2008 .we have analyzed the yearly aggregated tax income ( ati ) data of all municipalities for the period of five years from 2007 to 2011 .the data has been obtained by ( and from ) the research center of the italian ministry of finance and economy ( mfe ) .we have disaggregated contributions at the municipal level to the italian gdp. the standard methodology of a benford analysis is to first count the appearances of each digit from 1 to 9 as the first digit of numbers in the data .then the corresponding theoretical frequency of each digit as first digit is determined from benford s law .this is followed by estimating the goodness of fit of the theoretical and observed digit distributions both graphically and also using suitable fitness test .these steps are explained through the analysis of the yearly tax data for all the italian cities in table 2 .the , the number of times each digit from 1 to 9 appears as the first significant digit in the corresponding data are shown for each yearly data set in columns 2 , 4 , 5 , 7 , 8 of table 2 .also shown are , the corresponding counts ( in brackets ) for each digit as predicted by benford s law : where for each year is the total number of records i.e. the number of municipalities .for example , the total number of municipalities for year 2007 is , as shown in column 2 of table 2 .the root mean square error ( ) is calculated from the binomial distribution the observed count for digit 1 as first significant digit is 2433 for 2007 , whereas the expected count from benford s law is 2438.64 with an error of about 41.29 .the expected count from benford s law and the corresponding error depends only on the number of records in the sample of data . 
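the methodology just described ( first - digit counts , benford expected counts , the binomial root - mean - square error and the pearson statistic against the 15.507 critical value ) can be summarized in a short python sketch . the synthetic log - uniform amounts used below are only a stand - in for the actual ati figures , which are not reproduced here , and all function and variable names are ours .

```python
import numpy as np

def first_digits(values):
    # leading non-zero digit of each strictly positive value
    v = np.asarray(values, dtype=float)
    v = v[v > 0]
    magnitude = np.floor(np.log10(v))
    return (v / 10.0 ** magnitude).astype(int)

def benford_table(values):
    d = np.arange(1, 10)
    p = np.log10(1.0 + 1.0 / d)                            # Benford proportions for digits 1..9
    fd = first_digits(values)
    observed = np.array([(fd == digit).sum() for digit in d])
    n = observed.sum()
    expected = n * p                                        # expected counts
    rms_error = np.sqrt(n * p * (1.0 - p))                  # binomial root-mean-square error
    chi2 = ((observed - expected) ** 2 / expected).sum()    # Pearson statistic, 8 degrees of freedom
    return observed, expected, rms_error, chi2

# synthetic stand-in for a municipal tax column: log-uniform amounts spanning several decades
rng = np.random.default_rng(5)
fake_ati = 10.0 ** rng.uniform(3, 8, size=8101)

observed, expected, rms_error, chi2 = benford_table(fake_ati)
for idx in range(9):
    print(f"digit {idx + 1}: observed {observed[idx]:5d}   expected {expected[idx]:8.1f} +/- {rms_error[idx]:5.1f}")
print(f"chi-square = {chi2:.2f}   (95% critical value for 8 degrees of freedom = 15.507)")
```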
for the yearly ati datathe number of records for years 2008 and 2009 is 8094 and is 8092 for years 2010 and 2011 .thus , the expected count from benford s law and the corresponding errors are shown only once in column 4 and 6 respectively for such cases . from a visual inspection of table 2, it is found that the observed and expected digit distributions are in reasonable agreement within the margins of the calculated error .this is further illustrated in fig .1 , where the observed proportion of the first digits are compared with those expected from benford s law .contrary to the _ a priori _ expectations , for the ati data of all these years , the agreement between the observed and theoretically predicted distributions is quite remarkable . in view of the statistical quantification of the closeness between theobserved and predicted digits distributions , the pearson s , the most widely used test in benford s law literature , is thereafter used .of course , zero is not the significant digit when it occupies the extreme left of a number .thus , in case of the first digit analysis , we have `` data points '' , whence degrees of freedom .the critical value of , under confidence level , for acceptance or rejection of null hypothesis - the observed and theoretically predicted digit distribution are same , is =15.507 .if the value of the calculated is less than the critical value , then we accept the null hypothesis and conclude that the data fits benford s law . for the 2007 ati data ( column 2 of table 2 ) ,the is 15.36 ( the last row and column 2 of table 2 ) , a value smaller than ; whence the null hypothesis is accepted indicating in turn that the tax data for year 2007 follows benford s law .the calculated for the year 2008 is 27.52 and for 2009 is 18.96 which are far greater than the critical value of 15.507 .therefore , the null hypothesis must be rejected .furthermore , the for year 2010 is 12.29 and for 2011 is 13.72 and both values are less than the critical value , such that the null hypothesis must be accepted . from a visual examination of fig .1 , showing the comparison of the observed proportion of the first digits with those expected from benford s law , the conclusion is of an excellent compliance for all the years .thus for the 2008 and 2009 ati data , conclusions on compliance to benford s law drawn from fig.1 and the test appear to be contradictory .however , it is known that the results of the test are sensitive to the number of records in the data under analysis .the rejection of the null hypothesis becomes difficult for samples of small sizes , called type ii error , whereas for large samples the test suffers from `` excessive power '' , wherein even a small deviation from benford s law turns out to be significant , called type i error .this leads to a wrongful rejection of the null hypothesis .the large data sets require increasingly better fits to pass the threshold for conformity , although by inspection they give better fits than small data sets , and often fail a test that the small dataset passes .thus , larger values of for the tax data of years 2008 and 2009 , in our study , are likely due to this excessive power , despite the fact that they visually appear to show clear and extraordinary conformity to benford s law .in fact , it is due to this excessive power that the calculated for 2007 , 2010 and 2011 yearly data sets are only marginally smaller than the critical value of 15.507 , though the close visual conformity , evident from fig . 
1 , compels one to expect much smaller values .thus , rather than being an indication of departure of the ati data from benford s law , the high values are a manifestation of the limitations of the pearson s test itself .a second analysis is in order .it is widely accepted that the so - called underground or black economy is larger in the southern regions of italy than elsewhere .studies have shown that personal income tax and value added tax evasion is highest in calabria , sicily and campania . in anticipation to find some support for this conjecture through the deviations from benford s law , we specifically analyzed the data from these three regions .calabria consists of 409 municipalities grouped in 5 provinces .we show the analysis for the tax data of all years for calabria in table 3 . againsince the number of records is same for all the yearly data sets the benford expected frequency of each digit and the corresponding error are same and are only shown once in column 6 of the table 3 .the pearson s are all much smaller than the critical value of 15.507 .therefore , we conclude that the ati data follows benford s law , a fact which is also clearly attested by fig .2 , where the observed and expected digit distributions are compared .the analysis for sicily region is shown in table 4 . againboth the calculated and the corresponding graphical representation of fig .3 show an excellent compliance to the law .the results for the campania region are shown in table 5 . here , the respective for 2007 and 2008 are large than the critical value for acceptance of null hypothesis .thus , for these two years the ati data of campania region clearly deviates from benford s law .the for years 2009 to 2011 are less than the critical value of 15.507 , though only marginally .it may be noted here that unlike the case of 2008 and 2009 yearly data for all the municipalities , for which the results of the test are contradictory to the inferences , evident from the corresponding figure fig .1 , for campania , the calculated and the observations from the corresponding figure fig . 4are complimentary .the departure of campania data from benford s law can be clearly seen from fig .4 where the frequency of digit 3 is much less and that of digit 6 is much greater than expected from benford s law .though tax evasion in italy is a phenomenon of gigantic proportions its estimation is easier said than done . 
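the ``excessive power'' of the pearson test discussed above can be illustrated numerically: for a fixed set of observed digit proportions, the statistic scales linearly with the number of records, so the same visual fit can pass the 15.507 threshold for a small sample and fail it for a large one. the sketch below uses an arbitrary small deviation from benford's law, chosen purely for illustration.

```python
import math

BENFORD = [math.log10(1 + 1 / d) for d in range(1, 10)]

def pearson_chi2(observed_props, n):
    """Pearson chi-square for given observed first-digit proportions and sample size n."""
    return sum(n * (o - p) ** 2 / p for o, p in zip(observed_props, BENFORD))

# Fixed small deviation: move one percentage point of mass from digit 1 to digit 9.
props = BENFORD[:]
props[0] -= 0.01
props[8] += 0.01

for n in (500, 2000, 8094):
    print(n, round(pearson_chi2(props, n), 2))
# Only the largest sample exceeds the 95% critical value of 15.507 (8 degrees of
# freedom), although the digit proportions are identical in all three cases.
```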
nevertheless , the literature on estimation of tax evasion in italy is plentiful .the levels of tax evasion have been shown to vary across different regions of italy and also across different sections of the italian society along its economic landscape .furthermore , political and social factors responsible for the thriving of tax evasion have been pointed out .for example , the lack of governance and the levying of higher taxes are known to discourage taxpayers .the congestion of tax evaders is another factor encouraging evasion as tax authorities are overburdened thereby reducing the risk of detection .the modeling of tax evasion is difficult as evasion in itself consists of partial or complete concealment of a significant proportion of economic activities from the authorities .thus researchers rely on sample surveys believing that people will disclose their true incomes when promised anonymity .the data from surveys are then compared with the official data maintained by italian mfe .the latter is in fact the primary source of data invariably employed in modeling the different aspects of tax evasion .however , there is some general concern and much skepticism on the quality of tax data being maintained by mfe .the existence of several inconsistencies related to format , syntax and semantic have been pointed out .more serious , from the point of view of the subject in this present study , are the missing , obsolete , or incorrect data values and undiscussed outliers .the present study assesses the quality of the mfe tax data relating to municipalities through the application of benford s law in order to see whether there are in the ati set , numerical anomalies which might hint at deliberate attempts of manipulation .any manipulation , if found , would in turn likely signal the intention of tax evasion .when people resort to under - reporting their taxable incomes , the occurrence pattern of digits in the tax return files will be altered in a major way .this has been shown to be true by nigrini , in the first ever practical application of benford s law , after analyzing tax return files of individuals on u.s .internal revenue services .it was revealed that tax payers with low incomes resorted to blatant manipulation of line items at filing time to a greater extent than high income tax payers .furthermore , prior studies suggest that benford s law successfully captures the manipulation of tax data to the point that the deviations from the law are acceptable as evidence in courts of several usa states .our present study shows that overall tax data of italian municipalities are in complete agreement with benford s law , thereby negating the presence of evasion . at first , these results are somewhat surprising , since large scale presence of tax evasion in italy is well documented .however , on a more close scrutiny the reason for the compliance of data in our study becomes apparent . 
in our analysis ,each municipality reports only one record of tax data for a particular year ( in contrast to nigrini analysis of the data records of individual tax payers ) .since municipalities are composed of hundreds and thousands of inhabitants the tax value is an aggregate value which , of course , is a result of combinations of tax receipts of a large number of citizens .thus , any individual deviation from benford s law might be lost due to the multiple mathematical operations .in fact , the numbers obtained after multiplying a lot of numbers together have been found to agree with benford s law .this has been conjectured to be one of the possible mathematical explanations for the validity of the law .it is relevant to mention here that another type of evasion comprises of complete concealment of incomes from the authorities .such incomes usually arise from underground illegal activities like thefts , drugs , kickbacks , skimming and contract rigging .since no record / trace of transactions exists at all the detection of tax evasion through such `` underground '' economic activities can not be achieved with benford s law .previous studies have shown that personal income tax and value added tax evasion is highest in calabria , sicily and campania i.e. the southern regions of italy .the relative backwardness of these regions may be one reason for the high tax evasion since inefficiency of the municipalities in delivering services discourages the payment of taxes as tax payers do not see any proper return for their paid taxes .the weak governments of poorer regions are generally less efficient in tax administration further affecting the realization of tax revenues due to the strong presence of black economy , arising out of illegal and underground activities of mafia , ( always ) hidden from the authorities .furthermore , extortion by mafia compels legal businesses to evade taxes .fearful of their extensive reach and the deadly consequences for not obeying them businesses pay taxes to mafia rather than to government .furthermore , the mafia infiltration of local governments of municipalities across these regions are pervasive and in order to restore the law enforcement against such infiltration the central government from time to time has resorted to the dissolution of the municipal administration . over the yearsthere have been a total 217 dissolution of local municipalities across italy .however , most of the dissolutions have been invoked in the municipalities of south - italian regions with only 4 dissolutions being reported outside these regions . in the light of the above discussion, there are sufficient reasons to suspect that the tax data of these regions would show large departures from benford s law . 
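the multiplicative explanation mentioned above (products of many numbers tend toward benford's law) can be checked with a quick monte carlo experiment; the sketch below uses arbitrary factor distributions and sample sizes, and only illustrates that literature claim, not the additive aggregation of municipal tax receipts.

```python
import math
import random
from collections import Counter

def first_digit(x):
    """Leading significant digit of a positive number."""
    while x >= 10:
        x /= 10.0
    while x < 1:
        x *= 10.0
    return int(x)

random.seed(0)
n_samples, n_factors = 50_000, 20           # illustrative sizes only
counts = Counter()
for _ in range(n_samples):
    prod = 1.0
    for _ in range(n_factors):
        prod *= random.uniform(0.1, 10.0)   # arbitrary positive factors
    counts[first_digit(prod)] += 1

for d in range(1, 10):
    print(d, round(counts[d] / n_samples, 4), round(math.log10(1 + 1 / d), 4))
# The empirical first-digit frequencies approach log10(1 + 1/d) as the number
# of factors grows, in line with the multiplicative explanation of the law.
```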
on the contrary, we found that income tax data from these regions shows strong submission to benford s law except for the 2007 and 2008 yearly data of campania .this is somewhat surprising since the calabrian ndrangheta is the most powerful amongst all the italian mafias .we have analyzed the yearly aggregated tax income data of all the italian municipalities to search whether there are anomalies which might hint at deliberate attempts of manipulation for tax evasion , a phenomenon widespread across the country , according to common knowledge .however , the overall data showed excellent compliance with benford s law , thereby negating the presence of manipulations at this aggregate level .it might be that the aggregation process hides some individual breakdowns .however , the aggregation is not a multiplicative process .we have also analyzed the municipality tax data of three regions ( calabria , campania and sicily ) known for the strong presence of mafia .again the data showed compliance to benford s law except for the 2007 and 2008 data for campania which showed significant departures from the law .our findings suggests to reconsider the campania data .no need to say that the other ( 17 ) regions might be similarly investigated .moreover , one possibility for further probing the data and maybe concluding on some reason for the breakdown of benford s law , in a few cases , would be to investigate the province level .the reader will easily understand that this demands 110 or so tests / year .this activity goes beyond the scope of the present report and is left open for further research .up to now , necessary and sufficient conditions for the application , and the more so , explanation of benford s law are not known , in spite of intense applications .the present report indicates that more theoretical and numerical investigations are still of interest .this paper is part of ausloos s scientific activities in cost action cost action is1104 , `` the eu in the new complex geography of economic systems : models , tools and policy evaluation '' and in cost action td1210 `` analyzing the dynamics of information and knowledge landscapes '' .s. newcomb , note on the frequency of use of different digits in natural numbers , am . j. math .4 ( 1881 ) 39 - 40 .f. benford , the law of anomalous numbers , proc . am .78 ( 1938 ) 551 - 572 .a. berger , t. p. hill , benford s law strikes back : no simple explanation in sight for mathematical gem , the mathematical intelligencer , 33 ( 1 ) ( 2011 ) 85 - 91 .e. canessa , theory of analogous force on number sets , physica a 328 ( 1 ) ( 2003 ) 4452 .n. gauvrit , j - p .delahaye , pourquoi la loi de benford nest pas myst ' erieuse , math . & sci .182 ( 2008 ) 7 - 15 .n. gauvrit , j - p .delahaye , loi de benford g ' en ' erale , math . & sci .sci . 186 ( 2009 ) 5 - 15 .t. p. hill , base - invariance implies benford s law , proc . am .123 ( 3 ) ( 1995 ) 887 - 895 .t. p. hill , a statistical derivation of the significant - digit law , stat .10 ( 4 ) ( 1995 ) 354 - 363 .r. s. pinkham , on the distribution of first significant digits , the annals of mathematical statistics , 32(4 ) ( 1961 ) 1223 - 1230 .l. pietronero , e. tosatti , v. tosatti , a. vespignani , explaining the uneven distribution of numbers in nature : the laws of benford and zipf , physica a 293 ( 2001 ) 297 - 304. t. a. mir , the law of the leading digits and the world religions , physica a 391 ( 2012 ) 792 - 798. t. a. 
mir , the benford law behavior the religious activity data , physica a 408 ( 2014 ) 1 - 9. t. a. mir , the leading digit distribution of worldwide illicit financial flows , j. c. pain , benford s law and complex atomic spectra , phys .e. 77 ( 2008 ) 012102 .l. shao , b. q. ma , first digit distribution of hadron full width , mod .a 24 ( 2009 ) 3275 - 3282 .m. sambridge , h. tkalci , a. jackson , benford s law in the natural sciences , geo .a 37 ( 2010 ) l22301 .g. judge , l. schechter , detecting problems in survey data using benford s law , journal of human resources .44 ( 2009 ) 1 - 24. w. r. mebane , jr . , the wrong man is president !overvotes in the 2000 presidential election in florida , presp . on polit . 2 ( 2004 ) 525 - 535. b. f. roukema , benford s law anomalies in the 2009 iranian presidential election , .w. k. t. cho , b. j. gaines , breaking the ( benford ) law : statistical fraud detection in campaign finance , the american statistician 61 ( 2007 ) 218 - 223 .p. clippe , m. ausloos , benford s law and theil transform of financial data , physica a 15 ( 2012 ) 6556 - 6567 e. den heijer and a. e. eiben .using aesthetic measures to evolve art . in : ieee congress on evolutionary computation ( cec 2010 ) , barcelona , spain , 18 - 23 july 2010 .ieee press .f. sandron , do populations conform to the law of anomalous numbers ?population - e 57 ( 2002 ) 755 - 761 .t. alexopoulos , s. leontsinis , benford s law and the universe , in press journal of astronomy and astrophysics ( 2014 ) .m. ausloos , c. herteliu , b. ileanu , breakdown of benford s law for birth data . submitted .t. p. hill , a. berger , benford online bibliography , c. durtschi , w. hillison , c. pacini , the effective use of benford s law to assist in detecting fraud in accounting data , journal of forensic accounting 1 ( 2004 ) 17 - 34 . m. j. nigrini , taxpayer compliance application of benford s law , the j. am .18 ( 1 ) ( 1996 ) 72 - 92. m. j. nigrini , l. j. mittermaier , the use of benford s law as an aid in analytical procedures , auditing : j. pract .theory 16 ( 2 ) ( 1997 ) 52 - 67 .m. j. nigrini , benford s law : applications for forensic accounting , auditing and fraud detection , new jersey , usa : wiley publications ( 2012 ) .t. p. hill , the difficulty of faking data , chance 12 ( 3 ) ( 1999 ) 27 - 31 .h. varian , benford s law , american statistician 23 65 - 66 ( 1972 ) . c. carslaw , anomalies in income numbers : evidence of goal oriented behavior , accounting rev .63(2 ) ( 1988 ) 321 - 327 .j. kinnunen , m. koskela , who is miss world in cosmetic earnings management ?an analysis of small upward rounding of net income numbers among 18 countries , j. intern .res . 2 ( 2003 ) 39 - 68 .abrantes - metz , g. judge , s. villas - boas , tracking the libor rate , applied economics letters , 10(10 ) ( 2011 ) 893 - 899 .t. michalski , g. stoltz , do countries falsify economic data strategically ?some evidence that they might , the review of economics and statistics , 95 ( 2013 ) 591 - 616 .b. rauch , m. gttsche , g. brhler , s. engel , fact and fiction in eu - governmental economic data , german economic review 12 ( 2011 ) 243 - 255 . c. a. holz , the quality of china s gdp statistics , stanford university , scid working paper 487 , 2 december 2013 .j. nye , c. moul , the political economy of numbers : on the application of benford s law to international macroeconomic statistics .be journal of macroeconomics , 7(1 ) , article 17 ( 2007 ) .a. h. 
haynes , detecting fraud in bankrupt municipalities using benford s law , scripps senior theses . , paper 42 .( 2012 ) available at g. g. johnson , j. weggenmann , exploratory research applying benford s law to selected balances in the financial statements of state govenments , academy of accounting and financial studies journal , 17(1 ) ( 2013 ) 31 - 44 .j. costa , j. santos , s. travassos , an analysis of federal entities compliance with public spending : applying the newcomb - benford law to the 1st and 2nd digits of spending in two brazilian states , r. cont .- usp , sao paulo , 23(60 ) , 187 - 198 , set./out./nov./dez . 2012 .d. bartolini , r. santolini , political yardstick competition among italian municipalities on spending decisions , the annals of regional science , 49(1 ) ( 2012 ) 213 - 235 .e. padovani , e. scorsone , measuring financial health of local governments a comparative framework , year book of swiss administrative sciences ( 2011 ) .f. schneider , the increase of the size of the shadow economy of 18 oecd countries : some preliminary explanations , ifo working paper , 306 ( 2006 ) .g. jones , italy approves decree to stave off bankruptcy for rome council , feb .28 ( 2014 ) .available at .g. brosio , a. cassone , r. ricciuti , tax evasion across italy : rational noncompliance or inadequate civic concern , public choice , 112(3 ) ( 2002 ) 259 - 273 .f. calderoni , where is the mafia in italy ? measuring the presence of the mafia across italian provinces , global crime , 12(1 ) ( 2011 ) 41 - 69 . f. schneider , the value added of underground activities : size and measurement of the shadow economies of 110 countries all over the world , world bank working paper washington , d.c .( 2000 ) . p. di caro , g. nicotra , knowing the unknown across regions : spatial tax evasion across italy , m. r. marino , r. zizza , the personal income tax evasion in italy : an estimate by taxpayer s type . in : michael pickhardt and aloysprinz ( eds . ) , tax evasion and the shadow economy , cheltenham : edward elgar , 2011 . c. v. fiorio , f. damuri , workers tax evasion in italy , giornale degli economisti e annali di economia , 64 ( 2/3 ) ( 2005 ) 247 - 270 .r. galbiati , giulio zanella , the tax evasion social multiplier : evidence from italy , journal of public economics , 96(5 ) ( 2012 ) 485 - 494 .p. missier , g. lalk , v. verykios , f. grillo , t. lorusso , p. angeletti , improving data quality in practice : a case study in the italian public administration , distributed and parallel databases , 13(2 ) ( 2003 ) 135 - 160 .j. boyle , an application of fourier series to the most significant digit problem , am .month . 101( 1994 ) 879 - 886 .p. d. scott , m. fasli , benford s law : an empirical investigation and a novel explanation , csm technical report 349 , csm technical report , department of computer science , university essex .available at , 2001 .m. alexeev , e. janeba , s. osborne , taxation and evasion in the presence of extortion by organized crime , j. comparative economics , 32 ( 2004 ) 375 - 387 .b. geys , g. daniele , organized crime , institutions and political quality : empirical evidence from italian municipalities .workshop paper , dept . 
of political science , stanford university , available at | the yearly aggregated tax income data of all , more than 8000 , italian municipalities are analyzed for a period of five years , from 2007 to 2011 , to search for conformity or not with benford s law , a counter - intuitive phenomenon observed in large tabulated data where the occurrence of numbers having smaller initial digits is more favored than those with larger digits . this is done in anticipation that large deviations from benford s law will be found in view of tax evasion supposedly being widespread across italy . contrary to expectations , we show that the overall tax income data for all these years is in excellent agreement with benford s law . furthermore , we also analyze the data of calabria , campania and sicily , the three italian regions known for strong presence of mafia , to see if there are any marked deviations from benford s law . again , we find that all yearly data sets for calabria and sicily agree with benford s law whereas only the 2007 and 2008 yearly data show departures from the law for campania . these results are again surprising in view of underground and illegal nature of economic activities of mafia which significantly contribute to tax evasion . some hypothesis for the found conformity is presented . |
ion beams have many application in research and industry . being able to increase the delivered intensity to the application and reducing size and cost are important aspect of developing new accelerators and enabling new application for particle accelerators . among the most ambitious applications for high - intensity beamsare driving fusion reactions for the purpose of electricity generation .there are roughly two approaches to the challenge of fusion energy production : magnetic confinement ( mfe ) and inertial confinement ( ife ) . in both approaches ,high - intensity ion beams have been proposed as methods to heat the fusion fuel . in magnetic confinement , typified by tokamaks and other magnetic confinement schemes , a plasma of , for example , deuterium and tritium is heated to a temperature of several where the fusion cross - sections are high enough for the generation of energy from the reaction products exceeds the energy required to sustain the plasma .the plasma and fusion reactions are sustained for a long time ( minutes to hours ) .various methods are being developed to heat the plasma , including the deposition of energy in the plasma from high current deuteron beams .for iter , two deuterium beam systems are being constructed with ion kinetic energy of .each beamline occupies a volume exceeding {m^3} ] to achieve ignition , and simulations indicate that 100 more energy may be created from the fusion reactions .in contrast to the magnetic fusion approach , the process here is pulsed , with a repetition rate of several hertz .the ion beam requirements are constrained by the target design , where for heavy ions ( a>100 ) a kinetic energy of several and kiloamperes of current are required . for lighter ions and for a given total beam energy ,the current must be increased .there has been interest in magneto - inertial fusion ( mif ) , where aspects of magnetic and inertial fusion approaches are merged .the initially low density plasma is confined by a magnetic field .the plasma and embedded magnetic field are compressed by , e.g. , a metal liner , directed plasma or beams , with a confinement time longer than characteristic of ife , and shorter than mfe . for the ife and mif fusion energy applications ,the final current density must be and the total energy per pulse needs to be {mj} ] a unit cell size of {\mu m} ] ( taken as 50% of the breakdown limit from fig . 4 in [ sl02 ] ) will be {a}{cm^2} ] for h ions , both at .normal ion sources produce of the order of {ma}{cm^2} ] have been achieved. the ion source will therefore be the limiting factor in most applications and bunching , funneling or other methods of increasing the current density might be utilized to overcome this limitation .the proposed accelerator structure consists of two main components : rf - units and esq doublets .the rf - units will provide the acceleration for the ion beam and the esqs will allow effective beam transport along the accelerator structure .each rf - wafer unit consists of two vacuum gaps between ring electrodes that are used for acceleration and a field - free drift region of a specific length that allows the rf - field to adjust its phase , so that the ions entering the second gap will also see an accelerating field . 
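the length of the field - free drift follows from requiring the ion transit time between the two gaps to equal half an rf period, so that the gap field has reversed sign when the ion arrives at the second gap. the sketch below shows this standard non - relativistic estimate; the beam energy, ion species and rf frequency are placeholder values, not the ones used in the experiments described later.

```python
import math

QE = 1.602176634e-19       # elementary charge [C]
AMU = 1.66053906660e-27    # atomic mass unit [kg]

def drift_length(kinetic_energy_ev, mass_amu, rf_frequency_hz, charge_state=1):
    """Half-RF-period drift length between two accelerating gaps.

    The ion spends half an RF period in the field-free drift, so the gap
    voltage has changed sign when it reaches the second gap (non-relativistic).
    """
    energy_j = kinetic_energy_ev * charge_state * QE
    velocity = math.sqrt(2.0 * energy_j / (mass_amu * AMU))
    return velocity / (2.0 * rf_frequency_hz)

# Placeholder example: 10 keV Ar+ ions and a 15 MHz drive give roughly a 7 mm drift.
print(drift_length(10e3, 40.0, 15e6))
```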
for each rf - unitthe ions enter and leave the unit at ground potential .all rf - wafers share the same design making batch fabrication possible : each beamlet corresponds to a through hole aperture in the wafer that has a ring electrode at its entrance and exit on the surface of the wafer . in siliconthis would be a deposited metal ring and in our circuit board design the on board copper is utilized for this purpose . to create an rf - unit ,four wafers are stacked together .both sides of the outer wafers are grounded and all four sides of the inner wafers are connected to the rf - source ( see fig .[ fig : rf - concept ] ) .we use gaps as acceleration gaps .the drift distance is calculated depending on the expected ion energy , ion mass , and rf - frequency .precision washers are used between the wafers to define the gap distances . to focus the beam we rely on esqs .each consists of two pairs of electrodes which are biased positive and negative respectively . to implement esq components , we form four electrodes around each beam aperture and run electrical connections to the front and back of each wafers for the positive and negative voltages ( see fig .[ fig : esq - concept ] ) .the esq structure is therefore completely contained in a single wafer .a single esq wafer focuses the beam in one direction and defocuses the beam in the other .we therefore use two esq wafers to form a focusing doublet to provide an overall focusing effect on the beam .for initial prototyping we utilize pc board ( fr-4 ) to fabricate the rf and esq wafers . using laser micromachining ,top and bottom metal layers are patterned and holes are drilled through the board .alignment between top and bottom is achieved by using an integrated vision system and pre - fabricated alignment fiducials .furthermore , by using the integrated camera of the tool , top and bottom layers can be registered for alignment .steps of the process to fabricate rf wafers are given in fig .[ fig : rf - fab ] . in this process , we start with fr-4 based board that has copper on both sides as seen in the cross section .the circular holes are created using a laser tool .then laser cutting is used to define top and bottom metal routing . a top and bottom view of the fabricated rf wafer is shown in fig . [ fig : rf - fab ] .the main steps of the process to fabricate esq wafers are given in fig .[ fig : esq - fab ] . in this process, the fr-4 based board also has copper on both sides .the holes are then created using the laser tool . as the holes in the pcbs are created using a scanned laser beam rather than a milling tool ,arbitrary hole shapes can easily be realized . after defining of holes ,a copper layer is evaporated onto the board in a conformal evaporator with a rotating chuck system on both sides ( {\mu m} ] ) to ignite the plasma .ions are extracted during the arc pulse by floating the source body to a high voltage ( ) and using a three electrode extraction system ( see fig . [fig : setup ] ) .the plasma facing electrode is not electrically connected to a fixed voltage source and therefore floats to the plasma potential during operation .a voltage of has been measured during plasma operation .the second electrode is used to extract the ion beam and is biased at a negative voltage relative to the source body .a third electrode has been implemented that can be utilized to gate the extracted beam and short , uniform ion pulses of {\mu s} ] per gap ) are the dominant loads .the frequency used for the experiments presented here is . 
a peak amplitude on the printed circuit board of up to has been measured .in the absence of a pre - bunching ( or chopper ) section between the ion source will deliver a constant ion current ( compared to the rf - period ) , therefore some fraction of ions are expected to get accelerated and others decelerated .a retarding potential analyzer was used to measure the beam energy distribution .this has been implemented by adding a biased grid after the rf - units followed by a faraday cup to measure the beam current . by scanning the grid voltage ,the faraday - cup at the end of the beamline selectively detects the current of ions with a kinetic velocity that is higher than the applied voltage on the grid .the esq wafers are tested in a similar fashion . instead of the rf - units a single esq or a doublet waferis mounted behind the ion source .we then use a scintillator ( rp 400 plastic scintillator ) and a fast image intensifying camera ( princeton instruments ) to look at the beam output .since a filament driven ion source is being used , light from the filament also reaches the scintillator , by looking at the scintillator at an angle , we avoid overlapping the light from the scintillator with light output from the ion beam hitting the scintillator . as both have roughly the same amplitude , this avoids the need for background subtraction .voltage scans on the esq electrodes then result in beam deflection that can be measured .to characterize the ion source , we scanned the extraction voltages of the ion source .the source was operated at using argon .the filament was on for to create a stable and high enough filament temperature .then the arc voltage was pulsed at for .the source was floated at and then extraction voltages on the second and third electrode was scanned . here , the third electrode was always held higher than the extraction electrode . fig .[ fig : source ] shows the resulting beam current measured in the source without any rf - unit or esq - units present .as you can see , the ion current increases according to the child - langmuir law of space - charge limited extraction at the beginning and then , depending on the plasma density , changes at one point into an emission limited regime . operating the source in the emission limited regime , will generate more noise from shot - to - shot , since the output level with depend on the current gas pressure , filament conditions , etc .therefore , an operating point at an extraction voltage of was chosen for an arc current of , which provided a very stable source performance .next a two stage rf - unit was tested .the beam energy profile was measured at three different conditions : 1 ) minimum rf - amplitude ( lowest setting on the rf - generator ) 2 ) rf - amplitude at amplitude and 3 ) rf - amplitude at amplitude .the results are shown in fig .[ fig : rf - result ] together with simulated results ( solid lines ) .the simulation uses a very simply 1d - model to estimate the beam energy of several nanosecond long beam pulses and a perfect energy filter in front of the faraday - cup .the simulated results are based on the rf - frequency and amplitude , the ion energy and mass , as well acceleration gap positions and contains no free parameters .results from using several rf - gaps as well discussion of the simulation program will discussed in a future publication . 
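a minimal version of such a 1d phase - sampling model is sketched below: ions are injected uniformly over one rf period, receive an energy kick at each gap according to the instantaneous gap voltage, and drift ballistically in between, with a perfect energy filter emulating the retarding grid. all numerical parameters (injection energy, rf amplitude and frequency, gap positions) are placeholders rather than the experimental values, and space charge and finite gap transit times are ignored.

```python
import math

QE = 1.602176634e-19       # elementary charge [C]
AMU = 1.66053906660e-27    # atomic mass unit [kg]

def final_energies(e0_ev, mass_amu, v_rf, f_rf, gap_positions_m, n_phases=2000):
    """Final kinetic energies (eV) of singly charged ions injected uniformly over one RF period."""
    mass = mass_amu * AMU
    period = 1.0 / f_rf
    energies = []
    for k in range(n_phases):
        t = k * period / n_phases            # injection time (phase sample)
        e, z = e0_ev, 0.0
        for zg in gap_positions_m:
            v = math.sqrt(2.0 * e * QE / mass)
            t += (zg - z) / v                # ballistic drift to the next gap
            z = zg
            e += v_rf * math.sin(2.0 * math.pi * f_rf * t)   # energy kick in eV (charge state 1)
        energies.append(e)
    return energies

def rpa_signal(energies, grid_voltages):
    """Ideal retarding-potential-analyzer curve: fraction of ions above each grid voltage."""
    n = len(energies)
    return [sum(e > vg for e in energies) / n for vg in grid_voltages]

# Placeholder parameters: 5 keV Ar+ injection, 500 V RF amplitude at 15 MHz and
# four gaps (two RF units); these are not the experimental values.
gaps = [0.002, 0.009, 0.016, 0.023]
e_final = final_energies(5e3, 40.0, 500.0, 15e6, gaps)
print(min(e_final), max(e_final))            # spread of simulated beam energies
print(rpa_signal(e_final, [5e3, 6e3, 7e3]))  # fraction of the beam above each grid voltage
```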
even for the case with almost no rf ,most of the feature in the signal still can be attributed to the few volts of remaining rf - amplitude .the remaining discrepancy is due to the energy spread that is intrinsic to the ion source .the measured beam energy can be estimated to be .for the cases where the rf was turned on , we measured a wider energy spread .as can be seen , some particles arrive at the correct time to achieve full acceleration in each gap , whereas other arrive at different phases of the rf and either get partial accelerated , do not feel any acceleration or get decelerated .the measured energy distribution fits very well with our simulated results from the 1d - model .as one can see , the maximum beam energy is roughly given by the starting beam energy and four times the maximum rf - amplitude .the fact that the measured energy is a bit lower can be attributed to the fact that the gap distances were designed for a slightly higher rf - amplitude . for the esq tests ,we first utilized a single esq wafer and demonstrated the typical elliptical deformation of a round beam that is the result of focusing the beam in one direction and at the same time defocusing the beam in the other direction , see fig .[ fig : esq - result ] for results from applying different polarities to the esq electrodes . combining two esqs into a doubletthen allows the beam to be focused in both directions . to demonstrate this, we were able to choose the voltages on an esq doublet so that the initial round beam is again focused to a round beam after passing to energies esq ( not shown ) .in this paper we have shown that the basic components needed to implement a mems based meqalac work .this opens the path to compact , high current accelerator that can be used , for example , to drive fusion reactions with ion beams as well as for other applications .we envision this technology to be applicable for ions beam in the to several range , with average beam current densities up to {ma}{cm^2}$ ] .the technology can also be used for lower energies , however below it will be more effective to use a single high voltage gap to directly accelerate the ions . above beam energy , the advantages of having cheap components and lower voltages will be more and more important . for this technology to be competitivewe believe that an rf - amplitude of several kilovolts is needed per acceleration gap .this way , we will be able to achieve gradients of for the accelerator structure at frequencies in the range . to accomplish these high gradients we are currently investigating the use of on board resonators with a high q that already have been shown to produce the required voltages .first prototypes are currently being designed and we will test these devices in the coming month to be able to integrate most of the rf - stack onto the wafers . switching from pcboard to silicon will also provide a path for mass fabrication and better manufacturing precision .we have shown that a compact rf - cell without a resonant cavity can be used to accelerate an ion beam .for this proof of concept printed circuit board structures have been used , whereas for a final accelerator we envision the use of silicon wafers .this will allow smaller beamlets packed to a higher density on a wafer for increased effective beam - current densities .furthermore , focusing elements in the form of electrostatic quadrupoles will be added to the accelerator to allow for beam transport and refocusing of the beam along the beamline . 
| a new approach for a compact radio - frequency(rf ) accelerator structure is presented . the idea is based on the multiple electrostatic quadrupole array linear accelerator ( meqalac ) structure that was first developed in the 1980s . the meqalac allowed scaling of rf - structure down to dimensions of centimeters while at the same time allowing for higher beam currents through parallel beamlets . using micro - electro - mechanical systems ( mems ) for highly scalable fabrication , we reduce the critical dimension to the sub - millimeter regime , while massively scaling up the potential number of parallel beamlets . the technology is based on rf - acceleration components and electrostatic quadrupoles ( esqs ) implemented in a silicon wafer based design where each beamlet passes through beam apertures in the wafer . the complete accelerator is then assembled by stacking these wafers . this approach allows fast and cheap batch fabrication of the components and flexibility in system design for different applications . for prototyping these technologies , the components have been fabricated using printed circuits board ( pcboard , fr-4 ) . in this paper , we present first proof of concept results of the major components : rf - acceleration and esq focusing . ongoing developments on implementing components in silicon and scaling of the accelerator technology to high currents and beam energies will be discussed . |
demand - side management helps utilities regulate increasing energy demand while utilizing the existing power grid infrastructure. recent demand - side management efforts include load - shifting methods, load - curtailing methods and energy conservation strategies. distributed energy resources, such as energy storage devices and renewable energy resources, provide vast opportunities for demand - side management by storing extra energy generated by renewable resources that can be dispatched to support peak energy demand. in general, the effectiveness of consumer - driven demand - side management methods depends on the active participation of users. however, in the long run, users may change their participating behavior, leading to unexpected outcomes such as lower peak energy reduction and reduced economic benefits. therefore, designing successful demand - side management approaches has often been challenging in the presence of volatile user behavior. in this paper, we investigate the impact of realistic energy user behavior, which is not completely rational, on a decentralized energy trading system proposed to regulate the electricity demand of a residential community. in the energy trading system, users with photovoltaic ( pv ) energy generation can decide to participate across time to trade energy with a community energy storage ( ces ) device. first, we elaborate a non - cooperative stackelberg game to study the energy trading between the ces operator and the participating users, where the ces operator acts as the leader and the users are the followers. then we develop another non - cooperative game between users to explore their behavior in determining optimal energy trading starting times that minimize personal daily energy costs under two different user - behavioral models : expected utility theory and prospect theory. the contributions of this work are :
* with time - varying subsets of active participating users that depend on their decisions of participating - time, the energy trading system attains a unique stackelberg equilibrium across time where the ces operator maximizes revenue while users minimize energy costs .
* the benefits of the energy trading system are robust to users' participating - time strategies that significantly deviate from complete rationality .
game - theoretic demand - side management methods have been widely investigated in the literature. these studies assume that users act rationally, ideally obeying the strategies predicted by game - theoretic systems. however, social studies have shown that the rationality assumption of game theory can be violated in the real world when users face uncertainty in decision making. abundant research using prospect theory has shown how real - life user behavior contravenes the conventional game - theoretic rationality assumption. in , a prospect - theoretic study of a load - shifting approach showed that deviations of users' decisions to participate from conventional game - theoretic decisions result in significantly different outcomes. in contrast to , we apply prospect theory to study users' behavior of choosing to participate across time in a stackelberg game - theoretic energy trading system that does not intend to shift the regular energy consumption of users.
in this regard , we show that the outcomes of the energy trading system are indistinguishable under both prospect theory and expected utility theory , even though users decisions to choose to participate differ between the two models .the stackelberg game - theoretic energy trading system between a ces device and users in assumes users participate from the beginning of day and hence the number of users remain consistent over time . here, we extend the stackelberg energy trading system to study users decisions of selecting energy trading starting times incorporating prospect theory .the ces - user stackelberg game in this paper differs from that in because the number of active participating users is time - variant depending upon each user s decision of choosing an energy trading starting time .the community consists of two types of energy users : participating users and non - participating users .the users have rooftop pv panels and they are the players in the energy trading optimization who trade energy with the grid and the ces device .the users are conventional grid users without behind - the - meter energy generation and are not players in the energy trading optimization . depending on net pv energy after consuming, the users are classified into surplus users and deficit users those are time - dependent . for the energy trading optimization , the entire control time period , usually a day ,is partitioned into number of equal time slots with granularity of .we assume that pv power generation and demand forecasts of the following day are available to the users to decide their day - ahead energy trading strategies .if and are the pv energy and the regular energy demand of user at time , respectively , then they sell / buy energy amount to / from the ces device at time such that , where is the grid energy consumption of the user .note that when the user buys energy from the grid and when the user sells energy to the grid . if the surplus energy of the user is , each user sells energy to the ces device and user buys energy from the ces device such that , the ces operator trades energy with the grid at each time where if the ces device is charged ( discharged ) . here , we use the same ces model given in that is similar to the energy storage model in . in this regard ,per - slot energy trading amounts are given as and where and are the per - slot charging energy amounts , and and are the per - slot discharging energy amounts .we define a charging efficiency , a discharging efficiency and a leakage rate for the energy storage .denoting is the charge level at the beginning of day , the energy capacity limit of the ces device gives , {\boldsymbol{\beta}}\preceq { \boldsymbol{b } } , \label{eq : id6}\ ] ] where with elements of maximum energy capacity of the ces device . with elements =\tau^l ] .^t ] and the grid energy trading profile ] minimize their personal energy costs in .let us consider a single time slot where [multiblock footnote omitted ] .then for user , the cost function is quadratic with respect to , where and using and . here, is the total grid energy load at time excluding the load of the user and .clearly , is interdependent on each other s behavior and we study the energy trading coordination between the users using a non - cooperative game . here, is the strategy set available to the users and is the strategy set of the user subject to . is the set of cost functions given by .each user determines the optimal energy trading amount from such that their energy cost is minimized . 
here , denotes the strategy profile of the opponents of the user that is given by . then the optimization problem of each user is to find , note that the game is similar to the non - cooperative subgame between users in .however , the subsets of players are not uniform for the game played at each time in contrast to . although the number of players is time - variant , using the same rationale in we can prove that the game played at any particular time has a unique nash equilibrium for any feasible and . at the nash equilibrium of the game , the optimal energy trading amount of the user , be found by setting the first derivative of with respect to to zero that gives , solving for all users simultaneously , we can obtain , where the ces operator also maximizes their revenue in by determining optimal and . by substituting in, we can write the objective of the ces operator as , ={\operatornamewithlimits{argmax}}_{{\boldsymbol{a}},{\boldsymbol{l_q}}\in\mathcal{q}}{\sum_{t=1}^k ( \lambda_1 a_t^2+\lambda_2 a_t+\lambda_3 l_{q , t}^2+\lambda_4l_{q , t } ) } , \label{eq : id14}\ ] ] where , , , and . is the strategy set available to the operator subject to and .there is a unique solution for the objective function of the ces operator , since is strictly concave because of the negative definite hessian matrix with respect to all feasible and the strategy set is convex due to linear constraints and .the ces operator first sets optimal ] to maximize and at time , user selects to minimize cost in .the _ utilities _ are as defined in for the ces operator and for the user .let ] where be the solution of the game at time .then the point ] and is the probability of choosing by the user . is the probabilities of the users except user . the intuition behind the cost in relies on the assumption that the user assesses their neighbours empirical frequencies of actions identical to their objective probabilities of choosing actions .however , this generalization may not be valid in the real world as people overweight outcomes with low probabilities and underweight outcomes with high probabilities .these observations are clearly explained under prospect theory . in practice ,the users may subjectively evaluate their neighbors actions to minimize energy costs .this characteristic is more realistic than assuming users act rationally and perceive their neighbors behavior objectively . in this regard ,we study actual user behavior as to when they select their energy trading starting time using prospect theory . to this end ,probability weighting functions are used to model the subjective behavior of users when they make decisions under risk and uncertainty . in this regard, the probability weighting function implies the subjective evaluation of the user about an outcome with probability .we use the prelec function to model the subjective perceptions of users on each other s behavior that is given by , here , is a parameter that decreases as the user s subjective evaluation deviates from the objective probability .if the user s subjective and objective probabilities are equal , then and this corresponds to expected utility theory . assuming that the subjective probabilities of user about their own actions are equal to their objective probabilities , the expected daily energy cost of user under prospect theory is , after defining the expected daily costs of the users , we now analyze the solutions for the game played under expected utility theory and prospect theory . 
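for reference, the single - parameter prelec weighting function referred to above can be evaluated as follows; the standard form w(p) = exp(-(-ln p)^alpha) is assumed here, with alpha = 1 recovering the objective probabilities of expected utility theory.

```python
import math

def prelec_weight(p, alpha):
    """Prelec probability weighting function w(p) = exp(-(-ln p)**alpha).

    alpha = 1 returns the objective probability (expected utility theory);
    smaller alpha over-weights small probabilities and under-weights large ones.
    """
    if p <= 0.0:
        return 0.0
    if p >= 1.0:
        return 1.0
    return math.exp(-((-math.log(p)) ** alpha))

for alpha in (1.0, 0.8, 0.5, 0.2):
    print(alpha, [round(prelec_weight(p, alpha), 3) for p in (0.01, 0.1, 0.5, 0.9)])
# For alpha = 0.5: w(0.01) is about 0.117 (over-weighted) and w(0.9) about 0.723
# (under-weighted), while alpha = 1.0 reproduces the probabilities themselves.
```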
due tocomputational usefulness , here we study the existence of equilibria . for the game , a mixed strategy profile is an equilibrium if it satisfies , where is the set of all mixed strategy profiles over and . in general , equilibria always exist and for the game , we are interested to find -nash equilibrium located close to a mixed strategy nash equilibrium under both expected utility theory and prospect theory .we use the iterative algorithm proposed in that was proved to converge to an -nash equilibrium close to a mixed strategy nash equilibrium under both expected utility theory and prospect theory . in summary , the algorithm is given by , where is the iteration number , is the inertia weight . of which , where is the expected cost when the user selects the pure strategy in response to the mixed strategies of other players at iteration i.e. , .note that for prospect theory , considers the weighted probabilities of other users mixed strategies at .as the algorithm converges , -nash equilibrium with respect to strategy profile is obtained under both expected utility theory and prospect theory . given the equilibrium probabilities of participating - time decisions of the users , we can define the expected revenue of the ces operator under both prospect theory and expected utility theory . in this regard ,if and are the -nash equilibriums under expected utility theory and prospect theory , respectively , then the subsequent expected daily ces revenue in each case can be obtained by , where is the ces revenue as per at the stackelberg equilibrium corresponds to , for expected utility theory and for prospect theory .in simulations , we consider real data of average pv power and user demand of the western power network in australia on a summer day ( see fig .[ fig : pvdemand ] ) and we assume that all users have power profiles same to these average profiles .further , , , kwh , kwh , , and .peak hours of the grid are between 16.00 and 23.00 and we select such that .we choose such that the predicted grid price range is same to the reference time - of - use price range in and is set to a constant such that the average predicted grid price is equal to the average reference price .the community has 10 households where 6 users are participating users in the system .the allowable energy trading starting times for the users are 01.00 , 12.00 and 17.00 so that . for comparisons ,we use a baseline without a ces device where the users trade energy directly with the grid that uses the same energy cost model . for the algorithm ,we use ;~\forall n\in \mathcal{p} ] tends to users become more subjective deviating from the objective evaluation assumption in expected utility theory . ] assuming .here , cost savings are calculated compared to the baseline . when , and even when with significant non - ideal behavior , the expected cost savings remained almost 28% under both models because for all users , participation probabilities at each time in using prospect theory do not significantly deviate from those obtained under expected utility theory as shown in table [ table 1 ] . when , the participation probabilities at are significantly increased for the fourth and fifth users compared to those predicted using expected utility theory ( see table [ table 1 ] ) . as a result ,the expected cost savings reduced from 28% to 21.5% for all users . .] . ] . 
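the kind of iteration described above can be sketched as follows: each user repeatedly best - responds to an inertia - weighted running estimate of the other users' mixed strategies over the three starting times, and under prospect theory the opponents' probabilities are passed through the prelec weight. the cost function below is an arbitrary stand - in for the daily cost obtained from the stackelberg game, and the update rule is one common inertia - weighted form, not necessarily the exact update used in the paper.

```python
import itertools
import math

ACTIONS = [1, 12, 17]        # allowed energy trading starting times (hours)
N_USERS = 6

def prelec(p, alpha):
    """Prelec weight; alpha = 1 leaves the probability unchanged (EUT)."""
    return 0.0 if p <= 0.0 else math.exp(-((-math.log(min(p, 1.0))) ** alpha))

def daily_cost(n, joint):
    """Stand-in for the daily cost of user n given the joint starting times.

    In the actual system this value comes from solving the per-slot Stackelberg
    game for that participation pattern; the placeholder merely produces costs
    that depend on everyone's choices."""
    return sum(joint) / (10.0 + joint[n])

def expected_cost(n, action, mix, alpha):
    """Cost of pure action `action` against the (Prelec-weighted) mixed strategies of the others."""
    others = [u for u in range(N_USERS) if u != n]
    total = 0.0
    for combo in itertools.product(range(len(ACTIONS)), repeat=len(others)):
        weight, joint = 1.0, [0] * N_USERS
        joint[n] = action
        for u, k in zip(others, combo):
            weight *= prelec(mix[u][k], alpha)
            joint[u] = ACTIONS[k]
        total += weight * daily_cost(n, joint)
    return total

def iterate(alpha, iters=100):
    """Inertia-weighted best-response iteration on the users' mixed strategies."""
    mix = [[1.0 / len(ACTIONS)] * len(ACTIONS) for _ in range(N_USERS)]
    for i in range(1, iters + 1):
        step = 1.0 / (i + 1)                 # decreasing inertia weight
        new_mix = []
        for n in range(N_USERS):
            costs = [expected_cost(n, a, mix, alpha) for a in ACTIONS]
            best = costs.index(min(costs))
            new_mix.append([(1.0 - step) * p + (step if k == best else 0.0)
                            for k, p in enumerate(mix[n])])
        mix = new_mix
    return mix

print(iterate(alpha=1.0))   # expected utility theory
print(iterate(alpha=0.5))   # prospect theory with subjective weighting
```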
]user& = 1& = 12& = 17& = 1& = 12& = 17& = 1& = 12& = 17& = 1& = 12& = 17 + 1 & 0.9966&0.0005&0.0029 & 0.9988&0.0005&0.0007 & 0.9989&0.0005&0.0006 & 0.9979&0.0009&0.0012 + 2 & 0.9966&0.0005&0.0029 & 0.9988&0.0005&0.0007 & 0.9989&0.0005&0.0006 & 0.9979&0.0009&0.0012 + 3 & 0.9966&0.0005&0.0029 & 0.9988&0.0005&0.0007 & 0.9989&0.0005&0.0006 & 0.9979&0.0009&0.0012 + 4 & 0.0070&0.9924&0.0006 & 0.0076&0.9918&0.0006 & 0.0070&0.9924&0.0006 & 0.9979&0.0009&0.0012 + 5 & 0.0070&0.0005&0.9925 & 0.0076&0.0005&0.9919 & 0.0095&0.0005&0.9900 & 0.9979&0.0009&0.0012 + 6 & 0.9966&0.0005&0.0029 & 0.9988&0.0005&0.0007 & 0.9989&0.0005&0.0006 & 0.9979&0.0009&0.0012 + fig .[ fig : avgexpcostsav ] , fig .[ fig : ces rev ] and fig .[ fig : parred ] depict the variations in different aspects of system performance across the range of possible values . here, larger tending to 1 reflects that the users behave closer to the rationality assumption in expected utility theory , and smaller tending to 0 implies that their evaluations of opponents actions are more distorted from that of expected utility theory .[ fig : avgexpcostsav ] shows that under expect utility theory , the average of expected cost savings of the users achieved by participating in the system is 28.1% . on the other hand, even if the users weighting effects on their opponents actions are getting larger , i.e. , when is getting smaller , the expected cost savings will not significantly fluctuate and remain almost at 28% except for .[ fig : ces rev ] shows that , when , expected revenue for the ces operator retains nearly unchanged compared to the expected revenue calculated under expected utility theory . in terms of demand - side management of the grid , the expected peak - to - average ratio reduction compared to the baseline will not change notably from the peak - to - average ratio reduction predicted using expected utility theory when .this is because as shown in table [ table 1 ] , for , users prospect theoretic probabilities of participation at each time remain almost the same as those in expected utility theory . when , the fourth and fifth users will more likely to start energy trading from the beginning under prospect theory , which is not the case under expected utility theory. however , this behavioral change will only reduce the expected peak - to - average ratio reduction from 17.7% to 16.55% ( see fig .[ fig : parred ] ) .in this paper , we have studied effects of realistic , non - ideal , behavior of users , with respect to choosing energy trading starting times , on a game - theoretic demand - side management energy trading system between a community energy storage ( ces ) device and users .first , we have developed the non - cooperative stackelberg game to study the energy trading interaction between the users and the ces operator based on users decisions as to whether to participate across time .next we have studied a non - cooperative game to explore how the users make decisions to participate in the above energy trading system under two user - behavioral models : prospect theory and expected utility theory .simulation results show that the benefits of the energy trading system are robust to users strategies of participating - time that significantly deviate from complete rationality .we postulate that the energy trading system can be scaled to any number of participating users and present similar performance trends. a. mohsenian - rad , v. wong , j. jatskevich , r. schober , and a. 
leon - garcia , `` autonomous demand - side management based on game - theoretic energy consumption scheduling for the future smart grid , '' _ ieee trans .smart grid _ , vol . 1 , no . 3 , pp .320331 , dec . 2010 .a. haney , t. jamasb , j. wu , l. platchkov , and m. pollitt , `` demand - side management strategies and the residential sector : lessons from the international experience , '' in _ the future of electricity demand _ , t. jamasb and m. pollitt , eds.1em plus 0.5em minus 0.4em cambridge : cambridge university press , 2011 , ch .337378 .i. atzeni , l. ordonez , g. scutari , d. palomar , and j. fonollosa , `` demand - side management via distributed energy generation and storage optimization , '' _ ieee trans .smart grid _ , vol . 4 , no . 2 ,866876 , june 2013 . h. k. nguyen , j. song , and z. han , `` demand side management to reduce peak - to - average ratio using game theory in smart grid , '' in _ proc .computer commun . workshops ( infocom wkshps ) _ , march 2012 , pp . 9196 .y. wang , w. saad , n. mandayam , and h. poor , `` integrating energy storage into the smart grid : a prospect theoretic approach , '' in _ proc .conf . on acoustics , speech and signal processing ( icassp )_ , may 2014 , pp .77797783 .c. p. mediwaththe , e. r. stephens , d. b. smith , and a. mahanti , `` competitive energy trading framework for demand - side management in neighborhood area networks , '' _ arxiv e - prints _ , dec .[ online ] .available : http://adsabs.harvard.edu/abs/2015arxiv151203440m b. jones , n. wilmot , and a. lark , `` study on the impact of photovoltaic ( pv ) generation on peak demand , '' western power , australia , tech ., april 2012 .[ online ] .available : http://www.westernpower.com.au | this paper investigates effects of realistic , non - ideal , decisions of energy users as to whether to participate in an energy trading system proposed for demand - side management of a residential community . the energy trading system adopts a non - cooperative stackelberg game between a community energy storage ( ces ) device and users with rooftop photovoltaic panels where the ces operator is the leader and the users are the followers . participating users determine their optimal energy trading starting time to minimize their personal daily energy costs while subjectively viewing their opponents actions . following a non - cooperative game , we study the subjective behavior of users when they decide on energy trading starting time using prospect theory . we show that depending on the decisions of participating - time , the proposed energy trading system has a unique stackelberg equilibrium at which the ces operator maximizes their revenue while users minimize their personal energy costs attaining a nash equilibrium . simulation results confirm that the benefits of the energy trading system are robust to decisions of participating - time that significantly deviate from complete rationality . |
the replica method , originally devised as a trick to compute thermodynamical quantities of physical systems in presence of quenched disorder , has found applications in the analysis of systems of very different nature , as neural networks , combinatorial optimization problems , error correction codes etc . although many physicists believe that the method , within the replica symmetry breaking scheme of parisi , is able to potentially give the exact solution of any problem treatable as a mean field theory , the necessary mathematical foundation of the theory is still lacking , after more then 20 years from its introduction in theoretical physics .the last times have seen a growing interest of the mathematical community in the method , leading to important but still partial results , confirming in certain cases the replica analysis , with more conventional and well established techniques .apart the remarkable exception of the analysis of the fully connected -spin model in ref . and the rigorous analysis of random energy models , the analysis of the mathematicians has been , as far as we know , restricted to the high temperature regions and/or to problem of replica symmetric nature .very welcomed have been the techniques recently introduced by guerra and toninelli which allow rigorous analysis not relying on the assumption of high temperature , and valid even in problems with replica symmetry breaking . along these lines ,an important step towards the rigorous comprehension of the replica method , has been undertaken in , where it has been shown how in the case of the sherrington - kirkpatrick model , and its -spin generalizations , the replica free - energies with arbitrary number of replica symmetry breaking steps constitute variational lower bounds to the true free - energy of the model .as stated in that paper , the analysis is restricted to fully - connected models , whose replica mean field theory can be formulated in terms of a single matrix . however , in recent times , many of the more interesting problems analyzed with replica theory pertain to the so called `` diluted models '' where each degree of freedom interacts with a finite number of neighbors .the introduction of a `` population dynamics algorithm '' has allowed to treat in full generality -within statistical precision- complicated sets of probabilistic functional equations appearing in the one step symmetry broken framework of diluted models .the same algorithm has been used as a starting point of a generalized `` belief propagation '' algorithm for optimization problems .furthermore , at the analytic level , simplifications due to graph homogeneities in some cases , and to the vanishing temperature limit in some other cases have led to supposedly exact solutions of the ground state properties of diluted models , culminated in the resolution of the random xor - sat on uniform graphs in and the random k - sat problem in within the framework of `` one - step replica symmetry breaking '' ( 1rsb ) .the aim of this paper , is to show that the replica analysis of diluted models provides lower bounds for the exact free - energy density , and ground state energy density .we analyze in detail the cases of the diluted -spin model on the poissonian degree hyper - graphs also known as random xor - sat problem and the random k - sat problems .we expect that along similar lines free - energy lower bounds can be found for many other diluted cases .the guerra method we use sheds some light on the meaning of the replica mean field theory . 
the physical idea behind the method is that within mean field theory one can modify the original hamiltonian weakening the strength of the interaction couplings or removing them partially or totally , and compensate this removal by some auxiliary external fields . in disordered systemsthese fields should be random fields , taken from appropriate probability distributions and possibly correlated with the original values of the quenched variables eliminated from the systems .one is then led to consider hamiltonians interpolating between the original model and a pure paramagnet in a random field , and by means of these models achieving free - energy lower bounds .we will see that the rs case corresponds to assuming independence between the random fields and the quenched disorder .the parisi rsb scheme , assumes at each breaking level a peculiar kind of correlations , and gives free - energy bounds improving the rs one .our paper is organized in this way : in section 2 we introduce some notations that will be extensively used in the following sections . in section 3we introduce the general strategy to get the replica bounds ; we then specialize to the replica symmetric and the one step replica symmetry broken bounds , giving the results in the -spin and the -sat cases .conclusions are drawn in section 4 . in the appendices some details of the calculations in both the -spin andthe -sat cases are shown .our results will be issue of explicit calculations .although at the end we will get bounds , formalizable as mathematical theorems , the style and most of the notations of the paper will be the ones of theoretical physics .the spin models we will consider in this work are defined by a collection of ising spins , interacting through hamiltonians of the kind where the indices are i.i.d. quenched random variables chosen uniformly in .we will call each term a clause .the subscript in the clauses indicates the dependence on a single or a set of quenched random variables , as it will be soon clear .the number of clauses will be taken to be proportional to .for convenience we will choose it to be for each sample a poissonian number with distribution .the fluctuations of will not affect the free - energy in the thermodynamic limit , and this choice , which slightly simplify the analysis , will be equivalent to choosing a fixed value of equal to .the clauses themselves will be random .the -spin model has clauses of the form this form reduces to in the case of the viana - bray spin glass . in both cases the be taken as i.i.d .random variable with regular symmetric distribution .notice that for ] valid in the thermodynamic limit , so that will be often implicitly neglected in our calculations .the strategy to get the replica bound is a generalization of the one introduced by guerra in the case of fully connected models . 
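as a concrete illustration of the models just defined , the short sketch below ( our own , not taken from the paper ) samples a poissonian diluted p - spin instance and evaluates its energy on a spin configuration . the clause energy -j s_{i_1} ... s_{i_p} is the usual form of the p - spin term , and the values p = 3 , alpha = 0.7 and couplings j = +/- 1 are illustrative assumptions , since the text only requires the coupling distribution to be symmetric .

```python
# Minimal sketch (assumptions: p = 3, alpha = 0.7, J = +/-1 with equal probability).
import numpy as np

rng = np.random.default_rng(0)

def sample_instance(n, alpha, p=3):
    """Draw M ~ Poisson(alpha * n) clauses; each clause picks p spin indices
    i.i.d. uniformly in {0, ..., n-1} and a coupling from a symmetric distribution."""
    m = rng.poisson(alpha * n)
    idx = rng.integers(0, n, size=(m, p))          # quenched random indices i_1, ..., i_p
    couplings = rng.choice([-1.0, 1.0], size=m)    # symmetric couplings J (assumed +/-1)
    return idx, couplings

def p_spin_energy(spins, idx, couplings):
    """H = - sum over clauses of J * s_{i_1} * ... * s_{i_p}."""
    return -np.sum(couplings * np.prod(spins[idx], axis=1))

n = 1000
idx, couplings = sample_instance(n, alpha=0.7)
spins = rng.choice([-1, 1], size=n)
print("energy per spin:", p_spin_energy(spins, idx, couplings) / n)
```

with j = +/- 1 the zero - temperature version of this hamiltonian is the random xor - sat problem referred to above , each coupling sign playing the role of the parity bit of one linear equation .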
we will consider models which will interpolate between the original ones we want to analyze and pure paramagnet in random fields with suitably chosen distribution .the underlying idea is that , given the mean field nature of the models involved , if one was able to reconstruct the real local fields acting on a given spin variable via a given hyper - edge , and to introduce auxiliary fields acting on that variable in such a way to energetically balance the deletion of the hyper - edge , then it would be possible to have an exact expression for the free - energy in terms of such auxiliary fields even when the whole edge set was emptied .however , if the replacement is done with some approximate form of the auxiliary fields distribution function , the real free - energy will be the one calculated using the approximate fields plus an excess term at every step of the graph deletion process .the proof of the definite sign of this excess term gives a way to determine bounds for the thermodynamic quantities .we will prove the existence of replica lower bounds to the free - energy density of the p - spin model and the random k - sat problem . in this last caseour result proves that the recent replica solution of gives a lower bound to the ground state energy and therefore an upper bound for the satisfiability threshold .the proofs will strictly hold in the limit , due to the presence of corrections of order in the calculated expressions for any finite size graph .moreover , our proofs will be restricted to the -spin model the the k - sat with even . in the cases of odd same bound would hold if one could rely on some physically reasonable assumptions on the overlap distribution ( see below ) .our analysis will start from the tap equations for the models , and their probabilistic solutions implied by the cavity , or equivalently the replica method at various degrees of approximation .we will consider in particular the replica symmetric ( rs ) and one step replica symmetry broken solutions , but it should be clear from our analysis how to generalize to more steps of replica symmetry breaking . in the tap / cavity equations one singles out the contribution of the clauses and the sites to the free - energy and defines cavity fields and as the local field acting on the spin in absence of the clause and the local field acting on due to the presence of the clause only .if we define ] and ] , which within the formalism selects families of solutions at different free - energy levels .the physical free - energy is estimated maximizing over .the interpretation of these equations has been discussed many times in the literature .we will show here , that such choices in the field distributions result in lower bounds for the free - energy analogous to the ones first proved by guerra in fully connected models . in order to prove these bounds, we will have to consider auxiliary models where the number of clauses will be reduced to ( ) , while this reduction will be compensated in average by some external field terms of the kind : where the numbers will be i.i.d .poissonian variables with average . as the notation suggests , the fields will play the role of the cavity fields of the tap approach , and they will be i.i.d. 
random variables with suitable distribution .indeed , for each field we will chose in an independent way primary fields ( ) and clause variables such that the relation is verified .notice that the compound hamiltonian ={\cal h}^{(\alpha t)}[{\bf s}]+{\cal h}^{(t)}_{ext}[{\bf s } ] \label{compoundh}\ ] ] will constitute a sample with the original distribution for , while it will consist in a system of non interacting spins for .the key step of the procedure , consists in the choice of the distribution of the primary fields . we will also find useful to define fields verifying the field are related to the s by a relation similar to ( [ c1 ] ) , while the s are related to the s by a relation similar to ( [ c2 ] ) .of course , the statistics of the fields and the s do coincide in the tap approach .it is interesting to note that the bounds we will get , are optimized precisely when their statistical ensemble coincide .as we mentioned , various replica bounds are obtained assuming for the fields the type of statistics implied by the different replica solution .so , the replica symmetric bound is got just supposing the field as quenched variables completely independent of the quenched disorder and with distribution . for the one - step rsb bound on the other hand the distribution itself be considered as random , subject to a functional probability distribution ] coincides the expression of the variational free - energy in the replica treatment under condition = p[h] ] is a remainder term . instead of writing the formulae for general clauses , in order to keep the notations within reasonable simplicity , we specialize now to the specific cases of the -spin model and the k - sat .notice that in all models = -\frac{1}{\beta } \langle \log ( 2 \cosh ( \beta h ) ) \rangle_h |_{t=0}\ ] ] in the case of the -spin . substituting in eq.([pippo2 ] ) and rearranging termsone immediately finds : & = & \frac{1}{\beta } \left [ \alpha \left(p \left\langle \log ( \cosh \beta u ) \right\rangle_u - \left\langle \log ( \cosh \beta j ) \right\rangle_j \right ) - \left\langle \log ( 2 \cosh \beta h ) \right\rangle_h + \right .\nonumber \\ & & \left .\alpha ( p-1 ) \left\langle \log \left ( 1 + \tanh ( \beta j)\prod_{t=1}^p \tanh ( \beta g_t ) \right ) \right\rangle_{\{g_t\ } , j } \right]\end{aligned}\ ] ] while the remainder is the integral of = & & -\frac{\alpha}{\beta } \left[\frac{1}{n^p } \sum_{i_1, ...,i_p } e \left\langle \log(1+\tanh(\beta j)\omega(s_{i_1} ... s_{i_p } ) ) \right\rangle_j -p e \left\langle \log(1+\tanh ( \beta u)\omega ( s_i ) ) \right\rangle_u + \nonumber \right . \\ & & \left .( p-1 ) e \left\langle \log(1+\tanh ( \beta j)\prod_{t=1}^p \tanh ( \beta g_p ) ) \right\rangle_{\{g_t\},j } \right].\end{aligned}\ ] ] the expression for ] is therefore , for all for which its expression makes sense , a lower bound to the free - energy . at saturation = p[h ] |_{t=0 } \ ; \forall \ ; h\ ] ] should hold , which is simply the self - consistency rs equation . 
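the rs self - consistency condition is , in practice , solved numerically with the population dynamics algorithm recalled in the introduction . the rough sketch below ( ours , not the authors code ) does this for the diluted p - spin case ; the update rule u = ( 1/beta ) artanh [ tanh ( beta j ) prod_{t=1}^{p-1} tanh ( beta g_t ) ] is the standard form of the cavity bias and , like all parameter values , should be read as an assumption rather than a transcription of the paper s formulas .

```python
# Population-dynamics sketch for the RS cavity equations of the diluted p-spin model.
# Assumptions: p = 3, J = +/-1, alpha = 0.7, beta = 1.0, and the standard cavity update.
import numpy as np

rng = np.random.default_rng(1)

def population_dynamics(alpha, beta, p=3, pop_size=2000, steps=100_000):
    fields = rng.normal(0.0, 1.0, size=pop_size)   # population representing P(h)
    for _ in range(steps):
        k = rng.poisson(alpha * p)                 # Poissonian number of clauses at a site
        h_new = 0.0
        for _ in range(k):
            g = fields[rng.integers(0, pop_size, size=p - 1)]
            J = rng.choice([-1.0, 1.0])            # symmetric coupling (assumed +/-1)
            # cavity bias u_J(g_1, ..., g_{p-1}) transmitted by one clause
            h_new += np.arctanh(np.tanh(beta * J) * np.prod(np.tanh(beta * g))) / beta
        fields[rng.integers(0, pop_size)] = h_new  # replace a random member of the population
    return fields

fields = population_dynamics(alpha=0.7, beta=1.0)
print("mean |h| in the stationary population:", np.abs(fields).mean())
```

one monitors moments of the population ( or the free - energy estimated from it ) until they stabilize ; at the 1rsb level the single population is replaced by a population of populations , one for each realization of the random field distribution .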
by using equation can establish that the remainder is positive for even .we expand the logarithm of the three terms in ( absolutely converging ) series of , and notice that thanks to the parity of the and the distributions , they will just involve negative terms .we can then take the expected value of each terms and write = \frac 1 \beta \sum_{n=0}^{\infty } \langle \tanh^{2 n } \beta j \rangle_{j } \frac{1}{n } \omega \left[(q^{(2n)})^p - p q^{(2n ) } \langle \tanh ^{2 n } \beta g \rangle_g^{p-1 } + ( p-1 ) \langle\tanh ^{2 n } \beta g \rangle_g^p \right ] \label{sser}\ ] ] where we have introduced the overlap and the replica measure defined in section 2 .the series in ( [ sser ] ) is an average of positive terms in the case of the viana - bray model , where we get perfect squares , and more in general for all even , as we can easily , starting from the observation that in this case is positive or zero for all , real . in the case of odd , the same term is positive only if is itself positive or zero .the bound of the free - energy would therefore be established if we were able to prove that the probability distributions of the has support on the positives . , but it is not clear its physical meaning . ] this property , which tells that anti - correlated states are not possible , is physically very sound whenever the hamiltonian is not symmetric under change of sign of all spins .in fact , one expects the probability of negative values of the overlaps to be exponentially small in the size of the system for large .unfortunately however we have not been able to prove this property in full generality .notice that upon maximization on , the results of imply that the remainder is exactly equal to zero if the temperature is high enough for replica symmetry to hold . in the case of the -sat , using def.([sat ] ) for the clause , we find relation : \ ; , \label{u - ksat}\ ] ] where . via direct inspection , the variational free - energy coincides with the rs expression & = & \frac{1}{\beta } \left [ \alpha ( p-1 ) \left\langle\log \left ( 1 + ( e^{-\beta } -1)\prod_{t=1}^p \left ( \frac{1 + \tanh ( \beta g_t)}{2 } \right ) \right ) \right\rangle_{\{g_t\},\{j_t\ } } - \langle \log ( 2\cosh ( \beta h ) ) \rangle_h + \right .\nonumber \\ & & \left .\alpha p \langle \log ( 2\cosh ( \beta u ) ) \rangle_u - \alpha p \left\langle \log \left ( 1 + \frac{(e^{-\beta } -1)}{2}\prod_{t=1}^{p-1 } \left ( \frac{1 + \tanh ( \beta g_t)}{2 } \right ) \right ) \right\rangle_{\{g_t\},\{j_t\ } } \right ] \label{frsksat}\end{aligned}\ ] ] while the remainder is the integral of & = & -\frac{\alpha}{\beta } e \left [ \frac{1}{n^p } \sum_{i_1, ... ,i_p } \left\langle \log \left ( 1+(\e^{-\beta}-1)\omega(\prod_{t=1}^p \frac{1+j_{t } s_{i_t}}{2 } ) \right ) \right\rangle_{\{j_t\ } } - \right .\nonumber \\ & & \frac{p}{n } \sum_i \left\langle \log \left ( 1 + \xi \omega\left(\frac{1+j s_i}{2 } \prod_{t=1}^{p-1}\frac{1+j_{t } \tanh ( \beta g_t)}{2 } \right ) \right ) \right\rangle_{\{g_t\ } , j , \{j_t\ } } + \nonumber \\ & & \left .( p-1 ) \left\langle \log\left ( 1 + \xi \prod_{t=1}^{p}\frac{1+j_{r } \tanh ( \beta g_t)}{2}\right ) \right\rangle_{\{g_t\ } , \{j_t\ } } \right ] \ ; . 
\label{restoksat } \end{aligned}\ ] ] considerations analogous to the case of the -spin , have led us to add and subtract terms from eq.([pippo2 ] ) to single out the proper remainder term .expanding in series the logarithms , exploiting the symmetry of the probabilities distribution functions and taking the expectation of each term of the absolutely convergent series we finally obtain : = \frac{\alpha}{\beta } \sum_{n\ge 1 } \frac{(-1)^{n}}{n } ( \xi^*)^n \omega \left [ ( 1+q_n)^p - p(1+q_n)\langle(1+j \tanh ( \beta g))^n\rangle_{j , g}^{p-1 } + ( p-1 ) \langle(1+j \tanh ( \beta g))^n\rangle_{j , g}^{p } \right ] \label{resto2ksat}\ ] ] where we have defined and .detailed calculations are given in the appendix .as in the -spin case , the previous sum is obviously positive for even . for odd we should again rely on the physical wisdom that all have positive support and so have the functions , the variational free - energy coincides with the rs expression once extremized over at the condition at .we establish here a more complex estimate , in a larger variational space of functional probability distributions. the general strategy will be here to consider the same form for the auxiliary hamiltonian , but now with a more involved choice for the fields distribution .the fields on different sites or different index will be still independent , but each site field distribution will be itself random i.i.d ., chosen with a probability density functional ] , ] coincides with ] is the remainder .notice that the derivation immediately suggests how to generalize the analysis to more steps of replica symmetry breaking .let us now specialize the formulae for the -spin model and the k - sat .again , in this case we will need the expression for ] coincides with the 1rsb free - energy once maximized over the variational space of probability distribution functionals .the maximization condition reads : = { \cal p}[p ] \ ; |_{t=0 } \ ; \forall \ ; p \ ; , \label{spersb}\ ] ] which is simply the self consistency 1rsb condition . for even ( and in particular for that corresponds to the viana - bray case ), one can check that the remainder is positive just expanding the logarithm in series and exploiting the parity of the and the distributions . asthis is considerably more involved then in the rs case , we relegate this check to appendix a. in the -sat case the expression for function reads : while the corresponding one for is the same as in the rs case .the corresponding replica free - energy and remainder read & = & \frac{1}{m\beta } \left [ \alpha ( p-1 ) \left\langle\log \left\langle \left ( 1 + \xi \prod_{t=1}^p \left ( \frac{1 + j_t \tanh ( \beta g_t)}{2 } \right ) \right)^m \right\rangle_{\ { g_t\ } } \right\rangle_{\{g_t\},\{j_t\ } } \right .- \nonumber \\ & & \left .\alpha p \left\langle \log \left\langle \left ( \frac{b(\{j_t\},\{g_t\})}{2 \cosh ( \beta u_j(\{j_t\},\{g_t\ } ) ) } \right)^m \right\rangle_{\{g_t\ } } \right\rangle_{\{g_t\},\{j_t\},j } + \left\langle \log \left\langle \left ( \frac{1}{2 \cosh ( \beta h ) } \right)^m \right\rangle_h \right\rangle_p \right ] \label{liuto}\end{aligned}\ ] ] the remainder is the integral of & = & -\frac{\alpha}{\beta m } e_1 \left [ \frac{1}{n^p } \sum_{i_1, ... 
,i_p } \left\langle \log \left ( \frac{e_2 z^m \left ( 1 + \xi \omega \left ( \prod_{t=1}^p \frac{1 + j_t s_{i_t}}{2 } \right ) \right)^m}{e_2 z^m } \right ) \right\rangle_{\{j_t\ } } \right .- \nonumber \\ & & \frac{p}{n } \sum_i \left\langle \log \left ( \frac{e_2 z^m \left\langle \left ( 1 + \xi \frac{1 + j \omega ( s_i)}{2 } \prod_{t=1}^{p-1 } \frac{1 + j_t \tanh ( \beta g_t)}{2 } \right)^m \right\rangle_{\ { g_t\}}}{e_2 z^m } \right ) \right\rangle_{\ { g_t\},\{j_t\},j } + \nonumber \\ & & \left . ( p-1 ) \left\langle \log \left\langle \left ( 1 + \xi \prod_{t=1}^p \left ( \frac{1 + j_t \tanh ( \beta g_t)}{2 } \right ) \right)^m \right\rangle_{\ { g_t\ } } \right\rangle_{\{g_t\},\{j_t\ } } \right ] \label{pipposat}\end{aligned}\ ] ] the expression for ] .analogously , the term writes \ ] ] or , making use of the definition of , \ ] ] eventually , following analogous manipulations , the last term can be written as invoking [ probq2 ] and collecting all & = & \frac{\alpha}{\beta m } \sum_{l\ge 1}\frac{m^l}{l } \sum_{{k_1, ... ,k_l \atop \sum_{s=1}^l k_s { \rm even}}}^{1,\infty } \prod_{s=1}^l \left ( \frac{\prod_{r=1}^{k_s-1}(r - m)}{k_s ! }\right ) \left\langle \left ( \tanh ( \beta j ) \right)^{\sum_{s=1}^l k_s } \right\rangle_j \ ; \cdot \nonumber \\ & & \omega^{(l ) } \left [ ( q^{(k_1, ... ,k_l)})^p - p a(k_1, ... ,k_l)^{p-1 } ( q^{(k_1, ... ,k_l ) } ) + ( p-1 )a(k_1, ... ,k_l)l^{p } \right ] \label{final_monster}\end{aligned}\ ] ] where we have defined : each inner term of the series ( [ final_monster ] ) \label{final_term}\ ] ] is always positive semidefinite for even while we need the condition conditions for odd . for retrieves the viana - bray result where ( [ final_term ] ) is a perfect square . as in the rs case, one can now integrate eq.([final_monster ] ) and recognize that once more the total true free - energy can be written as variational term plus a positive extra one .the variational term coincides with the 1rsb free - energy at stationarity and under condition aim of this appendix is to show that the expression for the remainder ] specializes to : & = & -\frac{\alpha}{\beta } e \left [ \left\langle \log \left ( \omega \left ( \exp^{-\beta \prod_{r=1}^p \frac{1+j_{r } s_{r}}{2 } } \right ) \right ) \right\rangle_{\{j_t\ } } - \right .\nonumber \\ & & p\left\langle \log \left ( 1 + \omega(s ) \tanh ( \beta u ) \right ) \right\rangle_u - - p \left\langle \log \left ( 1 + \frac{\xi}{2 } \prod_{t=1}^{p-1 } \left ( \frac{1 + j_t \tanh ( \beta g_t)}{2 } \right ) \right ) \right\rangle_{\{g_t\ } , \{j_t\ } } + \nonumber \\ & & \left . 
( p-1 ) \left\langle \log\left ( 1 + \xi \prod_{t=1}^{p}\frac{1+j_{r } \tanh ( \beta g_t)}{2}\right ) \right\rangle_{\{g_t\ } , \{j_t\ } } \right ] \label{rem - ksat - rs-1}\end{aligned}\ ] ] which thanks to the relation between and , rewrites as & = & -\frac{\alpha}{\beta } e \left [ \left\langle \log \left ( 1+(\e^{-\beta}-1)\omega(\prod_{t=1}^p \frac{1+j_{t } s_{t}}{2 } ) \right ) \right\rangle_{\{j_t\ } } - \right .\nonumber \\ & & p \left\langle \log \left ( 1 + \xi \omega\left(\frac{1+j s}{2 } \prod_{t=1}^{p-1}\frac{1+j_{t } \tanh ( \beta g_t)}{2 } \right ) \right ) \right\rangle_{\{g_t\ } , j , \{j_t\ } } + \nonumber \\ & & \left .( p-1 ) \left\langle \log\left ( 1 + \xi \prod_{t=1}^{p}\frac{1+j_{r } \tanh ( \beta g_t)}{2}\right ) \right\rangle_{\{g_t\ } , \{j_t\ } } \right ] \label{rem - ksat - rs}\end{aligned}\ ] ] the last term has been added and subtracted from eq.([fverars ] ) in order to extract a remainder that would vanish if replica symmetry holds , and maximization is performed on . as in the -spin case , we will proceed in a taylor expansion of expression ( [ rem - ksat - rs ] ) in powers of , and rely on absolute convergence to average each term of the series .expanding the first term in ( [ rem - ksat - rs ] ) we can write = \nonumber \\ & & \sum_{n\ge 1 } \frac{(-1)^{n+1}}{n } ( \xi^*)^n e \left [ \left\langle \omega\left(\prod_{t=1}^p ( 1+j_t s_t)\right)^n \right\rangle_{\{j_t\ } } \right ] = \nonumber \\ & & \sum_{n\ge 1 } \frac{(-1)^{n+1}}{n } ( \xi^*)^n \omega \left [ \prod_{t=1}^p \left ( 1 + \sum_{l=1}^n \left\langle j^l_t \right\rangle_{j_t } \sum_{a_1< ...<a_l}^{1,n } s_t^{a_1} ... s_t^{a_l}\right ) \right]=\nonumber \\ & & \sum_{n\ge 1 } \frac{(-1)^{n+1}}{n } ( \xi^*)^n \omega \left [ \prod_{t=1}^p \left ( 1 + \sum_{l=1}^n \left\langle j^l_t \right\rangle_{j_t } \sum_{a_1< ... <a_l}^{1,n } q^{a_1 ... a_l } \right ) \right]=\nonumber \\ & & \sum_{n\ge 1 } \frac{(-1)^{n+1}}{n } ( \xi^*)^n \omega[(1+q_n)^p]\end{aligned}\ ] ] where we have defined and .notice that due to the negative sign of , the coefficients are all negative .the analogous expansion of the second term is : = \nonumber \\ & & \sum_{n\ge 1 } \frac{(-1)^{n+1}}{n } ( \xi^*)^n \omega \left [ \left ( 1 + \sum_{l=1}^n \langle j^l \rangle_j \sum_{a_1< ... <a_l}^{1,n } q^{a_1 ...a_l } \right ) \left\langle \prod_{t=1}^{p-1 } \prod_{l=1}^{n } \left ( 1 + j_t \tanh ( \beta g_t ) \right ) \right\rangle_{\{j_t\},\{g_t\ } } \right ] = \nonumber \\ & & \sum_{n\ge 1 } \frac{(-1)^{n+1}}{n } ( \xi^*)^n\omega \left [ ( 1+q_n)\left\langle(1+j \tanh ( \beta g))^n\right\rangle_{j , g}^{p-1 } \right]\end{aligned}\ ] ] finally , the third terms in eq.([rem - ksat - rs ] ) immediately reads the sum of the three pieces in eq.([rem - ksat - rs ] ) gives : = \frac{\alpha}{\beta } \sum_{n\ge 1 } \frac{(-1)^{n}}{n } ( \xi^*)^n \omega \left [ ( 1+q_n)^p - p(1+q_n)\langle(1+j \tanh ( \beta g))^n\rangle_{j , g}^{p-1 } + ( p-1 ) \langle(1+j \tanh ( \beta g))^n\rangle_{j , g}^{p } \right]\ ] ] the previous sum is always positive semidefinite for even while we need for odd . we proceed in the same way as in the -spin case .the algebra is elementary but more tedious and involved , therefore we will only list the final results of the calculation . starting from eq.([pipposat ] ) , we again expand in series the first term , getting , with a treatment similar to the rs case : = \sum_{l\ge 1}\frac{m^l}{l } \sum_{k_1, ... ,k_l}^{1,\infty } ( -\xi^*)^{\sum_{s=1}^l k_s } \prod_{s=1}^l \left ( \frac{\prod_{r=1}^{k_s-1}(r - m)}{k_s ! 
}\right ) \omega^{(l ) } \left [ ( 1 + { \bf q}(k_1, ... ,k_l))^p \right ] \label{primomostro}\end{aligned}\ ] ] where we have defined : analogous steps give for the second term in eq.([pipposat ] ) \left\langle \prod_{s=1}^l \left\langle \left ( 1 + j \tanh ( \beta g ) \right)^{k_l } \right\rangle_g \right\rangle_{g , j}^{p-1 } \label{terzomostro}\end{aligned}\ ] ] and for the third term where in the last two terms we can further expand with equal to and respectively . since it is easy to see how only positive terms of the series survive .collecting all , we eventually find the complete power expansion for : \label{ultimoomostro}\end{aligned}\ ] ] where we have defined again , every term of the expansion is positive for even and for odd under condition .let us briefly sketch the proof of the existence of the thermodynamic limit of free - energy of the spin model for .let us define a model which interpolates between two non interacting systems with and spins respectively , and a system of spins .each clause will belong to the total system with probability , to the first subsystem with probability and to the second subsystem with probability .we chose the indices in the following way : for each clause the indices will be i.i.d .with probability , the indices will be chosen uniformly in the set , with probability the indices will be chosen in and with probability in the set .let us consider the free - energy .a direct calculation of its -derivative e \langle \log(1+\tanh(\beta j ) \omega(s_{i_1} ... s_{i_p}))\rangle_j.\ ] ] expanding the logarithm in series , observing that thanks to the symmetry of the distribution the odd term vanish , introducing the replica measure and using the convexity of the function for even one proves that which implies sub - additivity ; this is in turn is a sufficient condition to the existence of the free - energy density .the same prove applies to the even random k - sat model . for odd we face a difficulty similar to the one in the replica bounds .we can not prove sub - additivity due to the need to consider negative values of the overlaps , and non convexity of for negative .for a review on the work of mathematicians on spin glass systems see : a. bovier and p. picco ( eds . ) _ mathematical aspects of spin glasses and neural networks _ ,progress in probability , vol .41 , birkhauser , boston , 1997 .+ m. talagrand , proceedings of the berlin international congress of mathematicians ., extra volume i , ( 1998 ) 1 and references therein .we refer to the web page of m. talagrand for the comprehensive list of references on the work of this author : http://www.math.ohio-state.edu/ talagran/ j. s. yedidia , w. t. freeman , and y. weiss , _ understanding belief propagation and its generalizations _ , 2001 .merl technical report tr 2001 - 22 , available at http://www.merl.com/papers/tr2001-22 .j. s. yedidia , w. t. freeman , y. weiss , in _ advances in neural information processing systems 13 _ , edited by t. k. leen , t. g. dietterich , and v. tresp , ( mit press , cambridge , ma , 2001 ) | in this paper we generalize to the case of diluted spin models and random combinatorial optimization problems a technique recently introduced by guerra ( cond - mat/0205123 ) to prove that the replica method generates variational bounds for disordered systems . 
we analyze a family of models that includes the viana - bray model , the diluted p - spin model or random xor - sat problem , and the random k - sat problem , showing that the replica method provides an improvable scheme to obtain lower bounds of the free - energy at all temperatures and of the ground state energy . in the case of k - sat the replica method thus gives upper bounds on the satisfiability threshold . |
genetic interaction measures how different genes collectively contribute to a phenotype , and can reveal functional compensation and buffering between pathways ( or protein complexes ) under genetic perturbations .recently , genome - wide screening of genetic interactions have become possible for different species via high - throughput methods in which the phenotypic effect of the double - knockout of each pair of genes are compared with the aggregated effects of the two individual knockouts under an assumption of independence . an extreme case , called synthetic lethality , occurs when a double knockout results in the death of a cell even when the single knockouts have no effect .the genetic interaction networks revealed by these experiments provide novel insights both when analyzed by themselves and when integrated with other molecular networks or genomic datasets , such as physical interaction , gene expression and chemical - genetic interaction data . for higher eukaryotes such as human ,these reverse - genetics approaches have not been as straightforward due to both less amenable genetics and more complex phenotypes of interest such as disease onset and survival , which are difficult to study in cell based assays . despite slow progress in mapping genetic interactions using reverse genetics approaches in human cells, we are accumulating a wealth of data on individual genetic variations in human populations .genome - wide association studies ( gwas ) capturing single - nucleotide polymorphisms ( snps ) or copy number variations that have been widely applied for studying genetic differences between disease samples ( cases ) , and normal samples ( controls ) , offer an alternative means to map genetic interactions .for example , if two genetic variants have weak individual association with a disease but very strong joint association , the genes controlled by the two variants may have compensating functions that can buffer variations in each other , but yield a much higher risk of the disease phenotype of interest for a joint mutation .indeed , the genetic interactions between genetic variants ( such as snps and cnvs ) have been extensively studied as statistical epistasis ( used interchangeable with genetic interaction in this paper ) in both genome - wide and targeted studies .however , existing approaches mostly consider pairs ( or high - order combinations ) of genetic variants as separate interaction candidates , and hence estimate their statistical significance and study their biological interpretations in an isolated manner .while many statistically significant and biologically relevant instances of epistasis are discovered , these approaches may have the following drawback .several pairs of genetic variants may not have statistically significant genetic interactions when considered as individual interaction candidates , but nevertheless , can be collectively significant if they are highly enriched for two pathways that have complementary functions ( a between pathway model , bpm ) .this limitation motivates the design of approaches that can model the collective significance of snp pairs based on their network structure .specifically , we aim to construct human disease - specific ( cases vs. controls ) genetic interactions from gwas case - control datasets and then discover bpms from the constructed network , i.e. , two sets of genetic variants that have many genetic interactions across the two sets but none or very few within either set . 
for the community doing research on genetic interaction , the novelty is the exploration of whether the biologically interesting structures ( e.g. , bpms ) discovered from the genetic interaction networks of lower eukaryotes such as yeast also exist in complex diseases , such as cancer and neurological diseases , for higher eukaryotes such as humans . for the community doing research on gwas data analysis ,this work provides additional evidence to support the shift from the analysis of single genes ( or snps ) to sets of genes ( e.g. , gene sets or protein interaction subnetworks ) .existing approaches that focus on the discovery of individually significant pathways or subnetworks ignore those pathways or subnetworks that are individually insignificant but are compensatory to each other as a combination ( e.g. , pairs of pathways or subnetworks ) in their strong association with a disease phenotype . in this paper , we propose a general framework for constructing human disease - specific genetic interaction networks with gwas data . because different types of genomic data have unique characteristics that need to be addressed , we focus on genome - wide case - control snp data and its accompanying linkage disequilibrium ( ld ) structure .we discuss the challenges in the construction of genetic interaction networks due to ld structure and propose a general approach with three steps : ( 1 ) estimating snp - snp genetic interactions , ( 2 ) identifying ld blocks and summarizing snp - snp interactions to ld block - block genetic interactions , and ( 3 ) functional mapping ( e.g. gene mapping ) for each ld block . to illustrate how the constructed genetic interaction network can be used to obtain both known and novel biological insights about disease phenotype of interest in the case - control study we designed two sets of functional analyses ( and ) to analyze the genetic interaction networks constructed on each of the six case - control snp datasets used in this study . for functional analysis , we study whether a constructed human genetic interaction network has functional significance with respect to independent biological databases .specifically , we compare the ld block - block genetic interaction network and the genetic - interaction - profile - based ld block - block similarity network with the human functional network integrated in .interestingly , we find that the pairs of ld blocks that have high genetic interaction and those pairs that have high similarity of genetic interaction profiles have significantly higher functional similarity .this motivates the potential utility of the constructed genetic interaction network for revealing both known and novel biological insights into the disease phenotype of interests in a case - control study . 
for functional analysis , we study how to use the constructed human genetic interaction network to provide detailed insights about the compensation between pathways in their joint association with a disease phenotype .specifically , we discover between pathway models ( bpm ) from the block - block genetic interaction network .a bpm contains two sets of ld blocks , which have many cross - set genetic interactions but very few within - set genetic interactions .the experiments on the six snp datasets demonstrate that the discovered bpms have statistically significant properties ( supported by permutation tests of case - control groupings ) such as across - set densities of genetic interactions and functional enrichments based on three sets of biological databases .the significant bpms may provide indications of the compensation between pathways ( or protein complexes ) in their association with the disease phenotypes , and serve as a novel type of biomarker for complex diseases .comprehensive experimental results on the six case - control snp datasets support several points : ( i ) from the perspective of genetic interaction analysis , the constructed human genetic interaction network has functional significance , and the biologically interesting motifs such as bpm that are common in lower eukaryotes also exist in human with respect to complex diseases such as cancer and parkinson s disease ; ( ii ) from the perspective of gwas data analysis and biomarker discovery , discovering bpms from the constructed human genetic interaction network can help reveal novel biological insights about complex diseases , beyond existing approaches for gwas data analysis that either ignore interactions between snps , or model different snp - snp genetic interactions separately rather than studying global genetic interaction networks as in this study .we first present a general framework for constructing genetic interaction networks from genome - wide case - control snp datasets , and then describe two approaches for the functional analysis of the resulting networks that can be used to discover novel biological insights about complex diseases .for each step in this process , we selected a particular approach .various alternatives are possible , but due to the limitation of space and the desire to clearly explain our general approach , we do not discuss those alternatives in any detail .however , given the significant results obtained by the current approach consistent over six datasets ( section [ sec : exp ] ) , some of these alternatives should be explored to see if additional improvements in the results are possible .there are three steps in the network construction framework : ( i ) measuring all pairwise snp - snp genetic interaction with respect to the case - control grouping in a gwas dataset , ( ii ) summarizing snp - to - snp interactions into a block - level genetic interaction network , and ( iii ) functional mapping for each block .the principal goal of measuring genetic interactions between two snps is to capture the non - additive effect between the two snps in their combined association with the phenotype of interest . for this purpose ,we leverage the extensive research on statistical epistasis that has been recently reviewed by h. cordell . among the different measures for epistasis ,we selected the information theoretic _ synergy _. the _ synergy _ between two snps with respect to a binary class label variable ( cases vs. controls ) , i.e. 
, is defined in as follows : where denotes the mutual information between the class variable and a variable in this paper , _ synergy _ and mutual information are always normalized by , after which , ranges from to .the focus of this paper is on positive _ synergy _ since we want to measure the interaction between two snps beyond the independent additive effect of their joint association with a case - control phenotype .the interpretation of a positive _ synergy _ , e.g. 0.2 , is that the two snps as a combination provide 20% more information about beyond the summation of information provided independently by the two snps . in this paper , we say two snps have a positive genetic interaction if they have a positive _ synergy _ ( beyond - additive effect ) to keep the sign of genetic interaction _ synergy _ consistent . note that , in reverse - genetics based yeast genetic interaction , negative genetic interaction is used to denote beyond - additive effect .we chose to compute synergy for all pairs of snps rather than just those pairs for which one snp has sufficiently large marginal effects since we did not want to risk missing snp pairs that have weak ( or no ) marginal effect but a strong combined effect as discussed in since these pairs are essential for building an interaction network that may have even better statistical power than other approaches .after the _ synergy _ calculation for all pairwise snps , we get a full weighted snp - snp network .we denote this matrix as , as shown in figure [ fig : flowchar_construct ] ) . this network can not be directly interpreted because of the ld structure in the snp data . in the next section, we discuss this challenge in detail and present an approach to address it . ;( 2 ) identifying ld blocks ; and ( 3 ) summarizing a snp - snp network to a block - block genetic interaction network ( ) . ]due to the ld structure in snp data , nearby snps tend to have correlated genotypes over the samples .therefore , if a pair of snps ( and ) have large _ synergy _ , the snps close to probably also have large _ synergy _ with the snps close to .this can result in a trivial type of local motif ( approximate bicliques ) in the snp - snp network as illustrated in figure [ fig : ld_biclique].this biases the functional analysis since such bicliques do not reflect the functional similarity between the snps in the same ld block . in order to gain non - trivial insights from the genetic interaction network ,we propose to summarize the snp - snp interaction network by an ld block - block network . * identifying ld blocks * : different measures for estimating ld such as and are specifically designed for measuring the non - random associations between polymorphisms at different loci .these measures capture the difference between observed and expected allelic frequencies ( assuming random distributions ) , which depend on the phase information and define an ld block within a genomic region . with a related but different purpose , our goal in this study is to identify a set of snps on the same chromosome having similar genotype profiles and use a single block to represent them .we use hamming similarity to measure the correlation between the genotype profiles of snps .( snps are columns in the case - control data as illustrated in figure [ fig : flowchar_construct ] . 
)this similarity serves better for our purpose of measuring mathematical similarity between two snps rather than the ld .we take such a conservative approach in order to make sure that we do not create two separate ld blocks for snps that have similar genotype profiles .this also avoids the estimation of phase information , which adds additional uncertainty that may confound the analysis .we do not restrict an ld block to be within a local genomic region .this is because snps that are far from each other can also have high genotype correlation , at least from the mathematical perspective as shown in .again , we take such a conservative approach in order to make sure that we do not create two separate ld blocks for snps that have similar genotype profiles , which may happen if a window - size constraint is used . for simplicity , we perform a greedy search of ld blocks .specifically , we randomly take a snp and combine it with all the other snps ( on the same chromosome ) with hamming similarity above a threshold ( is used in this study ) as an ld block .a snp will only be assigned to one ld block .* snp - snp network to ld block - block network * : after identifying all the ld blocks in a dataset , we have a many - to - one mapping from all the snps to a set of blocks . given this mapping ,we summarize the snp - snp _ synergy _ network to a block - block _ synergy _ network using the following general function to estimate the _ synergy _ between two blocks and , where denotes a general aggregation function , e.g. or . in this paper, we adopt the function based on the following reasons and observations : ( 1 ) biologically , it is likely that only one pair of snps across two ld blocks are truly causative in the case - control phenotype , in which case is the ideal aggregation function .( 2 ) based on multiple datasets used in the experiment section , the function consistently yields coherence with existing biological knowledge gained from yeast genetic interaction networks .( 3 ) in the sanity check , the pairs with top _ synergy _ values have similar ld - block sizes as the null distribution , and thus are not due to the bias of large ld block sizes .there are other aggregation approaches that we will explore in future work .after this step , we have a block - block genetic interaction network , which we denote as as illustrated in figure [ fig : flowchar_construct ] . * functional mapping for each ld block * : after the construction of an ld block - block genetic interaction network , a functional mapping for each block is required to interpret the structure of the interaction network in functional terms that have biological meaning . 
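before turning to the functional mapping , steps ( 1 ) and ( 2 ) above can be summarized in a small sketch ( ours , not the authors implementation ) . the synergy used below , ( i(x_1,x_2;c ) - i(x_1;c ) - i(x_2;c ) ) / h(c ) , is the standard information - theoretic definition and the normalization by h(c ) is our assumption ; the hamming threshold ( 0.9 ) is likewise a placeholder for the value used in the study .

```python
# Sketch of steps (1)-(2): pairwise SNP-SNP synergy, greedy LD-block grouping by
# Hamming similarity, and max-aggregation to a block-block synergy network.
# Assumed details: synergy normalized by H(C); Hamming threshold 0.9; blocks restricted
# to a single chromosome, as described in the text.
import itertools
import numpy as np
from sklearn.metrics import mutual_info_score

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def synergy(x1, x2, c):
    joint = np.array([f"{a}_{b}" for a, b in zip(x1, x2)])  # joint genotype variable
    return (mutual_info_score(joint, c)
            - mutual_info_score(x1, c)
            - mutual_info_score(x2, c)) / entropy(c)

def greedy_ld_blocks(genotypes, chrom, threshold=0.9):
    """genotypes: (n_samples, n_snps) array of genotype codes; chrom: per-SNP chromosome."""
    unassigned = set(range(genotypes.shape[1]))
    blocks = []
    while unassigned:
        seed = unassigned.pop()                    # take a SNP and grow its block greedily
        block = [seed]
        for j in list(unassigned):
            if chrom[j] == chrom[seed]:
                sim = np.mean(genotypes[:, j] == genotypes[:, seed])  # Hamming similarity
                if sim >= threshold:
                    block.append(j)
                    unassigned.remove(j)
        blocks.append(block)
    return blocks

def block_block_synergy(genotypes, labels, blocks):
    bb = np.zeros((len(blocks), len(blocks)))
    for a, b in itertools.combinations(range(len(blocks)), 2):
        # max over cross-block SNP pairs, matching the aggregation adopted in the text
        bb[a, b] = bb[b, a] = max(
            synergy(genotypes[:, i], genotypes[:, j], labels)
            for i in blocks[a] for j in blocks[b])
    return bb
```

the returned matrix plays the role of the block - block synergy network described above , and it is this network on which the functional mapping step operates .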
for this purpose, we first assign each snp to the closest gene based on its genome location .then , the genes of an ld block are obtained from the snps that were assigned to that block in the ld identification step .we will explore more advanced gene mapping strategies in future work .interestingly , even with this simple gene mapping approach , the functional analysis in section [ sec : exp ] shows that the constructed ld block - block interaction network still appears to have functional structure .note that the gene mapping does not result in a gene - gene interaction network .this is because an ld block can span multiple genes so that gene - gene network derived from the blocks would contain many trivial biclique patterns .from this perspective , the genetic interaction network constructed for yeast in from eqtl data may have included a large number of false positive gene - gene edges since it connected all the gene pairs from the two ld blocks .in contrast , the block - block network constructed in this study has the least amount of bias from trivial bicliques . in section [ sec : construction ] , we presented a framework for constructing a block - block genetic interaction network from a human case - control genomic dataset . in this subsection , we present two sets of functional analyses : , which comparing the constructed ld block - block genetic interaction network and the corresponding similarity network derived from the human functional network , and , which discovers between pathway models ( bpm ) from the ld block - block genetic interaction network for functional enrichment analysis . both types of analysis have been used in the systematic interpretation of double - knockout - based yeast genetic interaction networks .here we use a similar approach to reveal novel biological insights from the genetic interaction networks constructed from human case - control genomic datasets .figure [ fig : flowchart_bpm_net_analysis ] shows the overall design of the two types of functional analysis and will be referenced extensively in the rest of this section . an important point in understanding following discussionis that we first threshold the block - block genetic interaction matrix ( ) to a binary matrix ( ) .specifically , we binarize this network with a quantile threshold ( e.g. 1% ) , such that those block - block edges with _ synergy _ in the top quantile ( those with large beyond - addtive effect interactions ) are kept in the binary network . we denote the matrix representation of this network as as illustrated in figure [ fig : flowchart_bpm_net_analysis ] . both of the analyses make use of binarized matrix . . there are two types of analysis : ( ) constructing the block - block similarity network ( ) from and enrichment analysis , and ( ) discovering bpms ( between pathway models ) from and enrichment analysis .both enrichment analyses are with respect to three sources of biological knowledge , namely ( i ) the block - block functional similarity network ( ) derived from the human functional network , ( ii ) the human go annotations and ( iii ) the human molecular signature database ( msigdb ) . ] in the first approach , we study whether the constructed human genetic interaction network has functional significance supported by independent biological databases . specifically , we first use jaccard similarity to measure the similarity between the profiles of genetic interactions of two blocks in . 
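a compact sketch ( again ours , with assumed details ) of this top - quantile binarization and of the jaccard similarity between two blocks interaction profiles is given below ; the 1% cutoff mirrors the example quantile quoted above , and the handling of ties and of the diagonal is an illustrative choice .

```python
# Binarization of the block-block synergy matrix at a top quantile, followed by
# Jaccard similarity between the binary interaction profiles of pairs of blocks.
import numpy as np

def binarize_top_quantile(bb_synergy, q=0.01):
    """Keep only block-block edges whose synergy lies in the top q quantile."""
    upper = bb_synergy[np.triu_indices_from(bb_synergy, k=1)]
    cutoff = np.quantile(upper, 1.0 - q)
    binary = (bb_synergy >= cutoff).astype(int)
    np.fill_diagonal(binary, 0)
    return binary

def jaccard_profile_similarity(binary_bb):
    """Jaccard similarity between the interaction profiles (rows) of two blocks."""
    n = binary_bb.shape[0]
    sim = np.zeros((n, n))
    for a in range(n):
        for b in range(a + 1, n):
            inter = np.sum(binary_bb[a] & binary_bb[b])
            union = np.sum(binary_bb[a] | binary_bb[b])
            sim[a, b] = sim[b, a] = inter / union if union else 0.0
    return sim
```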
this results in matrix shown in figure [ fig : flowchart_bpm_net_analysis ] .the motivation is that the analysis of global yeast genetic interaction networks has shown that the similarity between such interaction profiles of two genes is correlated with the functional similarity of two genes . in the experimental section, we will use the constructed block - block similarity matrix to compare the similarity of block interaction profiles with the human gene - gene functional network integrated in . functional analysis approach , takes a complementary approach to discover between pathway models ( bpm ) .( this approach was demonstrated to be effective in the analysis of yeast genetic interaction networks . ) using insights gained from yeast genetic interaction network , a bpm contains two sets of genes that have many across - set genetic interactions and within - set protein - protein interactions .the two sets of genes in a bpm may correspond to two biological pathways ( or protein complexes ) that have redundant ( or complementarity ) biological functions with respect to the case - control grouping . in the context of this study , a bpm contains two sets of ld blocks , which have many large cross - set genetic interactions ( synergy values ) in and very few within - set genetic interactions .discovering bpms from the binarized disease - specific genetic interaction matrix ( ) can provide novel insights beyond existing approaches designed for analyzing case - control snp datasets .recently , existing approaches for analyzing case - control datasets have shifted from discovering single genes ( or snps etc ) to sets of genes ( e.g. pathways or protein interaction subnetworks ) .while many statistically significant and biologically relevant gene sets or subnetworks are discovered , existing approaches may ignore those pathways or subnetworks that are individually insignificant but are compensatory to each other as a combination in their strong association with a disease .from this perspective , a bpm captures the compensation between two pathways in their combined association with complex diseases such as cancer that may be caused by the perturbation of multiple ( e.g. , a pair of ) pathways but not individual pathways .this may lead to the discovery of a new type of complex biomarker , i.e. pairs of compensating pathways or protein complexes .many algorithms have been proposed to discover bpms from yeast genetic interaction networks .these algorithms mostly depend on integration of both physical interaction network and genetic interaction network .such integrative approaches have the advantage of better interpretability because of the integration with physical interaction data .however , given our goal of discovering bpms from human case - control datasets and the fact that human protein interaction network is not as complete and lacks reproducibility , an integrative approach may miss many bpms that are not yet well - supported by the existing limited functional knowledge of the human genome .for the above reason , in this study , we took an exploratory approach in which , we search for bpms only from the genetic interaction matrix . given a binary symmetric matrix ( ) , the bpm discovery problem can be reformulated as a quasi - biclique discovery problem . a quasi - biclique is defined as a non - weighted bipartite graph ( where and are two sets of ld blocks and is a set of ld block - block genetic interaction edges ) such that , for a given parameter . 
where denotes the number of edges between and all the nodes in ( similarly for ) . in this paper , we adopt a greedy algorithm with polynomial time complexity to efficiently search for quasi - bicliques from the binary block - block network ( ) . note that several other algorithms designed for quasi - biclique discovery or bpm discovery can also be applied for the same purpose . these will be explored in future work , as this paper focuses on presenting the overall framework . based on the definition , a quasi - biclique may also have many edges within each of the two sets ( and ) , while bpms with no or very few within - set edges are relatively more interesting . therefore , after the discovery of a set of quasi - bicliques , a postprocessing step will be applied to further select a subset of bicliques with a small fraction of within - set genetic interactions as bpm candidates for the subsequent functional analysis . we will design experiments to interpret the discovered bpms with the human functional network and conduct enrichment analysis with human go annotations and the human molecular signature database ( msigdb ) in section [ sec : bpmresults ] . in this section , we present the experimental results of the two sets of functional analyses ( described in section [ sec : analysis ] ) applied to the networks constructed with the framework presented in section [ sec : construction ] . in the experiments , we use six case - control snp datasets : one from a genome - wide association study on parkinson s disease ( disease vs. normal ) , and the others from targeted studies ( with around 3000 snps over about 1000 genes ) on myeloma long vs. short survival , myeloma cases vs. controls , lung cancer cases vs. controls ( both heavy smokers ) , rejection vs. no - rejection after kidney transplant , and bone disease ( large vs. small number of bone lesions ) . we denote these 6 datasets as parkinson , myeloma - survival , myeloma , lung , kidney and bone in this section . for the genome - wide parkinson data , we selected 8994 non - synonymous snps for this study out of concern for computational efficiency , given the huge number of pairwise snp - snp genetic interactions to compute , especially in the large number ( 100 ) of permutations run for each dataset ( functional analysis ii ) . table [ tab : datasets ] displays information about the six datasets , including the number of cases , controls , snps , ld blocks identified and edges in the ld block - block genetic interaction network ( ) . as noted in its caption , the number of blocks depends on the hamming similarity threshold ( used in this paper ) , and the number of ld block - block ( b - b ) edges depends on the binarization threshold ( 1.5% for all the datasets except parkinson , for which we used 0.25% to speed up the bpm discovery in functional analysis ii ) . several observations can be made from table [ tab : fdrtable ] .
1 .* statistical significance of the discovered bpms * : many bpms are significant with respect to the on the the density of genetic interactions across the two sets .specifically , there are significant bpms discovered with respect to on all the six datasets .this indicates that the existence of genetically buffered functional modules as captured by the bpm structure is evident in genome - wide case - control snp datasets .these bpms generally have larger sizes and interaction densities than the random bpms discovered from permutation tests .* biological significance of the discovered bpms * : many bpms are significant with respect to the last three measures , i.e. , and , which suggests that they are not only statistically significant , but are also supported by independent genomic / proteomic evidence .more specifically , the density of edges in the functional network , within both compensatory modules ( , ) as well as between them ( ) , is frequently higher for the bpms derived from the real data as compared to the random permutations of case - control groupings .while table [ tab : fdrtable ] gives an overall summary of the bpms discovered from each of the datasets , figure [ fig : bpm_examples ] shows two illustrative examples of bpms discovered from parkinson and myeloma - survival .many bpms from the other four datasets are available on the supplementary website .p - values ( test ) for all the individual snps in the blocks of each side of a bpm is also shown .a dashed line is shown in each histogram , which corresponds to the p - value without bonferroni correction .all the snps have insignificant individual p - value supports the highlight of the proposed framework to discover significant bpms containing snps with weak individual association , that are mostly ignored by existing approaches . ]several observations can be made from figure [ fig : bpm_examples ] .we will use the bpm discovered from parkinson as a running illustrative example of their interpretation . 1 . * the statistical significance of the bpm discovered from parkinson : * the table on the left shows the fdrs for each of the six bpm measures . as shown , the six fdrs are all below with several below ( the latter four ) .a of ( fdr ) shows the dense genetic interactions between the two sets of ld blocks in this bpm .an of 0.98 ( fdr 0.05 ) indicates that the right set of ld blocks have strong ld block - block functional similarity , which agrees the concept of bpm , i.e. each side of the bpm is likely to be involved in a common pathway or a process .an of ( fdr 0.03 ) suggests that the two sets of ld blocks in this bpm may control two functions respectively , which may have compensatory function under genetic variations . 2 .* most snps in the bpm have insignificant individual association * : the histograms of individual snp association ( p - value ) ) indicate that , almost all of the snps in the significant bpm have insignificant individual p - value ( below the green dashed line which corresponds to even before bonferroni correction ) .this supports the utility of the proposed framework for discovering significant bpms that contain snps with weak individual association , which would be mostly ignored by existing approaches .* all cross - set snp pairs in the bpm have insignificant genetic interaction when considered separately * : we also compute fdr for each snp pair in a bpm across the two sets of ld blocks . 
for the two examples shown in the figure , the lowest fdrs of individual snp pairs are and in the parkinson example and the myeloma - survival example respectively .this indicates that none of the snp pairs in the two datasets would be considered as significant epistasis if they are considered in an isolated manner .in contrast , discovering them as a bpm with the proposed approach yields their significant fdrs .this demonstrates the effectiveness of the proposed functional analysis for genome - wide case - control snp datasets .* the two sets in the bpm contain ld blocks either from different chromosomes or from the same chromosome but with different genotype profiles * : the two heat maps of all pair s hamming similarities for the all the snps ( grouped by ld blocks ) indicate that the snps within each ld block have correlated genotype profiles while those from different ld blocks have different genotype profiles .this indicates that the bpm is not a trivial biclique that is due to ld structure .note that , this is an illustration of the effectiveness and necessity of constructing and analyzing a block - level genetic interaction network instead of either a snp - level network or a gene - level network as discussed in the methods section .the diversity of chromosomes in both sets of a bpm indicates that the two compensating functions ( e.g. pathways ) are from different chromosomes , showing the complexity of the mechanisms underlying the disease phenotype .* an illustrative interpretation on the bpm discovered on the parkinson s dataset * : given the statistical significance of the bpm structures discovered across many of the gwas studies , we further asked whether the genes on both sides were enriched for known pathways or processes using gene sets defined by gene ontology terms or msigdb .consistent with their overlap with the functional network , a number of the modules involved in bpms did show significant enrichment ( see figure [ fig : bpm_examples ] for examples on the parkinson and myeloma - survival datasets or the supplementary website for a complete list ) .the bpm shown in figure [ fig : bpm_examples ] that is associated with parkinson s disease was a pair of modules , one enriched for ion channel activity and the other enriched for protein kinase activity .the ion channel activity enrichment is driven by three genes , , , each of which comes from a separate ld block on three different chromosomes ( chromosomes 9 , 17 , and 21 , respectively ) .this is potentially interesting as potassium channels were recently suggested as a possible new target for therapeutics . it was hypothesized that such ion channels may affect the progressive loss of dopamine neurons , which is the main cause of parkinson s disease .it is also interesting to note that mouse knock - out mutants of , a potassium voltage - gated channel protein , have been associated with the so - called shaker / waltzer phenotype , which is characterized by rapid bilateral circling during locomotion .the complementary module in this parkinson s disease bpm was enriched for protein kinase activity due to the presence of the two protein kinases ( c - mer proto - oncogene tyrosine kinase ) and ( eukaryotic translation initiation factor 2-alpha kinase 3 ) , suggesting that combined mutations affecting ion channel activity and one of these signaling pathways may be causal determinants of parkinson s . 
while the specific link is unclear , it is interesting to note that is one of the key regulators of the eukaryotic translation initiation factor and thus , it controls global rates of protein synthesis in the cell .it is certainly conceivable that mutations in such a protein with relatively global influence on protein levels could modify , and in this case aggravate , the effects of other mutations .in fact , mutations in another translation initiation factor , , were recently associated with vanishing white matter disease , a disorder that causes rapid deterioration of the central nervous system .given the large number of genes involved in the ld blocks associated with each bpm , identifying the genes functionally responsible could be quite difficult and is one of the main caveats of this type of analysis. however , this process can be aided by simple enrichment analysis , which in this case appears to implicate processes whose link to parkinson s disease seems plausible .in this paper , we target the construction and functional analysis of disease - specific human genetic interaction networks from genome - wide association data designed for case - control studies on complex diseases . specifically , we focused on genome - wide case - control snp data , which has its linkage disequilibrium ( ld ) structure .we discussed the challenges in the detection of genetic interactions due to ld structure and propose a general approach with three steps : ( 1 ) estimating snp - snp genetic interactions , ( 2 ) identifying genome segments in linkage disequilibrium ( ld ) and mapping of snp interactions to ld block - block interactions , and ( 3 ) mapping for ld blocks to genes .we performed two sets of functional analyses on six case - control snp datasets to study if the constructed human genetic interaction network has functional significance supported by independent biological evidence by comparing with a human functional networks .we also demonstrated how the constructed interaction network can provide high - resolution insights about the compensation between pathways in their joint association with a disease phenotype by discovering between - pathway models .comprehensive experimental results on six case - control datasets demonstrated that ( i ) from the perspective of genetic interaction analysis , the constructed human genetic interaction network has functional significance , and that biologically interesting motifs such as bpm that are common in lower eukaryotes also exist in the genetic interaction network discovered from human genetic variations associated with complex diseases such as cancers and parkinson s disease ; ( ii ) from the perspective of gwa data analysis , discovering bpms from the constructed human genetic interaction network can help reveal novel biological insights about complex diseases , beyond existing approaches for gwas data analysis that either ignore interactions between snps , or model different snp - snp genetic interactions separately rather than studying global genetic interaction networks as done in this study .this paper focused on the presentation of the overall framework of constructing and analyzing human disease - specific genetic interaction network with gwas data .there are a number of interesting and necessary directions for future work such as exploring the effect of different epistasis measures , the effect of different ld block identification approaches , the effect of different aggregation functions , and the effect of different gene mapping 
approaches . in conclusion, we want to highlight that , even though we chose to use some relatively simple and conservative options in the framework which needs further exploration as discussed above , the significant statistical and biological evidence obtained from the two sets of functional analyses demonstrate the effectiveness of the current framework in revealing several consistent observations over six case - control snp datasets .g. fang , m. haznadar , w. wang , m. steinbach , b. vanness , and v. kumar .efficient discovery of high - order snp combinations with strong phenotype association .technical report 013 , dept . of computer science , univ . of minnesota , 2010 .b. v. ness , c. ramos , m. haznadar , a. hoering , j. c. jeff haessler , s. jacobus , m. oken , v. rajkumar , p. greipp , b. barlogie , b. durie , m. katz , g. atluri , g. fang , r. gupta , m. steinbach , v. kumar , r. mushlin , d. johnson , and g. morgan . ., 6:66 , 2008 . | genetic interaction measures how different genes collectively contribute to a phenotype , and can reveal functional compensation and buffering between pathways under genetic perturbations . recently , genome - wide investigation for genetic interactions has revealed genetic interaction networks that provide novel insights both when analyzed independently and when integrated with other functional genomic datasets . for higher eukaryotes such as human , the above reverse - genetics approaches are not straightforward since the phenotypes of interest for higher eukaryotes such as disease onset or survival , are difficult to study in a cell based assay . in this paper , we propose a general framework for constructing and analyzing human genetic interaction networks from genome - wide single nucleotide polymorphism ( snp ) datasets used for case - control studies on complex diseases . specifically , we propose a general approach with three major steps : ( 1 ) estimating snp - snp genetic interactions , ( 2 ) identifying linkage disequilibrium ( ld ) blocks and mapping snp - snp interactions to ld block - block interactions , and ( 3 ) functional mapping for ld blocks . we performed two sets of functional analyses for each of the six case - control snp datasets used in the paper , and demonstrated that ( i ) genes in ld blocks showing similar interaction profiles tend to be functionally related , and ( ii ) the network can be used to discover pairs of compensatory gene modules ( between - pathway models ) in their joint association with a disease phenotype . the proposed framework should provide novel insights beyond existing approaches that either ignore interactions between snps or model different snp - snp pairs with genetic interactions separately . furthermore , our study provides evidence that some of the core properties of genetic interaction networks based on reverse genetics in model organisms like yeast are also present in genetic interactions revealed by natural variation in human populations . |
ground - breaking papers are extreme events in science . they can transform the way in which researchers do science in terms of the subjects they choose , the methods they use , and the way they present their results . the related spreading of ideas has been described as an epidemic percolation process in a social network . however , the impact of most innovations is limited . there are only a few ideas which gain attention all over the world and across disciplinary boundaries . typical examples are elementary particle physics , the theory of evolution , superconductivity , neural networks , chaos theory , systems biology , nanoscience , and network theory . it is still a puzzle , however , how a new idea and its proponent can be successful , given that they must beat the rich - gets - richer dynamics of already established ideas and scientists . according to the matthew effect , famous scientists receive an amount of credit that may sometimes appear disproportionate to their actual contributions , to the detriment of younger or less known scholars . this implies a great authority of a small number of scientists , which is reflected by the big attention received by their work and ideas , and of the scholars working with them . therefore , how can a previously unknown scientist establish at all a high scientific reputation and authority , if those who get a lot of citations receive even more over time ? here we shed light on this puzzle . the following results for nobel prize laureates in chemistry , economics , medicine and physics suggest that innovators can gain reputation and innovations can successfully spread , mainly _ because _ a scientist s body of work overall enjoys a greater impact after the publication of a landmark paper . not only do colleagues notice the ground - breaking paper , but the latter also attracts attention to older publications of the same author ( see fig . ) . [ h ] the landmark paper was published in 1989 and is the most cited work of fenn , with currently over citations . the diagram reports the growth in time of the total number of citations received by this landmark paper ( blue solid line ) and by six older papers . the diagram indicates that the number of citations of the landmark paper has literally exploded in the first years after its appearance . however , after its publication in 1989 , a number of other papers also enjoyed a much higher citation rate . thus , a sizeable part of previous scientific work has reached a big impact after the publication of the landmark paper . we found that the occurrence of this boosting effect is characteristic for successful scientific careers . ] consequently , _ future _ papers have an impact on _ past _ papers , as their relevance is newly weighted . we focus here on citations as an indicator of scientific impact , studying data from the isi web of science , but the use of click streams would be conceivable as well . it is well - known that the relative number of citations correlates with research quality . citations are now regularly used in university rankings , in academic recruitment and for the distribution of funds among scholars and scientific institutions . we evaluated data for nobel prize laureates that were awarded in the last two decades ( - ) , which include an impressive number of about million citations . for all of them and other internationally established experts as well , we find peaks in the changes of their citation rates ( figs . 2 and 3 ) . [ h ] for nobel laureates [ here for ( a ) mario r. capecchi ( medicine , 2007 ) , ( b ) john c.
mather ( physics , 2006 ) , ( c ) roger y. tsien ( chemistry , 2008 ) and ( d ) roger b. myerson ( economics , 2007 ) ] .sharp peaks indicate citation boosts in favor of older papers , triggered by the publication and recognition of a landmark paper .insets : the peaks even persist ( though somewhat smaller ) , if in the determination of the citation counts , the landmark paper is skipped ( which is defined as the paper that produces the largest reduction in the peak size , when excluded from the computation of the boost factor ) .we conclude that the observed citation boosts are mostly due to a collective effect involving several publications rather than due to the high citation rate of the landmark paper itself . ][ h ] versus traditional citation variables .each panel displays the time histories of four variables : the boost factor , the average number of citations per paper , the cumulative number of citations , and the -index earned until year .the panels refer to the same nobel laureates as displayed in fig .the classical indices have relatively smooth profiles , i.e. they are not very sensitive to extreme events in the life of a scientist like the publication of landmark papers .an advantage of the boost factor is that its peaks allow one to identify scientific breakthroughs earlier . ]moreover , it is always possible to attribute to these peaks landmark papers ( fig .4 ) , which have reached hundreds of citations over the period of a decade .we first determined the ranks of all papers of an author based on the total number of citations received until the year inclusively .we then determined the rank of that particular publication , which had the greatest contribution to the peak .this was done by measuring the reduction in the height of the peak , when the paper was excluded from the calculation of the boost factor ( as in the insets of fig .the distribution of the ranks of `` landmark papers '' is dominated by low values , implying that they are indeed among the top publications of their authors . ]such landmark papers are rare even in the lives of the most excellent scientists , but some authors have several such peaks . technically , we detect a groundbreaking article published at time by comparing the citation rates before and after for the earlier papers .the analysis proceeds as follows : given a year and a time window , we take all papers of the studied author that were published since the beginning of his / her career until year .the citation rate measures the average number of citations received per paper per year in the period from to .similarly , the citation rate measures the average number of citations received by the same publications per paper per year between and ( or , if exceeds ) . the ratio , which we call the `` boost factor '' , is a variable that detects critical events in the life of a scientist : sudden increases in the citation rates ( as illustrated by fig .1 ) show up as peaks in the time - dependent plot of . 
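A minimal sketch of this quantity, under an assumed per-paper data layout (publication year plus a year-by-year citation count), may help fix the definition; the windowing convention and the names below are illustrative assumptions, not the exact implementation used in the study:

def boost_factor(papers, t, dt, t_max):
    # papers: {paper_id: (publication_year, {year: citations received in that year})}
    published = [cites for (year, cites) in papers.values() if year <= t]
    if not published:
        return None
    before = range(t - dt + 1, t + 1)                  # years in (t - dt, t]
    after = range(t + 1, min(t + dt, t_max) + 1)       # years in (t, t + dt], capped at t_max
    if len(after) == 0:
        return None
    a = sum(c.get(y, 0) for c in published for y in before) / (len(published) * len(before))
    b = sum(c.get(y, 0) for c in published for y in after) / (len(published) * len(after))
    return b / a if a > 0 else float("inf")            # the boost factor is the ratio b / a

papers = {
    "older work": (1984, {1987: 2, 1988: 1, 1990: 8, 1991: 9}),
    "landmark": (1989, {1990: 30, 1991: 45}),
}
print(boost_factor(papers, t=1989, dt=2, t_max=1991))   # large value: a citation boost at t = 1989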
in our analysis we used the generalized boost factor , which reduces the influence of random variations in the citation rates ( see materials and methods ) .figure 2 shows typical plots of the boost factors of four nobel prize laureates .interestingly , peaks are even found , when those papers , which mostly contribute to them , are _ excluded _ from the analysis ( see insets of fig .that is , the observed increases in the citation rates are not just due to the landmark papers themselves , but rather to a collective effect , namely an increase in the citation rates of _ previously _ published papers .this results from the greater visibility that the body of work of the corresponding scientist receives after the publication of a landmark paper and establishes an increased scientific impact ( `` authority '' ) . from the perspective of attention economics , it may be interpreted as a herding effect resulting from the way in which relevant information is collectively discovered in an information - rich environment .interestingly , we have found that older papers receiving a boost are not always works related to the topic of the landmark paper .traditional citation analysis does not reveal such crucial events in the life of a scientist very well .figure 3 shows the time history of three classical citation indices : the average number of citations per paper , the cumulative number of citations , and the hirsch index ( -index ) in year . for comparison ,the evolution of the boost factor is depicted as well .all indices were divided by their maximum value , in order to normalize them and to use the same scale for all .the profiles of the classical indices are rather smooth in most cases , and it is often very hard to see any significant effects of landmark papers. however , this is not surprising , as the boost factor is designed to capture abrupt variations in the citation rates , while both and reflect the overall production of a scientist and are therefore less sensitive to extreme events . to gain a better understanding of our findings , figs . 4 and 5 present a statistical analysis of the boosts observed for nobel prize laureates .[ h ] and .the power law fits ( lines ) are performed with the maximum likelihood method .the exponents for the direct distribution ( of which the cumulative distribution is the integral ) are : ( top left ) , ( bottom left ) , ( top right ) , ( bottom right ) .the best fits have the following lower cutoffs and values of the kolmogorov - smirnov ( ks ) statistics : , ( top left ) , , ( bottom left ) , , ( top right ) , , ( bottom right ) .the ks values support the power law ansatz for the shape of the curves .still , we point out that on the left plots the data span just one decade in the variable , so one has to be careful about the existence of power laws here . 
]figure 4 demonstrates that pronounced peaks are indeed related to highly cited papers .furthermore , fig .5 analyzes the size distribution of peaks .the distribution looks like a power law for all choices of the parameters and ( at least within the relevant range of small values ) .this suggests that the bursts are produced by citation cascades as they would occur in a self - organized critical system .in fact , power laws were found to result from human interactions also in other contexts .the mechanism underlying citation cascades is the discovery of new ideas , which colleagues refer to in the references of their papers .moreover , according to the rich - gets - richer effect , successful papers are more often cited , also to raise their own success .innovations may even cause scientists to change their research direction or approach .apparently , such feedback effects can create citation cascades , which are ultimately triggered by landmark papers . finally , it is important to check whether the boost factor is able to distinguish exceptional scientists from average ones . since any criteria used to define `` normal scientists ''may be questioned , we have assembled a set of scientists taken at random .scientists were chosen among those who published at least one paper in the year 2000 .we selected names for each of four fields : medicine , physics , chemistry and economy .after discarding those with no citations , we ended up with scientists . in fig .6 we draw on a bidimensional plane each scientist of our random sample ( empty circles ) , together with the nobel prize laureates considered ( full circles ) .the two dimensions are the value of the boost factor and the average number of citations of a scientist .a cluster analysis separates the populations in the proportions of to .the separation is significant but there is an overlap of the two datasets , mainly because of two reasons .first , by picking a large number of scientists at random , as we did , there is a finite probability to choose also outstanding scholars .we have verified that this is the case .therefore , some of the empty circles deserve to sit on the top - right part of the diagram , like many nobel prize laureates .the second reason is that we are considering scholars from different disciplines , which generally have different citation frequencies .this affects particularly the average number of citations of a scientist , but also the value of the boost factor . in this way , the position in the diagram is affected by the specific research topic , and the distribution of the points in the diagram of fig .6 is a superposition of field - specific distributions .nevertheless , the two datasets , though overlapping , are clearly distinct . adding further dimensions could considerably improve the result . 
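The maximum-likelihood power-law fits and Kolmogorov-Smirnov statistics quoted for the peak-size distribution can be illustrated with the standard continuous-variable estimator; the snippet below is a generic sketch of that procedure (with a caller-supplied lower cutoff), not the authors' fitting code:

import numpy as np

def fit_power_law(sample, x_min):
    # maximum-likelihood exponent of p(x) ~ x^(-alpha) for x >= x_min,
    # together with the Kolmogorov-Smirnov distance between data and fit
    x = np.sort(np.asarray([v for v in sample if v >= x_min], dtype=float))
    n = x.size
    alpha = 1.0 + n / np.sum(np.log(x / x_min))
    empirical = np.arange(1, n + 1) / n
    model = 1.0 - (x / x_min) ** (1.0 - alpha)          # fitted cumulative distribution
    return alpha, np.max(np.abs(empirical - model))

rng = np.random.default_rng(0)
toy = (1.0 - rng.random(5000)) ** (-1.0 / 1.5)          # synthetic sample with exponent 2.5
print(fit_power_law(toy, x_min=1.0))                    # recovers an exponent close to 2.5

The heavy tail quantified in this way complements the two-dimensional separation of laureates from randomly chosen scientists discussed above.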
in this respect, the boost factor can be used together with other measures to better specify the performance of scientists .in summary , groundbreaking scientific papers have a boosting effect on previous publications of their authors , bringing them to the attention of the scientific community and establishing their `` authority '' .we have provided the first quantitative characterization of this phenomenon by introducing a new variable , the `` boost factor '' , which is sensitive to sudden changes in the citation rates .the fact that landmark papers trigger the collective discovery of older papers amplifies their impact and tends to generate pronounced spikes long before the paper receives full recognition .the boosting factor can therefore serve to discover new breakthroughs and talents more quickly than classical citation indices . it may also help to assemble good research teams , which have a pivotal role in modern science .the power law behavior observed in the distribution of peak sizes suggests that science progresses through phase transitions with citation avalanches on all scales from small cascades reflecting quasi - continuous scientific progress all the way up to scientific revolutions , which fundamentally change our perception of the world .while this provides new evidence for sudden paradigm shifts , our results also give a better idea of why and how they happen .it is noteworthy that similar feedback effects may determine the social influence of politicians , or prices of stocks and products ( and , thereby , the value of companies ) .in fact , despite the long history of research on these subjects , such phenomena are still not fully understood .there is evidence , however , that the power of a person or the value of a company increase with the level of attention they enjoy .consequently , our study of scientific impact is likely to shed new light on these scientific puzzles as well .the basic goal is to improve the signal - to - noise ratio in the citation rates , in order to detect sudden changes in them .an effective method to reduce the influence of papers with largely fluctuating citation rates is to weight highly cited papers more .this can be achieved by raising the number of cites to the power , where .therefore , our formula to compute looks as follows : here , is the number of cites received by paper in year .the sum over includes all papers published before the year ; is the time window selected to compute the boosting effect . for we recover the original definition of ( see main text ) . for the analysis presented in the paper we have used and , but our conclusions are not very sensitive to the choice of smaller values of and .we acknowledge the use of isi web of science data of thomson reuters for our citation analysis .a.m. , s.l . and d.h .were partially supported by the future and emerging technologies programme fp7-cosi - ict of the european commission through the project qlectives ( grant no . :h . e. and s. f. gratefully acknowledge ictecollective , grant 238597 of the european commission .0.5 cm boyack kw , brner k ( 2003 ) indicator - assisted evaluation and funding of research : visualizing the influence of grants on the number and citation counts of research papers .j am soc inf sci technol 54 : 447461 . | nobel prizes are commonly seen to be among the most prestigious achievements of our times . based on mining several million citations , we quantitatively analyze the processes driving paradigm shifts in science . 
we find that groundbreaking discoveries of nobel prize laureates and other famous scientists are not only acknowledged by many citations of their landmark papers . surprisingly , they also boost the citation rates of their previous publications . given that innovations must outcompete the rich - gets - richer effect for scientific citations , it turns out that they can make their way only through citation cascades . a quantitative analysis reveals how and why they happen . science appears to behave like a self - organized critical system , in which citation cascades of all sizes occur , from continuous scientific progress all the way up to scientific revolutions , which change the way we see our world . measuring the `` boosting effect '' of landmark papers , our analysis reveals how new ideas and new players can make their way and finally triumph in a world dominated by established paradigms . the underlying `` boost factor '' is also useful to discover scientific breakthroughs and talents much earlier than through classical citation analysis , which by now has become a widespread method to measure scientific excellence , influencing scientific careers and the distribution of research funds . our findings reveal patterns of collective social behavior , which are also interesting from an attention economics perspective . understanding the origin of scientific authority may therefore ultimately help to explain , how social influence comes about and why the value of goods depends so strongly on the attention they attract . |
the 20th century is well known for the critical works of kuhn , popper , lakatos and feyerabend that tried to build models of how science should work or to show how it does in fact work . in the same period , owing to the entrance into the era of overwhelming information , it became possible to tackle this problem quantitatively , pointing out specific phenomena observed in science . several studies aim to answer questions like `` how to measure who the best scientist is ? '' or try to simulate the process of paradigm shifts . in this study , we make use of complex networks tools to show how this issue is resolved at the level of scientific institutions ( i.e. , universities ) , to be more specific ( i ) what is the correlation between university rank and the number of papers in a specific discipline and ( ii ) what are the components of the scientific collaborations . in order to estimate the correlations between university rankings and scientific productivity we had to identify two different sources of data : ( i ) the first devoted to university rankings with at least 10 years of activity , ( ii ) the second connected to actual bibliographic information , in particular complying with the following rules : ( 1 ) allowing one to view the categories of publications , ( 2 ) allowing one to view the address of the publication , ( 3 ) allowing one to view the year of publication . the lists of the top hundred universities were downloaded from two services : academic ranking of world universities ( later referred to as arwu ) and qs world university ranking ( later referred to as qs ) . the rationale behind choosing two rankings that follow different rules was to check the robustness of the performed analysis . after preliminary analysis , we have chosen the service web of science as a data source for obtaining the information on citations . for one institution the number of publications ranges from a few to dozens of thousands of publications . as a result each university has two tables containing the following fields : ( i ) published ( date of publication ) , ( ii ) i d ( reference to the second table ) , ( iii ) subject category ( category of publications ) , ( iv ) language . the key information used in this report is the subject category of the published paper ( we will later refer to it simply as _ category _ ) . we start the analysis with the estimation of the correlation reflecting the dependence of the number of papers a university has published on its rank in the list . to be more precise , for each of the 180 categories we build a 100 by 2 matrix , where the first column gives the ranks of all universities in this category and the second one gathers the number of papers published in this category by the given university . as one of the variables is already given in the form of a rank , we decided to use spearman s rank correlation coefficient as the measure of dependence between and . the results are gathered in table [ tab : all ] together with the total number of papers in the given category and the statistical significance of the test . it is of use to examine the relation between the size of the category , measured by the number of papers belonging to it , and the above - mentioned correlation coefficient . those results are shown in fig . [ fig : rho ] , giving evidence that lower correlation ( i.e. , a larger number of papers following a higher rank ) is in general characteristic for categories with a large total number of papers . moreover , the correlations for such categories are statistically significant .
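As a small illustration of this step, the per-category Spearman correlation between university rank and paper count can be computed as below; the input layout (a mapping from category to counts ordered by rank) is an assumption made only for the example:

import numpy as np
from scipy.stats import spearmanr

def category_correlations(papers_by_category):
    # papers_by_category: {category: paper counts ordered by university rank (1 = best)}
    results = {}
    for category, counts in papers_by_category.items():
        counts = np.asarray(counts, dtype=float)
        ranks = np.arange(1, counts.size + 1)
        rho, p_value = spearmanr(ranks, counts)
        results[category] = (rho, p_value)
    return results

toy = {"physics": [320, 300, 280, 150, 90], "economics": [40, 80, 35, 20, 15]}
print(category_correlations(toy))    # negative rho: better-ranked universities publish more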
here , we would like to check the hypothesis of a categorical separation of science . it is our belief that certain categories tend to `` glue together '' the scientists working in them . in other words , the possibility of interdisciplinary research is not as high as one would expect it to be . in order to test this assumption we performed a principal component analysis ( pca ) for the 10 most prominent categories ( in the sense of the total number of papers ) . as can be seen in fig . [ fig : pca]a , the first three principal components explain 90% of the variability in the data , so the analysis can be restricted just to them . further , by plotting the 2nd component vs the 1st ( fig . [ fig : pca]b ) and the 3rd component vs. the 2nd ( fig . [ fig : pca]c ) we can identify the main directions of the dataset . in fig . [ fig : pca]b we have biochemistry , biology , neuroscience , medicine and psychology in the positive part of the x - axis while chemistry , physics , materials science , engineering and computer science are in the negative part of this axis . it can thus mean that the 1st component divides the categories into technical sciences ( negative values ) and medicine - related ones ( positive values ) . the 2nd component is much harder to identify ; a rough estimate could link the positive axis with _ fundamental sciences _ , as we have physics , chemistry and biology there . finally , there is a clear interpretation of the 3rd component : the only significant positive value is connected to physics . apart from the categorical point of view , we can also consider university quality by analyzing the direct connections between universities on the basis of the collaboration matrix , where the element gives the number of common publications of institutions and . the principal concept of the network analysis is depicted in fig . [ fig : sch1 ] . using the 100 highest ranked universities , for each of them ( ) we search for its publications . then , if among the co - authors of there is any that comes from either of the universities , a link of weight between those universities ( e.g. , and ) is established . the weight is increased by one each time is found among the following publications of . finally , the weight of the link between nodes and is just the number of their common publications ( as seen in the database ) .
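A compact sketch of this construction is given below; the per-paper affiliation sets are an assumed input format, not the actual Web of Science record layout:

from collections import Counter
from itertools import combinations

def collaboration_weights(publications, universities):
    # publications: iterable of sets of university identifiers appearing on each paper;
    # the weight of a pair (i, j) is the number of papers shared by universities i and j
    weights = Counter()
    for affiliations in publications:
        present = sorted(affiliations & universities)
        for u, v in combinations(present, 2):
            weights[(u, v)] += 1
    return weights

pubs = [{"a", "b"}, {"a", "b", "c"}, {"b", "c"}]
print(collaboration_weights(pubs, {"a", "b", "c"}))   # (a, b): 2, (b, c): 2, (a, c): 1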
[ [ weights - probability - distribution ] ] weights probability distribution + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the first , fundamental quantity to be computed is the probability distribution of weights , giving an idea about the diversity of the number of common publications between universities . figure [ fig : pw ] presents for the raw data ( black circles ) as well as for the logarithmically binned ones ( with the base , red - filled circles ) . the plot suggests that the majority of weights can be found for w between 1 and 10 , where a plateau can be clearly seen . however , there is still a clear pattern for the remaining part , even for weights as large as , that could presumably be fitted by a power - law function . nevertheless , it is possible to fit a full - range log - normal function ( red curve ) with the parameters and ( values obtained by maximum - likelihood fitting ) . however , the kolmogorov - smirnov goodness - of - fit test accepts the hypothesis that the data points come from the distribution described by ( [ eq : pw ] ) only for a relatively low level of significance ( ) . the result is similar to that obtained in . performing this search for consecutive universities from the ranking , we obtain a fully connected network of all 100 universities with links denoting the number of common publications . [ [ dependence - of - edge - width - on - node - strength ] ] dependence of edge width on node strength + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + an interesting point of the further analysis is to test if the strength of the university , measured as the total number of its publications with other universities from the ranking , influences the affinity of one university to link to another one . more precisely , we shall test what is the dependence of the weight between universities and on the product of their strengths . a log - log scatter - plot of this relation for all pairs of universities is shown in fig . [ fig : wsasb]a with black circles . it brings clear evidence that the larger the product of the universities strengths , the higher the number of common publications between them . by performing a logarithmic binning ( red - filled circles ) it is possible to analyze the specific form of the relation . the outcome is presented in fig . [ fig : wsasb]b , where two fits are shown : a linear one ( blue line , slope and negligible intercept ) and a power - law one ( red dotted line , exponent ) . the linear fitting has the value of 0.99 while the power - law one 0.94 . taking into account those values as well as the close - to-1 exponent of the power - law fitting , it is reasonable to assume that the average weight between universities characterized by strengths ( number of publications ) and is given by the relation . equation ( [ eq : wab ] ) can serve as a kind of predictor for estimating a possible level of cooperation between two universities . also , observed deviations from this law could indicate either the presence of outliers in a given dataset or invalid data , thus eq .
( [ eq : wab ] ) might be useful as a first - step verification procedure of the examined data . [ [ weight - threshold ] ] weight threshold + + + + + + + + + + + + + + + + the following analysis will use the concept of the weight threshold depicted in fig . [ fig : sch2 ] . let us take the original network of 5 fully connected universities from fig . [ fig : sch2]a . let us assume now that we are interested in constructing an unweighted network that would take into account only the connections with weight higher than a certain threshold weight ( ) . a possible outcome of this procedure is presented in fig . [ fig : sch2]b - all the links with are omitted and as a result we obtain a network where links indicate only connections between nodes ( i.e. , they do not bear any value ) . using the weight threshold as a parameter it is possible to obtain several unweighted networks - for each value of in the range we get a different network whose structure is determined only by . then , for each of these networks it is possible to compute standard network quantities : ( i ) the number of nodes that have at least one link ( i.e. , nodes with degree are not taken into account ) , ( ii ) the number of edges ( links ) between the nodes , ( iii ) the clustering coefficient , ( iv ) the assortativity coefficient , ( v ) the entropy of the node degree probability distribution and ( vi ) the average shortest path ( see materials and methods for details ) . [ [ network - observables - as - function - of - weight - threshold ] ] network observables as function of weight threshold + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + figure [ fig : allw ] gathers the plots of the above described network parameters as a function of . first , as can be seen in fig . [ fig : allw]a , the number of nodes is a linearly decreasing function of the weight threshold . the number of edges decreases even faster - for it follows an exponential function ( fig . [ fig : allw]b ) . similarly , the clustering coefficient also drops down linearly with the weight threshold ( fig . [ fig : allw]c ) , although several small jumps over the trend can be seen . the most interesting is the behaviour of shown in figure 5d : the coefficient starts with , while for larger thresholds it crosses and for in range ] , means that the highly connected nodes have the affinity to connect to other nodes with high , while happens when highly connected nodes tend to link to nodes with very low . + _ entropy _ of the node degree probability distribution . it is calculated by first obtaining the degree probability distribution ( i.e. , the probability that a randomly chosen node has exactly edges ) and then evaluating the expression : where and are , respectively , the smallest and the largest degree in the network . for the sake of comparison we divide the obtained value of the entropy by its maximal value , i.e. , . _ average shortest path _ . it is calculated as the average value of the shortest distance ( measured in the number of steps ) between all pairs of nodes in the network . we acknowledge support by the fp7 fet open project dynamically changing complex networks - dynanets , eu grant agreement number 233847 . this work has been supported by the european union in the framework of the european social fund through the warsaw university of technology development programme , realized by the center for advanced studies . kuhn t. s. ( 1996 ) . the structure of scientific revolutions . _ university of chicago press , 3rd edition_. popper k. ( 2002 ) .
the logic of scientific discovery . _ routledge , 2nd edition_. lakatos i. ( 1980 ) . the methodology of scientific research programmes . _ cambridge university press_. feyerabend p. ( 2010 ) . against method . _ verso , 4th edition_. merton r. k. ( 1968 ) . the matthew effect in science . _ science 159 _ , 56 - 63 . king d. a. ( 2004 ) . the scientific impact of nations . _ nature 430 _ , 311 . hirsch j. e. ( 2005 ) . an index to quantify an individual s scientific research output . _ sci usa 102 _ . radicchi f. , fortunato s. , castellano c. ( 2008 ) . universality of citation distributions : toward an objective measure of scientific impact . _ usa 105 _ , 17268 . radicchi f. , fortunato s. , markines b. , vespignani a. ( 2009 ) . _ physical review e 80 _ , 056103 . petersen a. m. , wang f. , stanley h. e. ( 2010 ) . methods for measuring the citations and productivity of scientists across time and discipline . _ physical review e 81 _ , 036114 . radicchi f. , castellano c. ( 2011 ) . rescaling citations of publications in physics . _ physical review e 83 _ , 046116 . mazloumian a. , eom y .- h . , helbing d. , lozano s. , fortunato s. ( 2011 ) . how citation boosts promote scientific paradigm shifts and nobel prizes . _ plos one 6 _ , e18975 . bornholdt s. , jensen m. h. , sneppen k. ( 2011 ) . emergence and decline of scientific paradigms . _ physical review letters 106 _ , 058701 . kondratiuk p. , siudem g. , hołyst j. a. ( 2012 ) . analytical approach to model of scientific revolutions . _ physical review e 85 _ , 066126 . fronczak p. , fronczak a. , hołyst j. a. ( 2007 ) . analysis of scientific productivity using maximum entropy principle and fluctuation - dissipation theorem . _ physical review e 75 _ , 026103 . barabási a .- l . , albert r. ( 2002 ) . statistical mechanics of complex networks . _ reviews of modern physics 74 _ , 47 . chmiel a. , sienkiewicz j. , suchecki k. , hołyst j. a. ( 2007 ) . networks of companies and branches in poland . _ physica a 383 _ , 134 . hennemann s. , rybski d. , liefner i. ( 2012 ) . the myth of global science collaboration - collaboration patterns in epistemic communities . _ j. informetr 6 _ , 217 - 225 . pan r. k. , kaski k. , fortunato s. ( 2012 ) . world citation and collaboration networks : uncovering the role of geography in science . _ sci . rep . 2 _ , 902 . | we perform the analysis of scientific collaboration at the level of universities . the scope of this study is to answer two fundamental questions : ( i ) can one indicate a category ( i.e. , a scientific discipline ) that has the greatest impact on the rank of the university and ( ii ) do the best universities collaborate with the best ones only ? using two university ranking lists ( arwu and qs ) as well as data from the science citation index we show how the number of publications in certain categories correlates with the university rank . moreover , using complex network analysis , we give hints that scientific collaboration is highly embedded in physical space and the number of common papers decays with the distance between them . we also show that the strength of the ties between universities is proportional to the product of their total numbers of publications . |
the spreading of epidemics is a threat to world public health . it has also been attracting large scientific attention due to the potential harm to society . in particular , the infection by the human immunodeficiency virus ( hiv ) , which causes aids , has been the subject of intense studies since the early 1980s , when the virus started spreading quickly throughout the globe and became a worldwide concern . estimates reveal that by the end of 2010 more than 30 million people were living with the infection worldwide . as a comparison , about 2.7 million new infections were identified on the globe just in 2010 . the study of the epidemiology of hiv infection using accurate surveillance data has been providing essential information in guiding rational control and intervention programs , as well as in monitoring trends of the epidemic in many countries . for example , it has been shown that new hiv infections among adults depend on their geographic region in the globe and on the distributions of social contact groups . brazil , the world s fifth most populous country with almost 200 million inhabitants , was the first developing country to implement large - scale control and intervention programs . the brazilian response to aids has included effective prevention strategies with strong participation of the civil society , multi - sectoral mobilization , and intense use of antiretroviral therapies . in a global scenario , these facts have attracted significant attention from the scientific community to the dynamics of the hiv / aids epidemic in brazil . in the present work , we investigate the evolution and current status of the aids epidemic in brazil by analyzing its annual absolute frequency per city . our analysis points in the direction of an alternative approach to the usual methods of epidemics analysis , emphasizing how the dynamics of the virus at the level of municipalities behaves collectively . the data were obtained from datasus , a database of the national public health system freely available online and maintained by the department of informatics of the brazilian health system . in this section we report the results that we obtained by analyzing the empirical data , which cover the first 33 years of the aids infection in brazil . specifically , we investigate the annual absolute frequency , defined as the number of new aids cases diagnosed in the year at a given city . it is worth noting that the data registry system of datasus hosts only mandatory cases of aids reported by health professionals to health services . therefore , until 2012 only the number of aids cases , and not the number of hiv - positive patients , was available in datasus . altogether , we analyze data ranging from 1980 up to 2012 of 5138 brazilian cities with at least one aids case in the aforementioned period . up to the end of 2012 , the infection had already reached more than 5000 cities ( of the brazilian cities ) . we start with an overview of the spatial spreading of the virus . next , we focus on the temporal evolution of the infection in brazilian cities . the first aids case in brazil was identified in the year 1980 in são paulo , the most populous brazilian city ( southeastern region ) . in the next few years , new satellite cases came out and the infection started spreading to a larger area .
in the meantime , the infection spread to other cities and the whole country was already taken by the end of the 1990s . the current status of the aids infection in brazil is depicted in figure [ growth ] , where the size of the circles is proportional to the logarithm of and the histograms along the axis detail the annual absolute frequencies as a function of the geographic position of the cities . a chronological evolution of the aids cases per year among brazilian cities is available as a supplementary figure ( figure [ chronology ] ) . as clearly illustrated in figure [ growth ] , the infection is concentrated in some key cities located mainly next to the atlantic littoral , where most of the brazilian population is concentrated . . the histograms along the axis represent the dependence of the distribution of the aids cases on the geographic position ( latitude and longitude ) of the cities . besides reflecting to a great extent the population distribution , this figure also provides general information concerning the spatial spreading of the epidemics over the country . ] first we considered , the total annual absolute frequency at each year , obtained by summing over the total of brazilian cities , . figure [ dynamics]a shows the temporal evolution of and compares it with the curve \[ f(t) = \frac{K}{\left[ 1 + \nu \, e^{-\alpha \left( t - t_{0} \right)} \right]^{\frac{1}{\nu}}} , \label{logistic} \] for different non - negative values of . , emphasizing their discrete nature . ] this curve is obtained as the solution of the richards differential equation and represents a generalized logistic growth . classically , logistic - like equations have a vast application in systems related to the dynamics of populations where there is a strong competition of increasing and establishing factors , exactly as in the case of diseases growing and spreading . a dimensional analysis reveals that has the dimension of , goes with , while and are dimensionless . within the period 1980 - 2012 for 5138 brazilian cities . the dashed blue line is equation ( 1 ) with the parameters , , and . the continuous red line is equation ( 1 ) with the parameters , , and . ( b ) growth of the corresponding accumulated number of cases shown in log - lin scale . the inset represents the first points of the curve of , corresponding to the period 1980 - 1985 . the solid line is a fit in this region , given by with . ( c ) decay of the actual reproduction number for the whole period shown at log - log scale . the solid line is a linear fit ( in log - log scale ) giving a power law exponent . the green points correspond to the numerical solution of the continuous equation with for the gompertz model , connecting the fit of equation ( 1 ) to the patterns of the data . ] the particular case corresponds to the usual logistic curve . for this case , a least - squares regression to the data ( see the dashed curve on figure [ dynamics]a ) leads to , and ( ) , where the uncertainties correspond to 99% confidence intervals . this type of curve has also been identified in si ( susceptible - infected ) simulations of the initial evolution of epidemic outbreaks in growing scale - free networks with local structures . similar curves have also been identified in networks whose nodes are not only the media to spread the virus , but also to disseminate their opinions about it .
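To make the fitting procedure concrete, the sketch below fits the Gompertz limit of the curve above to a toy yearly series with scipy; the functional forms are reconstructed from the text (nu = 1 gives the usual logistic curve, nu -> 0 the Gompertz curve), and the parameter names and toy numbers are illustrative assumptions rather than the values reported here:

import numpy as np
from scipy.optimize import curve_fit

def richards(t, k, alpha, t0, nu):
    # generalized logistic (Richards) growth; nu = 1 recovers the usual logistic curve
    return k * (1.0 + nu * np.exp(-alpha * (t - t0))) ** (-1.0 / nu)

def gompertz(t, k, alpha, t0):
    # nu -> 0 limit of the curve above
    return k * np.exp(-np.exp(-alpha * (t - t0)))

years = np.arange(1980.0, 2013.0)
rng = np.random.default_rng(1)
cases = gompertz(years, 38000.0, 0.2, 1996.0) * (1.0 + 0.05 * rng.standard_normal(years.size))

params, _ = curve_fit(gompertz, years, cases, p0=(30000.0, 0.1, 1995.0))
print(params)   # recovers values close to those used to generate the toy series

The Richards form can be fitted in the same way (for instance with curve_fit and bounds keeping nu positive), which allows a direct comparison of the two models on the same data.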
in the aids spreading case , the establishment of an almost constant growth per year after 1999 ( figure [ dynamics]a ) could be associated with factors such as the impact of using arv therapies as well as the extensive broadcast prevention campaigns realized in brazil . it is also evident that the limit leads to the gompertz model , which was originally proposed to model the mortality of an aging population . almost a century later , this approach still has applications in biological growth curves , in survival analysis and in regeneration . a least - squares regression of the gompertz curve with to the data ( see the continuous curve on figure [ dynamics]a ) leads to and ( ) , where the uncertainties correspond to 99% confidence intervals . it is noticeable that the gompertz model is better than the generalized logistic model ( for different values of ) when describing the beginning of the curve , but looking at the residuals of the fits , both models are in good agreement with the data . figure [ dynamics]b shows , the corresponding accumulated annual absolute frequency defined mathematically as . in a few words , represents the total number of aids cases diagnosed in brazil until the year . for small times , exhibits an exponential growth , ( inset in figure [ dynamics]b ) , where has the unit . the limit can be recovered ( analytically ) from equation ( 1 ) when is very small . a numerical comparison with the data reveals a relative error of 0.8 ( for the logistic model ) or of 0.5 ( for the gompertz model ) for small times . in this background , can be considered an estimate of the intrinsic growth rate describing the initial growth of the epidemic , which underlies the entire spreading of the infection in the surrounding regions . naturally , the intrinsic growth rate can be estimated from this approach since in the beginning of the epidemic practically corresponds to the prevalence of aids infection cases . a linear regression to the data leads to ( 99% confidence interval ; ) for the aids epidemic among brazilian cities . similar analyses for the aids epidemic in france , in the uk , and in western germany have led to , , and respectively . as for the hiv / aids epidemic among the homosexual population in england and wales ( 1981 - 1996 ) , it was found . in contrast to the intrinsic growth rate , reproduction numbers describe a more local or individual quantity . the actual reproduction number is defined as the average number of secondary cases per case to which the infection was actually transmitted during the infectious period in a population . an estimate of can be obtained assuming that , where is the average length of the hiv infectious period ( years ) . since the ratio ( ) can be associated with a transmission index , reproduction numbers carry important indicators like the threshold for pandemics . figure [ dynamics]c shows the temporal evolution of in brazil in comparison with a power law decay of the form , with ( 99% confidence interval ; ) . indeed , equation ( 2 ) can be obtained from equation ( 1 ) under the assumption that accumulating a discrete time series is a good approximation of an integration over a continuous variable . in particular , the numerical solution to the continuous approach with given by equation ( 1 ) also leads to consistent results within the confidence intervals ( see the green points in figure [ dynamics]c ) .
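A minimal sketch of the early-regime estimate of the intrinsic growth rate, under toy numbers that are not the DATASUS counts, is a linear fit of the logarithm of the cumulative cases against time:

import numpy as np

def intrinsic_growth_rate(years, cumulative_cases):
    # slope of log N(t) versus t over the early, exponential part of the epidemic
    slope, _intercept = np.polyfit(years, np.log(cumulative_cases), 1)
    return slope

early_years = np.array([1980, 1981, 1982, 1983, 1984, 1985], dtype=float)
cumulative = np.array([1, 4, 14, 55, 200, 750], dtype=float)   # illustrative counts only
print(intrinsic_growth_rate(early_years, cumulative))          # about 1.3 per year for this toy series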
using incidence data of the hiv / aids epidemic only among the homosexual population in england and wales ( 1981 - 1996 ) , a power law pattern could not be identified . clearly , those data are a small fraction of the population and have severe limitations in time and space , which does not happen with the brazilian data , which have continental proportions , including more than 5000 urban centers and more than 30 years of data . as usually happens in epidemic spreading , the aids infection had its focus in a specific region and then spread to a larger area ( see figure [ chronology ] ) . during the spreading process , individuals in many different urban centers got infected and the radius of the infection grew fast . a natural measure characterizing such a process is a function indicating the number of cities with the same number of aids hosts . we investigated that through the probability density function ( pdf ) of the annual absolute frequency of aids among brazilian cities for fixed years . figure [ powerlaw]a shows of the set of all the in the year 2012 in comparison with the power law decay . this result indicates that the distribution of the set of over all the brazilian cities has a robust long - tailed behavior . similar power laws can also be identified for all the with fixed . in particular , the cramér - von mises test assures that , for , the null hypothesis that the yearly data are distributed according to power law curves cannot be rejected at a confidence level of 99% ( the p - values lie in the interval ) . 600 million ( around 1/3 of the amount spent in the caribbean and all the three americas ) in hiv domestic and international spending . as a consequence , these efforts should have had an impact on the epidemic of aids in brazil . to quantify such impact , disregarding other possible contributions , we split all the brazilian cities with at least one aids case since 1989 into 4 groups according to their average absolute frequency per year , denoted by and defined as an arithmetic mean of : group i accommodates 4029 cities with . group ii encompasses 565 cities with . group iii accommodates 259 cities with , and , finally , group iv covers 48 cities with . it is evident from figure [ allometry ] that has a well - defined allometry with the population of the cities . for this reason , separating groups of cities according to is also equivalent to separating them by population size , apart from random fluctuations .
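The allometric relation mentioned above amounts to a least-squares fit in log-log coordinates; the following sketch, on synthetic city data, shows the estimation of the scaling exponent (all numbers here are illustrative assumptions):

import numpy as np

def allometric_fit(population, cases):
    # fit log(cases) = log(a) + beta * log(population) across cities
    mask = (population > 0) & (cases > 0)
    beta, log_a = np.polyfit(np.log(population[mask]), np.log(cases[mask]), 1)
    return beta, np.exp(log_a)

rng = np.random.default_rng(2)
population = rng.lognormal(mean=10.0, sigma=1.5, size=5000)
cases = 1e-4 * population ** 1.2 * rng.lognormal(0.0, 0.4, size=5000)   # super-linear toy data
print(allometric_fit(population, cases))   # recovers an exponent close to 1.2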
in figure [ momentos ] we show the average values and variance for each one of these groups . we can see that for small cities ( groups i and ii ) , the average number of cases and the respective fluctuations are still growing , while large cities ( groups iii and iv ) exhibit a different behavior : the number of cases is already decreasing , as well as the fluctuations . this could reflect that the intensive broadcast programs of the brazilian government against the infection have been giving results mainly in the largest urban centers , where most of the cases are concentrated . intense treatment therapies started in brazil at the beginning of the 1990s and had a peak in 1996 with the introduction of haart ( highly active antiretroviral therapy ) , which substantially reduces the infectiousness of people living with aids . more than 1.2 million life - years are estimated to have been gained in brazil between 1996 and 2009 . their impact on the frequency of new infection cases has been evident in cities of groups iii and iv since the end of the 1990s . many therapeutic approaches have been under constant investigation in order to develop vaccines against hiv or at least control some effects of the infection . however , ongoing research is still needed to identify a solution to this devastating worldwide epidemic . in brazilian cities in the period 1989 - 2012 . * the total of 5138 cities is split into 4 groups : ( group i ) , ( group ii ) , ( group iii ) , and ( group iv ) . ]
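The grouping and the per-group moments shown in figure [ momentos ] can be sketched as follows; the cut points and the toy city-by-year matrix are placeholders rather than the group boundaries used above:

import numpy as np

def group_moments(counts, cut_points):
    # counts: (cities x years) matrix of annual absolute frequencies;
    # cities are grouped by their average number of cases per year
    avg = counts.mean(axis=1)
    group = np.digitize(avg, cut_points)
    moments = {}
    for g in np.unique(group):
        members = counts[group == g]
        moments[g] = (members.mean(axis=0), members.var(axis=0))   # per-year mean and variance
    return moments

rng = np.random.default_rng(3)
counts = rng.poisson(lam=2.0, size=(500, 24)).astype(float)    # toy matrix covering 1989-2012
moments = group_moments(counts, np.array([1.0, 3.0, 10.0]))    # illustrative cut points
print(sorted(moments.keys()))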
for small time scales ( typically smaller than the characteristic time for infection - related deaths ) , the prevalence of an infection can be associated with the total number of infected individuals . using this assumption , we obtained figure [ dynamics]c , which shows that the growth in the absolute frequency of cases is decreasing with time . this result is consistent with the literature and indicates that the infection is growing more slowly with time . it should be noted that the fluctuations after _ ca . _ 1992 occur because that is the region where the small - time - scale assumption is already broken ( _ ca . _ 10 years for aids ) . we also showed that the distribution of the aids epidemic at small spatial scales ( over cities ) follows a power law behavior for the whole period , with a decreasing exponent typically around 1.9 and systematically below the inverse - square law widespread in nature . moreover , the infection in a city scales with the population of that region . we showed through figure [ allometry ] that this scaling rule follows a super - linear allometric pattern . in contrast , mortality rates from influenza and pneumonia in us cities surrounding the 1918 pandemic scale linearly with the population size . in a general scenario , new aids cases in brazil are approaching a plateau ( figure [ dynamics]a ) . assuming that this behavior is mainly due to the results of the programs and strategies against the aids infection , we focused on the level of cities , elaborating figure [ momentos ] . by using it , we showed that the first results of the efforts against the infection occurred mainly in urban centers with an average of more than 10 aids cases per year ( groups iii and iv ) . that is , the infection is not expected to grow uncontrollably in such cities ; in particular , it is expected to lose its strength in the largest urban centers ( average of more than 84 aids cases per year ) . naturally , this result is reinforced by the power law decay of the actual reproduction number ( figure [ dynamics]c ) , which indicates that the number of new cases per case is decreasing , an effect noticed mainly in the larger cities because the fluctuations of cases appear to be smaller there . on the other hand , the average number of new infections per year ( as well as its fluctuations ) in cities of groups i and ii is still growing . the data do not span enough time to identify whether this fact is just a delay or whether such cities have not yet sufficiently experienced the brazilian control strategies against aids . in any case , they merit further attention from the brazilian health authorities . in summary , we proposed in this work an alternative approach to the usual methods of epidemic analysis , emphasizing how the dynamics of the virus at the level of municipalities behaves collectively .
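the super - linear allometric relationship reported above ( figure [ allometry ] ) is typically quantified by an exponent fitted in log - log space . the sketch below shows one way to do that with ordinary least squares ; the populations , case counts , and exponent value are synthetic and purely illustrative .

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic populations and case counts obeying y ~ a * n^beta with noise; a real
# analysis would use census populations and the reported per-city case counts.
population = rng.lognormal(mean=10.0, sigma=1.5, size=5000)
beta_true = 1.2                                   # illustrative super-linear exponent
cases = 1e-4 * population ** beta_true * rng.lognormal(0.0, 0.3, size=5000)

# Ordinary least squares in log-log space: log y = log a + beta * log n.
beta_hat, log_a = np.polyfit(np.log(population), np.log(cases), deg=1)
print(f"estimated allometric exponent beta = {beta_hat:.2f} (super-linear if > 1)")
```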
the authors thank the brazilian agencies cnpq , capes , and the national institute of science and technology for complex systems for financial support . f.j.a . is especially grateful to capes / fundação araucária for the grant number 87/2014 .
| brazil holds approximately 1/3 of the population living with aids ( acquired immunodeficiency syndrome ) in central and south america , and it was also the first developing country to implement a large - scale control and intervention program against the aids epidemic . in this scenario , we investigate the temporal evolution and current status of the aids epidemic in brazil . specifically , we analyze records of the annual absolute frequency of cases for more than 5000 cities over the first 33 years of the infection in brazil . we found that _ ( i ) _ the annual absolute frequencies exhibit a logistic - type growth with an exponential regime in the first few years of the aids spreading ; _ ( ii ) _ the actual reproduction number decays as a power law ; _ ( iii ) _ the distribution of the annual absolute frequencies among cities decays with a power law behavior ; _ ( iv ) _ the annual absolute frequencies and the number of inhabitants have an allometric relationship ; _ ( v ) _ the temporal evolution of the annual absolute frequencies has a different profile depending on the average annual absolute frequency in the cities . these findings yield a general quantitative description of the aids infection dynamics in brazil since the beginning . they also provide clues about the effectiveness of treatment and control programs against the infection , which have had a different impact depending on the number of inhabitants of the cities . in this framework , our results give insights into the overall dynamics of the aids epidemic , which may contribute to selecting empirically accurate models .
the collective self - examination made possible in astronomy by the decadal review process is a prime opportunity to engineer the incentives and institutions that shape our profession .we need not simply accept as inevitable the institutional framework that we ve inherited . the standard trajectory of the american academic career has been essentially fixed since the mid-20th century , when postdoc appointments started becoming common [ 1 ] .however , the practice of astronomy has changed since the 1950 s : we now deal with increasingly enormous telescopes , collaborations , and data sets .the lack of similar evolution in graduate training has resulted in ph.d .recipients who are no longer optimally trained for the skills and new positions required by modern astronomical research . in 2 ,we discuss these issues in detail and identify inefficiencies in the current structure for funding and training new professionals .the practical structure of academic astronomy has also changed significantly . while the ph.d .overproduction rate compared to faculty spots has remained approximately steady over the past two decades , there were many fewer postdoc positions in the past [ 2 ] . in recent years , however , increased federal funding has led to a boom in graduate student and postdoc positions without a concomitant expansion in the number of permanent faculty positions [ 3 ] . in 3 , we explore the implications of these realities in our field .additionally , the past half - century has witnessed a dramatic change in the workforce . the opportunity to gain the contributions of many excellent astronomersis currently missed . in 4 , we consider only one example of this phenomenon : how the resulting rise in the demand for `` child - friendly '' careers has been borne by our field .progress to date has been minimal , and the evidence is that this situation systematically selects against women .there is thus a good case to be made that the institutional framework of academic astronomy is suboptimal and disserves both the practitioners of astronomy and the public that ultimately funds it .fortunately , this is an issue that we , the astronomical community , can solve .we argue that the subcommittee on the state of the profession ( ssp ) should direct its attention toward improving the structure of our instutional framework .we outline below the evidence for a few outstanding problems , describe their costs to the community , and provide some suggestions that we hope will prove useful to those charged with charting the course of astronomy over the next decade .the training of professional astronomers is integral to the discussion of what science will be done in the coming ten years . in this section , we discuss the current state of astrophysics training and propose that adaptations to this process will be necessary in order to best use financial resources and personnel to produce the best science .significant financial resources are currently invested in the training of the next generation of astronomers . a typical astronomy ph.d .candidate may earn 20k spent by the pi or department to pay tuition and university fees .assuming that the average student spends 5 years in graduate school and requires an additional 43 m spent annually to produce new professionals _ ( see also the seth _ et al . 
_ position paper on `` employment & funding in astronomy '' ) .this is almost quadruple the annual operations budget of the keck observatory [ 6 ] .graduate students are typically funded through research grants and are typically expected to devote the vast majority of their time to pure research .this policy is even expressly stated in some graduate student guides ( _ e.g. _ [ 7 ] ) and is pervasive in the professional culture .few programs offer any incentives to broaden coursework beyond astronomy and physics to include computer science , engineering , public policy , business , or education . while many astronomy graduate programs mandate that students spend one or more semesters as teaching assistants , training in teaching skills is generally minimal , although there are laudable exceptions [ 8 , 9 ] .development of the skills needed for teaching at the non - university level and public outreach is typically absent from the curriculum .there is only room in the field for % of ph.d .recipients to become faculty at colleges and universities [ 3 ] .in fact , self - reporting of the careers of 651 astronomy ph.d . recipients from 1980 - 2000 at eight universitiesreveals that 34% currently hold tenure - track faculty positions at research universities . the remaining two - thirds of ph.d .astronomers are employed at teaching colleges as tenure - track faculty ( 10% ) , at k-12 schools as educators or elsewhere as education researchers ( 2% ) , at observatories and national laboratories as permanent support / research staff ( 38% ) , and within business and industry ( 17% ) .that is , almost one third of ph.d .recipients are not primarily employed in research .an informal survey of uc berkeley faculty indicates that they spend approximately 25% of their work time on research ( excluding student and postdoc interaction ) , 25% on teaching , 19% on administrative duties ( including committee participation and large - scale project management ) , 14% on advising students and postdocs , 12% on securing funding ( for personal research as well as observatories / organizations / departments as a whole ) , and 5% on public outreach . despite the small sample sizewe feel that it is fair to say that * even `` research university '' faculty spend the majority of their time on activities other than research*. overall , we find that * training is a significant expenditure in the field of astronomy * and that * a majority of astronomy ph.d .recipients spend a significant fraction of their time on activities other than research*. although the critical problem - solving skills obtained via research training are unarguably used by all astronomy ph.d.s , regardless of career , there are other , equally valuable skills that these ph.d.s will need that are not currently included in graduate training .this mismatch between the training of young professionals and the skills required in their future employment will only continue to grow in the coming decade .the increasing size and scope of projects in astronomy ( _ e.g. 
_ keck , vla , alma , tmt , supercomputing and data management facilities ) are creating increasingly large collaborations in which diverse skills are needed [ 10 ] .obviously , the success of these projects depends critically on the of ph.d.s who become faculty and pis .such success is , however , dictated by much more than the pi s ability to do excellent science .lead scientists within collaborations must also excel at managing funding and personnel , communicating science needs and results to the public ( including the general public , government agencies , private investors , and industrial partners ) , and instructing and mentoring junior members of the group .many of the requisite skills for these tasks are , in general , completely ignored during graduate and postdoctoral training and developed only later through trial and error .the lack of these skills in pis can result at best in inefficiencies and at worst in misuse of funding and failure of strong scientific programs . about two - fifths of ph.d .astronomers find employment in permanent support or research staff positions , which require skills in areas such as data management and the construction , operation , and maintenance of hardware and software .this proportion will likely increase with the increasing scope of planned facilities . despite this fact , ph.d .programs generally do not have formal structures that encourage candidates to develop sophisticated programming or engineering skills . even more critically , the funding of the field and influx of talented individuals into it rely on education and outreach at all levels .this is the explicit career of 11% of ph.d .astronomers ( k-12 educators , education researchers , and professors at teaching colleges ) , but it is also an important role of university faculty ( % of whose time is devoted to teaching and public outreach ) and , to varying extents , of support astronomers .the importance of this part of astronomy can not be understated : in the notable case of the hubble space telescope ( hst ) , public enthusiasm for the field directly led to the continuation of nasa support and congressional funding that would have otherwise been cut [ 11 ] .nevertheless , education and outreach typically have minimal roles in astronomy training ( 2.1 ) . 
finally , of ph.d .astronomers leave the field entirely .while , as argued by the seth _ et al ._ position paper on `` employment & funding in astronomy '' , there are not enough permanent positions in the field for all astronomy ph.d.s , there is no reason to believe that those who leave are uniformly less excellent astronomers than those who stay .* the training that ph.d.s receive creates expectations about the profession and signals what it values .the mismatch between that training and actual employment opportunities may drive talented young scientists to leave the profession .* the above breakdown of job outcomes is a statistical reality .graduate mentors need to both support and provide training for a number of possible employment opportunities .we suggest that the ssp investigate ways in which funding structures and associated directives to universities can be altered to support this realignment .in particular , * we suggest the following * : 0.5 in * ( a ) * that the definition of a career in astronomy be broadened to include the true assortment of potential careers in astronomy that ph.d.s eventually have ; that this be assessed via rigorous tracking of the employment statistics of all ph.d .recipients from graduate school through postdoc positions and culminating only when permanent positions are attained ; and that updated statistics be regularly disseminated within the academic community .this is also one of the main points of the seth _ et al . _ position paper on `` employment & funding in astronomy '' . *( b ) * that the training given at universities granting astronomy ph.d.s reflect this paradigm shift , such that graduate students are trained for the jobs they will eventually hold . * ( c ) * that communication and leadership skills be emphasized in a meaningful and substantial way in ph.d .programs , helping the next generation to garner support from non - scientists , lead successful scientific projects and collaborations , and attract , educate , and mentor future generations of scientists. 0.25 in creative thinking will obviously be necessary to effect such a change in the astronomy education system at the graduate level .here we list some means through which this might be achieved , which we intend not to be exhaustive but rather to initiate discussion : new ph.d .programs could be funded to provide the breadth of knowledge and specialization now required by many careers in astronomy , _e.g. _ , joint programs between astronomy and computer science , engineering , business , public policy , or education ( dual ph.d . , ph.d./masters , ph.d . with a `` minor '' ); federally - funded astronomy ph.d .students could be required to spend a semester away from research in , _e.g. _ , government agencies , student teaching positions , or internships within industry ; quantifiable mentoring , teaching , and outreach requirements could be attached to federal grants ( expanding on the latest nsf proposal and award policies and procedures guide ) such that young professionals are required to devote part of their time to improving their leadership and communication skills .most likely , a combination of many changes and initiatives will be needed to ensure that the training of young astronomers is best - suited to the positions that will need to be filled in the next ten years and beyond .the structure of the traditional academic career has not kept up with the realities of the modern university .the postdoctoral position , once a short stopping point between ph.d . 
andthe tenure track , has evolved into a substantial phase of the academic career , with recipients holding 23 postdoc positions ( _ i.e. _ , 49 years ) until a permanent job is obtained [ 3 ] . along with the increasing duration of this phase, it is also becoming increasingly and unnecessarily demanding and demoralizing .three fundamental factors are responsible for the transformation of the character of the postdoc phase : ( 1 ) the boom in ph.d.s granted , ( 2 ) the lack of a similar expansion in permanent academic positions , and , importantly , ( 3 ) the failure of most ph.d .training programs to adapt to this discrepancy , as described in 2 .the interaction of these factors results in a situation where there are many people competing for few spots . while intense competition for prestigious jobs is natural , the incentives of the field encourage maximization of the amount of work extracted from trained astronomers , _i.e. _ , the attrition of ph.d.s out of the running for those jobs as late as possible . this situation does not select for higher - quality faculty it merely places unnecessary burdens on those who do not end up attaining faculty jobs .the explicit recommendation of the previous decadal review committee to increase federal funding for postdoctoral fellowships has played a role in this effect [ 12 , p. 198 ] .the consequences of this situation are acute .* current postdocs endure a period of intense competition , prolonged job insecurity , multiple relocations with little or no choice in destination , and the prospect of a forced late - stage ( mid-30 s ) career change .* as we discuss in the following section , the implications for those who also wish to start a family are particularly dire .this situation will not change as long as the three factors described above hold . because our field is vibrant and competitive and academic jobs are very attractive to many , it would be incorrect to suggest that the difficulty of the postdoc phase will lead to unfilled tenure - track positions . however , the simple fact is that _ we can do better_. the problems of the postdoc system are not limited to astronomy , nor can they be solved overnight .one advantage that our field has , however , is relatively small size and the importance that federal funding plays within it .* we recommend that the ssp * : 0.5 in * ( d ) * re - evaluate postdoctoral and pre - tenure positions and recommend funding changes to remove the `` arms race '' incentives of the current system . 0.25 inwe hope that the ssp discusses a wide range of approaches to meeting this challenge , such as : eliminating the plethora of federally funded postdoctoral fellowships in favor of funding more diverse permanent positions ; completely reconceptualizing the postdoctoral process and the transition from graduate school to permanent positions ; encouraging the creation of a more fluid workforce in which early - career jobs can be held for longer periods of time and transitions between positions flow naturally with project timescales rather than in rigid ( typically 3 - 5 year ) timescales .it is worth noting that the tenure system plays a fundamental role in shaping the current postdoctoral system . 
finally ,in this topic , academic systems outside of the united states can provide examples , both good and bad , from which to learn .the lack of adaptation in our professional institutions to social change has dramatic effects on the retention of excellent astronomers , with particular impact on underrepresented groups .lengthy reports can be ( and have been ) written on this topic ; we will focus on one example , the area of `` child - friendliness '' within astronomy and its effect upon women in our field .we will treat it briefly and consider again how the structure of our institutions affects the profession ; we hope that other position papers will deal with this , and the other ways in which our field loses excellent astronomers , much more fully . in the past , most couples had one working spouse and one spouse who performed virtually all childcare duties . in such situations , it is viable for the working spouse to have an extremely time - demanding job .the modern norm is for families to be dual - income , and professional couples with children increasingly expect that both partners will work hard , pursue a fulfilling career , and share in childcare duties [ 13 ] . in this situationtime - demanding and relatively low - paying jobs are much more difficult to accommodate .the increase in demand for careers that accommodate two - income families has occurred more - or - less simultaneously with the severe lengthening of the postdoc phase .all of the difficulties mentioned in 3 are particularly problematic for parents ; moreover , the postdoc stage usually happens at the exact age in the late 20 s and early 30 s in which most families are started .the hardship of multiple relocations is especially trying for those with long - term partners let alone those with long - term partners _ and _ children who therefore need to solve the `` two - body problem '' not once but several times over the course of only a few years .the reality of these concerns is well - established in our field and others . in a survey of university of californiagraduate students in all disciplines , 74% of the male respondents and 84% of the female respondents reported being `` somewhat '' or `` very concerned '' about the family - friendliness of their career paths [ 14 ] .( here , we treat issues relating to family - friendliness as a superset of those relating to child - friendliness . )exacerbating the problem is the difficulty of re - entering the field after any significant time away from it , discouraging would - be parents from leaving the field temporarily to care for young children .in fact , even junior faculty are uncomfortable with lessening their workload for maternity / paternity . according to the uc faculty work and family survey [ 15 ] , less than a third of eligible faculty used the university s tenure `` clock stoppage '' option for new parents ; of the survey respondents who did not , a significant fraction ( % ) cited `` it might have hurt my career '' as a reason for not invoking it .while the difficulties of raising a family affect all academics , there is no question that they disproportionately impact the careers of women , regardless of their level of talent . 
in [ 14 ] , 46% of the female respondents who began graduate school with the goal of becoming faculty but shifted their goals cited `` issues related to children '' as a major factor , while only 21% of the male respondents did .the results of [ 15 ] make for sobering reading : in virtually every aspect , female faculty find more of a tension between their careers and their families than their male counterparts .* we emphatically reject the notion that a prioritization of family life over career necessarily implies a lack of excellence in the field . *improving the representation of women in astronomy depends upon addressing the child - friendliness of academic careers , though * the status of women in astronomy depends on far more than this one factor*. conversely , * genuine efforts in this direction will make it a better field for all of its members , not just women . * we believe that the inequities alluded to in this section are all serious and worthy of correction on their own merits .however , they also have a damaging effect on the field by injuring its legitimacy in the public eye .legislators and other funders may question justifiably whether they should direct spending towards a field that is only sluggishly addressing its glaring inequalities [ 16 , 17 ] .this is especially true for a field which generally aspires to be a meritocracy in which individuals succeed purely based on the quality of their contributions .while astronomy is not the only offender , other fields , notably biology , do a better job of retaining excellent scientists [ 18 , 19 ] .significant federal effort is now directed towards rectifying gender inequalities in stem ( science , technology , engineering , and mathematics ) professions . in[ 17 ] , the national academies stated that in order to `` maintain scientific and engineering leadership amid increasing economic and educational globalization , the united states must aggressively pursue the innovative capacity of _ all _ of its people , '' regardless of gender ( emphasis original ) . 
in october 2008 , barack obama responded to a query from the association for women in science with the following statement of policy [ 20 ] : `` we will need to significantly increase our stem workforce , and to do that we will need to engage not just women and minorities but also persons with disabilities , english language learners , and students from low income families we also support improved educational opportunities for all students , increased responsibilities and accountability for those receiving federal research funding , equitable enforcement of existing laws such as title ix , continuation and strengthening of programs aimed at broader engagement in the stem disciplines '' there has , in fact , been formal federal investigation into forcing university science programs to begin addressing inequalities by applying title ix to them ( _ e.g. _ [ 21 , 22 ] ) . it will take concerted effort across many sectors of academia , government , and society to enable us to attract and retain the best astronomers from all demographics . nevertheless , the decadal review is an opportunity to begin implementing necessary policies . we * suggest that the ssp * : * ( e ) * mandate that job ads and offer letters at _ all _ levels in the field include information on the hiring institution 's family - friendly policies .
*( f ) * identify model programs that have demonstrated positive impacts on the demographics and family - friendliness of astronomy and recommend that funding be allocated for duplications , expansions , and improvements of these programs . *( g ) * identify policies that help retain the most talented astronomers and recommend that required implementation of these policies be attached to federal funding .study the examples of other fields for lessons , both positive and negative . *( h ) * consider any changes to the postdoctoral system in the light of the effect they will have on family - friendliness and the retention of excellent astronomers . 0.25 in we reiterate that *many of the current challenges in astronomy careers are due to institutional structures than can be changed*. some ideas to initiate discussion are : establishing opt - out minimum tenure `` clock stopping '' or parental leave policies ; issuing comprehensive employer childcare assistance standards . a community approach to the enforcement of title ix ,should it be mandated , should be discussed .the decadal review provides an invaluable opportunity to affirm _ and revise _ our values as a community and set priorities accordingly .this process has been tremendously successful in establishing support for instruments that have been the basis for ground - breaking science : the very large array , hst , and spitzer were all made possible in large part due to recommendations of past decadal review committees .the decadal review process has also won support in washington for the field of astronomy and indirectly led other fields to establish their own similar review processes [ 23 ] .however , while past reviews have been extremely successful in shaping the technologies used to pursue the next generation of science , less attention has been directed towards properly training and maintaining the astronomers who perform the science .still less has been focused on training the next generation of astronomy educators , public policy experts , and project managers .while the needs of the profession and the labor market have evolved significantly since the astronomy decadal review process was established , the overall academic structure of the field has remained largely unchanged .many of our current practices are outmoded , resulting in misallocated resources and attrition patterns that cause us to lose the contributions of excellent scientists .the failure of past decadal review processes to allocate sufficient time and funding to revising these practices represents an undervaluation of the field s human resources . in a time of economic downturn and budget shortfalls , it is in our best interest to put stock in the ability of talented individuals to develop creative new solutions to outstanding problems in our field , whether those problems be in basic research , education , public outreach , or policy .we urge the subcommittee to develop strong , concrete recommendations tied to funding which acknowledge and support the important role human contributions make to the scientific endeavor .0.25in1 [ 6 ] `` first triple quasar discovered at w. m. keck observatory '' , 8 jan 2007 , _ news and outreach _ ,w. m. 
keck observatory : http://keckobservatory.org/index.php/news/first_triple_quasar_discovered_at_w._m._keck_observatory/ [ 7 ] `` a guide to the astronomy graduate program '' , 2007 , university of arizona , department of astronomy and steward observatory ( 12 mar 2009 ) : http://www.as.arizona.edu/academic_program/graduate_program/graduate_academic_guide.html [ 12 ] astronomy and astrophysics survey committee , commission on physical sciences , mathematics , and applications , national research council .`` astronomy and astrophysics in the new millennium '' , 2001 , the national academies press : http://www.nap.edu/openbook.php?isbn=0309070317 [ 14 ] mason , m. a. & goulden , m. `` uc doctoral student career life survey '' , 2006 , the uc faculty family friendly edge : http://ucfamilyedge.berkeley.edu/why%20graduate%20students%20reject%20the%20fast%20track.pdf [ 15 ] mason , m. a. , stacy , a. , goulden , m. , hoffman , c. , and frasch , k. `` uc faculty family friendly edge report '' , 2005 , the uc faculty family friendly edge : http://ucfamilyedge.berkeley.edu/ucfamilyedge.pdf [ 18 ] `` digest of education statistics '' , fall 2005 , u.s .department of education , national center for education statistics , 200405 integrated postsecondary education data system ( ipeds ) : http://nces.ed.gov/programs/digest/d06/tables/dt06_258.asp [ 19 ] leadley , j. , magrane , d. , lang , j. , and pham , t. `` women in u.s .academic medicine : statistics and benchmarking report , 2007 - 08 '' , 2008 , association of american medical colleges ( aamc ) : http://www.aamc.org/members/wim/statistics/stats08/stats_report.pdf [ 20 ] `` campaign responses to questions from the association for women in science & the society of women engineers '' , 15 oct 2008 , association for women in science ( awis ) : http://www.awis.org/documents/obamamccainresponses.pdf [ 22 ] `` gender issues : women s participation in the sciences has increased , but agencies need to do more to ensure compliance with title ix '' , 22 jul 2004 , government accountability office ( gao ) report , gao-04 - 639 : http://www.gao.gov/products/gao-04-639 | while both society and astronomy have evolved greatly over the past fifty years , the academic institutions and incentives that shape our field have remained largely stagnant . as a result , the astronomical community is faced with several major challenges , including : ( 1 ) the training that we provide does not align with the skills that future astronomers will need , ( 2 ) the postdoctoral phase is becoming increasingly demanding and demoralizing , and ( 3 ) our jobs are increasingly unfriendly to families with children . solving these problems will require conscious engineering of our profession . fortunately , this decadal review offers the opportunity to revise outmoded practices to be more effective and equitable . the highest priority of the subcommittee on the state of the profession should be to recommend _ specific , funded _ activities that will ensure the field meets the challenges we describe . |
music summarization has been the subject of research for at least a decade and many algorithms that address this problem , mainly for popular music , have been published in the past . however , those algorithms focus on producing human consumption - oriented summaries , i.e. , summaries that will be listened to by people motivated by the need to quickly get the gist of the whole song without having to listen to all of it .this type of summarization entails extra requirements besides conciseness and diversity ( non - redundancy ) , such as clarity and coherence , so that people can enjoy listening to them .generic summarization algorithms , however , focus on extracting concise and diverse summaries and have been successfully applied in text and speech summarization .their application , in music , for human consumption - oriented purposes is not ideal , for they will select and concatenate the most relevant and diverse information ( according to each algorithm s definition of relevance and diversity ) without taking into account whether the output is enjoyable for people or not .this is usually reflected , for instance , on discontinuities or irregularities in beat synchronization in the resulting summaries .we focus on improving the performance of tasks recognized as important by the community , e.g. music genre classification , through summarization , as opposed to considering music summaries as the product to be consumed by people .thus , we can ignore some of the requirements of previous music summarization efforts , which usually try to model the musical structure of the pieces being summarized , possibly using musical knowledge .although human - related aspects of music summarization are important in general , they are beyond the focus of this paper .we claim that , for tasks benefiting from summaries , it is sufficient to consider the most relevant parts of the signal , according to its features . in particular , summarizers do not need to take into account song structure or human perception of music .our rationale is that summaries contain more relevant and less redundant information , thus improving the performance of tasks that rely on processing just a portion of the whole signal , leading to faster processing , less space usage , and efficient use of bandwidth .we use , lexrank , , , and support sets to summarize music for automatic ( instead of human ) consumption . to evaluate the effects of summarization, we assess the performance of binary and 5-class music genre classification , when considering song summaries against continuous clips ( taken from the beginning , middle , and end of the songs ) and against the whole songs .we show that all of these algorithms improve classification performance and are statistically not significantly different from using the whole songs .these results complement and solidify previous work evaluated on a binary fado classifier .the article is organized as follows : section [ sec : music - summarization ] reviews related work on music - specific summarization .section [ sec : generic - summarization ] reviews the generic summarization algorithms we experimented with : ( section [ sub : grasshopper ] ) , lexrank ( section [ sub : lexrank ] ) , ( section [ sub : lsa ] ) , ( section [ sub : mmr ] ) , and support sets - based centrality ( section [ sub : support - sets ] ) . 
section [ sec : experiments ] details the experiments we performed for each algorithm and introduces the classifier .sections [ sec : binary - results ] and [ sec : multiclass - results ] report our classification results for the binary and multiclass classification scenarios , respectively .section [ sec : discussion ] discusses the results and section [ sec : conclusions ] concludes this paper with some remarks and future work .current algorithms for music summarization were developed to extract an enjoyable summary so that people can listen to it clearly and coherently .in contrast , our approach considers summaries exclusively for automatic consumption .human - oriented music summarization starts by structurally segmenting songs and selecting meaningful segments to include in the summary .the assumption is that songs are represented as label sequences where each label represents a different part of the song ( e.g. , ababca where a is the chorus , b the verse , and c the bridge ) . in ,segmentation is achieved by using a to detect key changes between frames and to detect repeating structure . in ,a gaussian - tempered `` checkerboard '' kernel is correlated along the main diagonal of the song s self - similarity matrix , outputting segment boundaries .then , a segment - indexed matrix , containing the similarity between detected segments , is built .is applied to find its rank- approximation .segments are , then , clustered to output the song s structure . in , a similarity matrix is built and analyzed for fast changes , outputting segment boundaries ; segments are clustered to output the `` middle states '' ; an is applied to these states , producing the final segmentation . then , various strategies are considered to select the appropriate segments . in , a modification of the divergenceis used to group and label similar segments .the summary consists of the longest sequence of segments belonging to the same cluster . in and ,average similarity is used to extract a thumbnail seconds long that is the most similar to the whole piece .it starts by calculating a similarity matrix through computing frame - wise similarities .then , it calculates an aggregated similarity measure , for each possible starting frame , of the -second segment with the whole song and picks the one that maximizes it as the summary .another method for this task , maximum filtered correlation , starts by building a similarity matrix and then a filtered time - lag matrix , embedding the similarity between extended segments separated by a constant lag .the starting frame of the summary corresponds to the index that maximizes the filtered time - lag matrix . in , music is classified as pure or vocal , in order to perform type - specific feature extraction .the summary , created from three to five seconds subsummaries ( built from frame clusters ) , takes into account musicological and psychological aspects , by differentiating between types of music based on feature selection and specific duration .this promotes human enjoyment when listening to the summary .since these summaries were targeted to people , they were evaluated by people . in , music datasets are summarized into a codebook - based audio feature representation , to efficiently retrieve songs in a query - by - tag and query - by - example fashion .an initial dataset is discretized , creating a dictionary of basis vectors . 
then , for each query song , the audio signal is quantized , according to the pre - computed dictionary , mapping the audio signal into a histogram of basis vectors .these histograms are used to compute music similarity .this type of summarization allows for efficient retrieval of music but is limited to the features which are initially chosen .our focus is on audio signal summaries , which are suitable for any audio feature extraction , instead of proxy representations for audio features .applying generic summarization to music implies song segmentation into musical words and sentences .since we do not take into account human - related aspects of music perception , we can segment songs according to an arbitrarily fixed size .this differs from structural segmentation in that it does not take into account human perception of musical structure and does not create meaningful segments .nevertheless , it still allows us to look at the variability and repetition of the signal and use them to find its most important parts .furthermore , since it is not aimed at human consumption , the generated summaries are less liable to violate the copyrights of the original songs .this facilitates the sharing of datasets ( using the signal itself , instead of specific features extracted from it ) for research efforts . in the following sections , we review the generic summarization algorithms we evaluated .[ [ sub : grasshopper ] ] the was applied to text summarization and social network analysis , focusing on improving ranking diversity .it takes an matrix representing a graph where each sentence is a vertex and each edge has weight corresponding to the similarity between sentences and ; and a probability distribution encoding prior ranking .first , is row - normalized : . then , is built , incorporating the user - supplied prior ranking ( is an all-1 vector , is the outer product , and is a balancing factor ) . the first ranking state is found by taking the state with the largest stationary probability ( is the stationary distribution of ) . each time a state is extracted ,it is converted into an absorbing state to penalize states similar to it . the rest of the states are iteratively selected according to the expected number of visits to each state , instead of considering the stationary probability .if is the set of items ranked so far , states are turned into absorbing states by setting and , .if items are arranged so that ranked ones are listed before unranked ones , can be written as follows : is the identity matrix on . and are rows of unranked items . is the expected number of visits to state starting from state ( ) .the expected number of visits to state , , is given by and the next item is , where is the size of .lexrank relies on the similarity ( e.g. cosine ) between sentence pairs ( usually , _tf - idf _ vectors ) .first , all sentences are compared to each other . then , a graph is built where each sentence is a vertex and edges are created between every sentence according to their pairwise similarity ( above a threshold ) .lexrank can be used with both weighted ( eq . [ eq : lex - rank - w ] ) and unweighted ( eq . [ eq : lex - rank - u ] ) edges .then , each vertex score is iteratively computed . in eq .[ eq : lex - rank - w ] through [ eq : lex - rank - u ] , is a damping factor to guarantee convergence ; is the number of vertices ; is the score of vertex ; and is the degree of .summaries are built by taking the highest ranked sentences . 
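a minimal numpy sketch of the grasshopper ranking procedure just described is given below ( the lexrank discussion continues after the code ) . this is a reconstruction from the description above rather than the authors ' implementation , and the toy similarity matrix , prior , and parameter values are made up .

```python
import numpy as np

def grasshopper(W, r, lam, k):
    """Sketch of the ranking procedure described above: W is an n x n
    nonnegative similarity matrix, r a prior distribution over items,
    lam the balance factor, k the number of items to rank."""
    n = W.shape[0]
    P = W / W.sum(axis=1, keepdims=True)              # row-stochastic transitions
    P_hat = lam * P + (1 - lam) * np.outer(np.ones(n), r)

    # First item: the state with the largest stationary probability of P_hat.
    vals, vecs = np.linalg.eig(P_hat.T)
    pi = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    pi = pi / pi.sum()
    ranked = [int(np.argmax(pi))]

    # Remaining items: turn ranked states into absorbing states and pick the
    # unranked state with the largest expected number of visits.
    while len(ranked) < k:
        unranked = [i for i in range(n) if i not in ranked]
        Q = P_hat[np.ix_(unranked, unranked)]         # transitions among unranked states
        N = np.linalg.inv(np.eye(len(unranked)) - Q)  # fundamental matrix
        visits = N.sum(axis=0) / len(unranked)        # expected visits per unranked state
        ranked.append(unranked[int(np.argmax(visits))])
    return ranked

# Toy usage on a random symmetric similarity matrix with a uniform prior.
rng = np.random.default_rng(4)
S = rng.random((8, 8)); S = (S + S.T) / 2
print(grasshopper(S, np.full(8, 1 / 8), lam=0.9, k=3))
```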
in lexrank , sentences recommend each other : sentences similar to many others will get high scores . scores are also determined by the score of the recommending sentences . the weighted version ( eq . [ eq : lex - rank - w ] ) is $S(v_{i})=\frac{d}{N}+(1-d)\sum_{v_{j}\in adj[v_{i}]}\frac{\text{sim}(v_{i},v_{j})}{\sum_{v_{k}\in adj[v_{j}]}\text{sim}(v_{j},v_{k})}S(v_{j})$ and the unweighted version ( eq . [ eq : lex - rank - u ] ) is $S(v_{i})=\frac{d}{N}+(1-d)\sum_{v_{j}\in adj[v_{i}]}\frac{S(v_{j})}{d(v_{j})}$ . [ [ sub : lsa ] ] lsa was first applied in text summarization in . the svd is used to reduce the dimensionality of an original matrix representation of the text . lsa - based summarizers start by building a terms by sentences matrix $A$ . each element of $A$ , $a_{ij}$ , has a local ( $l_{ij}$ ) and a global ( $g_{ij}$ ) weight . $l_{ij}$ is a function of the term frequency in a specific sentence and $g_{ij}$ is a function of the number of sentences that contain a specific term . usually , the $a_{ij}$ are _ tf - idf _ scores . the result of applying the svd to $A$ is $A=U\Sigma V^{T}$ , where the columns of $U$ are the left singular vectors ; $\Sigma$ ( a diagonal matrix ) contains the singular values in descending order ; and the rows of $V^{T}$ are the right singular vectors . singular values determine topic relevance : each latent dimension corresponds to a topic . the rank - $r$ approximation considers the first $r$ columns of $U$ , the $r\times r$ sub - matrix of $\Sigma$ , and the first $r$ rows of $V^{T}$ . relevant sentences are the ones corresponding to the indices of the highest values for each right singular vector . this approach has two limitations : by selecting $r$ sentences for the summary , less significant sentences tend to be extracted when $r$ increases ; and sentences with high values in several topics , but never the highest , will never be included in the summary . to account for these effects , a sentence score $score_{j}=\sqrt{\sum_{i=1}^{r}v_{ji}^{2}\,\sigma_{i}^{2}}$ was introduced , and $r$ is chosen so that the $r$ - th singular value does not fall under half of the highest singular value : $\sigma_{r}\geq\frac{1}{2}\sigma_{1}$ . [ [ sub : mmr ] ] in mmr , sentence selection is done according to the sentences ' relevance and their diversity against previously selected sentences , in order to output low - redundancy summaries . mmr is a query - based method that has been used in speech summarization . it is also possible to produce generic summaries by taking the centroid vector of all the sentences as the query . mmr uses eq . [ eq : mmr ] to select sentences : $\arg\max_{s_{i}\in D\setminus S}\left[\lambda\,\text{sim}_{1}(s_{i},q)-(1-\lambda)\max_{s_{j}\in S}\text{sim}_{2}(s_{i},s_{j})\right]$ . $\text{sim}_{1}$ and $\text{sim}_{2}$ are similarity metrics ( e.g. cosine ) ; $D\setminus S$ and $S$ are the unselected and previously selected sentences , respectively ; $q$ is the query , and $\lambda$ balances relevance and diversity . sentences can be represented as _ tf - idf _ vectors . this method was first applied in text and speech summarization . support sets centrality is based on sets of sentences that are similar to a given sentence ( support sets ) : $S_{i}=\{s\in D:\text{sim}(s,p_{i})>\varepsilon_{i}\wedge s\neq p_{i}\}$ . support sets are estimated for every sentence . sentences frequent in most support sets are selected : $\arg\max_{s\in D}|\{S_{i}:s\in S_{i}\}|$ . this is similar to unweighted lexrank ( section [ sub : lexrank ] ) , except that support sets allow a different threshold for each sentence ( $\varepsilon_{i}$ ) and their underlying representation is directed , i.e. , each sentence only recommends its most semantically related sentences . the thresholds $\varepsilon_{i}$ can be heuristically determined . one such heuristic , among others , is a passage order heuristic which clusters all passages into two clusters , according to their distance to each cluster 's centroid . the first and second clusters are initialized with the first and second passages , respectively , and sentences are assigned to clusters , one by one , according to their original order . the cluster that contains the passage most similar to the passage associated with the support set under construction is selected as the support set . several metrics were tested for defining semantic relatedness ( e.g.
minkowski distance , cosine ) .we evaluated generic summarization by assessing its impact on binary and multiclass music genre classification .these tasks consist of classifying songs based on a scheme ( e.g. artist , genre , or mood ) .classification is deemed important by the community and annual conferences addressing it are held , such as , which comprises for comparing state - of - the - art algorithms in a standardized setting .the best 2015 system for the `` audio mixed popular genre classification '' task uses for classifying music genre , based on spectral features .we follow the same approach and our classification is also performed using .note that there are two different feature extraction steps .the first is done by the summarizers , every time a song is summarized . the summarizers output audio signal corresponding to the selected parts , to be used in the second step , i.e. , when doing classification , where features are extracted from the full , segmented , and summarized datasets .the features used by the consist of a 38-dimensional vector per song , a concatenation of several statistics on features used in , describing the timbral texture of a music piece .it consists of the average of the first 20 concatenated with statistics ( mean and variance ) of 9 spectral features : centroid , spread , skewness , kurtosis , flux , rolloff , brightness , entropy , and flatness .these are computed over feature vectors extracted from 50ms frames without overlap .this set of features and a smaller set , solely composed of averages , were tested in the classification task .all music genres in our dataset are timbrically different from each other , making these sets good descriptors for classification .our experimental datasets consist of a total of 1250 songs from 5 different genres : bass , fado , hip hop , trance , and indie rock .bass music is a generic term referring to several specific styles of electronic music , such as dubstep , drum and bass , electro , and more .although these differ in tempo , they share similar timbral characteristics , such as deep basslines and the `` wobble '' bass effect .fado is a portuguese music genre whose instrumentation consists of stringed instruments , such as the classical and the portuguese guitars .hip hop consists of drum rhythms ( usually built with samples ) , the use of turntables and spoken lyrics .indie rock usually consists of guitar , drums , keyboard , and vocal sounds and was influenced by punk , psychedelia , post - punk , and country .trance is an electronic music genre characterized by repeating melodic phrases and a musical form that builds up and down throughout a track .each class is represented by 250 songs from several artists .the multiclass dataset contains all songs .two binary datasets were also built from this data , in order to test our hypothesis on a wider range of classification setups : bass vs. fado and bass vs. 
trance , each containing the 500 corresponding songs .10-fold cross - validation was used in all classification tasks .first , as baselines , we performed 3 classification experiments using 30s segments , from the beginning , middle , and end of each song .then , we obtained another baseline by using the whole songs .the baselines were compared with the classification results from using 30s summaries for each parameter combination and algorithm .we did this for both binary datasets and then for the multiclass dataset .applying generic summarization algorithms to music requires additional steps .since these algorithms operate on the discrete concepts of word and sentence , some preprocessing must be done to map the continuous frame representation obtained after feature extraction to a word / sentence representation . for each songbeing summarized , a vocabulary is created , through clustering the frames feature vectors .mlpack s implementation of the -means algorithm was used for this step ( we experiment with some values for and assess their impact on the results ) . after clustering ,a vocabulary of musical words is obtained ( each word is a frame cluster s centroid ) and each frame is assigned its own cluster centroid , effectively mapping the frame feature vectors to vocabulary words .this transforms the real / continuous nature of each frame ( when represented by a feature vector ) to a discrete nature ( when represented as a word from a vocabulary ) .then , the song is segmented into fixed - size sentences ( e.g. , 5-word sentences ) . since every sentence contains discrete words from a vocabulary , it is possible to represent each one as a vector of word occurrences / frequencies ( depending on the weighting scheme ) which is the exact representation used by generic summarization algorithms .sentences were compared using the cosine distance .the parameters of all of these algorithms include : features , framing , vocabulary size ( final number of clusters of the -means algorithm ) , weighting ( e.g. , _ tf - idf _ ) , and sentence size ( number of words per sentence ) . for the multiclass dataset , we also ran experiments comparing human - oriented summarization against generic summarization .this translates into comparing average similarity summaries ( for several durations ) against 30-second generic summaries , as well as comparing structural against fixed - size sentences .we also compared the performance of generic summaries against the baselines for smaller summary durations .every algorithm was implemented in c++ .we used : opensmile for feature extraction , armadillo for matrix operations , marsyas for synthesizing the summaries , and the segmenter used in for structural segmentation .our experiments covered the following parameter values ( varying between algorithms ) : frame and hop size combinations of ( 0.25,0.125 ) , ( 0.25,0.25 ) , ( 0.5,0.25 ) , ( 0.5,0.5 ) , ( 1,0.5 ) and ( 1,1 ) ( in seconds ) ; vocabulary sizes of 25 , 50 , and 100 ( words ) ; sentence sizes of 5 , 10 , and 20 ( words ) ; `` dampened '' _ tf - idf _ ( takes logarithm of _ tf _ instead of _tf _ itself ) and binary weighting schemes . as summarization features , we used vectors of sizes 12 , 20 , and 24 .these features , used in several previous research efforts on music summarization in , describe the timbre of an acoustic signal .we also used a concatenation of vectors with the 9 spectral features enumerated in section [ sub : features ] . 
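to make the preprocessing pipeline described above concrete, the following is a minimal sketch (in python, with scikit-learn standing in for the mlpack k-means used in our c++ implementation) of the vocabulary construction and sentence representation: frame feature vectors are clustered into a per-song vocabulary, each frame is replaced by its cluster index, the word sequence is chunked into fixed-size sentences, and sentences become dampened tf-idf vectors compared with the cosine measure. all names and parameter values are illustrative, not the paper's code.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_sentences(frame_features, vocab_size=50, sentence_size=10):
    """Cluster frame feature vectors into a vocabulary and chunk the resulting
    word sequence into fixed-size sentences (a trailing partial sentence is dropped)."""
    words = KMeans(n_clusters=vocab_size, n_init=10).fit_predict(frame_features)
    return [words[i:i + sentence_size]
            for i in range(0, len(words) - sentence_size + 1, sentence_size)]

def sentence_vectors(sentences, vocab_size, dampened=True):
    """Term-frequency matrix over the vocabulary; 'dampened' replaces tf by
    log(1 + tf); idf here is one common choice of global weight."""
    tf = np.zeros((len(sentences), vocab_size))
    for i, s in enumerate(sentences):
        for w in s:
            tf[i, w] += 1.0
    df = np.maximum((tf > 0).sum(axis=0), 1)   # number of sentences containing each word
    idf = np.log(len(sentences) / df)
    if dampened:
        tf = np.log1p(tf)
    return tf * idf

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```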
for, we tried values of 0.5 and 0.7 .our implementation also makes use of the sentence score and the topics cardinality selection heuristic described in section [ sub : lsa ] .first , we analyze results on the binary datasets , bass vs. fado and bass vs. trance . the reason we chose these pairs was because we wanted to see summarization s impact on an easy to classify dataset ( bass and fado are timbrically very different ) and a more difficult one ( bass and trance share many timbrical similarities due to their electronic and dancefloor - oriented nature ) .for all experiments , classifying using the 38-dimensional features vector produced better results than using only 20 , so we only present those results here .the best results are summarized in tables [ tab : binary - baselines ] , [ tab : bass - fado ] , and [ tab : bass - trance ] ..bass vs. trance summaries [ cols="^,^,^",options="header " , ] [ tab : avg - sim ] we can see that this type of summarization reaches the performance of generic summaries ( 30 seconds ) and full songs when the summary duration reaches 80 seconds ( 89.2% accuracy ) .this means that , for a human - oriented summary to be as descriptive and discriminative as a generic summary , an additional 50 seconds ( 2.67 times the length of the original ) are needed .even though the starting point of this contiguous summary is carefully selected by this algorithm , it still lacks diversity because of its contiguous nature , hindering classification accuracy for this summarizer .naturally , by extending summary duration , summaries include more diverse information , eventually achieving the accuracy of full songs .another form of human - oriented summarization is achieved by using generic summarization operating on structurally segmented sentences , done according to what humans might consider to be structurally meaningful segments .after structural segmentation , we fed each of the 5 generic algorithms with the resulting sentences instead of fixed - size ones and truncated the summary at 30 seconds , when necessary .the parameterization used for these experiments was the one that yielded the best results in the previous experiments for each algorithm .the accuracy results for , lexrank , , , and support sets were , respectively , 82.64% , 83.76% , 81.84% , 82.40% , and 83.84% . 
even though structurally segmented sentences slightly improve performance , when considering classification accuracy , they are still outperformed by fixed - size segmentation .the best algorithm can only achieve 83.84% accuracy .this is because these sentences are much longer , therefore harming diversity in summaries .furthermore , important content in structural sentences can always be extracted when using smaller fixed - size sentences .thus , using smaller sentences , prevents the selection of redundant content .we ran the wilcoxon signed - ranked test on all of the confusion matrices presented above against the full songs scenario .the continuous sections p - values were , , and for the 30-second beginning , middle , and end sections of the songs , respectively , which means that they differ markedly from using full songs ( as can also be seen by the accuracy drops they cause ) .the summaries , however , were very close to full songs , in terms of accuracy .the p - values for , lexrank , , , and support sets were , , , , and , respectively .thus , statistically speaking , using any of these 30-second summaries does not significantly differ from using full songs for classification ( considering 95% confidence intervals ) .furthermore , the p - values for 20-second summaries and for 10-second support sets summaries were and , respectively , with the remaining p - values of increasing summary sizes also being superior to .thus , statistically speaking , generic summarization ( in some cases ) does not significantly differ from using full songs for classification , for summaries as short as 10 seconds ( considering a 95% confidence interval ) .this is noteworthy , considering that the average song duration in this dataset is 283 seconds , which means that we achieve similar levels of classification performance using around 3.5% of the data .human - oriented summarization is able to achieve these performance levels , but only at 50-second summaries and with a p - value of , barely over the threshold .however , the 60-second summaries produced by this algorithm can not reach that threshold . only at 80 secondsis a comfortable p - value ( ) for the 95% confidence interval attained .although every algorithm creates summaries in a different way , they all tend to include relevant and diverse sentences .this compensates their reduced lengths ( up to 30 seconds of audio ) allowing those clips to be representative of the whole musical pieces , from an automatic consumption view , as demonstrated by our experiments .moreover , choosing the best 30-second contiguous segments is highly dependent on the genres in the dataset and tasks it will be used for , which is another reason for preferring summaries over those segments .the more varied the dataset , the less likely a fixed continuous section extraction method is to produce representative enough clips .bass and trance were the most influenced genres , by summarization , in these experiments .these are styles with very well defined structural borders , and a very descriptive structural element the _ drop_. 
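as an aside on methodology, the significance check reported above can be sketched as a paired wilcoxon signed-rank test between corresponding entries obtained with summaries and with full songs; the numbers below are purely hypothetical placeholders for such paired observations.

```python
from scipy.stats import wilcoxon

# hypothetical paired observations (e.g. matching confusion-matrix entries)
# for the full-song baseline and for 30-second summaries
full_songs = [112, 9, 7, 121, 14, 108, 11, 118, 6, 115]
summaries  = [110, 10, 8, 119, 15, 107, 12, 117, 7, 113]

stat, p_value = wilcoxon(full_songs, summaries)
print(f"W = {stat:.1f}, p = {p_value:.3f}")
# a p-value above 0.05 means the two conditions do not differ significantly
# at the 95% confidence level used above
```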
the lack of that same element in a segment markedly hinders classification performance , suggesting that any genre with similar characteristics may also benefit from this type of summarization .it is also worth restating that hip hop and indie rock were very positively influenced by summarization , regarding classification performance improvements over using full songs .this shows that , sometimes , classification on summarized music can even outperform using the whole data from the original signal .we also demonstrated that generic summarization using fixed - size sentences , that is , summarization not specifically oriented towards human consumption greatly outperforms human - oriented summarization approaches for the classification task .summarizing music prior to the classification task also takes time , but we do not claim it is worth doing it every time we are about do perform a task .the idea is to compute summarized datasets offline for future use in any task that can benefit from them ( e.g. , music classification ) .currently , sharing music datasets for research purposes is very limited in many aspects , due to copyright issues .usually , datasets are shared through features extracted from ( 30-second ) continuous clips .that practice has drawbacks , such as : those 30 seconds may not contain the most relevant information and may even be highly redundant ; and the features provided may not be the ones a researcher needs for his / her experiments . summarizing datasetsthis way also helps avoiding copyright issues ( because summaries are not created in a way enjoyable by humans ) and still provide researchers with the most descriptive parts ( according to each summarizer ) of the signal itself , so that many different kinds of features can be extracted from them .we showed that generic summarization algorithms perform well when summarizing music datasets about to be classified .the resulting summaries are remarkably more descriptive of the whole songs than their continuous segments ( of the same duration ) counterparts .sometimes , these summaries are even more discriminative than the full songs .we also presented an argument stating some advantages in sharing summarized datasets within the community .an interesting research direction would be to automatically determine the best vocabulary size for each song .testing summarization s performance on different classification tasks ( e.g. , with more classes ) is also necessary to further strengthen our conclusions .more comparisons with non - contiguous human - oriented summaries should also be done .more experimenting should be done in other tasks that also make use of only a portion of the whole signal .m. cooper and j. foote , `` summarizing popular music via structural similarity analysis , '' in _ proc . of the ieee workshop on applications of signal processing to audio and acoustics _ ,2003 , pp . 127130 .j. carbonell and j. goldstein , `` the use of mmr , diversity - based reranking for reordering documents and producing summaries , '' in _ proc . of the 21st annual intl .acm sigir conf . on research and development in information retrieval _ , 1998 , pp .335336 .t. k. landauer and s. t. dutnais , `` a solution to plato s problem : the latent semantic analysis theory of acquisition , induction , and representation of knowledge , '' _ psychological review _ , vol .104 , no . 2 ,pp . 211240 , 1997 .x. zhu , a. b. goldberg , j. v. gael , and d. 
andrzejewski , `` improving diversity in ranking using absorbing random walks , '' in _ proc . of the 5th north american chapter of the association for computational linguistics - human language technologies conf ._ , 2007 , pp . 97104 .y. vaizman , b. mcfee , and g. lanckriet , `` codebook - based audio feature representation for music information retrieval , '' _ ieee / acm trans . on audio , speech and language processing _ ,22 , pp . 14831493 , 2014 .y. gong and x. liu , `` generic text summarization using relevance measure and latent semantic analysis , '' in _ proc . of the 24th annual intl .acm sigir conf . on research and development in information retrieval _ , 2001 , pp .1925 .k. zechner and a. waibel , `` minimizing word error rate in textual summaries of spoken language , '' in _ proc . of the 1st north american chapter of the association for computational linguistics_ , 2000 , pp. 186193 .wu and j .- s .r. jang , `` combining acoustic and multilevel visual features for music genre classification , '' _ acm trans . on multimedia computing , communications and applications _12 , no . 1 ,pp . 10:110:17 , 2015 .r. r. curtin , j. r. cline , n. p. slagle , w. b. march , p. ram , n. a. mehta , and a. g. gray , `` mlpack : a scalable c++ machine learning library , '' _ journal of machine learning research _ , vol .14 , no . 1 ,pp . 801805 , 2013 .f. eyben , f. weninger , f. gross , and b. schuller , `` recent developments in opensmile , the munich open - source multimedia feature extractor , '' in _ proc . of the 21st acm intl .conf . on multimedia _ , 2013 , pp .835838 .francisco raposo graduated in information systems and computer engineering ( 2012 ) from instituto superior tcnico ( ist ) , lisbon .he received a masters degree in information systems and computer engineering ( 2014 ) ( ist ) , on automatic music summarization .he s currently pursuing a phd course on information systems and computer engineering .his research interests focus on music information retrieval ( mir ) , music emotion recognition , and creative - mir applications .ricardo ribeiro has a phd ( 2011 ) in information systems and computer engineering and an msc ( 2003 ) in electrical and computer engineering , both from instituto superior tcnico , and a graduation degree ( 1996 ) in mathematics / computer science from universidade da beira interior .his current research interests focus on high - level information extraction from unrestricted text or speech , and improving machine - learning techniques using domain - related information .david martins de matos graduated in electrical and computer engineering ( 1990 ) from instituto superior tcnico ( ist ) , lisbon .he received a masters degree in electrical and computer engineering ( 1995 ) ( ist ) .he received a doctor of engineering degree in systems and computer science ( 2005 ) ( ist ) .his current research interests focus on computational music processing , automatic summarization and natural language generation , human - robot interaction , and natural language semantics . | in order to satisfy processing time constraints , many tasks process only a segment of the whole music signal . this may lead to decreasing performance , as the most important information for the tasks may not be in the processed segments . we leverage generic summarization algorithms , previously applied to text and speech , to summarize items in music datasets . 
these algorithms build summaries ( both concise and diverse ) , by selecting appropriate segments from the input signal , also making them good candidates to summarize music . we evaluate the summarization process on binary and multiclass music genre classification tasks , by comparing the accuracy when using summarized datasets against the accuracy when using human - oriented summaries , continuous segments ( the traditional method used for addressing the previously mentioned time constraints ) , and full songs of the original dataset . we show that , lexrank , , , and a support sets - based centrality model improve classification performance when compared to selected baselines . we also show that summarized datasets lead to a classification performance whose difference is not statistically significant from using full songs . furthermore , we make an argument stating the advantages of sharing summarized datasets for future research . |
the recent growth in mobile and media - rich applications continuously increases the demand for wireless bandwidth , and puts a strain on wireless networks , .this dramatic increase in demand poses a challenge for current wireless networks , and calls for new network control mechanisms that make better use of scarce wireless resources .furthermore , most existing , especially low - cost , wireless devices have a relatively rigid architecture with limited processing power and energy storage capacities that are not compatible with the needs of existing theoretical network control algorithms .one important problem , and the focus of this paper , is that low - cost wireless interface cards are built using first - in , first - out ( fifo ) queueing structure , which is not compatible with the per - flow queueing requirements of the optimal network control schemes such as backpressure routing and sheduling .the backpressure routing and scheduling paradigm has emerged from the pioneering work , , which showed that , in wireless networks where nodes route and schedule packets based on queue backlogs , one can stabilize the queues for any feasible traffic .it has also been shown that backpressure can be combined with flow control to provide utility - optimal operation . yet, backpressure routing and scheduling require each node in the network to construct per - flow queues .the following example demonstrates the operation of backpressure .let us consider a canonical example in fig .[ fig : example_fifo_v1](a ) , where a transmitter node , and two receiver nodes , form a one - hop downlink topology .there are two flows with arrival rates and destined to nodes and , respectively .the throughput optimal backpressure scheduling scheme , also known as max - weight scheduling , assumes the availability of per - flow queues and as seen in fig .[ fig : example_fifo_v1](a ) , and makes a transmission decision at each transmission opportunity based on queue backlogs , _ and .in particular , the max - weight scheduling algorithm determines , and transmits from queue .it was shown in , that if the arrival rates and are inside the stability region of the wireless network , the max - weight scheduling algorithm stabilizes the queues .on the other hand , in some devices , per - flow queues can not be constructed . in such a scenario , a fifo queue ,say is shared by flows and as shown in fig .[ fig : example_fifo_v1](b ) , and the packets are served from in a fifo manner . 
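as a minimal illustration of the max-weight rule in the per-flow example above, the scheduler serves, among links that are currently on, the queue with the largest backlog (more generally, the largest backlog-times-rate product). the snippet below is a hedged sketch with illustrative names, not code from the literature.

```python
def max_weight_schedule(backlogs, link_on):
    """backlogs[k]: packets queued for receiver k; link_on[k]: 1 if link k is ON."""
    weights = {k: backlogs[k] * link_on[k] for k in backlogs}
    k_star = max(weights, key=weights.get)
    return k_star if weights[k_star] > 0 else None   # None: stay idle this slot

# example: Q1 = 3, Q2 = 5, but only link 1 is ON, so flow 1 is served
print(max_weight_schedule({1: 3, 2: 5}, {1: 1, 2: 0}))   # -> 1
```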
constructing per - flow queues may not be feasible in some devices especially at the link layer due to rigid architecture , and one fifo queue is usually shared by multiple flows .for example , although current wifi - based devices have more than one hardware queue , their numbers are restricted ( up to 12 queues according to the list in ) , while the number of flows passing through a wireless device could be significantly higher .also , multiple queues in the wireless devices are mainly constructed for prioritized traffic such as voice , video , etc ., which further limits their usage as per - flow queues .on the other hand , constructing per - flow queues may not be preferable in some other devices such as sensors or home appliances for which maintaining and handling per - flow queues could introduce too much processing and energy overhead .thus , some devices , either due to rigid architecture or limited processing power and energy capacities , inevitably use shared fifo queues , which makes the understanding of the behavior of fifo queues over wireless networks very crucial ._ example 1 - continued : _ let us consider fig .[ fig : example_fifo_v1 ] again . when a fifo queue is used instead of per - flow queues , the well - known head - of - line ( hol ) blocking phenomenon occurs . as an example, suppose that at transmission instant , the links and are at `` on '' and `` off '' states , respectively .in this case , a packet from can be transmitted if per - flow queues are constructed . yet , in fifo case , if hol packet in belongs to flow , no packet can be transmitted and wireless resources are wasted . although hol blocking in fifo queues is a well - known problem , achievable throughput with fifo queues in a wireless network is generally not known . in particular , stability region of a wireless network with fifo queues as well as resource allocation schemes to achieve optimal operating points in the stability region are still open problems . in this work ,we investigate fifo queues over wireless networks .we consider a wireless network model presented in fig .[ fig : main - example ] with multiple fifo queues that are in the same transmission and interference range .( note that this scenario is getting increasing interest in practice in the context of device - to - device and cooperative networks . )our first step towards understanding the performance of fifo queues in such a setup is to characterize the stability region of the network .then , based on the structure of the stability region , we develop efficient resource allocation algorithms ; _ deterministic fifo - control _ ( ) and _ queue - based fifo - control _ ( ) .the following are the key contributions of this work : * we characterize the stability region of a general scenario where an arbitrary number of fifo queues are shared by an arbitrary number of flows . *the stability region of the fifo queueing system under investigation is non - convex .thus , we develop a convex inner - bound on the stability region , which is provably tight for certain operating points . *we develop a resource allocation scheme ; , and a queue - based stochastic flow control and scheduling algorithm ; .we show that achieves optimal operating point in the convex inner bound . 
* we evaluate our schemes via simulations for multiple fifo queues and flows .the simulation results show that our algorithms significantly improve the throughput as compared to the well - known queue - based flow control and max - weight scheduling schemes .the structure of the rest of the paper is as follows .section [ sec : system ] gives an overview of the system model .section [ sec : stability_region ] characterizes the stability region with fifo queues .section [ sec : ofc_qfc ] presents our resource allocation algorithms ; and .section [ sec : performance ] presents simulation results .section [ sec : related ] presents related work .section [ sec : conclusion ] concludes the paper ._ wireless network setup : _ we consider a wireless network model presented in fig .[ fig : main - example ] with fifo queues .let be the set of fifo queues , be the fifo queue , and be the set of flows passing through .also , let and denote the cardinalities of sets and , respectively .we assume in our analysis that time is slotted , and refers to the beginning of slot . _flow rates : _ each flow passing through and destined for node is generated according to an arrival process at time slot .the arrivals are i.i.d . over the time slots such that for every and , we have ] , where ] .also , let be the state that fifo queue is at state for some hol packet .the state happens precisely when the channel corresponding to the hol packet is in the state .therefore , the probability of is = p [ c_h = on] ] . noting that we assumed , we conclude that ] and ,\,k \in { \mathcal{k}} ] .the calculations are provided in the following .let = \begin{bmatrix } p[z_0 ] & p[z_1 ] & \ldots & p[z_k ] \end{bmatrix}^{t} ] , and the fact that + \sum_{k \in { \mathcal{k } } } p[z_k ] = 1 ] which is equivalent to ( [ eq : stab_one_queue ] ) .this concludes the proof . now suppose that single - fifo queue is shared by two flows with rates and . according to theorem [ theorem1 ], the arrival rates should satisfy for stability .this stability region is shown in fig .[ fig : fifo_one_queue_fig](b ) . in the same figure, we also show the stability region of per - flow queues , . as seen , the fifo stability region is smaller as compared to per - flow capacity region .yet , we still need flow control and scheduling algorithms to achieve the optimal operating point in this stability region . this issue will be discussed later in section [ sec : ofc_qfc ] . we now consider a wireless network with arbitrary number of fifo queues and flows as shown in fig .[ fig : main - example ] .the main challenge in this setup is that packet scheduling decisions affect the stability region .for example , if both and in fig .[ fig : main - example ] are at state , a decision about which queue to be served should be made .this decision affects future transmission opportunities from the queues , hence the stability region . in this paper, we consider a scheduling policy where the packet transmission probability of each queue depends only on the queue states .in other words , if the state of the fifo queues is , a packet from queue is transmitted with probability .we call this scheduling policy the _ queue - state _ policy . 
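returning to the single-fifo example above, head-of-line blocking can be probed with a simple slot-level monte carlo: two bernoulli flows share one fifo queue, and a packet can only leave in a slot where its own on/off channel is on. all numbers are illustrative; pushing the arrival rates up until the combined load (in the form suggested by the resource-counting argument of the next paragraphs, roughly the sum of per-flow rates divided by their channel on-probabilities) exceeds one makes the backlog grow without bound, whereas the values below keep the queue stable.

```python
import random
from collections import deque

def simulate_shared_fifo(lam=(0.25, 0.25), p_on=(0.8, 0.5), slots=200_000, seed=0):
    rng = random.Random(seed)
    queue, served = deque(), 0
    for _ in range(slots):
        for flow, rate in enumerate(lam):
            if rng.random() < rate:
                queue.append(flow)                    # arrival of a flow-`flow` packet
        if queue and rng.random() < p_on[queue[0]]:   # HOL packet's own channel is ON
            queue.popleft()
            served += 1
    return served / slots, len(queue)

throughput, backlog = simulate_shared_fifo()
print(f"throughput ~ {throughput:.3f} packets/slot, final backlog = {backlog}")
# with per-flow queues, a waiting packet of the other flow could be served
# whenever its own channel is ON, so no such slot would be wasted by HOL blocking
```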
note that as is the transmission probability from queue , we have the obvious constraint our main result is then the following theorem .[ theorem2 ] for a wireless network with fifo queues , if a queue - state policy is employed , then the stability region consists of the flow rates that satisfy }}{\sum_{k \in { \mathcal{k}}_{n } } \lambda_{n , k}/\bar{p}_{n , k } }\nonumber \\ & \prod_{m \in { \mathcal{n}}-\{n\ } } \biggr ( \frac{\sum_{k \in { \mathcal{k}}_{m } } \lambda_{m , k } \rho_{m , k}(s_m)}{\sum_{k \in { \mathcal{k}}_{m } } \lambda_{m , k}/\bar{p}_{m , k } } \biggl ) \tau_{n } ( s_1 , \ldots , s_n ) \biggr \ } , \nonumber \\ &\forall n \in { \mathcal{n } } , k \in { \mathcal{k}}_{n},\end{aligned}\ ] ] where } & = \left\{\begin{array}{rl}1 , & s_n = on \\ 0 , & s_{n } = off\end{array}\right . , \\\rho_{m , k}(s_m ) & = \left\{\begin{array}{rl } 1 , & s_m = on \\{ p_{m , k}}/{\bar{p}_{m , k } } , & s_m = off \end{array } \right .. \end{aligned}\ ] ] _ proof : _ the proof is provided in appendix a. the stability region of a fifo queue system with fifo queues served by a wireless medium is characterized by ( [ eq : lamdba_nk ] ) , ( [ eq : sum_tau]) .now let us consider two fifo queues and which are shared by three flows with rates ; , , and ( fig .[ fig : two_queue](a ) ) . according to theorem [ theorem2 ], the stability region should include arrival rates satisfying inequalities in ( [ eq : lamdba_nk ] ) and ( [ eq : sum_tau ] ) . in this example , with two queues and three flows , these inequalities are equivalent to with , and .the stability region corresponding to these inequalities is the region below the surface in fig .[ fig : two_queue](b ) . in general , we wish to find the optimal operating points on the boundary of the stability region . however , the stability region may not be convex for arbitrary number of queues and flows . developing a convexinner bound on the stability region is crucial for developing efficient resource allocation algorithms for wireless networks with fifo queues .we thus next propose a convex inner bound on the stability region .let us consider a flow with arrival rate to the fifo queue .if there are no other flows and queues in the network , then the arrival rate should satisfy according to theorem [ theorem2 ] . in this formulation, is the total amount of wireless resources that should be allocated to transmit the flow with rate . 
for multiple - flow , single - fifo case ,the stability region is .similar to the single - flow case , term is the amount of wireless resources that should be allocated to the flow .finally , for the general stability region for arbitrary number of queues and flows , let us consider ( [ eq : lamdba_nk ] ) again .assuming , we can write from ( [ eq : lamdba_nk ] ) as ; }}{\sum_{k \in { \mathcal{k}}_{n } } \lambda_{n , k}/\bar{p}_{n , k } } \nonumber \\ & \prod_{m \in { \mathcal{n}}-\{n\ } } \psi_{m}(s_m ) \tau_{n } ( s_1 , \ldots , s_n ) \biggr \ } , \forall n \in { \mathcal{n } } , k \in { \mathcal{k}}_{n}\end{aligned}\ ] ] which , assuming that , is equivalent to } \prod_{m \in { \mathcal{n}}-\{n\ } } \psi_{m}(s_m ) \nonumber \\ & \tau_{n } ( s_1 , \ldots , s_n ) , \foralln \in { \mathcal{n } } , k \in { \mathcal{k}}_{n}\end{aligned}\ ] ] intuitively speaking , the right hand side of ( [ eq : gec_v2 ] ) corresponds to the amount of wireless resources that is allocated to the queue .thus , similar to the single - fifo queue , we can consider that term corresponds to the amount of wireless resources that should be allocated to the flow .our key point while developing an inner bound on the stability region is to provide rate fairness across competing flows in each fifo queue . since each flow requires amount of wireless resources ; it is intuitive to have the following equality , to fairly allocate wireless resources across flows .more generally , we define a function , where , and we develop a stability region for instead of .the role of the exponent is to provide flexibility to the targeted fairness .for example , if we want to allocate more resources to flows with better channels , then should be larger .now , by the definition of , we have the equivalent form }}{\sum_{k \in { \mathcal{k}}_{n } } ( \bar{p}_{n , k})^{\beta-1 } } \prod_{m \in { \mathcal{n}}- \{n\ } } \omega_{m}(s_{m } ) \nonumber \\ & \tau_{n}(s_1 , \ldots , s_n ) , \forall n \in { \mathcal{n}}\end{aligned}\ ] ] of ( [ eq : lamdba_nk ] ) , where .as seen , ( [ eq : an ] ) is a convex function of .thus , we can define the region ( [ eq : an ] ) , ( [ eq : sum_tau ] ) , , which is clearly an inner bound on the actual stability region . despite the fact that is only inner bound on , for some operating points , _i.e. , _ at the intersection of , lines , the two stability regions ( and ) coincide .thus , for some utility functions , optimal operating points in both and coincide . in the next section ,we develop resource allocation schemes ; and that achieve utility optimal operating points in .in this section , we develop resource allocation schemes ; _ deterministic fifo - control _ ( ) , and a _ queue - based fifo control _ ( ) . in general , our goal is to solve the optimization problem and to find the corresponding optimal rates , where is a concave utility function assigned to flow with rate . although the objective function in ( [ eq : main_opt ] ) is concave , the optimization domain ( _ i.e. , _ the stability region ) may not be convex .thus , we convert this problem to a convex optimization problem based on the structure of the inner bound we have developed in section [ sec : stability_single_innerbound ] . 
in particular , setting , the problem in ( [ eq : main_opt ] ) reduces to , .this is our deterministic fifo - control scheme ; and expressed explicitly as ; }}{\sum_{k \in { \mathcal{k}}_{n } } ( \bar{p}_{n , k})^{\beta-1 } } \nonumber \\ & \prod_{m \in { \mathcal{n}}- \{n\ } } \omega_{m}(s_m ) \tau_{n}(s_1 , \ldots , s_n ) , \foralln \in { \mathcal{n}}\nonumber \\ & \sum_{n \in { \mathcal{n } } } \tau_{n } ( s_1 , \ldots , s_n ) \leq 1 , \forall ( s_1 , \ldots , s_n ) \in { \mathcal{s}}\nonumber \\ & a_n \geq 0 , \forall n \in { \mathcal{n } } , ( s_1 , \ldots , s_n ) \in { \mathcal{s}}\nonumber \\ & \tau_{n } ( s_1 , \ldots , s_n ) \geq 0 , \forall n \in { \mathcal{n } } , ( s_1 , \ldots , s_n ) \in { \mathcal{s}}\end{aligned}\ ] ] note that optimizes and . after the optimal values are determined , packets are inserted into the fifo queue depending on and served from the fifo queue depending on .although gives us optimal operating points in the stability region ; , it is a centralized solution , and its adaptation to varying wireless channel conditions is limited .thus , we also develop a more practical and queue - based fifo - control scheme , next .* _ flow control : _ at every slot , the flow controller attached to the fifo queue determines according to ; - q_{n}(t)a_{n}(t ) \nonumber \\ \mbox{s.t . } \mbox { } & a_n(t ) \leq r_{n}^{max } , a_n(t ) \geq 0 \end{aligned}\ ] ] where is a large positive number , and is a positive value larger than the maximum outgoing rate from fifo queue ( which is as we assume that the maximum outgoing rate from a queue is 1 packet per slot ) .after is determined according to ( [ eq : flow_control ] ) , is set as .then , packets from the flow are inserted in . * _ scheduling : _ at slot , the scheduling algorithm determines the fifo queue from which a packet is transmitted according to ; }}{\sum_{k \in { \mathcal{k}}_{n } } ( \bar{p}_{n , k})^{\beta } } \tau_{n}(s_1(t ) , \ldots , s_n(t ) ) \nonumber \\ \mbox{s.t . }\mbox { } & \sum_{n \in { \mathcal{n } } } \tau_{n } ( s_1(t ) , \ldots , s_n(t ) ) \leq 1 , \nonumber \\ & \tau_{n } ( s_1(t ) , \ldots , s_n(t ) ) \geq 0\end{aligned}\ ] ] after is determined , the outgoing traffic rate from queue is set to } ] , , .the simulations are repeated for 1000 different seeds , and the average values are reported .[ fig : sim_2](a ) shows average flow rate versus number of flows for our algorithms as well as max - weight . as seen , and are as good as the optimal solution , and they improve over max - weight significantly . fig . [fig : sim_2](b ) shows the same simulation results , but reports the improvement of over max - weight .this figure shows that the improvement of our algorithms increases with increasing number of flows .indeed , the improvement is up to 100% when , which is significant .the improvement is higher for large number of flows , because our algorithm allocates resources to the flows based on the quality of their channels and reduces the flow rate for the flows with bad channel conditions .however , max - weight does not have such a mechanism , and when there are more flows in the system , the probability of having a flow with bad channel condition increases , which reduces the overall throughput . 
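to make one slot of the q-fc recursion above concrete, here is a hedged sketch that simplifies the per-flow utilities to a single logarithmic utility per queue: the flow controller maximises M·log(a) − Q·a over [0, R_max] (closed form a = min(M/Q, R_max) for this utility), and the scheduler serves, among queues whose head-of-line channel is on, the one with the largest backlog scaled by a channel-dependent factor (taken here, as an assumption consistent with the scheduling step above, to be one over the sum of the per-flow on-probabilities raised to β). names and parameter values are illustrative.

```python
def flow_control(Q, M=100.0, R_max=1.0):
    """Admit a = argmax_{0 <= a <= R_max} M*log(a) - Q*a  (log-utility special case)."""
    return R_max if Q <= 0 else min(M / Q, R_max)

def schedule(Q, hol_on, p_bar, beta=1.0):
    """Serve the ON queue n with the largest Q[n] / sum_k p_bar[n][k]**beta."""
    weights = {n: Q[n] * hol_on[n] / sum(p ** beta for p in p_bar[n]) for n in Q}
    n_star = max(weights, key=weights.get)
    return n_star if weights[n_star] > 0 else None

# one illustrative slot with two FIFO queues, two flows each
Q = {1: 12.0, 2: 7.0}
hol_on = {1: 1, 2: 1}
p_bar = {1: (0.9, 0.4), 2: (0.8, 0.7)}
print(flow_control(Q[1]), schedule(Q, hol_on, p_bar))
```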
in this section ,we consider two fifo queues and .there are four flows in the system and each queue carries two flows , _ carries flows with rates , and carries flows with rates , .( [ fig : sim_3])(a ) shows the total flow rate versus for the scenario of two - fifo queues with four flows when , , , , and utility is employed , _i.e. , _ .( we do not present the results of the optimal solution as the stability region is not convex in this scenario . ) as seen , and have the same performance and improve over max - weight .the improvement increases with increasing as and penalize flows with bad channel conditions more when increases , which increases the total throughput .( [ fig : sim_3])(b ) shows the total rate versus for two - fifo queues with four flows when and .as seen , and improve significantly over max - weight .furthermore , they achieve almost maximum achievable rate all the time .the reason is that and penalizes the queues with with bad channels .for example , when , the total rate is , because they allocate all the resources to and as there is no point to allocate those resources to and since their channels are always . on the other hand , max - weightdoes not arrange the flow and queue service rates based on the channel conditions , so the total rate reduces to when , _i.e. , _ it is not possible to transmit any packets when max - weight is employed in this scenario .[ fig : sim_4 ] further demonstrates how our algorithms treat flows with bad channel conditions .in particular , fig . [fig : sim_4 ] presents per - flow rate versus for the scenario of two - fifo queues with four flows when and for ( a ) and and ( b ) max - weight .as seen , when increases , decreases in fig .[ fig : sim_4](a ) since its channel is getting worse . yet, this does not affect the other flows .in fact , even increases as more resources are allocated to it when increases . on the other hand , both and decrease with increasing in max - weight ( fig .[ fig : sim_4](b ) ) .this is not fair , because decreases with increasing although its channel is always as . in the same scenario ( fig .[ fig : sim_4](b ) ) , the rates of the queue ( and ) increase with increasing as they use available resource opportunistically .this makes the total rate the same for , , and max - weight . yet , as we discussed , max - weight is not fair to flow in this scenario .in this work , our goal is to understand fifo queues in wireless networks and develop efficient flow control and scheduling policies for such a setup . in the seminal paper ,the authors analyze fifo queues in an input queued switch .they show that the use of fifo queues in that context limits the throughput to approximately 58% of the maximum achievable throughput . 
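a small numerical illustration of the role of β discussed above (an assumed, simplified reading of the inner-bound construction, in which a flow's admitted share scales with its channel on-probability raised to β): larger β shifts the admitted rate towards flows with good channels and throttles flows with degrading channels, which is the qualitative behaviour seen in fig. [fig:sim_4](a). the numbers are purely illustrative.

```python
p_on = {"flow_1": 0.9, "flow_2": 0.5, "flow_3": 0.1}   # illustrative ON probabilities

for beta in (0.0, 1.0, 2.0):
    weights = {k: p ** beta for k, p in p_on.items()}
    total = sum(weights.values())
    shares = {k: round(w / total, 2) for k, w in weights.items()}
    print(f"beta = {beta}: {shares}")
# beta = 0 splits the admitted rate evenly; increasing beta concentrates it on
# flows with the best channels instead of letting one bad channel drag all rates down
```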
however , in the context of wireless networks , similar results are in general not known .backpressure routing and scheduling framework has emer - ged from the pioneering work , which has generated a lot of research interest ; especially for wireless ad - hoc networks .furthermore , it has been shown that backpressure can be combined with flow control to provide utility - optimal operation guarantee .such previous work mainly considered per - flow queues .however , fifo queueing structure , which is the focus of this paper , is not compatible with the per - flow queueing requirements of these routing and scheduling schemes .the strengths of backpressure - based network control have recently received increasing interest in terms of practical implementation .multi - path tcp scheme is implemented over wireless mesh networks in for routing and scheduling packets using a backpressure based heuristic . at the link layer , propose , analyze , and evaluate link layer backpressure - based implementations with queue prioritization and congestion window size adjustment .backpressure is implemented over sensor networks and wireless multi - hop networks . in these schemes ,either last - in , first - out queueing is employed or link layer fifo queues are strictly controlled to reduce the number of packets in the fifo queues , hence hol blocking . in backpressure, each node constructs per - flow queues .there is some work in the literature to stretch this necessity .for example , , propose using real per - link and virtual per - flow queues .such a method reduces the number of queues required in each node , and reduces the delay , but it still needs to construct per - link queues .similarly , constructs per - link queues in the link layer , and schedule packets according to fifo rule from these queues .such a setup is different than ours as per - link queues do not introduce hol blocking .the main differences in our work are : ( i ) we consider fifo queues shared by multiple flows where hol blocking occurs as each flow is transmitted over a possibly different wireless link , ( ii ) we characterize the stability region of a general scenario where an arbitrary number of fifo queues , which are served by a wireless medium , are shared by an arbitrary number of flows , and ( iii ) we develop efficient resource allocation schemes to exploit achievable rate in such a setup .we investigated the performance of fifo queues over wireless networks and characterized the stability region of this system for arbitrary number of fifo queues and flows .we developed inner bound on the stability region , and developed resource allocation schemes ; and , which achieve optimal operating point in the convex inner bound .simulation results show that our algorithms significantly improve throughput in a wireless network with fifo queues as compared to the well - known queue - based flow control and max - weight scheduling schemes .cisco visual networking index : global mobile data traffic forecast update , 2010 - 2015 .ericsson mobility report , november 2013 .l. tassiulas and a. ephremides , `` stability properties of constrained queueing systems and scheduling policies for maximum throughput in multihop radio networks , '' _ ieee trans . autom .19361948 , dec . 1992 .l. tassiulas and a. ephremides , `` dynamic server allocation to parallel queues with randomly varying connectivity , '' _ ieee trans .inf . theory _ ,2 , pp . 466478 , mar .m. j. neely , e. modiano , and c. 
li , `` fairness and optimal stochastic control for heterogeneous networks , '' _ ieee / acm trans .2 , pp . 396409 , apr . 2008 . http://madwifi-project.org/wiki/chipsets .l. keller , a. le , b. cici , h. seferoglu , c. fragouli , a. markopoulou , `` microcast : cooperative video streaming on smartphones , '' _ acm mobisys _ , june 2012 .m. j. karol , m. g. hluchyj , and s. p. morgan , `` input versus output queueing on a space - division packet switch , '' _ ieee trans .13471356 , dec . 1987 .m. j. neely , _stochastic network optimization with application to communication and queueing systems _ ,morgan & claypool , 2010 .l. tassiulas , `` scheduling and performance limits of networks with constantly changing topology , '' _ ieee trans .inf . theory _ ,3 , pp . 10671073 , may 1997 .n. kahale and p. e. wright , `` dynamic global packet routing in wireless networks , '' _ ieee infocom _ , apr .m. andrews , k. kumaran , k. ramanan , a. stolyar , p. whiting , and r. vijaykumar , `` providing quality of service over a shared wireless link , '' _ ieee commun . mag .2 , pp . 150154 , feb . 2001 .m. j. neely , e. modiano , and c. e. rohrs , `` dynamic power allocation and routing for time varying wireless networks , '' _ ieee j. select .areas commun .1 , pp . 89103 , jan .a. l. stolyar , `` greedy primal dual algorithm for dynamic resource allocation in complex networks , '' _ queuing systems _ , vol .3 , pp . 203220 , 2006 .j. liu , a. l. stolyar , m. chiang , and h. v. poor , `` queue backpressure random access in multihop wireless networks : optimality and stability , '' _ ieee trans .inf . theory _ ,9 , pp . 40874098 , sept . 2009 .b. radunovic , c. gkantsidis , d. gunawardena , and p. key , `` horizon : balancing tcp over multiple paths in wireless mesh network , '' _ acm mobicom _, sept . 2008 .a. warrier , s. janakiraman , s. ha , i. rhee , `` diffq : practical differential backlog congestion control for wireless networks , '' _ ieee infocom _ , apr .u. akyol , m. andrews , p. gupta , j. hobby , i. saniee , and a. stolyar , `` joint scheduling and congestion control in mobile ad - hoc networks , '' _ ieee infocom _ , apr .a. sridharan , s. moeller , b. krishnamachari , `` making distributed rate control using lyapunov drifts a reality in wireless sensor networks , '' _ ieee wiopt _ , apr .s. moeller , a. sridharan , b. krishnamachari , and o. gnawali , `` routing without routes : the backpressure collection protocol , '' _ acm ipsn _r. laufer , t. salonidis , h. lundgren , and p. l. guyadec , `` xpress : a cross - layer backpressure architecture for wireless multi - hop networks , '' _ acm mobicom _ , sept . 2011 .e. athanasopoulou , l. x. bui , t. ji , r. srikant , and a. stolyar , `` backpressure - based packet - by - packet adaptive routing in communication networks , '' _ ieee / acm trans .1 , pp . 244257 , feb . 2013 .l. x. bui , r. srikant , and a. stolyar , `` a novel architecture for reduction of delay and queueing structure complexity in the back - pressure algorithm , '' _ ieee / acm trans .19 , no . 6 , pp .15971609 , dec . 2011 .h. seferoglu , e. 
modiano , `` diff - max : separation of routing and scheduling in backpressure - based wireless networks , '' _ ieee infocom _ , apr .in this section , we provide a proof of theorem [ theorem2 ] for arbitrary number of fifo queues and flows .let us first consider , which should satisfy the following inequality .1_{[s_n ] } \nonumber \\ & \tau_{n}(s_1,\ldots , s_n ) , \forall n \in { \mathcal{n } } , k \in { \mathcal{k}}_{n}.\end{aligned}\ ] ] where ] as = \sum_{l_1 \in { \mathcal{k}}_{1 } } \ldots \sum_{l_{n-1 } \in { \mathcal{k}}_{n-1 } } \sum_{l_{n+1 } \in { \mathcal{k}}_{n+1 } } \ldots \nonumber \\ & \sum_{l_n \in { \mathcal{k}}_{n } } p[s_{1 } , \ldots , s_{n } , h_{n}=k , h_1=l_1 , \ldots , h_{n-1}=l_{n-1 } , \nonumber \\ & h_{n+1}=l_{n+1 } , \ldots , h_{n}=l_{n } ] , \end{aligned}\ ] ] = \sum_{l_1 \in { \mathcal{k}}_{1 } } \ldots \sum_{l_{n-1 } \in { \mathcal{k}}_{n-1 } } \sum_{l_{n+1 } \in { \mathcal{k}}_{n+1 } } \ldots \nonumber \\ & \sum_{l_n \in { \mathcal{k}}_{n } } \underbrace{p[s_1 | h_1=l_1]}_{\triangleq \xi_{1,l_1}(s_1 ) } \ldots \underbrace{p[s_{n-1 } | h_{n-1}=l_{n-1}]}_{\triangleq \xi_{n-1,l_{n-1}}(s_{n-1 } ) } \nonumber \\ & \underbrace{p[s_{n } | h_{n}=k]}_{\triangleq \xi_{n , k}(s_{n } ) } \underbrace{p[s_{n+1 } | h_{n+1 } = l_{n+1}]}_{\triangleq \xi_{n+1,l_{n+1}}(s_{n+1 } ) } \ldots \underbrace{p[s_{n } | h_{n } = l_{n}]}_{\triangleq \xi_{n , l_{n}}(s_{n } ) } \nonumber \\& p[h_{n}=k , h_{1}=l_{1 } , \ldots , h_{n-1}=l_{n-1 } , h_{n+1}=l_{n+1 } , \ldots , \nonumber \\ & h_{n}=l_{n}]\end{aligned}\ ] ] thus , we have = \sum_{l_1 \in { \mathcal{k}}_{1 } } \ldots \sum_{l_{n-1 } \in { \mathcal{k}}_{n-1 } } \sum_{l_{n+1 } \in { \mathcal{k}}_{n+1 } } \ldots \nonumber \\ & \sum_{l_n \in { \mathcal{k}}_{n } } { \xi_{1,l_1}(s_1 ) } \ldots { \xi_{n-1,l_{n-1}}(s_{n-1 } ) } { \xi_{n , k}(s_{n } ) } \nonumber \\ & { \xi_{n+1,l_{n+1}}(s_{n+1 } ) } \ldots { \xi_{n , l_{n}}(s_{n } ) } p[h_{n}=k , h_{1}=l_{1 } , \ldots , \nonumber \\ & h_{n-1}=l_{n-1 } , h_{n+1}=l_{n+1 } , \ldots , h_{n}=l_{n } ] . \end{aligned}\ ] ] now , we should calculate ] ] ] ] , which is equal to ._ i.e. , _ ] ] ] .therefore , in all markov chains we can create for c1 , c2 , , cn , we have the same transition probabilities , so we have ] ] .this proves our claim that ] ] ] .now that we have shown that ] ] ] holds , ( [ eq : states_v1 ] ) is expressed as = \sum_{l_1 \in { \mathcal{k}}_{1 } } \ldots \sum_{l_{n-1 } \in { \mathcal{k}}_{n-1 } } \sum_{l_{n+1 } \in { \mathcal{k}}_{n+1 } } \ldots \nonumber \\ & \sum_{l_n \in { \mathcal{k}}_{n } } { \xi_{1,l_1}(s_1)}p[h_1=l_1 ] \ldots { \xi_{n-1,l_{n-1}}(s_{n-1 } ) } \nonumber \\ & p[h_{n-1}=l_{n-1 } ] { \xi_{n , k}(s_{n } ) } p[h_{n}=k ] { \xi_{n+1,l_{n+1}}(s_{n+1 } ) } \nonumber \\ & p[h_{n+1}=l_{n+1 } ] \ldots { \xi_{n , l_{n}}(s_{n } ) } p[h_{n}=l_{n } ] , \end{aligned}\ ] ] which leads to = \xi_{n , k}(s_n)p[h_n = k ] \nonumber \\ & \prod_{m \in { \mathcal{n}}-\{n\ } } \left ( \sum_{k \in { \mathcal{k}}_{m } } \xi_{m , k}(s_{m } ) p[h_{m}=k ] \right ) . \end{aligned}\ ] ] now , we should calculate ] should be satisfied , we have = \frac{\lambda_{m , k}/\bar{p}_{m , k}}{\sum_{l \in { \mathcal{k}}_{m } } \lambda_{m , l } / \bar{p}_{m , l}}. 
\end{aligned}\ ] ] when ( [ eq : app_prob_hm_k ] ) is substituted in ( [ eq : states_v4 ] ) , we have = \xi_{n , k}(s_n ) \frac{\lambda_{n , k}/\bar{p}_{n ,k}}{\sum_{l \in { \mathcal{k}}_{m } } \lambda_{n , l } / \bar{p}_{n , l } } \nonumber \\ & \prod_{m \in { \mathcal{n}}-\{n\ } } \frac{\sum_{k \in { \mathcal{k}}_{m } } \xi_{m , k}(s_m)\lambda_{m , k}/\bar{p}_{m , k}}{\sum_{k \in { \mathcal{k}}_{m } } \lambda_{m , k}/\bar{p}_{m , k}}. \end{aligned}\ ] ] since , we have = \xi_{n , k}(s_n ) \frac{\lambda_{n , k}/\bar{p}_{n , k}}{\sum_{l \in { \mathcal{k}}_{m } } \lambda_{n , l } / \bar{p}_{n , l } } \nonumber \\ & \prod_{m \in { \mathcal{n}}-\{n\ } } \frac{\sum_{k \in { \mathcal{k}}_{m } } \rho_{m , k}(s_m)\lambda_{m , k}}{\sum_{k \in { \mathcal{k}}_{m } } \lambda_{m , k}/\bar{p}_{m , k}}. \end{aligned}\ ] ] when we substitute ( [ eq : states_v6 ] ) into ( [ eq : appa_1 ] ) , we have ( [ eq : lamdba_nk ] ) .this concludes the proof .let define a lyapunov function as ; , and the lyapunov drift as ; ] .thus , eq . ( [ eq : appb_drift2 ] ) is expressed as ; \end{aligned}\ ] ] note that if the flow arrival rates are inside the capacity region , then the minimizing the right hand side of the drift inequality in eq .( [ eq : appb_drift3 ] ) corresponds to the scheduling part of in eq .( [ eq : scheduling ] ) .now , let us consider again the stability region constraint in eq .( [ eq : appa_1 ] ) , which is 1_{[s_n ] } \tau_{n}(s_1,\ldots , s_n ) , \foralln \in { \mathcal{n } } , k \in { \mathcal{k}}_{n} ] .then , eq . ( [ eq : appb_gec1 ] ) is expressed as ; \frac{g_n}{\sum_{k \in { \mathcal{k}}_{n } } ( \bar{p}_{n , k})^{\beta } } \end{aligned}\ ] ] there exists a small positive value satisfying \frac{g_n}{\sum_{k \in { \mathcal{k}}_{n } } ( \bar{p}_{n , k})^{\beta } } \end{aligned}\ ] ] thus , we can find a randomized policy satisfying \leq -\epsilon\end{aligned}\ ] ] now , let us consider eq .( [ eq : appb_drift3 ] ) again , which is expressed as ; \end{aligned}\ ] ] we minimize the right hand side of eq .( [ eq : appb_drift3 ] ) , so the following inequality satisfies ; \leq e \bigl [ { { \mathop{a}\limits^{\vbox to -.5\ex@{\kern-\tw@\ex@ \hbox{\scriptsize * } \vss}}}}_{n}(t ) - \frac{{{\mathop{g}\limits^{\vbox to -.5\ex@{\kern-\tw@\ex@ \hbox{\scriptsize * } \vss}}}}_{n}(t)}{\sum_{k \in { \mathcal{k}}_{n } } ( \bar{p}_{n , k})^{\beta } } \nonumber \\ & | \boldsymbol q(t)\bigr]\end{aligned}\ ] ] where and are the solutions of a randomized policy . incorporating eq . ( [ eq : appb_randep ] ) in eq . 
( [ eq : appb_drift4 ] ) , we have the time average of eq .( [ eq : appb_drift5 ] ) leads to \end{aligned}\ ] ] this concludes that the time average of the queues are bounded if the arrival rates are inside the capacity region .now , let us focus on the original claim of theorem [ theorem2 ] .let us consider a drift+penalty function as ; \leq \nonumber \\ & b + e \bigl[ \sum_{n \in { \mathcal{n } } } q_{n}(t ) \bigl(a_{n}(t ) - \frac{g_{n}(t)}{\sum_{k \in { \mathcal{k}}_{n } } ( \bar{p}_{n , k})^{\beta } } \bigr ) | \boldsymbol q(t)\bigr ] - \nonumber \\ & \sum_{n \in { \mathcal{n } } } \sum_{k \in { \mathcal{k}}_{n } } m e[u_{n , k } ( \lambda_{n , k}(t ) ) | \boldsymbol q(t ) ] \end{aligned}\ ] ] since we set , we have \nonumber \\ &\leq b + \sum_{n \in { \mathcal{n } } } e \bigl [ q_{n}(t ) \bigl(a_{n}(t ) - \frac{g_{n}(t)}{\sum_{k \in { \mathcal{k}}_{n } } ( \bar{p}_{n , k})^{\beta } } \bigr ) | \boldsymbol q(t)\bigr ] - \nonumber \\ & \sum_{n \in { \mathcal{n } } } \sum_{k \in { \mathcal{k}}_{n } } m e[u_{n , k } ( a_{n}(t ) ( \bar{p}_{n , k})^{\beta } ) | \boldsymbol q(t ) ] \end{aligned}\ ] ] note that minimizing the right hand side of eq .( [ eq : appb_dpp1 ] ) corresponds to the flow control and scheduling algorithms of in eq .( [ eq : flow_control ] ) and eq .( [ eq : scheduling ] ) , respectively . since there exists a randomized policy satisfying eq .( [ eq : appb_randep ] ) , eq .( [ eq : appb_dpp1 ] ) is expressed as \nonumber \\ & \leq b - \epsilon \sum_{n \in { \mathcal{n } } } q_{n}(t ) - \sum_{n \in { \mathcal{n } } } \sum_{k \in { \mathcal{k}}_{n } } m u_{n , k}(a_n(\bar{p}_{n , k})^{\beta } + \delta ) \end{aligned}\ ] ] where is the maximum time average of the sum utility function that can be achieved by any control policy that stabilizes the system .then , the time average of eq .( [ eq : appb_dpp2 ] ) becomes \biggr \ }\leq \nonumber \\ & \limsup_{t \rightarrow \infty } \frac{1}{t } \sum_{\tau = 0}^{t-1 } \biggl \ { b - \epsilon \sum_{n \in { \mathcal{n } } } q_{n}(\tau ) - \nonumber \\ &\sum_{n \in { \mathcal{n } } } \sum_{k \in { \mathcal{k}}_{n } } m u_{n , k } ( a_n(\bar{p}_{n , k})^{\beta } + \delta ) \biggr \}\end{aligned}\ ] ] now , let us first consider the stability of the queues . if both sides of eq . ([ eq : appb_dpp3 ] ) is divided by and the terms are arranged , we have \bigr \ } - \sum_{k \in { \mathcal{n } } } \sum_{k \in { \mathcal{k}}_{n } } \frac{m}{\epsilon } u_{n , k } ( a_n(\bar{p}_{n , k})^{\beta } + \delta)\end{aligned}\ ] ] since the right hand side is a positive finite value , this concludes that the time averages of the total queue sizes are bounded .now , let us consider the optimality . 
if both sides of eq .( [ eq : appb_dpp3 ] ) are divided by , we have \leq \nonumber \\ & \limsup_{t \rightarrow \infty } \frac{1}{t } \sum_{\tau = 0}^{t-1 } \bigl \ { \frac{b}{m } - \frac{\epsilon}{m } \sum_{n \in { \mathcal{n } } } q_{n}(\tau ) - \nonumber \\& \sum_{n \in { \mathcal{n } } } \sum_{k \in { \mathcal{k}}_{n } } u_{n , k } ( a_n(\bar{p}_{n , k})^{\beta } + \delta ) \bigr \}\end{aligned}\ ] ] by arranging the terms , we have \geq \nonumber \\ & \limsup_{t \rightarrow \infty } \frac{1}{t } \sum_{\tau = 0}^{t-1 } \bigl \ { \sum_{n \in { \mathcal{n } } } \sum_{k \in { \mathcal{k}}_{n } } u_{n , k } ( a_n(\bar{p}_{n , k})^{\beta } + \delta ) - \frac{b}{m } \nonumber \\ & + \frac{\epsilon}{m } \sum_{n \in { \mathcal{n } } } q_{n}(\tau ) \bigr \}\end{aligned}\ ] ] since is positive for any , we have \geq \nonumber \\ & \limsup_{t \rightarrow \infty } \frac{1}{t } \sum_{\tau = 0}^{t-1 } \bigl \ { \sum_{n \in { \mathcal{n } } } \sum_{k \in { \mathcal{k}}_{n } } u_{n , k } ( a_n(\bar{p}_{n , k})^{\beta } + \delta ) - \frac{b}{m } \bigr \}\end{aligned}\ ] ] which leads to \geq \nonumber \\ & \sum_{n \in { \mathcal{n } } } \sum_{k \in { \mathcal{k}}_{n } } u_{n , k } ( a_n(\bar{p}_{n , k})^{\beta } + \delta ) - \frac{b}{m } \end{aligned}\ ] ] this proves that the admitted flow rates converge to the utility optimal operating point with increasing .this concludes the proof . | we investigate the performance of first - in , first - out ( fifo ) queues over wireless networks . we characterize the stability region of a general scenario where an arbitrary number of fifo queues , which are served by a wireless medium , are shared by an arbitrary number of flows . in general , the stability region of this system is non - convex . thus , we develop a convex inner - bound on the stability region , which is provably tight in certain cases . the convexity of the inner bound allows us to develop a resource allocation scheme ; . based on the structure of , we develop a stochastic flow control and scheduling algorithm ; . we show that achieves optimal operating point in the convex inner bound . simulation results show that our algorithms significantly improve the throughput of wireless networks with fifo queues , as compared to the well - known queue - based flow control and max - weight scheduling . |
in his seminal work on the reaction-rate theory in the diffusion-controlled limit, smoluchowski established a quantitative connection between thermal fluctuations, in the form of molecular diffusion, and a macroscopically observable time evolution of the concentration of reactants and products. some 60 years later berg and purcell showed that thermal diffusion also limits the accuracy of biochemical receptors and hence sets physical bounds to the precision of cellular signalling. namely, cellular signalling typically involves low copy numbers of messenger molecules and is thereby inevitably subjected to appreciable fluctuations in the count of molecular binding events at biochemical receptors. in a similar way counting noise limits the precision and sensitivity of modern microscopic diagnostic devices. state-of-the-art single particle tracking techniques indeed highlight the inherent stochasticity of such molecular signalling events. however, despite the significant sample-to-sample fluctuations, cellular signalling operates at remarkable precision. inside living cells some signalling molecules, typically entrapped in vesicles, do not move by thermal diffusion alone but may also be actively transported along cellular filaments by molecular motors, causing intermittent ballistic excursions. free molecules, such as messenger rna, may as well attach to motors, or proteins may move in a directed fashion due to cytoplasmic drag. enhanced spreading may finally be facilitated by cytoplasmic streaming. a practical way to incorporate active motion in the stochastic dynamics of signalling molecules is the model of random intermittent search, which was recently used to analyse reaction kinetics in active media and the speed and precision of receptor signalling in 3-dimensional media. [figure [schm] caption: when the particle reaches the receptor (orange sphere) it binds/dissociates with the binding and unbinding rates; due to the specific geometry the system is effectively 1-dimensional.] in a mean field picture of receptor signalling at equilibrium, developed by bialek and setayeshgar, signalling molecules diffuse in space and reversibly bind to the receptor in a markovian fashion (fig. [schm]). the central object of the theory is the so-called receptor-noise correlation time. namely, in a setting where the receptor measures the concentration over a period longer than any correlation time in the system, the noise in the receptor occupancy statistic will be poissonian, and the concentration estimate will improve with the number of independent measurements.
the correlation time is set by the thermal noise in the binding to the receptor and the thermal diffusion of the signalling molecules but can be altered by certain details of the transport , such as intermittent sliding along dna in the so - called facilitated diffusion model of gene regulation and intermittent active excursion by hitchhiking molecular motors .in addition , depends on the dimensionality of the cell or domain in which it occurs .moreover , when molecules explore their surrounding space in a compact manner the motion is recurrent in the sense of returning to already visited sites such as the one observed in 1-dimensional diffusion , the recurrences prolong and thus reduce within a given fixed .conversely , the interaction with a confining domain disrupts the positional correlations at long times and thereby truncates , causing an improvement of the sensing precision especially in low dimensions .it was shown in the case of chemical reactions coupled to active transport that the effect of active excursions is most pronounced in low dimensions since they act by disrupting the recurrence of 1-dimensional brownian motion . herewe demonstrate that the effect of intermittent active motion in 1-dimensional diffusive systems is even stronger when it comes to the sensing precision .we compute analytically the accuracy limit for receptor mediated concentration measurements in dimension 1 and argue that active excursions allow for enhanced precision of signalling in neurons .we consider a signalling molecule ( mrna or protein ) diffusing on the real line and randomly switching between a passive diffusion phase with diffusivity and an active ballistic phase with velocity , see fig .[ schm ] and .the duration of active / passive phases is exponentially distributed with mean .the concentrations of freely diffusing and motor - bound signalling molecules are and and denotes motor - bound signalling molecules moving to the left / right , respectively .in addition , while passively diffusing the signalling molecule can reversibly bind to a receptor at in a markov fashion . in a mean field description the fractional occupancy with on / off rates evolves according to the coupled equations [ eqs ] -k_{\mathrm{off}}n(t ) , \label{governing}\\ \frac{\partial c_{p}(x , t)}{\partial t}&=&d\partial^2_xc_p+\frac{c_a^+(x , t ) + c_a^-(x ,t)}{\tau_a}-\frac{c_p(x , t)}{\tau_p}-\delta(x - x_0)\frac{dn(t)}{dt } , \label{governing2}\\ \frac{\partial c_a^{\pm}(x , t)}{\partial t}&=&\mp v \partial_x c_a^{\pm}(x , t ) -\frac{c_a^{\pm}(x , t)}{\tau_a}+\frac{c_p(x , t)}{2\tau_p } , \label{governing3}\end{aligned}\ ] ] where detailed balance is fulfilled for the binding involving the binding free energy .( [ governing])-([governing3 ] ) describe the motion of a molecule randomly switching between phases of passive diffusion and ballistic motion with rates and .once the molecule locates the receptor at while being in the passive phase , it can bind to it .the total binding rate is proportional to the intrinsic rate , the probability to find the molecule at in the passive phase , and the probability that the receptor is unoccupied .once being bound to the receptor the molecule unbinds with a first order rate proportional to the intrinsic unbinding rate and the probability to find the receptor occupied .note that since has units of 1/length and is dimensionless , the rates have different units , i.e. has the units of length / time and has units of 1/time . 
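since eqs. ( [ governing ] )-( [ governing3 ] ) have a simple particle-level reading ( passive diffusion with a fixed diffusivity, ballistic excursions at a fixed speed, exponentially distributed phase durations ), a direct stochastic simulation is an easy way to build intuition for the intermittent dynamics before the linearisation below. the sketch is not code from the paper: the parameter values are placeholders, the binding step is omitted, and only the trajectory statistics ( mean squared displacement ) used later in the equilibration argument are estimated.

```python
import numpy as np

rng = np.random.default_rng(0)

# placeholder parameters (illustrative only, not values from the paper)
D      = 0.1    # passive diffusivity [um^2/s]
v      = 1.0    # motor speed in the active phase [um/s]
tau_p  = 0.5    # mean duration of a passive phase [s]
tau_a  = 0.5    # mean duration of an active phase [s]
dt     = 1e-3   # time step [s]
T_tot  = 50.0   # total simulated time [s]
n_traj = 2000   # number of independent molecules

steps = int(T_tot / dt)
x = np.zeros(n_traj)
passive = np.ones(n_traj, dtype=bool)            # all molecules start passively
direction = rng.choice([-1.0, 1.0], n_traj)      # orientation of the next active run
msd = np.empty(steps)

for i in range(steps):
    # phase switching: exponential durations <-> constant switching rates 1/tau
    switch = rng.random(n_traj) < np.where(passive, dt / tau_p, dt / tau_a)
    entering_active = switch & passive
    direction[entering_active] = rng.choice([-1.0, 1.0], entering_active.sum())
    passive ^= switch

    # displacement: diffusive step in the passive phase, ballistic in the active one
    dx_diff = np.sqrt(2.0 * D * dt) * rng.standard_normal(n_traj)
    x += np.where(passive, dx_diff, direction * v * dt)
    msd[i] = np.mean(x ** 2)

t = dt * np.arange(1, steps + 1)
L = 5.0                                           # compartment half-size [um]
reached = msd >= L ** 2
t_eq_active = t[np.argmax(reached)] if reached.any() else np.inf
print("t_eq (intermittent) ~", t_eq_active,
      "s   vs crude purely passive estimate L^2/(2D) =", L ** 2 / (2 * D), "s")
```

the markovian binding and unbinding at the receptor site could be layered on top of this trajectory sampler to estimate occupancy correlations directly, which is what the mean-field equations above encode.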
to obtain a closed equation for the dynamics of close to equilibrium , we linearise eqs .( [ governing])-([governing3 ] ) around the respective equilibrium values , and to obtain , in terms of small fluctuations , and , and .moreover , the detailed balance condition imposes the constraint on the free energy fluctuations . by fourier transforming in time and in space , =\int_0^{\infty}\mathrm{e}^ { i\omega t}(\cdot)dt ] , and solving the resulting system of ordinary equations we arrive at an exact generalised langevin equation for the fluctuations around the equilibrium receptor occupancy within the linear regime here denotes the correlation time of two - state markov switching , , and the noise in the form of the free energy fluctuations has zero mean and obeys the fluctuation - dissipation theorem .the memory kernel in terms of an inverse fourier transform operator reads ,\ ] ] and the contribution due to the intermittent active excursions is the limit in eq .( [ memory ] ) is to be understood as a finite receptor size taken to zero after the integral is evaluated in order for the integral to converge . the memory term in the langevin equation ( [ langevin ] ) reflects the fact that it takes a finite time before the receptor feels the effect of because the signalling molecule moves throughout space before ( re)binding .according to linear response theory we can write where the generalised susceptibility becomes =\hat { \mathcal{f}}^{-1}_t\left[\frac{\delta\tilde{n}(\omega)}{\delta\tilde{f}(\omega ) } \right],\ ] ] and the power spectrum of is in turn obtained according to the fluctuation - dissipation theorem from the imaginary part of , .\ ] ] since the receptor s sensitivity is limited to frequencies , the uncertainty in measuring the occupation fraction will be moreover , a change in concentration is equivalent to a change in , .using this one can also show that and use this to relate the uncertainty in to the precision at which the receptor can determine .we split the signalling process in an equilibration phase , during which the system equilibrates to a new concentration , and the measurement phase , during which the receptor reads out this equilibrium concentration .moreover , we assume that the equilibration time corresponds to the time during which the signalling molecules move a distance of the order of the size of the cell or a cellular compartment .the equilibration time is then defined implicitly by the mean squared displacement via .we here neglect the binding to the receptor given by eq .( [ governing ] ) and adopt a probabilistic interpretation of eqs .( [ governing2 ] ) and ( [ governing3 ] ) , which we solve by laplace transforming in time and fourier transforming in space .the mean squared displacement for a particle starting at the origin in the passive phase is obtained from the laplace transform {k=0} ] . to estimate the equilibration rate of active transport with respect to diffusion we compare with the purely passive equilibration time .[ rate]a)-c ) shows results for various biologically relevant pclet numbers . of equilibration times for passive diffusion ( subscript ) and intermittent active motion ( subscript ) as a function of the typical lengths of active ( ) and passive ( ) displacements for various pclet numbers .the yellow line corresponds to . whenever active motion leads to faster equilibration . 
] from fig .[ rate ] we find that active transport is more efficient for larger values .more precisely , the required typical displacement in the active phase needed to enhance the equilibration with respect to bare diffusion is smaller for larger . in the biologically relevant settingthe molecular motor speed is widely independent of the particle size and the values for the diffusion coefficients span a scale between corresponding to large cargo such as vesicles , and corresponding to smaller proteins .conversely , the dimension of effectively linear cells such as neurons or their sub - structures ( i.e. dendrites ) falls between and , which means that values are in fact robustly expected .therefore , according to fig . [ rate ] it is quite plausible that intermittent active motion indeed enhances signalling speed in vivo .the physical principle underlying the enhancement is rooted in the fundamental difference in the time scaling of diffusive and active motion , versus .for example , comparing only purely passive and active motion we find that for active motion is more efficient . in the intermittent casethe motion has a transient period of duration , which corresponds to a parameter dependent combination of both regimes .after this transient period the effective diffusive regime is established with diffusivity , which may or may not be larger than the bare . can therefore be smaller or larger than .shuttling of large cargo therefore almost universally profits from active motion , whereas active motion of smaller proteins will only be more efficient over sufficiently large distances .the observed features thus provide a simple explanation why experimentally active transport is observed mostly in the trafficking of larger particles . similarly , active diagnostics can also be faster and hence could enable for a higher diagnostic throughput .we now address the signalling precision and focus first on the situation , where molecules move in space by thermal diffusion alone . in this case and the -integral in eq .( [ memory ] ) is evaluated exactly , after taking the limit yielding .\ ] ] using eq .( [ mem_p ] ) in eqs .( [ fdt ] ) to ( [ uncc ] ) we arrive at the power spectrum of concentration fluctuations experienced by the receptor , where denotes the principal value of the argument .integrating over the frequency range we obtain the final result for the variance of the concentration measured by the receptor , where the first part describes the noise due to the two - state markov switching ( i.e. the binding alone ) and the second term stands for the noise due to diffusion .note that for the recurrent nature of 1-dimensional diffusion and the fact that the receptor is point - like , we _ can not _ approximate the precision at which the receptor can determine with as in the 3-dimensional case ( see e.g. ) .more precisely , in contrast to the lorentzian shape of in the 3-dimensional case , diverges as .the integral over nevertheless converges and leads to eq .( [ unc_p ] ) .moreover , in contrast to the 3-dimensional case where the squared measurement error decreases as , for 1-dimensional diffusion we find the much slower decay .that is , and the receptor measurement is thus much less efficient in 1-dimension .as we are interested in the signalling precision at equilibrium and hence consider values which are much longer than any correlation time in the motion such that , we may take the limit in as well as in eq .( [ memory ] ) . 
this way we recover , after performing the integral over in eq .( [ memory ] ) and taking the limit , an effective white noise asymptotic on the slow time scale , ^{-1}+[v\tau_a]^{-2}\right)^{3/2}}\right],\ ] ] and correspondingly an effectively lorentzian fluctuation spectrum at small frequencies ( see ) . from eq .( [ uncc ] ) we obtain also the low frequency region of the power spectrum concentration fluctuations , for , where we introduced the typical distance the signalling molecule moves in the passive and motor bound phases .as before , the first term in eq .( [ unca ] ) corresponds to the two - state switching noise and the second term to the noise due to spatially extended intermittent dynamics .note that in contrast to the 3-dimensional setting , where the active excursions merely rescale the correlation time , we here find a qualitative change in the properties of the noise , compare eqs .( [ unc_p ] ) and ( [ unca ] ) .using eq .( [ unca ] ) we can now approximate the precision at which the receptor can determine with and obtain our main result here we are interested in the transport - controlled sensing . comparing the noise due to the spatially extended motion for passive and active intermittent motion we find that that active motion allows for more precise absolute concentration measurements as soon as the inequality holds such that in the limit of long active excursions we end up with the condition .note that the right hand side of this inequality is essentially the characteristic time of the asymptotic exponential decay of the first passage time density of a 1-dimensional random walk in a domain of length if we set .in other words , for active signalling to be more precise in 1-dimension the receptor needs to measure long enough for the particle to find the target in the passive phase , which is an intuitive result . in order to be more concretewe compare the scaled variances of measurement errors for active and passive motion . in the transport - controlled regimewe have for intermittent active motion and , where denotes the total concentration of molecules .note that here and throughout the entire paper we implicitly assume that the number of molecules exceeds the number of receptors .the relative precision ratio reads ^ 2\right)^{3/2 } } , \label{prec_rat}\ ] ] where we introduced dimensionless times and as well as , the dimensionless ratio between the squared typical lengths of passive versus active displacements during the measurement time .the results for various values of are presented in fig .[ precision ] .with for active intermittent ( subscript ) versus passive ( subscript ) transport as a function of the relative duration of active ( ) and passive ( ) phases with respect to the measurement time for various values of dimensionless ratio between the squared typical length of passive versus active displacements during the measurement time .whenever active motion leads to more precise signalling . note that and in order to assure equilibrium sensing conditions . ]we find that the minimal value of that is required for improved sensing precision with respect to bare diffusion ( i.e. for , which corresponds to the region to the left of the yellow curve in fig .[ precision ] ) decreases with decreasing .in other words , for large particles with a smaller the active displacements can become arbitrarily short . 
given that the typical measurement times lie between sec and min, the conditions for improved signalling accuracy appear to be robustly satisfied. to understand this we need to recall that, while larger monotonically leads to lower absolute read-out errors ( see eq. ( [ uncc_a ] ) ), it simultaneously decreases and hence renormalises . the improved accuracy in fig. [ precision ] is thus the result of a trade-off between a decrease of the absolute concentration fluctuations and a lower equilibrium probability to be at the receptor site. this result is striking, as it suggests that even the slightest active displacements can disrupt the recurrence and improve the read-out precision as long as their length is larger than the receptor size. physically, this observation is due to the fact that the receptor collects new information only from statistically independent binding events. correlations between consecutive measurements arise due to a finite markov binding time and due to the return and rebinding of a previously bound molecule. moreover, we assume that only freely diffusing molecules can bind to the receptor. therefore, the receptor necessarily experiences the binding of those molecules that are ballistically swept towards the binding site over a distance larger than the receptor size as statistically independent. in turn, molecules which are ballistically flushed away from the receptor after unbinding will also contribute statistically independent binding events, regardless of how they return to the receptor. the non-existence of a lower bound on is thus an artefact of assuming a point-like receptor. note that in an alternative setting, in which we compare the precision to determine the same concentration of passively moving molecules, which corresponds to a higher in the intermittent active case ( i.e. ), the signalling precision would be improved unconditionally. therefore, in contrast to the 3-dimensional case, where active motion only improves sensing precision for certain values of parameters, active transport can robustly and much more efficiently improve sensing accuracy in 1-dimensional systems for sufficiently long measurement times. the degree of recurrence of spatial exploration is essential for random target search processes. for example, in the facilitated diffusion model of gene regulation the topological coupling of 1- and 3-dimensional diffusion allows for a more efficient search ( e.g. ). in a similar manner, intermittent active excursions can significantly speed up random search. in contrast, the topological coupling of 1- and 3-dimensional diffusion does not appreciably improve the signalling precision. in addition, we showed previously that in a 3-dimensional setting active motion only conditionally improves the signalling accuracy, by decreasing the correlation time of the counting noise in a process called active focusing. here we find, strikingly, that active excursions effect qualitative changes in the power spectrum of concentration fluctuations experienced by the receptor in 1-dimensional systems such as neurons.
by adding the active component the power spectrum changes from for thermal diffusion alone to a lorentzian shape with a finite plateau .this lorentzian shape is also observed for passive signalling in 3-dimensions .therefore , active excursions disrupt the recurrent nature of 1-dimensional diffusion .existing studies provide insight into how receptor clustering and cooperativity , dimensionality , spatial confinement , receptor diffusion and active transport affect the precision of receptor signalling .the overall dependence of the counting noise on the manner the signalling molecules explore their surrounding space suggests that a heterogeneous diffusivity profile and spatial disorder would alter the signalling precision as well . both have been observed in experiments .in addition , signalling molecules or transport versicles often exhibit anomalous diffusion , both in the form of passive and active motion. it would therefore be interesting to investigate the impact of these features on the sensing precision in the future .ag acknowledges funding through an alexander von humboldt fellowship and arrs project z1 - 7296 .99 endres r g and wingreen n s 2008 _ proc .natl . acad .usa _ * 105 * , 15749 ; + rappel w - j and levine h 2008 _ phys .lett . _ * 100 * , 228101 ; + hu b , kessler d a , rappel w - j , and levine h 2011 _ phys .lett . _ * 107 * , 148101 ; + govern c and ten wolde p r 2012 _ phys .lett . _ * 109 * , 218103 ; + kaizu c et al .2014 _ biophys .j. _ * 106 * , 976 ; + tkaik g , gregor t , and bialek w 2008 _ plos one _ * 3 * , e2774 .salman h et al ., _ biophys .j. _ * 89 * , 2134 ( 2005 ) ; + huet s , karatekin e , tran v s , cribier s and henry j p 2006 _ biophys .j. _ * 91 * , 3542 ; + vermehren - schmaedick a et al .2014 _ plos one _ * 9 * , e95113 ; + arcizet d , meier b , sackmann e , rdler j o , and heinrich d 2008 _ phys .lett . _ * 101 * , 248103 .hippel ph and berg og 1989 j. biol . chem . *264 * , 675 ; + sheinman o , bnichou o , kafri y , and voituriez r 2012 _ rep .phys . _ * 75 * , 026601 ; + pulkkineno and metzler r 2013 _ phys .* 110 * , 198101 ; + bauer m and metzler r 2012 _ biophys .j. _ * 102 * , 2321 ; + bauer m and metzler r 2013 _ plos one _ * 8 * , e53956 ; + koslover e f , daz de la rosa m a d , and spakowitz a j 2011 _ biophys .j. _ * 101 * , 856 ; + kolomeisky a 2011 _ phys . chem .* 13 * , 2088 ; + wunderlich z and mirny l a 2008 _ nucleic acids res . _ * 36 * , 3570 .godec a and metzler r 2015 _ phys .e _ * 91 * , 052134 ; + viccario g , antoine c , and talbot j 2015 _ phys . rev .* 115 * , 240601 ; + cherstvy a g , chechkin a v , and metzler r 2014 _ j. phys .theor . _ * 47 * , 485002 .sabhapandit s , majumdar s n , and comtet a 2006 _ phys .e _ * 73 * 051102 ; + majumdar s n , and comtet a 2002 _ phys .lett . _ * 89 * 060601 ; + burov s and barkai e 2007 _ phys .lett . _ * 98 * 250601 ; + dean d s , gupta s , oshanin g , rosso a , and schehr g 2014 _ j. phys . a : math .* 47 * , 372001 ; + krsemann h , godec a , and metzler r. 2014 _ phys .e _ * 89 * , 040101(r ) ; + krsemann h , godec a , and metzler r. 2015 _ j. phys . a : math .theor . _ * 48 * , 285001 ; + godec a , chechkin a v , barkai e , kantz h and metzler r 2014 _ j. phys .a : math . theor . _ * 47 * , 492002 english b p , hauryliuk v , sanamrad a , tankov s , dekker n h , and elf j 2011 _ proc . 
natl .* 108 * , e365 ; + cutler p j , malik m d , liu s , byars j s , lidke d s , and lidke k a 2013 _ plos one _ * 8 * , e64320 ( 2013 ) .di rienzo c , piazza v , gratton e , beltram f , and cardarelli f 2014 _ nature commun . _ * 5 * , 5891 . ;+ jeon j - h , tejedor v , burov s , barkai e , selhuber - unkel c , berg - srensen k , oddershede l and metzler r 2011 _ phys .lett . _ * 106 * , 048103 ; + golding i and cox e c 2006 _ phys ._ 96 , 098102 .caspi a , granek r , and elbaum m 2002 _ phys .e _ * 66 * , 011916 ; + gal n and weihs d 2010 _ phys . rev .e _ * 81 * , 020903(r ) ; + goychuk i , kharchenko v o , and metzler r 2014 _ phys .* 16 * , 16524 ( 2014 ) ; + seisenberger g , ried mu , endre t , bning h , hallek m and bruchle c 2001 _ science _ * 294 * , 1929 . | molecular signalling in living cells occurs at low copy numbers and is thereby inherently limited by the noise imposed by thermal diffusion . the precision at which biochemical receptors can count signalling molecules is intimately related to the noise correlation time . in addition to passive thermal diffusion , messenger rna and vesicle - engulfed signalling molecules can transiently bind to molecular motors and are actively transported across biological cells . active transport is most beneficial when trafficking occurs over large distances , for instance up to the order of 1 metre in neurons . here we explain how intermittent active transport allows for faster equilibration upon a change in concentration triggered by biochemical stimuli . moreover , we show how intermittent active excursions induce qualitative changes in the noise in effectively one - dimensional systems such as dendrites . thereby they allow for significantly improved signalling precision in the sense of a smaller relative deviation in the concentration read - out by the receptor . on the basis of linear response theory we derive the exact mean field precision limit for counting actively transported molecules . we explain how intermittent active excursions disrupt the recurrence in the molecular motion , thereby facilitating improved signalling accuracy . our results provide a deeper understanding of how recurrence affects molecular signalling precision in biological cells and novel diagnostic devices . |
modern experiments in atomic and molecular physics often use widely tunable lasers and require both large tunability and precise frequency stabilization . in many cases , spectroscopy has to be performed with stable lasers in a large frequency range to find and characterize initially unknown atomic or molecular states , for example for the production of molecules from ultracold atoms or magneto - optical trapping of molecules .furthermore , it can be beneficial to address different known transitions spaced several ghz or even tens of ghz sequentially with a single laser .for example , we use a mid - infrared optical parametric oscillator ( opo ) to excite a number of rovibrational transitions spaced more than 10ghz in cold molecules held in an electrostatic trap .frequency switching on a millisecond timescale allows us to perform motional cooling , rotational - state preparation , and state detection with a single laser in the same run of an experiment .self - referenced optical frequency combs ( ofc) which are now available commercially have become a common tool for stabilizing lasers in a wide bandwidth by stabilizing the radiofrequency beat note between the laser and a mode of the ofc .a number of methods have been developed to achieve both absolute frequency stability and large tunability over many comb modes of a continuous - wave ( cw ) laser or opo referenced to an ofc . for tuning , the comb , the cw laser , or both can be scanned .tuning over 10ghz without changing the frequency lock has been demonstrated by adding an external electro - optic modulator .further , phase - stable tuning over almost 30ghz in about a second has been achieved by shifting the carrier - envelope offset frequency of the ofc between subsequent pulses of the ofc .the objective of this work was to build a device tuning our frequency - comb - referenced opo over tens of ghz in a couple of milliseconds , yet stabilizing the opo to sub - mhz precision on single frequencies .these requirements were set because we wanted to drive several rovibrational transitions in polyatomic molecules quasi - simultaneously for optical pumping , demanding frequency switching in a time shorter than the typical decay time of vibrational excitations .additionally , the ofc s spectrum should remain unchanged during frequency tuning of the opo such that several independent lasers can be stabilized to the same ofc . in this paper , we demonstrate fast , precise and widely tunable frequency control of the idler wave of a singly - resonant opo by controlling the frequencies of pump and signal which are referenced to the ofc .we show tuning over more than , with ramps performed in less than .although not actively stabilized during a ramp , the opo frequencies are instantly relocked to the ofc at the end of each ramp .sequences of fast ramps to any idler frequency within the mode - hop free tuning range can be performed hands - free and reliably .when the laser frequency is ramped relative to the frequency of a certain comb mode . if the laser frequency lies about midway between two comb modes , the lowest and next to lowest beat frequencies are close to each other which is indicated by the dashed extensions of the tooth structure . 
] the frequency control mechanism is based on beating the laser with an ofc and using the lowest beat note for frequency tracking during tuning and for precise stabilization .the principle is illustrated in fig .[ fig : beat ] .the lowest `` signed '' beat frequency determines the laser frequency with respect to the nearest comb mode .`` signed '' refers to being positive ( negative ) if the closest comb mode has a smaller ( larger ) optical frequency . is confined by the repetition rate of the mode - locked laser generating the ofc : .consequently , tuning the frequency of the laser results in a regular tooth structure of as shown .we start with the laser being locked to the ofc at a known absolute optical frequency . to perform a frequency ramp ,the lock is switched off and the frequency is swept while continuously measuring and counting the comb modes passed . once the target frequency is reached , the ramp is stopped and the laser is instantly relocked to the ofc by stabilizing at a value of choice .the main difficulty of controlling the beat frequency during tuning over many comb lines lies in correct handling of the frequency ranges in which the laser frequency coincides with a comb mode or lies midway between two modes . inthe former case is zero , in the latter the lowest and next higher beat frequencies are degenerate and can be mixed up ( see fig .[ fig : beat]b ) . demonstrated solutions to this problem include sudden jumps to a beat note of opposite sign via a sudden step of a control voltage or the use of an acousto - optic modulator to avoid such regions of the spectrum . in both cases , a lock to the ofcwas maintained during frequency tuning possibly limiting the speed .our frequency tracking approach allows for fast ramping to a target frequency independent of the locking electronics .we can simply ignore frequency measurements in the delicate parts of the spectrum and interpolate the frequency there without losing track of it . to ensure sufficient suppression of beat frequencies of higher order near and block low - frequency noise we chose to filter the beat note with a pass band of 10 to 115 mhz ( points of filters ) .this permits clean measurements of inside the pass band .an example of a measurement for the signal wave of the opo during a linear ramp is shown in fig .[ fig : signalramp ] .the data was directly obtained with the control electronics ( see sec . [sec : electronics ] ) and its quality already suggests that tracking of the optical frequency will be possible . ) .the dashed lines mark the points of the radio - frequency filters . outside the pass bandthe frequency counter measures noise . ] due to the unavoidable filtering , the parts of the beat spectrum outside the pass band of the filters are not directly accessible to frequency measurement and locking . if one wanted to control the frequency of a single laser beam and demanded locking of the laser to an arbitrary optical frequency inside the tuning range , an additional frequency shifter , e.g. , an electro- or acousto - optic modulator , would have to be introduced .it would shift the frequency by , e.g. , 20mhz if the desired lock frequency lies in the clipped areas of the beat spectrum .we note that this shift would have to be applied once per ramp , independent of the number of comb modes crossed for that ramp .we apply the frequency control scheme to the idler wave of a singly - resonant opo ( see sec . [sec : setup ] and fig .[ fig : opo ] ) . 
in our casethe aforementioned filtering does not pose any restrictions .the idler frequency can be changed by independently tuning the pump laser ( ) or the signal frequency which is resonant to the opo cavity and we reference the latter two to the ofc .the optical frequency of each comb mode of a self - referenced ofc is with the carrier - envelope offset frequency .then , we find , where and are the signed beat frequencies of pump and signal with the comb .thus , the idler frequency can be expressed as note that cancels here . for reasons which will be explained later , the final lock point of the pump beam always set to be well separated from the filter cut - off , .the idler frequency can then be ramped to arbitrary values without employing additional frequency shifters by choosing appropriately inside the pass band of the filters .a simple sketch of the optical part of the experimental apparatus is shown in fig . [fig : opo ] . the commercially available cw opo ( lockheed martin aculight , argos 2400-sf-15 , module c ) , which has been described in detail elsewhere , is pumped with up to at and generates an idler wave in the range of to .coarse wavelength tuning of idler and signal is realized by varying the position of the periodically poled nonlinear crystal and the tilt angle of an intracavity etalon with a free spectral range of .although it has been shown before that both elements can be tuned with a computer - controlled algorithm , for us a manual adjustment of those two components suffices .a piezo - electric transducer ( pzt ) varying the cavity length allows mode - hop - free tuning of the signal frequency over more than one free spectral range of the cavity which is about .the pump frequency can be tuned continuously over by strain variation of the seed laser fiber length via another pzt element , which is the main tuning mechanism used to adjust the idler frequency during experiments .the setup provides two complementary modes of frequency measurement .first , fast and precise frequency measurements relative to a known initial frequency are obtained with the ofc .therefore , we beat a few mw of pump and signal light with radiation from a self - referenced ofc synthesizer ( menlo systems , fc1500 ) , which provides a frequency comb with mode spacing at and about 1510 to . repetition rate and offset frequency are stabilized to a stable reference obtained from an h-maser .the resulting beat notes of pump and signal beams are measured with high speed ingaas photodiodes ( thorlabs , det01cfc ) and processed further by the electronics .second , the idler frequency can be determined with an absolute accuracy of with a calibrated wavemeter ( bristol instruments , 621a - ir ) .however , wavemeter measurements can be performed with a maximum rate of and after large frequency ramps the device needs up to about to adjust to the new frequency and display correct results .the wavemeter is used to fix the absolute optical frequency once before fast ramps can be performed .the structure of the custom - built electronics controlling the two beat frequencies and hence the optical frequencies of the opo reflects the fact that the opo system can basically be in two states : performing a frequency ramp or being locked to frequencies of choice .ramps are controlled by a microcontroller that measures and adjusts all relevant parameters .the microcontroller is an atmel at91sam7xc256 with 32bit arm - based architecture which runs custom software programmed in c. 
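before turning to the software, the frequency bookkeeping introduced above can be made concrete with a small numerical sketch. it is not code from the instrument; the repetition rate, offset frequency, mode numbers and beat values are invented placeholders. it only uses the fact that each mode of a self-referenced comb lies at an integer multiple of the repetition rate plus the carrier-envelope offset, which cancels in the pump minus signal difference.

```python
# illustrative only: how the idler frequency follows from comb parameters and the
# two signed beat notes; all numbers are invented placeholders
f_rep = 250e6        # repetition rate [Hz]
f_ceo = 20e6         # carrier-envelope offset frequency [Hz]

def comb_mode(n):
    # optical frequency of the n-th mode of a self-referenced frequency comb
    return n * f_rep + f_ceo

n_p, delta_p = 1_140_000, +35e6   # pump: nearest mode number and signed beat [Hz]
n_s, delta_s = 780_000, -60e6     # signal: nearest mode number and signed beat [Hz]

nu_p = comb_mode(n_p) + delta_p
nu_s = comb_mode(n_s) + delta_s

# idler = pump - signal; the offset f_ceo drops out of the difference
nu_i = (n_p - n_s) * f_rep + delta_p - delta_s
assert abs(nu_i - (nu_p - nu_s)) < 1.0    # equal up to floating-point round-off
print(f"idler frequency ~ {nu_i / 1e12:.6f} THz")
```

this is, in essence, the relation behind the expression for the idler frequency quoted above.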
the software is specialized to our opo and our experimental sequences but it is available from the authors upon request . for precise stabilization to a single frequency between two rampswe use an analog proportional - integral ( pi ) regulation circuit . except for the microcontrollerall components exist twice as we have to control the pump and the signal beam of the opo .a schematic of the electronics is displayed in fig .[ fig : electronics ] . the beat signal recorded by the photo diode is filtered ( pass band ) and amplified ( red box in the figure ) . for frequencyramping and tracking ( green box ) the beat signal is digitized by a frequency counter with a high sampling rate of . based on the frequency measurement the microcontroller applies voltage ramps to the pzt element of the laser ( via a high precision , low - noise dac and a fast , low - noise high voltage amplifier ) andthus ramps the laser frequency .once the laser is at the desired frequency , the pi regulation circuit stabilizes the beat frequency ( blue box in schematic ) .a reference frequency in the range of is generated via direct digital synthesis . comparing the recorded beat signal and the reference, a custom - built phase - frequency detector produces an error signal that is fed to the pi controller .the output voltage of the pi controller is scaled such that the frequency range covered by the pi regulation loop is rather small , on the order of .the total voltage applied to the pzts via the high voltage amplifiers is a sum of two contributions , the output of the dac set by the microcontroller and the output of the pi controller .consequently , for both pump and signal the microcontroller sets the optical frequency approximately by adjusting the dac voltage during a ramp , which can span many ghz . on top of that , precise frequency stabilization is realized with the analog control loop .when the laser is locked to a fixed frequency , the microcontroller measures the output voltage of the pi controller with a built - in adc and adjusts the dac voltage if the pi voltage approaches a border of the regulation range due to long - term drifts .note that the microcontroller also controls the reference frequency for the beat lock , can switch the pi controller on and off and receives measurements from the wavemeter .thus , it is capable of controlling the frequencies of the opo fully automatized during ramps and while being locked to the ofc .in this section , we explain in detail the implementation of fast frequency ramps of the idler starting with the opo being locked to the ofc at a known absolute frequency ( cf .[ sec : basics ] ) . as mentioned before ,the only optical frequency of interest for us is the idler frequency .furthermore , idler tuning is mainly accomplished by tuning the pump laser , whereas the signal frequency is ramped only for fine tuning . for internal bookkeeping, we therefore process the frequencies in a simplified manner as , with the contributions from the pump and from the signal beam .our basic protocol for a frequency ramp is the following .first , the target frequencies ( lock frequencies ) and are calculated with the boundary conditions and . the latter condition is just given by the pass band of the radio - frequency filters .note again that these conditions do not restrict the idler frequency to specific values .second , the pi controllers are switched off , i.e. 
, the lock to the ofc is released .in fact , we only switch off the integral parts for technical reasons .this ensures that the controllers are in well - defined states at the time of relock .third , the reference frequencies for the final beat lock are set .fourth , the optical frequencies of pump and signal are ramped towards the target values , and and are tracked by repeatedly measuring the beat frequencies .the voltage applied to the pzts and the frequency counter measurements are updated every .finally , the opo is relocked to the ofc as soon as the target frequencies are reached by reactivating the pi controllers .special attention has to be paid to the frequency ramping and tracking part for two reasons .first , the tracking algorithm has to discard beat frequency measurements in the clipped areas of the beat spectrum ( see fig . [fig : signalramp ] ) and interpolate the frequency there .second , many piezo - electric crystals show a nonlinear response to an applied voltage ramp . in particular , for the pzt in our pump laser we observe a delayed response of the piezo to an applied voltage and drifts of the laser frequency after the end of a fast voltage ramp . even for a fixed step size ,the delay and further drift depend on the direction of the voltage ramp , the amplitude , the slew rate , and the starting voltage .this precludes the use of any kind of look - up table relating an applied voltage to a particular frequency and is one of the main reasons for implementing the frequency tracking with `` live '' feedback .frequency tracking is identical for both pump and signal . in every step of the ramp ( every ) a predicted laser frequency anda predicted frequency change for the next step are calculated as follows . a preliminary value is obtained from .to compare the predicted value with measurement , the signed beat frequency is computed from modulo . , if ; , if . ] .only if lies in a frequency range where good counter measurements are expected , i.e. , in the pass band of the filters , we calculate the deviation of the measured value from the predicted frequency as finally , the two parameters of interest are corrected as with numerical factors and . to be less sensitive to single measurement errors we continuously average over a few consecutive steps of the calculation by choosing these factors smaller than unity . as a result ,this frequency tracking approach is quite insensitive to single corrupted measurements .moreover , it requires only a few data points per slope of the beat frequency tooth structure to work properly .we perform all ramps starting with a fixed voltage change per step . consequently , the initial values for the expected frequency change are not changing and can be predetermined experimentally for each of the two beams .the ramp speed and the frequency span , however , differ quite significantly for pump and signal .ramps of the signal frequency are always much shorter than , as they are used for fine tuning only .consequently , the duration of a signal ramp does not limit the overall performance of the system and we can ramp with relatively low speed . 
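the prediction-measurement-correction cycle just described can be restated in compact form. the following sketch is not the firmware ( which runs in c on the microcontroller ); the repetition rate, pass band and averaging factors are placeholder values, and the update logic is a plausible reading of the description above rather than the exact implementation.

```python
import numpy as np

F_REP = 250e6                   # comb repetition rate [Hz] (placeholder)
PASS_BAND = (10e6, 115e6)       # usable band of the beat-note filters [Hz]
A_F, A_DF = 0.3, 0.1            # averaging factors (< 1) for the two corrections

def signed_beat(f_laser, f_ceo=20e6):
    """lowest signed beat frequency of the laser with the nearest comb mode."""
    d = (f_laser - f_ceo) % F_REP
    return d if d <= F_REP / 2 else d - F_REP

def track_ramp(f_start, measured_beats, df_guess):
    """follow the optical frequency through a voltage ramp.

    f_start         absolute frequency at the start (known from the previous lock)
    measured_beats  |beat| readings of the counter, one per update step
    df_guess        initial estimate of the frequency change per step
    """
    f_pred, df_pred = float(f_start), float(df_guess)
    trace = []
    for beat_meas in measured_beats:
        f_pred += df_pred                     # prediction for this step
        beat_pred = signed_beat(f_pred)
        # use the counter reading only where it is trustworthy, i.e. inside the
        # pass band; near a comb mode or near f_rep/2 the prediction interpolates
        if PASS_BAND[0] < abs(beat_pred) < PASS_BAND[1]:
            dev = abs(beat_pred) - beat_meas  # deviation of prediction from measurement
            f_pred  -= A_F  * np.sign(beat_pred) * dev
            df_pred -= A_DF * np.sign(beat_pred) * dev
        trace.append(f_pred)
    return np.array(trace)
```

fed with the counter readings of a ramp and a calibration-based initial guess for the per-step frequency change, such a loop keeps counting crossed comb modes even though readings near zero beat frequency and near half the repetition rate are discarded; the prediction simply interpolates across those regions, which is the point emphasized above.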
we chose a constant step size of 1 v / 16 µs, which translates into a ramp speed of about 188 mhz / ms. for these settings we do not observe any significant delays in response or further drifts of the frequency after the end of the ramp. the signal can be directly locked to the ofc by switching on the pi controller if the measured beat frequency is close to the target frequency. in contrast, the ramp has to be optimized more carefully for the pump beam, because ramps span many ghz, the response of the pzt has to be taken into account, and speed matters in this case as it sets the timescale of the entire frequency ramp. we start the ramp with a step size of 0.1 v / 16 µs, which corresponds to about 3 ghz / ms in the middle of a frequency ramp. to prevent an overshoot, we significantly slow down the ramp at its end and regulate the applied voltage. the slowdown of the pump ramp and the regulation at its end have two contributions, ensuring effective compensation of the delayed response and further drifts of the pzt and hence of the optical frequency. both are applied once the target frequency is closer than 2 ghz. first, the voltage change applied to the pzt per step is decreased from initially 0.1 v to zero, approximately proportional to the square root of the frequency difference to the target. this would ideally lead to a linear decrease of the ramp speed in time if the pzt did not show hysteresis. to compensate the overshoot of the pzt we additionally apply a correction to the voltage change which is proportional to the difference between the expected linear decrease and the actual frequency change. during the very last part of the ramp, when the pump frequency is less than 20 mhz away from the target, an effective proportional-differential regulation is implemented by adjusting as with numerical factors . as a result, the actual laser frequency quickly settles to its target value. this final regulation in the frequency interval of around the target frequency is the reason for restricting the target beat frequency of the pump laser to a narrower band than the pass band of the radio-frequency filters ( see above ): for the pump beam we simply need some ``frequency space'' to slow the ramp down. the aforementioned details of the frequency ramping procedure are also visible in data recorded with the control electronics. figure [ fig : rampanalysis ] shows a ramp of the pump laser spanning almost 14 ghz. in part ( a ) we plot frequency vs. time, in particular the measured beat frequency and the predicted value calculated by the tracking algorithm; the agreement is excellent. additionally, the predicted frequency change per step is shown. from the curves it is evident that the ramp starts slowly due to the delayed response of the pzt, but accelerates quite quickly as a constant per step is applied ( fig. [ fig : rampanalysis](b) ). the slowdown of the ramp is also apparent in both the beat frequency tooth structure and . during the final regulation towards the target beat frequency of 60 mhz there are some oscillations visible. we believe that further optimization of the ramp parameters can eliminate the oscillations, but did not find that the effort is necessary at the moment.
in the figure, the vertical dashed lines mark the switch-on of the pi controller after 8.4 ms of ramp time. as expected, the beat frequency is stable from that point on. we plot the output voltage of the pi controller during the whole frequency ramp ( see fig. [ fig : rampanalysis](c) ). the fact that the voltage stays close to zero, i.e. in the center of the regulation range, shows that the coarse ramp already brings the beat frequency close to its target, so that only a small correction is left for the analog loop. | optical frequency combs ( ofc ) provide a convenient reference for the frequency stabilization of continuous - wave lasers . we demonstrate a frequency control method relying on tracking over a wide range and stabilizing the beat note between the laser and the ofc . the approach combines fast frequency ramps on a millisecond timescale in the entire mode - hop free tuning range of the laser and precise stabilization to single frequencies . we apply it to a commercially available optical parametric oscillator ( opo ) and demonstrate tuning over more than 60ghz with a ramping speed up to 3ghz / ms . frequency ramps spanning 15ghz are performed in less than 10ms , with the opo instantly relocked to the ofc after the ramp at any desired frequency . the developed control hardware and software is able to stabilize the opo to sub - mhz precision and to perform sequences of fast frequency ramps automatically . |
from the appearance of the quantum mechanics many attempts have been made to recover the laws of the classic mechanics through some classic limit .the more common scheme of this type includes the _ _ quantum decoherence__. this process is in charge to erase the terms of interference of the density matrix , that are classically inadmissible , since they prevent the use of a classical ( boolean ) logic .in addition , decoherence leads to the rule that selects the candidates for classic states .as it is pointed out in the brief historical summary of paper , three periods can be schematically identified in the development of the general program of decoherence . a first period , when the arrival to the equilibrium of irreversible systems was studied .during this period , authors as van kampen , van hove , daneri , et al__. _ _ developed a formalism for explaining the decoherence phenomenon that was not successful at the time but it established the bases of this study .the main problem of this period was that too long decoherence times were found , if compared with the experimental ones . in a second periodthe decoherence in open systems was studied , the main characters of this period were zeh and zurek . in their works ,the decoherence is an interaction process between an open quantum system and its environment .this process , called _ environment - induced decoherence _ ( eid ) , determines , case by case , which is the privileged basis , usually called _ moving preferred basis _ where decoherence takes place in a _ decoherence time _ and it defines the observables that acquire classic characteristics and they could be interpreted in some particular cases as properties that obey a boolean logic .this is the orthodox position in the subject .the decoherence times in this period were much smaller , solving the problem of the first period . recently , in a third period it becomes evident that dissipation was not a necessary condition for decoherence and the study of the arrival to equilibrium of closed systems was also considered .we will not discuss closed systems in this paper but for the sake of completeness we will make only some comments .closed system will be discussed at large elsewhere . in this workwe focus the attention on eid , which is a well known theory , with well established experimental verifications , which makes unnecessary any further explanation . on the contrary other formalismsare not so well established , but they must be taken into account for the sake of completeness ( , , , , , , , , , ) . in this paper , we will introduce a tentative definition ( for eid and other formalisms ) of moving preferred basis where the state decoheres in a very short time , so the main problem of the first period is solved in a convenient and general way .our main aim is to present a new conceptual perspective that will clarify some points that still remain rather obscure in the literature on the subject , e. g. the definition of the moving preferred basis . in previous workswe have resumed the common characteristics of the different approaches of decoherence , which suggest the existence of a general framework for decoherence within which these approaches can all be framed ( see , and ) . 
according to this general framework , that was developed in andwill be completed in future papers , decoherence is just a particular case of the general problem of irreversibility in quantum mechanics .since the quantum state follows a unitary evolution , it can not reach a final equilibrium state for .therefore , if the non - unitary evolution towards equilibrium is to be accounted for , a further element has to be added to this unitary evolution .the way to introduce this non - unitary evolution must include the splitting of the whole space of observables into the relevant subspace and the irrelevant subspace .once the essential role played by the selection of the relevant observables is clearly understood , the phenomenon of decoherence can be explained in four general steps : 1 .* first step : * the space of relevant observables is defined .* second step : * the expectation value , for any , is obtained .this step can be formulated in two different but equivalent ways : * a coarse - grained state is defined by for any , and its non - unitary evolution ( governed by a master equation ) is computed ( this step is typical in eid ) . * is computed and studied as the expectation value of in the state .this is the generic case for other formalisms .* third step : * it is proved that reaches a final equilibrium value , then + this also means that the coarse - grained state evolves towards a final equilibrium state: + the characteristic time for these limits is the , the _ relaxation time .* fourth step : * also a _ moving preferred basis_ must be defined as we will see in section i.b .this basis is the eigen basis of certain state such that the characteristic time for this limit is the , the _ decoherence time . _the final equilibrium state is obviously diagonal in its own eigenbasis , which turns out to be the final preferred basis .but , from eqs .( [ int-01 ] ) or ( [ int-02 ] ) we can not say that or then , the mathematicians say that the unitarily evolving quantum state of the whole system _ only has a _ _ weak limit , _ symbolized as : equivalent to eq .( [ int-01 ] ) . as a consequence, the coarse - grained state also has a weak limit , as follows from eq.([int-02 ] ) : equivalent to eq .( [ int-02 ] ) . also these weak limits mean that , although the off - diagonal terms of never vanish through the unitary evolution , the system decoheres _ from an observational point of view _ , that is , from the viewpoint given by any relevant observable . from this general perspective , the phenomenon of destructive interference , that produced the decoherence phenomenon , is relative because the off - diagonal terms of and vanish only from the viewpoint of the relevant observables * * , and the superselection rule that precludes superpositions only retains the states defined by the corresponding decoherence bases as we will see .the only difference between eid and other formalisms for decoherence is the selection of the relevant observables ( see for details ) : .: : in eid the relevant observables are those having the following form: where are the observables of the system and is the identity operator of the environment. then eq .( [ 0 ] ) reads .: : in the other formalisms other restriction in the set of observables are introduced the moving preferred basis was introduced , case by case in several papers ( see ) in a non systematic way . 
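for the eid choice of relevant observables quoted above, the second step has a familiar explicit form ( standard open-system bookkeeping, spelled out here only to fix ideas ):

```latex
% expectation values of O_S \otimes I_E only probe the reduced state of the system
\langle O_S\otimes I_E\rangle_{\rho(t)}
  =\operatorname{Tr}\!\bigl[\rho(t)\,(O_S\otimes I_E)\bigr]
  =\operatorname{Tr}_S\!\bigl[\rho_S(t)\,O_S\bigr],
\qquad
\rho_S(t)=\operatorname{Tr}_E\,\rho(t),
```

so in eid the coarse-grained state of the general scheme is simply the reduced density matrix of the open system, and the weak limits above become statements about its evolution towards the final, diagonal, equilibrium state.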
on the other hand in references and roland omns introduces a rigorous and almost general definition of the moving preferred basis based in a reasonable choice of the relevant observables , and other physical considerations . in this paperwe will introduce an alternative general definition to define this basis : as it is well known the eigen values of the hamiltonian are the inverse of the characteristic frequencies of the unitary evolution of an oscillatory system .analogously , for non - unitary evolutions , the poles of the complex extension of the hamiltonian are the _ catalogue _ of the decaying modes of these non - unitary evolutions towards equilibrium ( see ). this will be the main idea to implement the definition of our moving preferred basis .i. e. we will use these poles .we will compare and try to unify these two methods in the future .really we already began this approach with omns in section iii . in sectioni we have introduced a general framework for decoherence . a general candidate for moving decoherence basis is introduced in section [ gendefmovdecbas ] , which is implemented in three toy models and the time of decoherence and the relaxation time in these approaches are defined . in principlethese definitions can be used in eid and probably for generic formalisms . in section iiiwe will present the paradigmatic eid : omns ( or lee - friedrich ) model . and show that the pole method yields the same results .finally in section [ conclusions ] we will draw our conclusions .one appendix completes this paper .in this section we will try to introduce a very general theory for the moving preferred basis for _ any relevant observable space _ it is necessary to endow the coordinates of observables and states in the hamiltonian basis ( i.e. the functions and with extra analytical properties in order to find the definition of a moving preferred basis in the most , general , convincing , and simplest way. it is well known that this move is usual in many chapters of physic e. g. in the scattering theory ( see ) .it is also well known that evolution towards equilibrium has two phases .i.- a exponential dumping phase that can be described studying the analytical continuation of the hamiltonian into the complex plane of the energy ( see , , , , , , ) , a fact which is also well known in the scattering theory .ii.- a final polynomial decaying in known as the long time of khalfin effect ( see , ) , which is very weak and difficult to detect experimentally ( see ). these two phases will play an important role in the definition of the moving preferred basis .they can be identified by the theory of analytical continuation of vectors , observables and states . to introduce the main equation we will make a short abstract of papers and .we begin reviewing the analytical continuation for pure states .let the hamiltonian be where the free hamiltonian satisfies ( see .( 8) or ) and ( see .( 9)) then ( see .( 10 ) ) and ( see .( 11 ) ) where the are the eigenvectors of , that also satisfy eq .( [ i ] ) .the eigen vectors of are given by the lippmann - schwinger equations ( see .( 12 ) and ( 13)) let us now endow the function of with adequate analytical properties ( see ) . e.g.let us consider that the state ( resp . is such that it does not create poles in ( resp . in and therefore this function is analytic in the whole complex plane .this is a simplification that we will be forced to abandon in realistic cases as we will see .moreover we will consider that the function ( resp . 
is analytic but with just one simple pole at in the lower halfplane ( resp .another pole on the upper halfplane ( see for details of figure 1 . ] ) .there can be many of such poles but , by now , we will just consider one pole for simplicity , being the generalization straightforward. then we make an analytic continuation of the positive axis to the curve of the lower half - plane as in figure 1 .then ( see .( 29 ) ) we can define and ( see .( 31)) where means analytic continuation .finally it can be proved that ( see ) a simple extension of the eigen - decomposition of to the complex plane we could repeat what we have said about the pure states and the hamiltonian with the states , observables , and the liouvillian operator ( see a review in ) .but we prefer to follow the line of and keep the hamiltonian framework and discuss the analytical continuation of that we will also symbolize as .in fact from section i.a we know that this scalar is the main character so we will study its analytical properties ad nauseam .so let us call ( see .( 42 ) ) then a generic relevant observable is ( see eq .( 42 ) or .( 42)) and the generic states is ( eq .( 45 ) or .( 45 ) ) where are defined those of eqs .( [ 53 ] ) and ( [ 54 ] ) in the case ( see also eq . ( 44 ) or .( 45)) we will keep the treatment as general as possible , i.e. would be any observable such that and any state . andmore general than those of other formalisms .this is why we can find the moving preferred basis in a general case containing eid as particular case .anyhow the analyticity conditions must also be satisfied . in the case of eidwe can substitute by and by in fact , in the next subsection we will only consider the generic mean value for three paradigmatic model below .model 1 with just one pole and the khalfin effect .model 2 with two poles and model 3 with poles .it can be proved ( cf .( ) eq .( 67 ) ) that the evolution equation of the mean value is i.e. this real mean value reads where these vectors are defined in eqs .( [ 53 ] ) and ( [ 54 ] ) . then, if we endow the functions with analytical properties of subsection c and there is just one pole in the lower halfplane , we can prove ( eq . (70 ) ) that where and where and are the analytical continuation in the lower half - plane of ( see ( eq .( 54)) and(\widetilde{\varepsilon}|+\int d\varepsilon\int d\varepsilon^{\prime}\langle\omega^{+}|\varepsilon\rangle\langle\varepsilon^{\prime}|\omega ^{\prime+}\rangle\widetilde{(\varepsilon,\varepsilon^{\prime}}| \label{54}\ ] ] and where is the simple pole of figure 1 in the lower half - plane . and can be defined as in the case of eq .( [ l.1 ] ) and ( [ l.2 ] ) .the and can also be defined as a simple generalization of the vectors and ( . eq .. then the eqs .( [ 53 ] ) and ( [ 54 ] ) allow us to compute the limits ( [ int-01 ] ) and ( [ int-02 ] ) for any therefore we can conclude than the last four terms of equation ( [ 70 ] ) vanish with characteristic times respectively .let us observe that i- .the vanishing of the second , third , and fourth therms of eq .( [ 70 ] ) are _ exponential decaying_. this will also be the case in more complicated models with many poles .ii.- the means that the evolution of the last term of this equation corresponds to a polynomial decaying in , i. e. to the _ khalfin evolution_. 
this is a very weak effect detected in 2006 .if there is a finite number of poles the khalfin term corresponds to the integral along the curve and contains the contribution of all the poles placed bellow with imaginary part such that , then we can choose a curve the poles a .then the integral along the curve contains the effect of the poles thus we can choose the curve in such a way hat the decaying times corresponding to these poles , would be so small that can be neglected . ] . a closed system model for khalfin effect can be found in , section 6 , and an eid - like model in , section 5 .now for times , eq . ( [ 70 ] ) reads since for the poles term has vanished is just an order of magnitude we consider that the three first imaginary parts of eqs .( [ tc ] ) and ( [ tc ] ) are essentially equivalent . ] .let us diagonalize as or the initial conditions may just choose a discrete basis ( see below).] where is the moving eigenbasis of .then let us define a state the _ preferred state _, such that , _ for all times , _ it would be so is a state that evolves in a model with no poles and with only the khalfin term .these evolutions exist and can be found using an adequate interaction .proton state in a woods - saxon potential ( see figure 3 ) .] so we can plot and in figure 2 .it is quite clear that for while for and that for and also all their derivatives .the eigen states of the are those that we will choose for the moving decoherence basis .in fact , diagonalizing we have and when we have that so from eqs .( [ asterisco ] ) and ( [ asterisco-b ] ) we see that the eigenbasis of and also converge namely the basis converge to and therefore becomes diagonal in .thus is our definition for the _ moving preferred basis_. since becomes diagonal in the just defined preferred basis when and is the definition of decoherence time . in this modelthe relaxation time is the corresponding to the khalfin term , i.e. an extremely long time so khalfin term is so small ( see ) that can be neglected in most of the experimental cases .so let us consider the case of two poles and ( and no relevant khalfin term ) where eq .( [ 70 ] ) reads : where , and we will also consider that ( see section 3 , for details ) .then the four characteristic times ( [ tc ] ) now read now for times , eq .( [ 700 ] ) reads and we can define a state such that , for _ all times _ , it would be repeating the reasoning of eqs .( [ 700 ] ) to ( [ asterisco-c ] ) we can see that , diagonalizing this last equation , we obtain the moving preferred basis .then in this case we see that the relaxation is obtained by an exponential dumping ( not a khalfin term ) and again , in this case when we have that so once more we reach eq .( [ asterisco-c ] ) .namely becomes diagonal in the moving preferred basis in a time . 
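as a hedged numerical illustration of the separation of time scales in the two-pole case (the pole positions below are invented, not those of the model above): if $z_1 = E_1 - i\gamma_1$ and $z_2 = E_2 - i\gamma_2$ with $0 < \gamma_1 \ll \gamma_2$, then
\[
t_R \sim \frac{1}{\gamma_1}, \qquad t_D \sim \frac{1}{\gamma_2} \ll t_R ,
\]
so the contribution of the deeper pole $z_2$, and with it the off-diagonal part of the state in the moving basis, dies out long before the overall approach to equilibrium governed by $z_1$; e.g. $\gamma_1 = 10^{-3}$ and $\gamma_2 = 1$ (arbitrary units) give $t_D/t_R \sim 10^{-3}$. this is one natural reading of the relation between the characteristic times listed above; the exact assignment of course depends on which pole enters the off-diagonal terms of the particular model.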
before considering the many poles caselet us make some general remarks .i.- let us observe that some and may be zero , depending in the observable so , in the case of many poles , may be some poles can be detected by and others may not be detected and disappear from the formulae ( see appendix ) .this also is the cases for the initial conditions : and may be zero .but also the or the may create some poles .so some poles may be eliminated or created by the observables or the initial conditions while others may be retained .but in general we will choose and in such a way that they would neither create or eliminate poles .ii.- from what we have learned in both models ( see eqs .( [ nuevo-0 ] ) and ( [ nuevo ] ) ) we always have let us now sketch the case of a system with poles located at these poles are the ones that remain after and have eliminated ( or created ) some poles ( see remark i ) .in this case it is easy to see that eq .( [ 70 ] ) ( with no khalfin term ) becomes: where is the final equilibrium value of in the most general case the will be placed either at random or not . anyhow in both cases they can be ordered as so in the case of poles , we can plot , and in figure 3. then if it is quite clear that the relaxation time is so the relaxation time is defined with no ambiguity .this is not the case for the decoherence time .really each pole defines a decaying mode with characteristic time these poles contain the essence of the decaying phenomenon and the definition of the decoherence time depends on their distribution and other data like the initial condition .precisely i.- for a completely random distribution clearly the best choice is then and the moving preferred basis is i.e. the basis that diagonalizes ii.- but for other kinds of distributions , if the distribution of poles obey certain law or have some patterns , we may chose something like as we will see in the example in the next section .the choice of is based on the _ initial conditions _ and usually also introduces the concept of _ _ macroscopicity _ _ ) and variable .example of this choice are the omns model and the toy models in the final remake i of the next section . once the decoherence time chosen the definition ( [ def ] ) changes to where the sum contains all the terms such that then the poles in evolution ( [ ev ] ) are those that produces the slowest decaying modes that we will call the _p - relevant poles , _i. e. those that have influence in the period _ _ the remaining poles such that that we will can the _ p - irrelevant poles _ , and have no influence in the period once the decoherence time is chosen the moving preferred basis is univocally defined .it is , the basis that diagonalize , a state that evolves only influenced by the p - relevant poles , and such that when is our candidate for a general definition of moving preferred basis .our more complete and simplest example of decoherence in open systems is the omns pendulum ( i. e. oscillator ) in a bath of oscillators , that we will compare with the poles theory in the following subsections .in fact the omns model could be considered a poles model if we retain the poles and neglect the khalfin term .moreover in the omns philosophy the moving preferred basis must be related to some collective variables in such a way that they would be experimentally accessible . in this casethis variable is the center of mass of the pendulum , i. e. the mean value of the position of a coherent state . 
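the bookkeeping just described — order the poles by the size of their imaginary parts, read the relaxation time off the slowest decaying mode, and split the catalogue into p-relevant and p-irrelevant poles once a decoherence time has been chosen — is simple enough to spell out in code. the sketch below is only illustrative: the pole values are invented, the choice of $t_D$ is taken as an input (since, as argued above, it depends on the pole distribution and the initial conditions), and "p-relevant" is read here as "decay time not shorter than $t_D$".

```python
import numpy as np

def decay_times(poles):
    """Characteristic time 1/|Im z_i| of each decaying mode (poles in the lower half-plane)."""
    return np.array([1.0 / abs(z.imag) for z in poles])

def split_poles(poles, t_D):
    """One plausible reading of the p-relevant / p-irrelevant split:
    a pole is p-relevant if its mode still survives at times of order t_D."""
    t = decay_times(poles)
    relevant = [z for z, ti in zip(poles, t) if ti >= t_D]
    irrelevant = [z for z, ti in zip(poles, t) if ti < t_D]
    return relevant, irrelevant

# purely hypothetical pole positions z_i = E_i - i*gamma_i (arbitrary units)
poles = [2.0 - 0.001j, 3.5 - 0.05j, 5.1 - 0.4j, 7.2 - 2.0j]

t_R = decay_times(poles).max()        # relaxation time: slowest decaying mode
t_D = 1.0 / abs(poles[1].imag)        # one possible choice of decoherence time
relevant, irrelevant = split_poles(poles, t_D)
print(t_R, t_D, relevant, irrelevant)
```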
in page 285 a one dimensional `` pendulum '' ( the system ) in a bath of oscillators ( the environment ) is considered .the hamiltonian reads ) and equation ( [ pre ] ) .in fact , in some stages of the treatment omns is forced go to the continuos spectrum .a complete treatment of this continuous model can be found in .] where is the creation ( annihilation ) operator for the system , are the creation ( annihilation ) operator for each mode of the environment , and are the energies of the system and each mode of the environment and are the interaction coefficients .then let consider a state where are _ coherent _ states for the `` system '' corresponding to the operator and are a coherent state for the environment corresponding to the operator let the initial condition be then moreover it can be shown , under reasonable hypotheses and approximations ( that correspond to the elimination of the khalfin terms , see below ) , that evolution of the is given by +\text{small fluctuations } \label{rt-01}\ ] ] where is a shift and a dumping coefficient that produces that the system arrives at a state of equilibrium at , the _ relaxation time _ of the system , ( the small fluctuations are usually neglected ) in the next subsections using the concepts of the previous section we will prove that the omns model is a particular case of our general scheme .let us now consider the condition of _ experimentally accessibility ._ in fact , in the model under consideration , the initial states corresponds to the linear combination of two coherent , macroscopically different states that evolve to .now the diagonal part of reads and , it can easily be shown that , with the choice of initial conditions of eqs .( [ ci-17 ] ) and ( [ ci-18 ] ) , that the non diagonal part of is \ ] ] then if ( that will be the case if is very big ) we have \label{om}\ ] ] where are the initial mean value of the position of the two coherent states .this decaying structure is obviously produced by the combination of the initial states and the particular evolution of the system according to the discussion in the final part of the last section .then , since when decoheres in the decoherence basis \{ , which is the moving preferred basis , and the decoherence time of the system is ^{-1}t_{r } \label{131'}\ ] ] where in the next subsection we will see that we are dealing with a many poles model where the effect of decoherence is produced by these poles and the particular coherent states initial conditions , which produce a `` new collective pole mode '' with in the case of the `` pendulum '' the moving preferred basis \{ is clear experimentally accessible since , in principle , the mean value of the position of the two coherent states can be measured and the and turn out to be two `` collective variables '' ( since they are mean values ) .in fact , in this formalism , the main characteristic of the moving preferred basis is to be related to the `` collective variables '' .moreover the decoherence time depends on the initial distance we can have different decoherence times depending on the initial conditions .let us now consider that , \text { \ \ \ } \phi=\operatorname{im}(\alpha_{1}\alpha_{2}^{\ast}-\alpha_{1}^{\ast}\alpha _ { 2})\ ] ] where ^{\frac{1}{2}}\ ] ] so : i.- even if in general .ii.- when or we have thus when the distance between the two centers of the coherent states is very big we have a small and the basis \{ be almost orthonormal .these are the main characteristics of the experimental accessible decoherence basis of omns 
.but it is important to insist that , generally , is only a _non - orthonormal moving preferred basis _ , that we can approximately suppose orthonormal only in the macroscopic case , that is to say , when and are far apart . in conclusion , in this macroscopic case becomes a orthonormal _ moving preferred basis _ where becomes diagonal in a very small time .this will be the case of the decoherence basis in , chapter 17 , and in a many examples that we can find in the bibliography ( , , ) . without this macroscopic propertyit is difficult to find any trace of a boolean logic in the moving decoherence basis context of the general case or in this section .in fact , omns obtains the boolean logic in a complete different way ( see chapter 6 of ) .anyhow in this particular model the moving preferred basis has a perfect example for the macroscopic case .let us now present the relation of this formalism with the poles theory .particular important models can be studied , like the one in , with hamiltonian i.e. a continuous version of ( [ 1 ] ) . in this continuous versionwe are forced to endow the scalar with the some analyticity conditions .precisely function ( where is chosen in such a way that which does not vanish for and its analytic extension to the lower half plane only has a simple pole at .this fact will have influence on the poles of as in section ii and we know that the study of is the essential way to understand the whole problem ( see section i a ) .the hamiltonian ( [ pre ] ) is sometimes called the lee - friedrich hamiltonian and it is characterized by the fact that it contains different _ number of modes sector _ ( number of particle sectors in qft ) .in fact , and are creation operators that allow to define these numbers of mode sectors .e. g. the one mode sector will contain states like and ( where then the action of ( or simple the one of will conserve the number of modes of this sector in just one mode , since in ( [ pre ] ) all the destruction operators are preceded by a creation operator .this also is the case for the sector .the hamiltonian of the one mode sector , is just the one of the so called friedrich model i. e. ( expressed just in variable one that it is analytically continued as a consequence of the analyticity condition above this simple friedrich model just shows one resonance .in fact , th is resonance is produced in .let be the hamiltonian of the complex extended friedrich model , then as in section ii.]: where is the only pole and . the lee - friedrich model , describing the interaction between a quantum oscillator and a scalar field , is extensively analyzed in the literature .generally , this model is studied by analyzing first the one excited mode sector , i.e. the friedrich model .then , if we compute the pole , of this last model , up to the second order in we obtain that so the pole ( that will corresponds to the pole closest to the real axis in the lee - friedrich model ) can be calculated ( see eq .these results coincide ( mutatis mutandis ) with the one of omns book page 288 , for the pole corresponding the relaxation time .in fact : where symbolizes the principal part , so then if we have namely the results of page 288 , and the one contained in eq .( [ rt-01]).: so the omns result for the decoherence time _ coincides _ , as we have already said , with the one obtained by the pole theory , so in both frameworks .let us now consider the lee - friedrich hamiltonian ( [ pre ] ) for the many modes sector , e. g. 
, as an example , for the three mode sector .then we have that: where in the real complex plane the spectrum of is 1.- from the eigenvalue three points of the curve 2.- from the eigenvalue , a pole at two points of the curve 3.- from the eigenvalue a pole at and one point of the curve 4.- from the eigenvalue a pole at see figure 4 : of course in the general case and as a consequence the spectrum is the curves , . , ... in fact then if we neglect the khalfin term , since it corresponds to extremely long times , the disappears and we simply have then under this approximation the system has an effective ( non hermitian ) hamiltonian where are the creation and annihilation operators for the mode corresponding to the pole and is the corresponding number of poles operator . nowthe hamiltonian of the harmonic oscillator is thus we see that in the no khalfin terms approximation , and taking ( or the last equation ) and if is very large only affects the real part of the pole and not the imaginary one that produces the time scales . ]so , in this approximation , the effective lee - friedrich hamiltonian simply is a ( non hermitian ) version of with a dumping term moreover the basis of and are the same one , i. e. the probability amplitude that a pure state would be in the pure state at time is: the most general linear superposition of the eigenvectors of , in basis is: and the time evolution for must be: then we can compute , then where from eqs .( [ rt-01 ] ) and ( [ omnes ] ) , or eq .4.47 of we have : then if we neglect the khalfin term the `` energy '' levels are multiples of the fundamental `` energy '' i. e. where and the coefficients and depend in the initial conditions ( according to eq . 4.26 of ) . with the expression ( [ cuac-15 ] ) eq .( [ ci-05 ] ) becomes the same recipe could be used in the fundamental scalar instead of with similar results but with more difficult calculations . as initial conditions , it is possible to choose any linear combination of the elements with . .so we can choose the coherent states but we can also choose as the boundary condition an approximated version where the number modes is and we take namely an approximated quasi - coherent states or quasi - gaussian ( that becomes a coherent state when as we will consider below thus then let us choose the initial conditions as the sum of two quasi - gaussian functions , namely: where and are quasi - coherent states , precisely and thus the initial state is: therefore the time evolved state is where is the diagonal part ( in the basis ) of and is the non - diagonal part of we choose the two quasi - gaussian ( [ ci-11 ] ) and ( [ ci-12 ] ) with center at , ( see eq .( 7.15 ) page 284 ) and so and are real numbers . without loss of generality (since with a change of coordinates we can shift and we can consider that the and are both positive .for this reason we will interchange and below .let us not consider in the basis of the initial condition we have we will prove that for macroscopic initial conditions , i.e. when the peaks of the two gaussians are far from each other , the states are quasi - orthogonal basis , i.e. and indeed this is the macroscopicity condition . in fat so using the cauchy product and the binomial theorem we have and again using the cauchy product and the binomial theorem we have then so for we have orthogonality as we have promised to demonstrate .now we can consider the limit .thus the last scalar product is equal to the truncated taylor series of exponential function. 
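for orientation, it may be worth recalling the corresponding exact statement for genuine (untruncated) coherent states, which the quasi-coherent states used above approach in the limit just mentioned; with the usual normalisation,
\[
\langle\alpha_1|\alpha_2\rangle \;=\; \exp\!\Big(-\tfrac{1}{2}|\alpha_1|^{2}-\tfrac{1}{2}|\alpha_2|^{2}+\alpha_1^{*}\alpha_2\Big),
\qquad
\bigl|\langle\alpha_1|\alpha_2\rangle\bigr|^{2} \;=\; e^{-|\alpha_1-\alpha_2|^{2}} ,
\]
so two coherent states whose centres are macroscopically far apart ($|\alpha_1-\alpha_2|\gg 1$) are exponentially close to orthogonal. this standard formula is quoted only for comparison; the truncated states used in the text differ from it by the correction terms estimated in what follows.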
then we may introduce the difference with the complete taylor series , and we obtain where is a correction of order with ] .so in the case given by ( [ ci-32 ] ) and ( [ ci-33 ] ) we have ^{\frac{1}{2\left ( n+1\right ) } } \label{ci-34}\ ] ] i.e. ^{\frac{1}{2\left ( n+1\right ) } } \gg \text{\ } \frac{m\omega}{\sqrt{2m\hbar^{2}\omega}}l_{0 } \label{ci-34b}\ ] ] then if we substitute ( [ ci-32 ] ) , ( [ ci-33 ] ) and ( [ ci-34 ] ) in eq . ( [ ci-25 ] ) , ( [ ci-27 ] ) , ( [ ci-29 ] ) and ( [ ci-31 ] ) and we take into account ( [ ci-34]) then if we substitute ( [ ci-35 ] ) , ( [ ci-36 ] ) , ( [ ci-37 ] ) and ( [ ci-38 ] ) in eq . ( [ ci-23 ] ) we have we see that in the last equation there is an exponential of an exponential , and if we develop the second exponential and substituting for its value according to eq . ( [ ci-07 ] ) , we have from eq .( [ ci-20 ] ) and eq .( [ ci-33]) so a simple decaying time is given by the original pole of eq .( [ cuac-03 ] ) but a new decaying pole appears with an imaginary part so , the new decaying time is or the same time was found by omns in or ( [ 131 ] ) and corresponds to the definition ( [ 34 ] ) .in fact , we can recover the same result . in result is valid for small . in the general case , and considering that , from eqs .( [ ci-20 ] ) and ( [ ci-39 ] ) we have: \label{ci-43}\ ] ] the same expression that can be found on page 290 of .so the coincidence of both formalisms is completely proved .[ [ final - remarks . ] ] final remarks .+ + + + + + + + + + + + + + i.- as we have said in the macroscopic case the basis \{ is orthogonal and it is the one defined in section ii.f .in fact for the evolution of is produced by the p - relevant poles while for the evolution is produced by all the poles , either p - relevant and p - irrelevant .moreover the corresponding and coincide at with all their derivatives .ii.- let us see what happens if we change the two gaussian initial condition by then it can be proved that if the decoherence times are .therefore these times change with .these example shows that the initial condition chooses and the relevant poles define as explained in section ii.f nevertheless in any case we always have that these results coincide with those of zurek spin model where we also have an initial condition dependence .this examples shows that there are many candidates decoherence times and that the initial conditions choose among them .in this paper we have : i.- discussed a general scheme of decoherence , that in principle can be used by many formalisms .ii.- we have given a quite general definition of a moving preferred basis and of the relaxation time iii.- we have introduced different characteristic ( decaying evolution ) times , and also how the decoherence time is chosen by the initial conditions .we hope that these general results will produce some light in the general problem of decoherence .the omns formalism , of references , and , contains the most general definition of moving preferred basis of the literature on the subject .our basis have another conceptual frame : the catalogue of decaying modes in the non - unitary evolution of a quantum system .but since the omns formalism is the best available it is very important for us to show the coincidence of both formalisms , as we have done in one model , at least ( see section iii ) .of course we realize that , to prove our proposal , more examples must be added , as we will do elsewhere .but we also believe that we have a good point of depart .in fact , probably the 
coincidences that we have found in the omns model could be a general feature of the decoherence phenomenon .essentially because , being the poles catalogue the one that contains _ all the possible decaying modes _ of the non unitary evolutions , since relaxation and decoherence are non - unitary evolutions , necessarily they must be contained within this catalogue , .we are very grateful to roberto laura , olimpia lombardi , roland omns and maximilian schlosshauer for many comments and criticisms .this research was partially supported by grants of the university of buenos aires , the conicet and the foncyt of argentina .in this appendix we will introduce a particular example of observables , of the same system , such that some observables would see some poles while other would see other ones .essentially it is a bi - friedrich - model .let us consider a system with hamiltonian: where and d\omega+\int_{b}^{\infty}v_{\omega^{\prime } } ^{(2)}\left [ |\omega^{\prime}\rangle\langle2|+|2\rangle\langle \omega^{\prime}|\right ] d\omega\ ] ] where and this hamiltonian can also reads: where d\omega^{\prime}\ ] ] then it is easy to prove that=0\ ] ] and that let us now decompose the system as where part is related with hamiltonian and part related with hamiltonian .let us observe that these two parts are not independent since they share a common continuous spectrum , i. e. .moreover let the corresponding relevant observable spaces be for and for where has basis \{ and has basis \{ while has basis \{ and basis \{ moreover let us consider the two relevant observables of system where the are the corresponding unit operators. then and therefore only sees the evolution in part while only sees the evolution in part then , since the poles of part correspond to the decaying modes of the evolution of this part ( and we know that the friedrich model of this subsystem generically do have poles ) only sees the poles of part .respectively only sees the poles of part q. e. d. now we can consider that the poles of part define a relaxation time the poles of part define a relaxation time if decoheres and becomes classical in a short time remains quantum for a large time then for such that part behaves classically while part remains quantum .precisely : system observed by seems classical while observed by seems quantum .in fact this is the behavior of a generic physical system . | there are many formalisms to describe quantum decoherence . however , many of them give a non general and ad hoc definition of pointer basis or moving preferred basis , and this fact is a problem for the decoherence program . in this paper we will consider quantum systems under a general theoretical framework for decoherence and present a very general definition of the moving preferred basis . in addition , this definition is implemented in a well known model and the time of decoherence and the relaxation time are defined and compared with those of this model . |
the scarcity of cellular frequency for achieving the 1,000x higher data rate in 5 g brings about exploiting the millimeter - wave ( mmwave ) frequency band having hundreds times more spectrum amount .this approach however has two major drawbacks .firstly , mmwave signals are vulnerable to physical blockages , yielding severe distance attenuation .secondly , the use of extremely wide mmwave bandwidth makes uplink mmwave transmissions at mobiles demanding due to high peak - to - average - ratio ( papr ) , leading to significant rate difference between uplink and downlink . to compensate both drawbacks , we consider base station ( bs ) ultra - densification .it is another promising way to enhance the data rate by increasing per - user available resource amount and even by improving per - user average spectral efficiency .this study combines such complementary two methods , and thereby proposes a mmwave overlaid ultra - dense cellular network .the proposed network in distinction from other existing heterogeneous networks is the asymmetric uplink and downlink structure where mmwave band only operates for downlink transmissions ( see fig .the reason is even the state - of - the - art mmwave supporting power amplifiers are low energy efficient , so uplink mmwave transmissions are demanding of mobile users .uplink communication therefore solely resorts to the current micro - wave ( ) cellular frequency band . due to the scarcity of the spectrum, the uplink data rate can not keep up with the rising ample downlink data rate as mmwave resource and bs densification grow , which may hinder consistent user experiences .this uplink / downlink asymmetry motivates to turn our attention toward assuring the minimum uplink average rate , and engenders the mmwave overlaid cellular network design problem : _ how to maximize the downlink average rate while guaranteeing a target uplink average rate_. we answer this question from the radio resource management and cell planning perspectives . exploiting stochastic geometry and the technique proposed in our preliminary work , we derive downlink mmwave and uplink / downlink spectral efficiencies in closed forms . utilizing these tractable results, we analyze the impacts of resource management and cell planning on downlink and uplink average rates .such results provide the design guidelines of the mmwave overlaid ultra - dense cellular networks .the main contributions of this paper are listed as follows . 1 ._ most of the resource should be dedicated to uplink transmissions _ in order to guarantee the uplink average rate by at least of the downlink rate in a practical scenario where and mmwave resource amounts respectively are mhz and mhz ( see proposition 3 and fig .this runs counter to the current resource allocation trend that is likely to allocate more resource to downlink transmissions .2 . 
to achieve the ever - growing downlink average rate while guaranteeing uplink average rate , _ densifying bs can not be a sole remedy in practice _( see proposition 4 ) , _ but should be in conjunction with procuring more spectrums _ ( see corollary 2 ) .the reason behind is more spectrum amount linearly increases the uplink average rate while bs densification logarithmically increases the rate ( see proposition 1 ) .the spectral efficiencies in uplink / downlink ( see proposition 1 ) and downlink mmwave bands ( see proposition 2 ) under an ultra - dense environment are derived in closed forms via a lower bound approximation , which reveals _ bs densification logarithmically increases the spectral efficiency_. due to the lack of the space , the omitted proofs of propositions and lemmas are deferred to : _http://tiny.cc / jhparkgc15pf_. wave bands whereas the uplink only via band .indoor region is not penetrated by mmwave but signals .users associate with the nearest non - blocked mmwave bss ( outdoor user 1 associates with a farther mmwave bs than the nearest indoor bs ) as well as with the nearest bss without any restriction ( indoor user 2 associates with the outside nearest bs ) .the bss having no serving users are turned - off.,width=340 ]the proposed network comprises : ( i ) mmwave bss whose locations follow a two - dimensional homogeneous poisson point process ( ppp ) with density ; and ( ii ) bss whose coordinates follow a homogeneous ppp with density , independent of . due to the implementational difficulty of mmwave transmissions at mobile users , mmwave bs only supports downlink mode whereas bs provides both downlink and uplink modes .uplink transmissions are therefore resort to solely depend on bss .a bs having no serving user is turned off ( see the transparent bss in fig .1 ) . mobile user coordinates independently follow a homogeneous ppp with density . without loss of generality , represents both downlink and uplink users .users receive downlink signals via both mmwave and simultaneously while transmitting uplink signals only via . specifically , downlink users associate with their nearest non - blocked mmwave bss , and also independently with their nearest bss as in .uplink users associate with the nearest bss that are identical with their downlink associated bss but the mmwave associated bss .such associations are visually shown by fig . 1 .consider indoor regions whose boundaries are mmwave impenetrable walls. followed by a boolean model , the indoor regions are regarded as uniformly distributed circles having radius with density . 
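as a hedged illustration of the spatial layout just described (not of the analysis itself), the three independent homogeneous ppps and the nearest-bs association on the micro-wave tier can be sampled numerically as below. the window size and densities are placeholder values; the mmwave association to the nearest non-blocked bs, the indoor circles and the directional beams are omitted for brevity; the snippet is only meant to make the model concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ppp(density, side):
    """Homogeneous PPP of the given density on a side x side square window."""
    n = rng.poisson(density * side ** 2)
    return rng.uniform(0.0, side, size=(n, 2))

side = 1000.0                        # metres (placeholder window)
mmwave_bs = sample_ppp(1e-4, side)   # lambda_m (placeholder density per m^2)
muwave_bs = sample_ppp(2e-5, side)   # lambda_mu
users     = sample_ppp(5e-5, side)   # lambda_u

# nearest micro-wave BS association (used for uplink and micro-wave downlink)
dist = np.linalg.norm(users[:, None, :] - muwave_bs[None, :, :], axis=2)
serving = dist.argmin(axis=1)

# micro-wave BSs with no associated user are switched off in the model
active = np.unique(serving)
print(len(mmwave_bs), len(muwave_bs), len(active), len(users))
```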
for simplicity without loss of generality , assume the indoor regions always guarantee line - of - sight ( los ) communications .additionally , we neglect the overlapping indoor regions that can be compensated by a sufficiently large network .note that signals are not affected by the indoor walls thanks to their high diffraction and penetration characteristics .the indoor complementary regions are outdoor regions .our channel model is three - fold in order to capture the different propagation behaviors of outdoor / indoor mmwave and signals .a mmwave antenna array directionally transmits a downlink signal with unity power to its associated user , and the signal experiences path loss attenuation with the exponent as well as rayleigh fading with unity mean .the transmitted directional beam has the main lobe angle ( radian ) , and the received signal powers at the same distances within are assumed to be identical .users are able to receive mmwave signals only if there exist no indoor walls along the paths to their associated bss . to specify this event , consider a typical user located at the origin and an arbitrary bs at with distance .let denote the opposite unit direction vector of the signal transmission direction from , defined as .define as the non - blockage distance indicating the line length from with the direction to the point when the line firstly intersects an impenetrable indoor wall .a user then can receive a transmitted signal if the condition holds . for interfering links, we consider the interferers as the undesired active bss whose serving directions , the main lobe centers , pointing to .it leads to antenna gain , which is an increasing function of as well as the number of the bs s serving users . at when located outside the indoor regions , the corresponding mmwave signal - to - interference - plus - noise ( ) is represented as : indicates active non - blocked mmwave interfering outdoor bss , fading power , and noise power .path loss exponent is set as since there is no mmwave blockages within indoor regions as described in section [ sect : inout ] .the rest of the settings are the same as in the case of the outdoor mmwave . at located within indoor regions , the mmwave is given as : denotes active mmwave interfering indoor bss . a transmitted signal with unity power experiences path loss attenuation with the exponent as well as rayleigh fading , resulting in the fading gain that follows an exponential distribution with mean . at , the given as : represents active interfering bss .this section derives closed - form mmwave and spectral efficiencies , defined as ergodic capacity ] in scales with , resulting in the linear scaling of with .the tightness of the analysis is numerically verified as shown in fig .4 . combining the outdoor and indoor mmwave spectral efficiencies ( lemmas 3 and 4 ), the following result provides the overall downlink mmwave spectral efficiency as below . __ ( downlink mmwave )_ at for , downlink mmwave spectral efficiency at is lower bounded as follows .^{e^{-\lambda_g s } } \ ) \label{eq : prop2}\ ] ] _ the result indicates mmwave downlink spectral efficiency is a logarithmic function of bs density .the exponent of shows densification is more effective when 1 ) outdoor mmwave attenuation is severe ( large ) and/or 2 ) users are more likely to be in outdoor region ( small ) . in addition , sharper beam ( small ) increases the spectral efficiency . 
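a brute-force check of such closed-form lower bounds, in the spirit of figs. 3-4, can be set up with a short monte carlo routine. the sketch below estimates only the micro-wave downlink ergodic spectral efficiency E[log2(1+SINR)] at a typical user with nearest-bs association, unit transmit power and rayleigh fading; it deliberately omits bs idling, the mmwave tier, blockage and beam directionality (all of which matter for the density scaling claimed in the propositions), and the parameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def ergodic_se(lam_bs=1e-4, alpha=4.0, noise=1e-12, side=2000.0, trials=2000):
    """Monte Carlo estimate of E[log2(1 + SINR)] for a user at the window centre."""
    samples = []
    centre = np.array([side / 2.0, side / 2.0])
    for _ in range(trials):
        n = rng.poisson(lam_bs * side ** 2)
        if n == 0:
            continue
        bs = rng.uniform(0.0, side, size=(n, 2))
        r = np.linalg.norm(bs - centre, axis=1)
        h = rng.exponential(1.0, size=n)      # Rayleigh fading (exponential power)
        rx = h * r ** (-alpha)                # received powers, unit Tx power
        k = r.argmin()                        # serving (nearest) BS
        sinr = rx[k] / (rx.sum() - rx[k] + noise)
        samples.append(np.log2(1.0 + sinr))
    return float(np.mean(samples))

for lam in (1e-5, 1e-4, 1e-3):
    print(lam, ergodic_se(lam_bs=lam))
```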
on the other hand , the spectral efficiency decreases with user density since more users bring about larger interference .this section analyzes resource allocation behaviors , and consequently provides the mmwave overlaid cellular network design guidelines in the perspectives of resource allocation and cell planning .define downlink and uplink average rates and as follows . let denote the minimum ratio of uplink to downlink rates .we consider the following problem : & \underset{w_\mu } { \text{max } } \ ; r_{\text{d } } \notag\\ \end{aligned } \notag\\ & \text{subject to } \notag \\ & \quad r_{\text{u}}/r_{\text{d } } \geq t \label{eq : uplinkqos } \\ & \quad w_{\mu.\text{d } } + w_{\mu.\text{u } } = w\end{aligned}\ ] ] where , , respectively denote downlink , uplink , and entire bandwidths .the objective function is maximized when the equality in holds , leading to the following downlink resource allocation . _ _ ( resource allocation ) _ if , the following downlink resource allocation maximizes the average rate while guaranteeing the minimum uplink rate : where ; otherwise , the minimum uplink rate requirement can not be satisfied ._ increasing mmwave downlink rate ( and/or ) leads to less downlink allocation and more uplink allocation .the reason is because the mmwave band provides most of the downlink transmissions without the aid of band , and thus all the band becomes dedicated to the uplink transmissions to assure the minimum uplink rate requirement .in addition , increasing resource allows more and , but the latter increases faster ( notice for ) .such uplink biased resource allocation tendency is the opposite way of the current resource management trend seeking more downlink resources , to be further elucidated under practical scenarios in section v. increasing bs density , on the other hand , makes the resource more prone to be allocated to downlink transmissions .recalling uplink - downlink reciprocity in lemma 2 , increasing identically improves both uplink and downlink rates . in spite of such identical increments , the uplink / downlink ratio increases since the uplink rate is no larger than downlink rate .this results in increasing until the equality in holds . wave resource allocations with mmwave bandwidth mhz ( , , , , , , ).,width=340 ] applying the resource allocation results to yields the following uplink rate requirement guaranteeing maximized downlink average rate ._ _ ( downlink average rate with minimum uplink rate requirement ) _if for , maximized downlink average rate while guaranteeing the minimum uplink rate is given as : } { \lambda_\mu}^{\frac{w \alpha_\mu}{2 } } \)\]]where ^{w_\text{m}\ ( 1-\sqrt{\frac{s}{\lambda_\text{m}}}\ ) } \hspace{-10pt } \( \rho_\mu \lambda_\text{u}\)^{-\frac{\alpha_\mu w}{2 } } \nonumber\end{aligned}\ ] ] otherwise , the rate can not satisfy the uplink rate requirement . _the result reveals bs densification logarithmically increases the downlink rate while and mmwave resource amounts and path loss exponents linearly increases the rate .larger indoor region area under exponentially decreases the rate .additionally , increasing the minimum uplink rate target decreases the downlink rate .this result is visually elucidated by fig . 6 in section v.this section provides resource management and cell planning guidelines based on the closed - form uplink / downlink ( see proposition 1 ) and mmwave downlink ( see proposition 2 ) spectral efficiency lower bounds derived in sections [ sect : muwavese ] and d. as fig . 
3 and 4numerically validate the tightness of the lower bounds , we henceforth regard these lower bounds as approximations . to simplify our exposition , we focus on the asymptotic behaviors as .consider the uplink requirement feasible condition from proposition 3 , leading to the minimum number of the required bss along with increasing mmwave bss ._ _ ( required bs ) _ for , guaranteeing the minimum uplink rate requires the following and mmwave bs density relation : }}\ ) \ ] ] _ to achieve the ever - growing downlink average rate while guaranteeing the minimum uplink average rate , the result shows the required bs density should be increased with much higher order of the mmwave bs density in practical scenarios where .the reason is the logarithmic spectral efficiency improvement by bs densification ( see proposition 1 ) can not overtake the linearly increased mmwave downlink rate .ameliorating this situation in practice therefore requires to procure additional resource as fig .7 visualizes in section v. the following corollary investigates how much amount of resource is required to achieve the goal when bs densification is linearly proportional to the mmwave bs s ._ _ ( required resource ) _ for and for a constant , the minimum uplink average rate constraint requires resource amount as follows .\ ] ] _ the result insists that even when the number of bss increases with the same order of the mmwave bss , it is imperative to procure additional resource in order to increase downlink average rate with assuring the minimum uplink rate .it is visually elaborated by fig . 5 in section v. under bandwidth mhz and mmwave bandwidth mhz ( , , , , , ).,width=340 ] wave bs density as mmwave bs density increases for bandwidths mhz and mhz under mmwave bandwidth mhz ( , , , , , , , ).,width=340 ]this section visualizes the resource management and cell planning guidelines proposed in section iv under practical scenarios . according to ,path loss exponents are set as : , .the indoor mmwave path loss exponent is set as 2 as assumed in section [ sect : channel ] .additionally , user density , indoor region density , main lobe beam width , and noise power is fixed as . the radio resource amount for set as mhz as default , and the amount for mmwave as mhz .5 illustrates proposition 3 and corollary 2 that shows the uplink ( thick blue ) and downlink ( thin green ) resource allocations . for the given environment , guaranteeing the uplink average rate by of the downlink rate requires at least mhz uplink bandwidth . considering the current mhz bandwidth , it implies implies even achieving such low uplink / downlink ratio requires to dedicate most of the bandwidth to the uplink due to severe uplink / downlink rate asymmetry .this contradicts with the current resource allocation tendency that is likely to allocate more resource to downlink . 
focusing on the curve slopesonce the minimum uplink rate requirement is achieved , it shows the surplus bandwidths are mostly allocated to the downlinks so as to maximize the downlink rates .6 corresponds to corollary 1 that illustrates the maximized downlink rate with assuring the minimum uplink / downlink rate ratio along with increasing mmwave bs density .the figure compares the effect of the uplink rate requirement on the resultant downlink rate ( solid versus dashed ) , capturing the downlink rate is pared down to the point achieving the minimum uplink rate .moreover , the figure revels the effect of the indoor region area ( blue versus green ) that larger leads to the lower downlink rate as expected in the discussion of corollary 1 .in addition , the figure reveals that gbps downlink average rate is achieved when mmwave bs density is times larger than the user density for and when it is times larger for .fig . 7 visualizes proposition 4 that validates the proposed resource management and cell planning guidelines .it shows bs densification can not independently cope with the uplink rate requirement problem , but requires the aid of procuring more spectrum .the impact of procuring more spectrum is observed by the curve increasing tendencies in the figure .the bs density for bandwidth mhz ( dotted green ) shows power - law increase while the density for mhz ( solid blue ) does sub - linear increase .these different required bs increasing rates corroborate the necessity of the additional spectrum .the figure in addition reveals that the effect of indoor region area that decreases the minimum required bs density due to its downlink rate reduction discussed in corollary 1 and visualized in fig .in this paper we propose a mmwave overlaid ultra - dense cellular network operating downlink transmissions via both mmwave and bands whereas uplink transmissions only via band due to its technical implementation difficulty at mobile users . regarding this asymmetric uplink and downlink structure ,we provide the resource management and cell planning guidelines so as to maximize downlink average rate while guaranteeing the minimum uplink rate ( see propositions 3 and 4 as well as corollary 2 ) .such results are calculated on the basis of the closed - form mmwave ( see proposition 1 ) and ( see proposition 2 ) spectral efficiencies derived by using stochastic geometry .the weakness of this study is the use of an arbitrary indoor region ( or blockage ) area when calculating mmwave spectral efficiencies , which may alter the network design results .moreover , the blockage modeling in this paper is necessary to be compared with the recent mmw blockage analysis such as .further extension should therefore contemplate more realistic building statistics as well as a rigorous comparison with the preceding works .t. s. rappaport , s. sun , r. mayzus , h. zhao , y. azar , k. wang , g. n. wong , j. k. schulz , m. sammi , and f. gutierrez , `` milimeter wave mobile communications for 5 g cellular : it will work !, '' _ ieee access _ ,vol . 1 , pp . 335349 , 2013 .j. park , s .- l .kim , and j. zander , `` asymptotic behavior of ultra - dense cellular networks and its economic impact , '' _ in proc .ieee global communications conference ( globecom 2014 ) , austin , united sates _, december 2014 .t. kim , j. park , j .- y .seol , s. jeong , j. cho , and w. 
roh , `` tens of gbps support with mmwave beamforming systems for next generation communications , '' _ proc . ieee global communications conference ( globecom 2013 ) _ , pp . 3685 - 3690 , 2013 . m. n. kulkarni , s. singh , and j. g. andrews , `` coverage and rate trends in dense urban mmwave cellular networks , '' _ in proc . ieee global communications conference ( globecom 2014 ) , austin , united states _ , december 2014 . | this paper proposes a cellular network exploiting millimeter - wave ( mmwave ) and ultra - densified base stations ( bss ) to achieve the far - reaching 5 g aim in downlink average rate . the mmwave overlaid network , however , incurs a pitfall : its ample data rate is applicable only to downlink transmissions , owing to the implementation difficulty at mobile users , which leads to an immense difference between uplink and downlink rates . we therefore turn our attention not only to maximizing the downlink rate but also to ensuring a minimum uplink rate . to this end , we first derive the spectral efficiencies of the mmwave overlaid ultra - dense cellular network for both uplink and downlink in closed forms by using stochastic geometry via a lower bound approximation . in a practical scenario , these tractable results reveal that the incumbent micro - wave ( ) cellular resource should be mostly dedicated to uplink transmissions in order to keep pace with the mmwave downlink rate improvement . furthermore , increasing the uplink rate via bs densification cannot solely cope with the mmwave downlink / uplink rate asymmetry , and thus additional spectrum is required in 5 g cellular networks . ultra - dense cellular networks , millimeter - wave , heterogeneous cellular networks , radio resource management , cell planning , stochastic geometry , coverage process , boolean model . |
the origins of the languages have been an issue of investigation and broad interest since ancient times , and recent advances in archeology , genetics and linguistics have been important to a better comprehension of the linguistic diversification . however , there is not a universal consensus concerning the evolution of this diversity .some similarities among distinct groups of languages suggest that they must have a common ancestor . by comparing languages that belong to a same family, linguists try to construct the hypothetical ancestor language . according to the _ out of africa hypothesis _ , the modern human beings originated in africa about 100,000 years ago and substituted all the populations outside africa .this hypothesis receives strong confirmation from the family tree based on a sampling of nuclear dna from a number of living populations .molecular genetics can also give us some insights regarding to the distribution of languages on the earth .cavalli - sforza compared the family tree which one obtains from molecular genetic data at a world level with a family tree established using only linguistic data and his results indicate a fair degree of overlap .the future of languages is a matter of interest and concern as well .it is estimated that at least of the existing languages may be extinct in the next century .while one hundred of languages are spoken by about of the world population , most languages are present in a single or in a few small regions .the loss of linguistic diversity is a subject of worry not only by the linguists , because languages provide an important way to better understand the past of our species .even more , since some languages possess a very elaborated vocabulary to describe the world , the loss would imply also the loss of ecological knowledge . by analyzing all the approximately 6,700 languages on earth, gomes et al showed that ( i ) the language diversity scales with area according to a power law , where , over almost six decades , and ( ii ) the number of languages spoken by a population of size larger than , , also display a power law behaviour : .the critical exponent is comparable to the ones we observe in ecology for the relationships between species diversity and area , which are usually in the range 0.1 to 0.45 .here we study the evolution of the linguistic diversity by introducing a spatial model which considers the underlying diffusion mechanisms that generate and sustain this diversity .the model is used to describe the occupation of a given area by populations speaking various languages . in the process of colonization of regions , language mutation or differentiation andlanguage substitution can take place , and so increase the linguistic diversity . 
in the context of language dynamics , mutations are variations of languages with respect to a common ancestor language .the probability of producing reverse mutations is zero , that is , the language generated by a mutation is always different of the previous ones .our model is defined on a two - dimensional lattice composed by sites with periodic boundary conditions .each lattice site represents a region that can be occupied by a population speaking just one language .we ascribe to each site a given capability , whose value we estimate from a uniform distribution , in the range 0 - 1 .this capability means the amount of resources available to the population which will colonize that place .it is expected that the population size in each cell is proportional to the capability .therefore , the populations are distributed in a heterogeneous environment . in the first step of the dynamics , we randomly choose one site of the lattice to be colonized by a population that speaks the ancestor language .to each language , we assign a fitness value which is defined as the sum of the capabilities of the sites containing populations which speak that specific language . therefore , the initial fitness of the ancestor language is the capability of the initial site . in the second step ,one of the four nearest neighbors of this site will be chosen to be colonized with probability proportional to its capability .we assume that regions containing larger amount of resources are more likely to be colonized faster than poor regions . the referred site is then occupied by a population speaking the ancestor language or a mutant version of it .the assumption of mutations mimics the situation at which one language is initially spoken by populations in different regions , and after some time , modifications of this initial language emerge in one or both populations and the language split into two different languages .the probability of occurrence of a mutation in the process of propagation of the language is , where is a constant , and so the mutation probability is inversely proportional to the fitness of the language .this rule for the mutation procedure , we borrow from population genetics .we observe that small populations are more vulnerable to genetic drift and the rate of drift is inversely proportional to the population size .genetic drift is a mechanism of evolution that acts to change the characteristics of species over time .it is a stochastic effect that arises due the random sampling in the reproduction process . in the subsequent steps , we check what are the empty sites which are on the boundary of the colonized cluster , and we choose one of those empty sites according to their capabilities .those ones with higher capabilities have a higher likelihood to be occupied .we then choose the language to occupy the cell among their neighboring sites .the languages with higher fitness have a higher chance to colonize the cell .this process will continue up to all sites be colonized . at this point, we verify the total number of languages . in order to give to the reader some insight about our model , in figure 1we present the snapshot for a typical realization of the dynamics at the first moment of colonization of all sites . 
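a minimal code sketch of the colonization stage just described is given below, under our reading of the rules: the lattice size and the constant in the mutation probability are placeholders, the mutation probability is capped at one, and no attempt is made at efficiency. it is meant only to make the dynamics concrete, not to reproduce the figures.

```python
import numpy as np

rng = np.random.default_rng(2)

def colonize(L=32, c=0.3):
    cap = rng.random((L, L))               # site capabilities, uniform in (0, 1)
    lang = -np.ones((L, L), dtype=int)     # -1 marks an empty site
    fitness = {}                           # language label -> sum of capabilities

    i0, j0 = rng.integers(L, size=2)       # first colonized site, ancestor language 0
    lang[i0, j0] = 0
    fitness[0] = cap[i0, j0]
    next_label = 1

    def neighbours(i, j):                  # periodic boundary conditions
        return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

    while (lang < 0).any():
        # empty sites on the boundary of the colonized cluster, chosen by capability
        border = [(i, j) for i in range(L) for j in range(L)
                  if lang[i, j] < 0 and any(lang[a, b] >= 0 for a, b in neighbours(i, j))]
        w = np.array([cap[s] for s in border])
        i, j = border[rng.choice(len(border), p=w / w.sum())]

        # invading language chosen among occupied neighbours, proportional to fitness
        cand = [int(lang[a, b]) for a, b in neighbours(i, j) if lang[a, b] >= 0]
        f = np.array([fitness[k] for k in cand], dtype=float)
        chosen = cand[rng.choice(len(cand), p=f / f.sum())]

        if rng.random() < min(1.0, c / fitness[chosen]):   # mutation, prob. ~ c / fitness
            chosen = next_label                            # a brand-new language
            fitness[chosen] = 0.0
            next_label += 1

        lang[i, j] = chosen
        fitness[chosen] += cap[i, j]

    return lang, cap, fitness

lang, cap, fitness = colonize()
print(len(fitness))                        # linguistic diversity D at full colonization
```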
in this figureeach color represent a different language domain .the color bar shows the label for each language .and ,width=340,height=340 ] in figure 2 , we show as a function of the area ( total number of sites in the lattice ) for two different values of the constant .we obtain each point by taking averages over 100 independent simulations for , over 50 for , 400 and 500 and over 20 for .we notice from this figure the existence of two distinct scaling regions , where . when , we estimate the exponent for , whereas for .when , we find for , and when . for both values of , we obtain exponents in agreement with those observed for the distribution of languages on earth .each power law extends over approximately two or more decades . for small and intermediate sized areas ,the language diversity increases more quickly with the area , when compared to large areas . when , it is not possible to distinguish the two scaling regimes and in this case we estimate .as the simulation is very time consuming for lattices of size , it is not perfectly clear for us if the second regime is a true scaling or a transient regime with going to a constant value as increases or perhaps with growing logarithmically with for very large values of area . as a function of the area for two values of mutation probability : , 0.73 ( from bottom to top ) .the exponents obtained for are for , and for .for we have for , and for .,width=340,height=340 ] in order to characterize the diffusion in this process we investigate the time evolution of the average area occupied by the typical language . in our model , one time step represents the process of colonization of just one site by one language .the average area is where is the area occupied by the language in time and is the diversity of languages in time .by the end of the process of colonization we have , and since the total area is equal to and the diversity is proportional to . is a measure of the capacity of the languages to diffuse or to spread across the entire territory .thus , the diffusion exponent can be introduced using the relation from ( 2 ) and ( 3 ) we conclude that , and thus assumes the standard brownian value if . for have anomalous diffusion and , indicating a progressive difficulty for language diffusion . for the current distribution of languages on earththis exponent is , reflecting the geographical limitations that languages have to face in their process of expansion .this particular anomalous value is close to those obtained for diffusion in two - dimensional percolation below the percolation threshold .we compare our previous estimative ( eq .( 3 ) ) with the value of obtained by observing the evolution of in the simulations and verify a good agreement between them . in figure 3we illustrate the evolution of for one run where and . for this particular situation we obtained , i. e. an exponent close to the brownian value . for and .for this curve , we have .,width=340,height=340 ] ( o ) and as a function of for ( a ) small and intermediate areas and ( b ) large areas.,width=340,height=340 ] number of languages with population greater than , , as a function of . 
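exponents such as the diversity-area exponent and the diffusion exponent quoted above are read off from log-log plots like figs. 2 and 3. a least-squares fit on the logarithms, restricted to one scaling window at a time (mirroring the two regimes reported above), is one simple way to extract them; the data in the snippet below are synthetic, with a known exponent, purely for illustration.

```python
import numpy as np

def power_law_exponent(x, y):
    """Least-squares slope of log y against log x, i.e. the exponent z in y ~ x**z."""
    slope, _intercept = np.polyfit(np.log(x), np.log(y), 1)
    return slope

# synthetic illustration: diversity D measured at several areas A within one regime
A = np.array([1e2, 4e2, 1.6e3, 6.4e3, 2.56e4, 1.024e5])
D = 3.0 * A ** 0.39
print(power_law_exponent(A, D))      # recovers ~0.39

# the same routine applied to <A>(t) versus t yields the diffusion exponent
```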
with for .,width=340,height=340 ] number of languages as a function of the area for after the process of interaction among populations .the distinct curves show the cases where the fitness of the language that occupy the site is multiplied by 1 , 10 and 100 ( from bottom to top).,width=340,height=340 ] figure 4 displays the dependence of the exponents and on the parameter : we observe that for small and intermediate areas , scales with as , which means that a small increment of the mutation probability results in a fast increasing of the diversity . for large areas is approximately constant for and quickly grows for ( this sudden increase in possibly reflects the finite size of the lattice ) . in figure 5we plot the number of languages with population size greater than , , as a function of . in order to obtain the curves, we assume that each site contributes with 1 person to the population .the data points were estimated over 10 independent runs with and . in close analogy with the distribution of languages on earth , we find the scaling regime where , along almost three decades in .after the populations have filled up the lattice , we initiate the second part of our analysis , which consider the process of interaction among populations . this interaction represents the flow of people that speak different languages among nations or even the people using new technologies which permit their communication throughout the world .now , each time step corresponds to visit all cells on the lattice . in the visit to a given cellwe compare the fitness of its language with the fitness values of languages which are spoken in its neighborhood .one of the five languages will be chosen to invade ( or to stay ) in that site with probability proportional to their fitness .the probability of mutation is , as before , proportional to the inverse of the fitness of the invading or staying language .after the stabilization , we estimate the average of the number of languages over a given time interval . in figure 6we plot the number of languages as a function of the area after considering the process of interaction .we obtain each point taking averages over 10 independent simulations .we also show the cases where the fitness of the language that is already in the site is increased by a factor 10 and 100 in order to compete with the neighbors .the purpose of this augment in the fitness is to increase the selective advantage of the population that already colonizes the site .when this selective advantage is small , the number of languages presents a linear growth and then decreases with the area up to reaching an asymptotic value .when the selective advantage is high , the number of languages initially presents a linear growth with the area , followed by a regime of fluctuation of the diversity and above a given area threshold it decreases abruptly . for small areas ,the fitnesses of populations are not high when compared to those one which can be obtained by populations in large areas .therefore , in order to compete with the populations which already colonize the sites ( and possess a selective advantage ) , languages need to colonize large areas .this is the reason why we observe the initial linear growth of language diversity for small area when we consider a high selective advantage . 
above a certain area ,some populations have a very high fitness in order to compete with other languages and dominate .thus , the diversity drastically decreases and only a few languages survive .we have introduced in this work a simple computer model to simulate some aspects of the linguistic diversity on earth .surprisingly , this model is able to generate important scaling laws ( figs . 2 and 5 ) in close resemblance to those observed in the actual distribution of languages .we verified that the mutation probability of languages displays a decisive role for the maintenance of the linguistic diversity . on the other hand ,the diffusion exponent is very large for values of close to 1 ( fig .4 ) , that is , in the situation in which we have a high diversity per area , indicating that languages do not have much facility to diffuse and remain essentially localized in linguistic niches . we do not discard the possibility that the presence of two scaling regimes for the linguistic diversity in our simulations may be a consequence of the regular structure of the lattice . in order to investigate the role of the topology in this model, we are currently studying the process of diffusion of languages in a percolation cluster .after the process of interaction among languages we observed the surviving of just a few languages with a very high fitness ( fig .6 ) . from this, we can ascertain how important is for linguistic diversity to have well stablished languages spoken by a large number of people . | here we describe how some important scaling laws observed in the distribution of languages on earth can emerge from a simple computer simulation . the proposed language dynamics includes processes of selective geographic colonization , linguistic anomalous diffusion and mutation , and interaction among populations that occupy different regions . it is found that the dependence of the linguistic diversity on the area after colonization displays two power law regimes , both described by critical exponents which are dependent on the mutation probability . most importantly for the future prospect of world s population , our results show that the linguistic diversity always decrease to an asymptotic very small value if large areas and sufficiently long times of interaction among populations are considered . departamento de fsica , universidade federal de pernambuco , 50670 - 901 , recife , pe , brazil |
over the course of the past 140 years , the field of professional pure mathematics ( analysis in particular ) , and to a large extent also its professional historiography , have become increasingly dominated by a particular philosophical disposition . we argue that such a disposition is akin to nominalism , and examine its ramifications . in 1983 , j. burgess proposed a useful dichotomy for analyzing nominalistic narratives . the starting point of his critique is his perception that a philosopher s job is not to rule on the ontological merits of this or that scientific entity , but rather to try to understand those entities that are employed in our best scientific theories . from this viewpoint , the problem of nominalism is the awkwardness of the contortions a nominalist goes through in developing an alternative to his target scientific practice , an alternative deemed ontologically `` better '' from his reductive perspective , but in reality amounting to the imposition of artificial strictures on the scientific practice . burgess introduces a dichotomy of _ hermeneutic _ versus _ revolutionary _ nominalism . _ hermeneutic nominalism _ is the hypothesis that science , properly interpreted , already dispenses with mathematical objects ( entities ) such as numbers and sets . meanwhile , _ revolutionary nominalism _ is the project of replacing current scientific theories by alternatives dispensing with mathematical objects , see burgess and burgess and rosen . nominalism in the philosophy of mathematics is often understood narrowly , as exemplified by the ideas of j. s. mill and p. kitcher , going back to aristotle . however , the burgessian distinction between hermeneutic and revolutionary reconstructions can be applied more broadly , so as to include nominalistic - type reconstructions that vary widely in their ontological target , namely the variety of abstract objects ( entities ) they seek to challenge ( and , if possible , eliminate ) as being merely conventional , see . burgess quotes at length yu . manin s critique in the 1970s of mathematical nominalism of constructivist inspiration , whose ontological target is the classical infinity , namely , `` abstractions which are infinite and do not lend themselves to a constructivist interpretation . '' this suggests that burgess would countenance an application of his dichotomy to nominalism of a constructivist inspiration . the ontological target of the constructivists is the concept of cantorian infinities , or more fundamentally , the logical principle of the law of excluded middle ( lem ) . coupled with a classical interpretation of the existence quantifier , lem is responsible for propelling the said infinities into a dubious existence . lem is the abstract object targeted by bishop s constructivist nominalism , which can therefore be called an anti - lem nominalism . may be found in section [ two ] , see footnote [ root ] .
]thus , anti - lem nominalism falls within the scope of the burgessian critique , and is the first of the nominalistic reconstructions we wish to analyze . the anti - lem nominalistic reconstruction was in fact a _ re _ -reconstruction of an earlier nominalistic reconstruction of analysis , dating from the 1870s . the earlier reconstruction was implemented by the great triumvirate c. boyer refers to cantor , dedekind , and weierstrass as `` the great triumvirate '' , see . ] of cantor , dedekind , and weierstrass . the ontological target of the triumvirate reconstruction was the abstract entity called the infinitesimal , a basic building block of a continuum , according to a line of investigators harking back to greek antiquity . see also footnote [ cauchy ] on cauchy . ] to place these historical developments in context , it is instructive to examine felix klein s remarks dating from 1908 . having outlined the developments in real analysis associated with weierstrass and his followers , klein pointed out that `` the scientific mathematics of today is built upon the series of developments which we have been outlining . but an essentially different conception of infinitesimal calculus has been running parallel with this [ conception ] through the centuries . '' such a different conception , according to klein , `` harks back to old metaphysical speculations concerning the _ structure of the continuum _ according to which this was made up of [ ... ] infinitely small parts [ emphasis added authors ] . '' the significance of the triumvirate reconstruction has often been measured by the yardstick of the extirpation of the infinitesimal . thus , after describing the formalisation of the real continuum in the 1870s , on pages 127 - 128 of his retiring presidential address in 1902 , e.
hobson remarks triumphantly as follows : `` it should be observed that the criterion for the convergence of an aggregate [ i.e. an equivalence class defining a real number ] is of such a character that no use is made in it of _ infinitesimals _ '' [ emphasis added authors ] .hobson reiterates : `` in all such proofs [ of convergence ] the only statements made are as to relations of finite numbers , no such entities as _ infinitesimals _ being recognized or employed .such is the essence of the [ proofs with which we are familiar '' [ emphasis added authors ] .the tenor of hobson s remarks is that weierstrass s fundamental accomplishment was the elimination of infinitesimals from foundational discourse in analysis . ]the infinitesimal ontological target has similarly been the motivating force behind a more recent nominalistic reconstruction , namely a nominalistic re - appraisal of the meaning of cauchy s foundational work in analysis .we will analyze these three nominalistic projects through the lens of the dichotomy introduced by burgess .our preliminary conclusion is that , while the triumvirate reconstruction was primarily revolutionary in the sense of burgess , and the ( currently prevailing ) cauchy reconstruction is mainly hermeneutic , the anti - lem reconstruction has combined elements of both types of nominalism .we will examine the effects of a nominalist disposition on historiography , teaching , and research .a traditional view of 19th century analysis holds that a search for rigor inevitably leads to epsilontics , as developed by weierstrass in the 1870s ; that such inevitable developments culminated in the establishment of ultimate set - theoretic foundations for mathematics by cantor ; and that eventually , once the antinomies sorted out , such foundations were explicitly expressed in axiomatic form by zermelo and fraenkel .such a view entails a commitment to a specific destination or ultimate goal of scientific devepment as being pre - determined and intrinsically inevitable .the postulation of a specific outcome , believed to be the inevitable result of the development of the discipline , is an outlook specific to the mathematical community .challenging such a _ belief _ appears to be a radical proposition in the eyes of a typical professional mathematician , but not in the eyes of scientists in related fields of the exact sciences .it is therefore puzzling that such a view should be accepted without challenge by a majority of historians of mathematics , who tend to toe the line on the mathematicians belief .could mathematical analysis have followed a different path of development ?related material appears in alexander , giordano , katz and tall , kutateladze , mormann , sepkoski , and wilson .[ two ] this section is concerned with e. bishop s approach to reconstructing analysis .bishop s approach is rooted in brouwer s revolt against the non - constructive nature of mathematics as practiced by his contemporaries .is there meaning after lem ?the brouwer hilbert debate captured the popular mathematical imagination in the 1920s .brouwer s crying call was for the elimination of most of the applications of lem from meaningful mathematical discourse .burgess discusses the debate briefly in his treatment of nominalism in .we will analyze e. bishop s implementation of brouwer s nominalistic project . for more details .this could not be otherwise , since a verificational interpretation of the quantifiers necessarily results in a clash with classical mathematics . 
as a matter of presentation, the conflict with classical mathematics had been de - emphasized by bishop .bishop finesses the issue of brouwer s theorems ( e.g. , that every function is continuous ) by declaring that he will only deal with uniformly continuous functions to begin with . in bishopian mathematics, a circle can not be decomposed into a pair of antipodal sets .a counterexample to the classical extreme value theorem is discussed in , see footnote [ lpo ] for details . ]it is an open secret that the much - touted success of bishop s implementation of the intuitionistic project in his 1967 book is due to philosophical compromises with a platonist viewpoint that are resolutely rejected by the intuitionistic philosopher m. dummett .thus , in a dramatic departure from both kronecker , in the main text around footnote [ boniface ] . ] and brouwer , bishopian constructivism accepts the completed ( actual ) infinity of the integers .intuitionists view as a potential totality ; for a more detailed discussion see , e.g. , . ]bishop expressed himself as follows on the first page of his book : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ in another universe , with another biology and another physics , [ there ] will develop mathematics which in essence is the same as ours ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ since the sensory perceptions of the human _ body _ are physics- and chemistry - bound , a claim of such trans - universe invariance amounts to the positing of a _disembodied _ nature of the infinite natural number system , transcending physics and chemistry .. bishop s disembodied integers illustrate the awkward philosophical contorsions which are a tell - tale sign of nominalism .an alternative approach to the problem is pursued in modern cognitive science .bishop s disembodied integers , the cornerstone of his approach , appear to be at odds with modern cognitive theory of _ embodied _ knowledge , see tall , lakoff and nez , sfard , yablo .reyes presents an intriguing thesis concerning an allegedly rhetorical nature of newton s attempts at grounding infinitesimals in terms of _ moments _ or _ nascent and evanescent quantities _ , and leibniz s similar attempts in terms of `` a general heuristic under [ the name of ] the principle of continuity '' .he argues that what made these theories vulnerable to criticism is the reigning principle in 17th century methodology according to which abstract objects must necessarily have empirical counterparts / referents .d. sherry points out that `` formal axiomatics emerged only in the 19th century , after geometry embraced objects with no empirical counterparts ( e.g. , poncelet s points at infinity ... ) '' .see also s. feferman s approach of conceptual structuralism , for a view of mathematical objects as mental conceptions . 
]what type of nominalistic reconstruction best fits the bill of bishop s constructivism ? bishop s rejection of classical mathematics as a `` debasement of meaning '' would place him squarely in the camp of revolutionary nominalisms in the sense of burgess ; yet some elements of bishop s program tend to be of the hermeneutic variety , as well . as an elementary example , consider bishop s discussion of the irrationality of the square root of 2 in . irrationality is defined constructively in terms of being quantifiably apart from each rational number . the classical proof of the irrationality of $\sqrt{2}$ is a proof by contradiction . namely , we _ assume _ a hypothesized equality $\sqrt{2}=p/q$ , examine the parity of the powers of 2 , and arrive at a contradiction . at this stage , irrationality is considered to have been proved , in classical logic . the classical proof showing that $\sqrt{2}$ is _ not rational _ is , of course , acceptable in intuitionistic logic . to pass from this to the claim of its _ irrationality _ as defined above , requires lem ( see footnote [ root ] for details ) . ] however , as bishop points out , the proof can be modified slightly so as to avoid lem , and acquire an enhanced _ numerical meaning _ . thus , _ without _ exploiting the equality , one can exhibit effective positive lower bounds for the difference $|\sqrt{2}-p/q|$ in terms of the denominator $q$ , resulting in a constructively adequate proof of irrationality . such a proof may be given as follows . for each rational $p/q$ , the integer $2q^2$ is divisible by an odd power of $2$ , while $p^2$ is divisible by an even power of $2$ . hence $2q^2-p^2\not=0$ ( here we have applied lem to an effectively decidable predicate over the integers , or more precisely the law of trichotomy ) , and therefore $|2q^2-p^2|\geq 1$ . since the decimal expansion of $\sqrt{2}$ starts with $1.41\ldots$ , we may assume $p/q<3/2$ . it follows that $$\left|\sqrt{2}-\frac{p}{q}\right| = \frac{|2q^2-p^2|}{q^2\left(\sqrt{2}+\frac{p}{q}\right)} \geq \frac{1}{3q^2} ,$$ yielding a numerically meaningful proof of irrationality , which is a special case of liouville s theorem on diophantine approximation of algebraic numbers , see . ] such a proof is merely a modification of a classical proof , and can thus be considered a hermeneutic reconstruction thereof . a number of classical results ( though by no means all ) can be reinterpreted constructively , resulting in an enhancement of their numerical meaning , in some cases at little additional cost . this type of project is consistent with the idea of a hermeneutic nominalism in the sense of burgess , and related to the notion of _ liberal constructivism _ in the sense of g. hellman ( see below ) . the intuitionist / constructivist opposition to classical mathematics is predicated on the philosophical assumption that `` meaningful '' mathematics is mathematics done without the law of excluded middle . e.
bishop ( following brouwer but surpassing him in rhetoric ) is on record making statements of the sort _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ `` very possibly classical mathematics will cease to exist as an independent discipline '' _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ( to be replaced , naturally , by constructive mathematics ) ; and _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ `` brouwer s criticisms of classical mathematics were concerned with what i shall refer to as ` the debasement of meaning ' '' . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ such a stance posits intuitionism / constructivism as an _ alternative _ to classical mathematics , and is described as _radical constructivism _ by g. hellman .radicalism is contrasted by hellman with a _ liberal _ brand of intuitionism ( a _ companion _ to classical mathematics ) .liberal constructivism may be exemplified by a. heyting , who was brouwer s student , and formalized intuitionistic logic . to motivate the long march through the foundations occasioned by a lem - eliminative agenda, bishop goes to great lengths to dress it up in an appealing package of a _ theory of meaning _ that first conflates meaning with numerical meaning ( a goal many mathematicians can relate to ) , and then numerical meaning with lem extirpation .. 
] rather than merely rejecting lem or related logical principles such as trichotomy which sound perfectly unexceptionable to a typical mathematician , bishop presents these principles in quasi metaphysical garb of principles of omniscience".[multiblock footnote omitted ] bishop retells a creation story of intuitionism in the form of an imaginary dialog between brouwer and hilbert where the former completely dominates the exchange .indeed , bishop s imaginary brouwer - hilbert exchange is dominated by an unspoken assumption that brouwer is the only one who seeks meaning " , an assumption that his illustrious opponent is never given a chance to challenge .meanwhile , hilbert s comments in 1919 reveal clearly his attachment to meaning which he refers to as _ internal necessity _ : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ we are not speaking here of arbitrariness in any sense .mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules .rather , it is a conceptual system possessing internal necessity that can only be so and by no means otherwise ( cited in corry ) . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ a majority of mathematicians ( including those favorable to constructivism ) feel that an implementation of bishop s program does involve a significant complication of the technical development of analysis , as a result of the nominalist work of lem - elimination .bishop s program has met with a certain amount of success , and attracted a number of followers .part of the attraction stems from a detailed lexicon developed by bishop so as to challenge received ( classical ) views on the nature of mathematics .a constructive lexicon was a _sine qua non _ of his success .a number of terms from bishop s constructivist lexicon constitute a novelty as far as intuitionism is concerned , and are not necessarily familiar even to someone knowledgeable about intuitionism _ per se_.it may be helpful to provide a summary of such terms for easy reference , arranged alphabetically , as follows . _ debasement of meaning _ is the cardinal sin of the classical opposition , from cantor to keisler , but see footnote [ keisler2 ] . ]committed with _( see below ) .the term occurs in bishop s _ schizophrenia _ and _ crisis _ texts . _ fundamentalist excluded thirdist _ is a term that refers to a classically - trained mathematician who has not yet become sensitized to implicit use of the law of excluded middle ( i.e. 
, _ excluded third _ ) in his arguments , see .this use of the term `` fundamentalist excluded thirdist '' is in a text by richman , not bishop .i have not been able to source its occurrence in bishop s writing . in a similar vein ,an ultrafinitist recently described this writer as a `` choirboy of infinitesimology '' ; however , this term does not seem to be in general use .see also footnote [ rich2 ] . ] _ idealistic mathematics _ is the output of platonist mathematical sensibilities , abetted by a metaphysical faith in _( see below ) , and characterized by the presence of merely a _ peculiar pragmatic content _ ( see below ) . _ integer _ is the revealed source of all _ meaning _ ( see below ) , posited as an alternative foundation displacing both formal logic , axiomatic set theory , and recursive function theory .the integers wondrously escape . ] the vigilant scrutiny of a constructivist intelligence determined to uproot and nip in the bud each and every platonist fancy of a concept _ external _ to the mathematical mind . _ integrity _ is perhaps one of the most misunderstood terms in errett bishop s lexicon .pourciau in his _ education _ appears to interpret it as an indictment of the ethics of the classical opposition . yet in his _ schizophrenia _ text , bishop merely muses : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [ ... ] i keep coming back to the term `` integrity '' . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ note that the period is in the original .bishop describes _integrity _ as the opposite of a syndrome he colorfully refers to as _ schizophrenia _ , characterized 1 . by a rejection of common sense in favor of formalism , 2 . by _ debasement of meaning _ ( see above ) , 3 . as well as by a list of other ills but _ excluding _ dishonesty .now the root of is identical with that of _ integer _ ( see above ) , the bishopian ultimate foundation of analysis .bishop s evocation of _ integrity _ may have been an innocent pun intended to allude to a healthy constructivist mindset , where the _ integers _ are uppermost.[multiblock footnote omitted ] _ law of excluded middle ( lem ) _ is the main source of the non - constructivities of classical mathematics . and footnote [ lpo ] for some examples . ]every formalisation of _ intuitionistic logic _ excludes _ lem _ ; adding _ lem _ back again returns us to _ classical logic_. _ limited principle of omniscience ( lpo ) _ is a weak form of _ lem _ ( see above ) , involving _lem_-like oracular abilities limited to the context of integer sequences . for a discussion of lpo . ]the _ lpo _ is still unacceptable to a constructivist , but could have served as a basis for a _ meaningful _ dialog between brouwer and hilbert ( see ) , that could allegedly have changed the course of 20th century mathematics . _ meaning _ is a favorite philosophic term in bishop s lexicon , necessarily preceding an investigation of _ truth _ in any coherent discussion . in bishops writing , the term _ meaning _ is routinely conflated with _ numerical meaning _ ( see below ) . _ numerical meaning _ is the content of a theorem admitting a proof based on intuitionistic logic , and expressing computationally meaningful facts about the integers .appears in footnote [ root ] . 
]the conflation of _ numerical meaning _ with _ meaning _ par excellence in bishop s writing , has the following two consequences : 1 .it empowers the constructivist to sweep under the rug the distinction between pre - lem and post - lem numerical meaning , lending a marginal degree of plausibility to a dismissal of classical theorems which otherwise appear eminently coherent and meaningful ; for a discussion of the classical extreme value theorem and its lemless remains . ] and 2 .it allows the constructivist to enlist the support of _ anti - realist _ philosophical schools of thought ( e.g. michael dummett ) in the theory of meaning , inspite of the apparent tension with bishop s otherwise _ realist _ declarations ( see entry _realistic mathematics _ below ) . _ peculiar pragmatic content _ is an expression of bishop s that was analyzed by billinge .it connotes an alleged lack of empirical validity of classical mathematics , when classical results are merely _ inference tickets _ used in the deduction of other mathematical results . _ realistic mathematics_. the dichotomy of `` realist '' _ versus _ `` idealist '' ( see above ) is the dichotomy of `` constructive '' _ versus _ `` classical '' mathematics , in bishop s lexicon .there are two main narratives of the intuitionist insurrection , one _ anti - realist _ and one _ realist_. the issue is discussed in the next section .the _ anti - realist _ narrative , mainly following michael dummett , traces the original sin of classical mathematics with _lem _ , all the way back to aristotle.the entry under _ debasement of meaning _ in section [ glossary ] would read , accordingly , `` the classical opposition from aristotle to keisler '' ; see main text at footnote [ keisler1 ] . ]the law of excluded middle ( see section [ glossary ] ) is the mathematical counterpart of geocentric cosmology ( alternatively , of phlogiston , see ) , slated for the dustbin of history.following kronecker and brouwer , dummett rejects actual infinity , at variance with bishop . ]the anti - realist narrative dismisses the quine - putnam indispensability thesis ( see feferman ( * ? ? ?* section iib ) ) on the grounds that a _ philosophy - first _ examination of first principles is the unique authority empowered to determine the correct way of doing mathematics.in hellman s view , `` any [ ... ] attempt to reinstate a ` first philosophical ' theory of meaning prior to all science is doomed '' .what this appears to mean is that , while there can certainly be a philosophical notion of meaning before science , any attempt to _ prescribe _ standards of meaning _ prior _ to the actual practice of science , is _doomed_. ] generally speaking , it is this narrative that seems to be favored by a number of philosophers of mathematics .dummett opposes a truth - valued , bivalent semantics , namely the notion that truth is one thing and knowability another , on the grounds that it violates dummett s _ manifestation requirement _ , see shapiro .the latter requirement , in the context of mathematics , is merely a _restatement _ of the intuitionistic principle that truth is tantamount to verifiability ( necessitating a constructive interpretation of the quantifiers ) .thus , an acceptance of dummett s manifestation requirement , leads to intuitionistic semantics and a rejection of lem . 
in his 1977 foundational text originating from 1973 lecture notes , dummett is frank about the source of his interest in the intuitionist / classical dispute in mathematics : `` this dispute bears a * strong resemblance * to other disputes over realism of one kind or another , that is , concerning various kinds of subject - matter ( or types of statement ) , including that over realism about the physical universe [ emphasis added authors ] . '' what dummett proceeds to say at this point reveals the nature of his interest : `` but intuitionism represents the only sustained attempt by the opponents of a * realist view * to work out a coherent embodiment of their philosophical beliefs [ emphasis added authors ] . '' what interests dummett here is his fight against the _ realist view _ . what endears intuitionists to him is the fact that they have succeeded where the phenomenalists have not : `` phenomenalists might have attained a greater success if they had made a remotely comparable effort to show in detail what consequences their interpretation of material - object statements would have for our employment of our language . '' however , dummett s conflation of the mathematical debate and the philosophical debate could be challenged . we hereby explicitly sidestep the debate opposing the realist ( as opposed to the super - realist , see w. tait ) position and the anti - realist position . on the other hand , we observe that a defense of indispensability of mathematics would necessarily start by challenging dummett s `` manifestation '' . more precisely , such a defense would have to start by challenging the extension of dummett s manifestation requirement , from the realm of philosophy to the realm of mathematics . while dummett chooses to pin the opposition to intuitionism to a belief in an `` interpretation of mathematical statements as referring to an independently existing and objective reality [ , ] '' ( i.e. a platonic world of mathematical entities ) , j. avigad memorably retorts as follows : `` we do not need fairy tales about numbers and triangles prancing about in the realm of the abstracta . '' [ multiblock footnote omitted ] meanwhile , the _ realist _ narrative of the intuitionist insurrection appears to be more consistent with what bishop himself actually wrote .
in his foundational essay , bishop expresses his position as follows : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ as pure mathematicians , we must decide whether we are playing a game , or whether our theorems describe an external reality ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the right answer , to bishop , is that they do describe an external reality .the dichotomy of `` realist '' _ versus _ `` idealist '' is the dichotomy of `` constructive '' _ versus _ `` classical '' mathematics , in bishop s lexicon ( see entry under _idealistic mathematics _ in section [ glossary ] ) .bishop s ambition is to incorporate `` such mathematically oriented disciplines as physics '' as part of his constructive revolution , revealing a recognition , on his part , of the potency of the quine - putnam indispensability challenge .category '' . ] n. kopell and g. stolzenberg , close associates of bishop , published a three - page _ commentary _ following bishop s _ crisis _ text .their note places the original sin with _ lem _ at around 1870 ( rather than greek antiquity ) , when the `` flourishing empirico - inductive tradition '' began to be replaced by the `` strictly logico - deductive conception of pure mathematics '' .kopell and stolzenberg do nt hesitate to compare the _ empirico - inductive tradition _ in mathematics prior to 1870 , to physics , in the following terms : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [ mathematical ] theories were theories about the phenomena , just as in a physical theory ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ similar views have been expressed by d. bridges , as well as heyting .w. tait argues that , unlike intuitionism , constructive mathematics is part of classical mathematics . in fact , it was frege s revolutionary logic ( see gillies ) and other foundational developments that created a new language and a new paradigm , transforming mathematical foundations into fair game for further investigation , experimentation , and criticism , including those of constructivist type .the philosophical dilemmas in the anti - lem sector discussed in this section are a function of the nominalist nature of its scientific goals .a critique of its scientific methods appears in the next section .we would like to analyze more specifically the constructivist dilemma with regard to the following two items : 1 . the extreme value theorem , and 2 .the hawking - penrose singularity theorem .concerning ( 1 ) , note that a constructive treatment of the extreme value theorem ( evt ) by troelstra and van dalen brings to the fore the instability of the ( classically unproblematic ) maximum by actually constructing a counterexample .such a counterexample relies on assuming that the principle fails .. 
] this is a valuable insight , if viewed as a companion to classical mathematics .if viewed as an alternative , we are forced to ponder the consequences of the loss of the evt .kronecker is sometimes thought of as the spiritual father of the brouwer / bishop / dummett tendency .kronecker was active at a time when the field of mathematics was still rather compartmentalized .thus , he described a 3-way partition thereof into ( a ) analysis , ( b ) geometry , and ( c ) mechanics ( presumably meaning mathematical physics ) .kronecker proceeded to state that it is only the analytic one - third of mathematics that is amenable to a constructivisation in terms of the natural numbers that `` were given to us , etc . '' , but readily conceded that such an approach is inapplicable in the remaining two - thirds , geometry and physics.see boniface and schappacher .] nowadays mathematicians adopt a more unitary approach to the field , and kronecker s partition seems provincial , but in fact his caution was vindicated by later developments , and can even be viewed as visionary . consider a field such as general relativity , which in a way is a synthesis of kronecker s remaining two - thirds , namely , geometry and physics .versions of the extreme value theorem are routinely exploited here , in the form of the existence of solutions to variational principles , such as geodesics , be it spacelike , timelike , or lightlike . at a deeper level , s.p .novikov wrote about hilbert s meaningful contribution to relativity theory , in the form of discovering a lagrangian for einstein s equation for spacetime .hilbert s deep insight was to show that general relativity , too , can be written in lagrangian form , which is a satisfying conceptual insight .a radical constructivist s reaction would be to dismiss the material discussed in the previous paragraph as relying on lem ( needed for the evt ) , hence lacking numerical meaning , and therefore meaningless .in short , radical constructivism ( as opposed to the liberal variety ) adopts a theory of meaning amounting to an ostrich effectsuch an effect is comparable to a traditional educator s attitude toward students nonstandard conceptions studied by ely , see main text in section [ seven ] around footnote [ ostrich2 ] .] as far as certain significant scientific insights are concerned . a quarter century ago , m. beeson already acknowledged constructivism s problem with the calculus of variations in the following terms : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ calculus of variations is a vast and important field which lies right on the frontier between constructive and non - constructive mathematics . __ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ an even more striking example is the hawking - penrose singularity theorem , whose foundational status was explored by hellman .the theorem relies on fixed point theorems and therefore is also constructively unacceptable , at least in its present form .however , the singularity theorem does provide important scientific insight . 
roughly speaking, one of the versions of the theorem asserts that certain natural conditions on curvature ( that are arguably satisfied experimentally in the visible universe ) force the existence of a singularity when the solution is continued backward in time , resulting in a kind of a theoretical justification of the big bang .such an insight can not be described as `` meaningless '' by any reasonable standard of meaning preceding nominalist commitments .this section analyzes a nominalistic reconstruction successfully implemented at the end of the 19th century by cantor , dedekind , and weierstrass .the rigorisation of analysis they accomplished went hand - in - hand with the elimination of infinitesimals ; indeed , the latter accomplishment is often viewed as a fundamental one .we would like to state from the outset that the main issue here is not a nominalistic attitude on the part of our three protagonists themselves .such an attitude is only clearly apparent in the case of cantor ( see below ) . rather , we argue that the historical context in the 1870s favored the acceptance of their reconstruction by the mathematical community , due to a certain philosophical disposition .some historical background is in order .as argued by d. sherry , george berkeley s 1734 polemical essay conflated a logical criticism and a metaphysical criticism.robinson distinguished between the two criticisms in the following terms : `` the vigorous attack directed by berkeley against the foundations of the calculus in the forms then proposed is , in the first place , a brilliant exposure of their logical inconsistencies . but in criticizing infinitesimals of all kinds , english or continental , berkeley also quotes with approval a passage in which locke rejects the actual infinite ... it is in fact not surprising that a philosopher in whose system perception plays the central role , should have been unwilling to accept infinitary entities '' . ] in the intervening centuries , mathematicians have not distinguished between the two criticisms sufficiently , and grew increasingly suspicious of infinitesimals .the metaphysical criticism stems from the 17th century doctrine that each theoretical entity must have an empirical counterpart / referent before such an entity can be used meaningfully ; the use of infinitesimals of course would fly in the face of such a doctrine .. ] today we no longer accept the 17th century doctrine .however , in addition to the metaphysical criticism , berkeley made a poignant logical criticism , pointing out a paradox in the definition of the derivative .the seeds of an approach to resolving the logical paradox were already contained in the work of fermat , .robinson modified the definition of the derivative by introducing the standard part function , which we refer to as the fermat - robinson standard part in sections [ rival1 ] and [ rival2 ] . ] but it was robinson who ironed out the remaining logical wrinkle .thus , mathematicians throughout the 19th century were suspicious of infinitesimals because of a lingering influence of 17th century doctrine , but came to reject them because of what they felt were logical contradictions ; these two aspects combined into a nominalistic attitude that caused the triumvirate reconstruction to spread like wildfire . the tenor of hobson s remarks , .] 
as indeed of a majority of historians of mathematics , is that weierstrass s fundamental accomplishment was the elimination of infinitesimals from foundational discourse in analysis . infinitesimals were replaced by arguments relying on real inequalities and multiple - quantifier logical formulas . the triumvirate transformation had the effect of a steamroller flattening a b - continuum . ] into an a - continuum . even the ardent enthusiasts of weierstrassian epsilontics recognize that its practical effect on mathematical discourse has been `` appalling '' ; thus , j. pierpont wrote as follows in 1899 : `` the mathematician of to - day , trained in the school of weierstrass , is fond of speaking of his science as ` die absolut klare wissenschaft . ' any attempts to drag in _ metaphysical speculations _ are resented with _ indignant energy _ . with almost _ painful emotions _ he looks back at the sorry mixture of metaphysics and mathematics which was so common in the last century and at the beginning of this [ emphasis added authors ] . '' pierpont concludes : `` the analysis of to - day is indeed a transparent science . built up on the simple notion of number , its truths are the most solidly established in the whole range of human knowledge . it is , however , not to be overlooked that the price paid for this clearness is _ appalling _ , it is total separation from the world of our senses [ emphasis added authors ] . '' it is instructive to explore what form the `` indignant energy '' referred to by pierpont took in practice , and what kind of rhetoric accompanies the `` painful emotions '' . a reader attuned to 19th century literature will not fail to recognize _ infinitesimals _ as the implied target of pierpont s epithet `` metaphysical speculations '' . thus , cantor _ published _ a `` proof - sketch '' of a claim to the effect that the notion of an infinitesimal is inconsistent . by this time , several detailed constructions of non - archimedean systems had appeared , notably by stolz and du bois - reymond . when stolz published a defense of his work , arguing that technically speaking cantor s criticism does not apply to his system , cantor responded by artful innuendo aimed at undermining the credibility of his opponents .
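the term `` non - archimedean '' admits a one - line formulation which may help here ( a standard textbook rendering on our part , not a quotation from stolz or du bois - reymond ) : an ordered system extending the rationals is non - archimedean when it contains a positive element $\varepsilon$ smaller than every $\frac{1}{n}$ , $$\exists\, \varepsilon > 0 \quad \forall n \in \mathbb{N} \quad \varepsilon < \frac{1}{n} ,$$ so that $\varepsilon$ behaves as an infinitesimal and $1/\varepsilon$ as an infinite quantity .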
at no point did cantor vouchsafe to address their publications themselves . in his 1890 letter to veronese , cantor specifically referred to the work of stolz and du bois - reymond . cantor refers to their work on non - archimedean systems as not merely `` an abomination '' , but a `` self - contradictory and completely useless '' one . p. ehrlich analyzes the errors in cantor s `` proof '' and documents his rhetoric . the effect on the university classroom has been pervasive . in an emotionally charged atmosphere , students of calculus today are warned against taking the apparent ratio literally . by the time one reaches the chain rule , the awkward contortions of an obstinate denial are palpable throughout the spectrum of the undergraduate textbooks . ] who invented the real number system ? according to van der waerden , simon stevin s `` general notion of a real number was accepted , tacitly or explicitly , by all later scientists . '' d. fearnley - sander writes that `` the modern concept of real number [ ... ] was essentially achieved by simon stevin , around 1600 , and was thoroughly assimilated into mathematics in the following two centuries . '' d. fowler points out that `` stevin [ ... ] was a thorough - going arithmetizer : he published , in 1585 , the first popularization of decimal fractions in the west [ ... ] ; in 1594 , he described an algorithm for finding the decimal expansion of the root of any polynomial , the same algorithm we find later in cauchy s proof of the intermediate value theorem . '' the algorithm is discussed in more detail in ( p. 475 - 476 ) . unlike cauchy , who _ halves _ the interval at each step , stevin subdivides the interval into _ ten _ equal parts , resulting in a gain of a new decimal digit of the solution at every iteration of the algorithm . at variance with these historical judgments , the mathematical community tends overwhelmingly to award the credit for constructing the real number system to the great triumvirate , see the footnote above for the origin of this expression . ] in appreciation of the successful extirpation of infinitesimals as a byproduct of the weierstrassian epsilontic formulation of analysis . to illustrate the nature of such a reconstruction , consider cauchy s notion of continuity . h. freudenthal notes that `` cauchy invented our notion of continuity '' . cauchy s starting point is a description of perceptual continuity of a function in terms of `` varying by imperceptible degrees '' . such a turn of phrase occurs both in his letter to coriolis of 1837 , and in his 1853 text . cauchy transforms perceptual continuity into a mathematical notion by exploiting his conception of an infinitesimal as being generated by a null sequence ( see ) .
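before turning to cauchy s definitions of continuity , it may be worth sketching the two root - finding procedures just mentioned , cauchy s halving of the bracketing interval and stevin s subdivision into ten equal parts ; the function and the interval below are our own illustrative choices , not examples taken from stevin or cauchy .

def cauchy_bisection(f, a, b, steps):
    # halve the bracketing interval [a, b] at each step (cauchy's procedure)
    for _ in range(steps):
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return a, b

def stevin_decimal(f, a, b, digits):
    # subdivide [a, b] into ten equal parts and keep the part containing the
    # sign change; each pass yields one further decimal digit of the root
    for _ in range(digits):
        step = (b - a) / 10
        for k in range(10):
            left, right = a + k * step, a + (k + 1) * step
            if f(left) * f(right) <= 0:
                a, b = left, right
                break
    return a, b

# illustrative use: bracketing the positive root of x^2 - 2 between 1 and 2
f = lambda x: x * x - 2
print(cauchy_bisection(f, 1.0, 2.0, 20))
print(stevin_decimal(f, 1.0, 2.0, 6))

the contrast is only one of book - keeping , but stevin s version reads off a new decimal digit per pass , which is what ties it to the decimal expansion of a real number .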
both in 1821 and in 1853 , cauchy defines continuity of $y=f(x)$ in terms of an infinitesimal $x$-increment resulting in an infinitesimal change in $y$ . the well - known nominalistic residue of the perceptual definition ( a residue that dominates our classrooms ) would have $f$ be continuous at $x$ if for every positive epsilon there exists a positive delta such that if $|x'-x|$ is less than delta then $|f(x')-f(x)|$ is less than epsilon , namely : $$(\forall \epsilon>0)(\exists \delta>0)(\forall x')\ \big[\,|x'-x|<\delta \;\Longrightarrow\; |f(x')-f(x)|<\epsilon\,\big] .$$ lord kelvin s technician , wishing to exploit the notion of continuity in a research paper , is unlikely to be interested in 4-quantifier definitions thereof . regardless of the answer to such a question , the revolutionary nature of the triumvirate reconstruction of the foundations of analysis is evident . if one accepts the thesis that elimination of ontological entities called `` infinitesimals '' does constitute a species of nominalism , then the triumvirate recasting of analysis was a nominalist project . we will deal with cantor and dedekind in more detail in section [ cd ] . cantor is on record describing infinitesimals as `` the cholera bacillus of mathematics '' in a letter dated 12 december 1893 , quoted in meschkowski ( see also dauben and ) . cantor went as far as publishing a purported `` proof '' of their logical inconsistency , as discussed in section [ triumvirat ] . cantor may have extended numbers both in terms of the complete ordered field of real numbers and his theory of infinite cardinals ; however , he also passionately believed that he had not only given a logical foundation to real analysis , but also simultaneously eliminated infinitesimals ( the cholera bacillus ) . dedekind , while admitting that there is no evidence that the `` true '' continuum indeed possesses the property of completeness he championed ( see m. moore ) , at the same time formulated his definition of what came to be known as dedekind cuts , in such a way as to rule out infinitesimals . s.
s. feferman describes dedekind's construction of a complete ordered line $\mathbb{R}$ as follows:

dedekind's construction of such an $\mathbb{R}$ is obtained by taking it to consist of the rational numbers together with the numbers corresponding to all those cuts $(X_1,X_2)$ in $\mathbb{Q}$ for which $X_1$ has no largest element and $X_2$ has no least element, ordered in correspondence to the ordering of cuts, $(X_1,X_2)\le(Y_1,Y_2)$ when $X_1$ is a proper subset of $Y_1$. dedekind himself spoke of this construction of the individual cuts $(X_1,X_2)$ in $\mathbb{Q}$ for which $X_1$ has no largest element and $X_2$ no least element as _the creation of an irrational number_, though he did not identify the numbers themselves with those cuts.
in this way the ``gappiness'' of the rationals is overcome, in dedekind's terminology. now requiring that an element of the continuum should induce a partition of $\mathbb{Q}$ does not yet rule out infinitesimals. however, requiring that a given partition of $\mathbb{Q}$ should correspond to a _unique_ element of the continuum does have the effect of ruling out infinitesimals. in the context of an infinitesimal-enriched continuum, it is clear that a pair of quantities in the cluster (halo) infinitely close to a given real number, for example, will define the _same_ partition of the rationals. therefore the clause of ``only one'' forces a collapse of the infinitesimal cluster to a single quantity, in this case the real number itself. was rigor linked to the elimination of infinitesimals? e. hobson, in his retiring presidential address in 1902, summarized the advances in analysis over the previous century, and went on explicitly to make a connection between the foundational accomplishments in analysis, on the one hand, and the elimination of infinitesimals, on the other, by pointing out that an equivalence class defining a real number ``is of such a character that no use is made in it of infinitesimals'' (see [?] for more details on hobson), suggesting that hobson viewed them as logically inconsistent (perhaps following cantor or berkeley). the matter of rigor will be analyzed in more detail in the next section. in criticizing the nominalistic aspect of the weierstrassian elimination of infinitesimals, does one neglect the mathematical reasons why this was considered desirable? the stated goal of the triumvirate program was mathematical rigor. let us examine the meaning of mathematical rigor.
conceivably rigor could be interpreted in at least the following four ways, not all of which we endorse:

1. it is a shibboleth that identifies the speaker as belonging to a clan of professional mathematicians;

2. it represents the idea that as the field develops, its practitioners attain greater and more conceptual understanding of key issues, and are less prone to error;

3. it represents the idea that a search for greater correctness in analysis inevitably led weierstrass to epsilontics in the 1870s;

4. it refers to the establishment of ultimate foundations for mathematics by cantor, eventually explicitly expressed in axiomatic form by zermelo and fraenkel (it would be interesting to investigate the role of the zermelo-fraenkel axiomatisation of set theory in cementing the nominalistic disposition we are analyzing).

keisler points out that ``the second- and higher-order theories of the real line depend on the underlying universe of set theory [...] thus the _properties_ of the real line are not uniquely determined by the axioms of set theory'' [emphasis in the original, the authors]. he adds: ``a set theory which was not strong enough to prove the unique existence of the real line would not have gained acceptance as a mathematical foundation''. edward nelson has developed an alternative axiomatisation more congenial to infinitesimals. item (1) may be pursued by a fashionable academic in the social sciences, but does not get to the bottom of the issue. meanwhile, item (2) could apply to any exact science, and does not involve a commitment as to which route the development of mathematics may have taken. item (2) could be supported by scientists in and outside of mathematics alike, as it does not entail a commitment to a specific destination or ultimate goal of scientific development as being pre-determined and intrinsically inevitable. on the other hand, the actual position of a majority of professional mathematicians today corresponds to items (3) and (4). the crucial element present in (3) and (4) and absent in (2) is the postulation of a specific outcome, believed to be the inevitable result of the development of the discipline. challenging such a _belief_ appears to be a radical proposition in the eyes of a typical professional mathematician, but not in the eyes of scientists in related fields of the exact sciences. it is therefore particularly puzzling that (3) and (4) should be accepted without challenge by a majority of historians of mathematics, who tend to toe the line on the mathematicians' _belief_. it is therefore necessary to examine such a belief, which, as we argue, stems from a particular philosophical disposition akin to nominalism. could mathematical analysis have followed a different path of development? in an intriguing text published a decade ago, pourciau examines the foundational crisis of the 1920s and the brouwer-hilbert controversy, and argues that brouwer's view may have prevailed had brouwer been more of an errett bishop. while we are sceptical as to pourciau's main conclusions, the unmistakable facts are as follows:

1. a real struggle did take place;

2. some of the most brilliant minds at the time did side with brouwer, at least for a period of time (e.g., hermann weyl);
3. the battle was won by hilbert not by mathematical means alone but also by political ones, such as maneuvering brouwer out of a key editorial board;

4. while retroactively one can offer numerous reasons why hilbert's victory might have been inevitable, this was not at all obvious at the time.

we now leap back a century, and consider a key transitional figure, namely cauchy. in 1821, cauchy defined continuity of $y=f(x)$ in terms of ``an infinitesimal $x$-increment corresponding to an infinitesimal $y$-increment'' (see [?] for more details on cauchy's definition). many a practicing mathematician, brought up on an alleged ``cauchy-weierstrass'' tale, will be startled by such a revelation. the textbooks and the history books routinely obfuscate the nature of cauchy's definition of continuity. fifty years before weierstrass, cauchy performed a hypostatisation by encapsulating a variable quantity tending to zero into an individual/atomic entity called ``an infinitesimal''. was the naive ``traditional definition'' of the infinitesimal blatantly self-contradictory? we argue that it was not. cauchy's definition in terms of null sequences is a reasonable definition, and one that connects well with the sequential approach of the ultrapower construction (see [?] for more details). mathematicians viewed infinitesimals with deep suspicion due in part to a conflation of two separate criticisms, the logical one and the metaphysical one, by berkeley, see sherry. thus, the emphasis on the elimination of infinitesimals in the traditional account of the history of analysis is misplaced. could analysis have developed on the basis of infinitesimals? continuity, like all the fundamental notions of analysis, can be defined, and was defined, by cauchy in terms of infinitesimals. epsilontics could have played a secondary role of clarifying whatever technical situations were too awkward to handle otherwise, but arguably it need not have _replaced_ infinitesimals. as far as the issue of rigor is concerned, it needs to be recognized that gauss and dirichlet published virtually error-free mathematics before weierstrassian epsilontics, while weierstrass himself was not protected by epsilontics from publishing an erroneous paper by s. kovalevskaya (the error was found by volterra, see [?]). as a scientific discipline develops, its practitioners gain a better understanding of the conceptual issues, which helps them avoid errors. but assigning a singular, oracular, and benevolent role in this to epsilontics is philosophically naive. the proclivity to place the blame for errors on infinitesimals betrays a nominalistic disposition aimed against the _ghosts of departed quantities_, already dubbed ``_charlatanerie_'' by d'alembert in 1754. the following seven questions were formulated by r. hersh, who also motivated the author to present the material of section [ hersh ], as well as that of section [ six ]. was a nominalistic viewpoint motivating the triumvirate project? we argue that the answer is affirmative, and cite two items as evidence: (1) dedekind's cuts and the ``essence of continuity'', and (2) cantor's tooth-and-nail fight against infinitesimals. concerning (1), mathematicians widely believed that dedekind discovered such an essence. what is meant by the essence of continuity in this context is the idea that a pair of ``cuts'' on the rationals are identical if and only if the pair of numbers defining them are equal.
now the ``if'' part is unobjectionable, but the almost reflexive ``only if'' part following it has the effect of a steamroller flattening the b-continuum into an a-continuum. namely, it collapses each monad (halo, cluster) to a point, since a pair of infinitely close (adequal) points necessarily defines the same cut on the rationals (see section [ cd ] for more details). the fact that the steamroller effect was gladly accepted as a near-axiom is a reflection of a nominalistic attitude. concerning (2), cantor not only published a ``proof-sketch'' of the non-existence of infinitesimals, he is on record calling them an ``abomination'' as well as the ``cholera bacillus'' of mathematics. when stolz meekly objected that cantor's ``proof'' does not apply to his system, cantor responded by the ``abomination'' remark (see section [ cd ] for more details). now cantor's proof contains an error that was exhaustively analyzed by ehrlich. as it stands, it would ``prove'' the non-existence of the surreals! incidentally, ehrlich recently proved that ``maximal'' surreals are isomorphic to ``maximal'' hyperreals. can cantor's attitude be considered as a philosophical predisposition to the detriment of infinitesimals? it has been written that cauchy's concern with clarifying the foundations of calculus was motivated by the need to teach it to french military cadets. cauchy did have some tensions with the management of the _ecole polytechnique_ over the teaching of infinitesimals between 1814 and 1820. in around 1820 he started using them in earnest in both his textbooks and his research papers, and continued using them throughout his life, well past his teaching stint at the _ecole_. thus, in his 1853 text he reaffirms the infinitesimal definition of continuity he gave in his 1821 textbook. doesn't reasoning by infinitesimals require a deep intuition that is beyond the reach of most students? kathleen sullivan's study from 1976 shows that students enrolled in sections based on keisler's textbook end up having a better conceptual grasp of the notions of calculus than control groups following the standard approach. two years ago, i taught infinitesimal calculus to a group of 25 freshmen. i also had to train the ta, who was new to the material. according to persistent reports from the ta, the students have never been so excited about learning calculus. on the contrary, it is the multiple-quantifier weierstrassian epsilontic logical stunts that our students are dressed to perform (on pretense of being taught infinitesimal calculus) that are beyond their reach. in an ironic commentary on the nominalistic ethos reigning in our departments, not only was i relieved of my teaching this course the following year, but the course number itself was eliminated. it may be true that ``epsilontics'' is in practice repugnant to many students. but the question is whether an issue that is really a matter of technical mathematics related to pedagogy is being misleadingly presented as a question of high metaphysics. answer: berkeley turned this into a metaphysical debate. generations of mathematicians have grown up thinking of it as a metaphysical debate. such a characterisation is precisely what we contest. isn't ``a positive number smaller than all positive numbers'' self-contradictory?
is a phrase such as ``i am smaller than myself'' intelligible? both carnot and cauchy say that an infinitesimal is generated by a variable quantity that becomes smaller than any fixed quantity. no contradiction here. the otherwise excellent study by ehrlich contains a curious slip with regard to poisson. poisson describes infinitesimals as being ``less than any given magnitude of the same nature'' (the quote is reproduced in boyer). ehrlich inexplicably omits the crucial modifier ``given'' when quoting poisson in footnote 133 on page 76 of [?]. based on the incomplete quote, ehrlich proceeds to agree with veronese's assessment (of poisson) that ``[t]his proposition evidently contains a contradiction in terms''. our assessment is that poisson's definition is in fact perfectly consistent. infinitesimals were one thorny issue. didn't it take the modern theory of formal languages to untangle that? not exactly. a long tradition of technical work on non-archimedean continua starts with stolz and du bois-reymond, levi-civita, hilbert, and borel, see ehrlich. the tradition continues uninterruptedly until hewitt constructs the hyperreals in 1948. then came łoś's theorem, whose consequence is a transfer principle, which is a mathematical implementation of the heuristic ``law of continuity'' of leibniz (``what's true in the finite domain should remain true in the infinite domain''). what łoś and robinson untangled was the transfer principle. non-archimedean systems had a long history prior to these developments. there is still a pedagogical issue. i do understand that keisler's calculus book is teachable. but this says nothing about the difficulty of teaching calculus in terms of infinitesimals back around 1800. keisler has robinson's non-standard analysis available as a way to make sense of infinitesimals. cauchy did not. do you believe that cauchy used a definition of infinitesimal in terms of a null sequence of rationals (or reals) in teaching introductory calculus? the historical issue about cauchy is an interesting one. most of his course notes from the period 1814-1820 have been lost. his predecessor at the _ecole polytechnique_, l. carnot, defined infinitesimals exactly the same way as cauchy did, but somehow is typically viewed by historians as belonging to the old school as far as infinitesimals are concerned (and criticized for his own version of the ``cancellation of errors'' argument originating with berkeley). as far as cauchy's textbooks from 1821 onward indicate, he declares at the outset that infinitesimals are an indispensable foundational tool, defines them in terms of null sequences (more specifically, a variable quantity ``becomes an infinitesimal''), defines continuity in terms of infinitesimals, defines his ``dirac'' delta function in terms of infinitesimals (see [?]), and defines infinitesimals of arbitrary _real_ order in [?], anticipating later work by stolz, du bois-reymond, and others. the following eight questions were posed by martin davis. how would you answer the query: ``how do you define a null sequence?'' aren't you back to epsilons? not necessarily. one could define it, for example, in terms of ``only finitely many terms outside each given separation from zero''. while epsilontics has important applications, codifying the notion of a null sequence is not one of them. epsilontics is helpful when it comes to characterizing a cauchy sequence, _if one does not yet know the limiting value_.
if one does know the limiting value, as in the case of a null sequence, a multiple-quantifier epsilontic formulation is no clearer than saying that all the terms eventually get arbitrarily small. to be more specific, if one describes a cauchy sequence by saying that ``terms eventually get arbitrarily close to each other'', the ambiguity can lead, and has led, to errors, though not in cauchy (the sequences are rightfully named after him, as he was aware of the trap). such an ambiguity is just not there as far as null sequences are concerned. giving an epsilontic definition of a null sequence does not increase understanding and does not decrease the likelihood of error. a null sequence is arguably a notion that's no more complex than multiple-quantifier epsilontics, just as the natural numbers are no more complex than the set-theoretic definition thereof, which requires infinitely many set-theoretic types to make the point. isn't a natural number a more primitive notion? you evoke cauchy's use of variable quantities. but whatever is a ``variable quantity''? the concept of variable quantity was not clearly defined by mathematicians from leibniz and l'hôpital onwards, and is today considered a historical curiosity. cauchy himself sometimes seems to think they take discrete values (in his 1821 text) and sometimes continuous ones (in his 1823 text). many historians agree that in 1821 they were discrete sequences, and cauchy himself gives explicit examples of sequences. now it is instructive to compare such variable quantities to the procedures advocated by the triumvirate. in fact, the approach can be compared to cantor's construction of the real numbers. a real number is a cauchy sequence of rational numbers, modulo an equivalence relation. a sequence, which is not an individual/atomic entity, comes to be viewed as an atomic entity by a process which in triumvirate lexicon is called an equivalence relation, but in much older philosophical terminology is called _hypostatisation_. in education circles, researchers tend to use terms such as _encapsulation_, _procept_, and _reification_ instead. as you know, the ultrapower construction is a way of hypostatizing a hyperreal out of sequences of reals. as far as cauchy's competing views of a variable quantity as discrete (in 1821) or continuous (in 1823) are concerned, they lead to distinct implementations of a b-continuum: in hewitt (1948), where a ``continuous'' version of the ultrapower construction was used, and in luxemburg (1962), where the discrete version was used (a more recent account of the latter is in goldblatt). isn't the notion of a variable quantity a pernicious notion that makes time an essential part of mathematics? i think you take zeno's paradoxes too seriously. i personally don't think there is anything wrong with involving time in mathematics. it has not led to any errors as far as i know, _pace_ zeno. given what relativity has taught us about time, is it a good idea to involve time in mathematics? what did relativity teach us about time that would make us take time out of mathematics? that time is relative?
but you may be confusing metaphysics with mathematics. the time that's being used in mathematics is not an exact replica of physical time. we may still be influenced by the 17th century doctrine according to which every theoretical entity must have an empirical counterpart/referent. this is why berkeley was objecting to infinitesimals (his metaphysical criticism, anyway). i would put time back in mathematics the same way i would put infinitesimals back in mathematics. neither concept is under any obligation of corresponding to an empirical referent. didn't the triumvirate show us how to prove the existence of a complete ordered field? simon stevin had already made major strides in defining the real numbers, represented by decimals. some essential work needed to be done, such as the fact that the usual operations are well defined. this was done by dedekind, see fowler. but stevin numbers themselves were several centuries older, even though they go under the soothing name of numbers so _real_. my nsa book does it by forming the quotient of the ring of finite hyper-rational numbers by the ideal of infinitesimals. the remarkable fact is that this construction is already anticipated by kästner (a contemporary of euler's) in the following terms: ``if one partitions without end into smaller and smaller parts, and takes larger and larger collections of such little parts, one gets closer and closer to the irrational number without ever attaining it''. kästner concludes:

``therefore one can view it as an infinite collection of infinitely small parts'', cited by cousquer (see also section [ rival2 ]).
but that construction was not available to the earlier generations. but stevin numbers were. they kept on teaching analysis in france throughout the 1870s without any need for ``constructing'' something that had already been around for a century before leibniz, see the discussion in laugwitz. aren't you conflating the problem of rigorous foundation with how to teach calculus to beginners? as far as rigorous foundations are concerned, alternative foundations to zf have been developed that are more congenial to infinitesimals, such as edward nelson's. mathematicians are accustomed to thinking of zf as ``the foundations''. it needs to be recognized that this is a philosophical assumption. the assumption can be a reflection of a nominalist mindframe. the example of a useful application of infinitesimals analyzed at the end of this section is quite elementary. a more advanced example is the elegant construction of the haar measure in terms of counting infinitesimal neighborhoods, see goldblatt. even more advanced examples, such as the proof of the invariant subspace conjecture, are explained in your book. for an application to the boltzmann equation, see arkeryd. as a concrete example of what consequences a correction of the nominalistic triumvirate attitude would entail in the teaching of the calculus, consider the problem of the _unital evaluation_ of the decimal $.999\ldots$, i.e., its evaluation to the _unit_ value $1$. students are known overwhelmingly to believe that the number $.999\ldots$ falls short of $1$ by an infinitesimal amount. a typical instructor believes such student intuitions to be erroneous, and seeks to inculcate the unital evaluation of $.999\ldots$. an alternative approach was proposed by ely and katz & katz. instead of refuting student intuitions, an instructor could build upon them to calculate the derivative of a function at a point by choosing an infinitesimal increment $\Delta x$ and showing that the quotient $\frac{\Delta y}{\Delta x}$ is infinitely close (adequal) to the expected value, yielding the desired value without either epsilontics, estimates, or limits, see figure [ jul10 ]. here $.999\ldots$ is interpreted as an extended decimal string with an infinite hypernatural's worth of $9$s, see [?]. instead of building upon student intuition, a typical calculus course seeks to flatten it into the ground by steamrolling the b-continuum into the a-continuum, see katz and katz. a nominalist view of what constitutes an allowable number system has produced an ostrich effect (such an effect is comparable to a constructivist's reaction to the challenge of meaningful applications of a post-lem variety, see the main text in section [ hersh ] around footnote [ ostrich1 ]) whereby mathematics educators around the globe have failed to recognize the legitimacy, and potency, of students' nonstandard conceptions of $.999\ldots$, see ely for details. this section analyzes the reconstruction of cauchy's foundational work in analysis usually associated with j. grabiner, which has its sources in the work of c. boyer. a critical analysis of the traditional approach may be found in hourya benis sinaceur's article from 1973.
to place such work in a historical perspective, a minimal chronology of commentators on cauchy's foundational work in analysis would have to mention f. klein's observation in 1908 that

since cauchy's time, the words _infinitely small_ are used in modern textbooks in a somewhat changed sense. one never says, namely, that a quantity _is_ infinitely small, but rather that it _becomes_ infinitely small.

indeed, cauchy's starting point in defining an infinitesimal is a null sequence (i.e., a sequence tending to zero), and he repeatedly refers to such a null sequence as _becoming_ an infinitesimal. p. jourdain's detailed 1913 study of cauchy is characterized by a total _absence_ of any claim to the effect that cauchy may have based his notion of infinitesimal on limits. c. boyer quotes cauchy's definition of continuity as follows:

the function $f(x)$ is continuous within given limits if between these limits an infinitely small increment $i$ in the variable $x$ produces always an infinitely small increment, $f(x+i)-f(x)$, in the function itself.

next, boyer proceeds to _interpret_ cauchy's definition of continuity as follows: ``the expressions infinitely small _are here to be understood_ [...] in terms of [...] limits: i.e., $f(x)$ is continuous within an interval if the limit of the variable $f(x)$ as $x$ approaches $a$ is $f(a)$, for any value of $a$ within this interval'' [emphasis added, the authors]. boyer feels that infinitesimals _are to be understood_ in terms of limits. or perhaps they are to be understood otherwise?
in 1967, a. robinson discussed the place of infinitesimals in cauchy's work. he pointed out that ``the assumption that [infinitesimals] satisfy the same laws as the ordinary numbers, which was stated explicitly by leibniz, was rejected by cauchy as unwarranted''. yet,

cauchy's professed opinions in these matters notwithstanding, he did in fact treat infinitesimals habitually as if they were ordinary numbers and satisfied the familiar rules of arithmetic.
t. koetsier remarked that, had cauchy wished to extend the domain of his functions to include infinitesimals, he would no doubt have mentioned how exactly the functions are to be so extended. beyond the observation that cauchy did, in fact, make it clear that such an extension is to be carried out term-by-term, namely by evaluating the function at an infinitesimal by applying it to each term of the generating sequence (bråting analyzes cauchy's use of a particular such sequence in [?]), koetsier's question prompts a similar query: had cauchy wished to base his calculus on limits, he would no doubt have mentioned something about such a foundational stance. instead, cauchy emphasized that in founding analysis he was unable to avoid elaborating the fundamental properties of _infinitely small quantities_, see [?]. no mention of a foundational role of limits is anywhere to be found in cauchy, unlike his would-be modern interpreters. l. sad _et al_ have pursued this matter in detail in [?], arguing that what cauchy had in mind was a prototype of an ultrapower construction, where the equivalence class of a null sequence indeed produces an infinitesimal, in a suitable set-theoretic framework. to summarize, a post-jourdain nominalist reconstruction of cauchy's infinitesimals, originating no later than boyer, reduces them to a weierstrassian notion of limit. to use burgess's terminology borrowed from linguistics, the boyer-grabiner interpretation

becomes the hypothesis that certain _noun phrases_ [in the present case, infinitesimals] in the surface structure are without counterpart in the deep structure.

meanwhile, a rival school of thought places cauchy's continuum firmly in the line of infinitesimal-enriched continua. the ongoing debate between rival visions of cauchy's continuum echoes felix klein's sentiment reproduced above (see the discussion of klein in the main text around footnote [ klein ]). the two rival views of cauchy's infinitesimals have been pursued by historians, mathematicians, and philosophers alike. the bibliography on the subject is vast. the most detailed statement of boyer's position may be found in grabiner. robinson's perspective was developed most successfully by d. laugwitz in 1989, and by k. bråting in 2007. viewed through the lens of the dichotomy introduced by burgess, it appears that the traditional boyer-grabiner view is best described as a hermeneutic, rather than revolutionary, nominalistic reconstruction of cauchy's foundational work. cauchy's definition of continuity in terms of infinitesimals has been a source of an on-going controversy, which provides insight into the nominalist nature of the boyer-grabiner reconstruction. many historians have interpreted cauchy's definition as a proto-weierstrassian definition of continuity in terms of limits.
thus, smithies [?, footnote 20] cites the _page_ in cauchy's book where cauchy gave the infinitesimal definition, but goes on to claim that the concept of _limit_ was cauchy's ``essential basis'' for his concept of continuity. smithies looked in cauchy, saw the infinitesimal definition, and went on to write in his paper that he saw a limit definition. such automated translation has been prevalent at least since boyer. smithies cites chapter and verse in cauchy where the latter gives an infinitesimal definition of continuity, and proceeds to claim that cauchy gave a modern one. such awkward contortions are a trademark of a nominalist. in the next section, we will examine the methodology of nominalistic cauchy scholarship. the view of the history of analysis from the 1670s to the 1870s as a 2-century triumphant march toward the yawning heights of the rigor of weierstrassian epsilontics has permeated the very language mathematicians speak today, making an alternative account nearly unthinkable. a majority of historians have followed suit, though some truly original thinkers differed. these include c. s. peirce, felix klein, n. n. luzin, hans freudenthal, robinson, lakatos, laugwitz, teixeira, and bråting. meanwhile, j. grabiner offered the following reflection on the subject of george berkeley's criticism of infinitesimal calculus:

[s]ince an adequate response to berkeley's objections would have involved recognizing that an equation involving limits is a shorthand expression for a sequence of inequalities, a subtle and difficult idea, no eighteenth century analyst gave a fully adequate answer to berkeley.

this is an astonishing claim, which amounts to reading back into history, feedback-style, developments that came much later. such a claim amounts to postulating the inevitability of a triumphant march, from berkeley onward, toward the radiant future of weierstrassian epsilontics (``a sequence of inequalities ... a subtle and difficult idea''). the claim of such inevitability in our opinion is an assumption that requires further argument. berkeley was, after all, attacking the coherence of _infinitesimals_.
he was not attacking the coherence of some kind of incipient form of weierstrassian epsilontics and its inequalities. isn't there a simpler answer to berkeley's query, in terms of a distinction between ``variable quantity'' and ``given quantity'' already present in l'hôpital's textbook at the end of the 17th century? the missing ingredient was a way of relating a variable quantity to a given quantity, but that, too, was anticipated by pierre de fermat's concept of adequality, as discussed in section [ rival1 ]. we will analyze the problem in more detail from the 19th century, pre-weierstrass, viewpoint of cauchy's textbooks. in cauchy's world, a variable quantity can have a ``limiting'' fixed quantity, such that the difference is infinitesimal. consider cauchy's decomposition of an arbitrary infinitesimal of order $n$ as a sum (see [?]) of a term involving a fixed nonzero quantity and a term involving a variable quantity representing an infinitesimal. if one were to set $n=0$ in this formula, one would obtain a representation of an arbitrary finite quantity as the sum of a fixed quantity and an infinitesimal. if we were to suppress the infinitesimal part, we would obtain ``the standard part'' of the original variable quantity. in the terminology of section [ rival1 ], we are dealing with a passage from a finite point of a b-continuum to the infinitely close (adequal) point of the a-continuum, namely passing from a variable quantity to its limiting constant (fixed, given) quantity. cauchy had the means at his disposal to resolve berkeley's query, so as to solve the logical puzzle of the definition of the derivative in the context of a b-continuum. while he did not resolve it, he did not need the subtle and difficult idea of weierstrassian epsilontics; suggesting otherwise amounts to feedback-style ahistory. this reader was shocked to discover, upon his first reading of chapter 6 in schubring, that _schubring is not aware of the fact that robinson's non-standard numbers are an extension of the real numbers_.
consider the following three consecutive sentences from schubring's chapter 6:

[a] [giusti's 1984 paper] spurred laugwitz to even more detailed attempts to banish the error and confirm that cauchy had used hyper-real numbers. [b] on this basis, he claims, the errors vanish and the theorems become correct, or, rather, they always were correct (see laugwitz 1990, 21). [c] in contrast to robinson and his followers, laugwitz (1987) assumes that cauchy did not use nonstandard numbers in the sense of nsa, but that his _infiniment petits_ were infinitesimals representing an extension of the field of real numbers.

these three sentences, which we have labeled [a], [b], and [c], tell a remarkable story that will allow us to gauge schubring's exact relationship to the subject of his speculations. what interests us are the first sentence [a] and the last sentence [c]. their literal reading yields the following four views put forth by schubring: (1) laugwitz interpreted cauchy as using hyperreal numbers (from sentence [a]); (2) robinson assumed that cauchy used nonstandard numbers ``in the sense of nsa'' (from sentence [c]); (3) laugwitz disagreed with robinson on the latter point (from sentence [c]); (4) laugwitz interpreted cauchy as using an extension of the field of real numbers (from sentence [c]). taken at face value, items (1) and (4) together would logically indicate that (5) laugwitz interpreted cauchy as using the hyperreal extension of the reals; moreover, if, as indicated in item (3), laugwitz disagreed with robinson, then it would logically follow that (6) robinson interpreted cauchy as _not_ using the hyperreal extension of the reals; as to the question of what number system robinson _did_ attribute to cauchy, item (2) would indicate that (7) robinson used, not laugwitz's hyperreals, but rather nonstandard numbers ``in the sense of nsa''. we hasten to clarify that all of the items listed above are incoherent. indeed, robinson's ``non-standard numbers'' and the hyperreals are one and the same number system (see section [ rival2 ] for more details; robinson's approach is actually more general than hewitt's hyperreal fields). meanwhile, laugwitz's preferred system is a different system altogether, called the omega-calculus. we gather that schubring literally does not know what he is writing about when he takes on robinson and laugwitz.
a reader interested in an introduction to popper and fallibilism need look no further than chapter 6 of schubring, who comments on

the enthusiasm for revising traditional beliefs in the history of science and reinterpreting the discipline from a theoretical, epistemological perspective generated by thomas kuhn's (1962) work on the structure of scientific revolutions. applying popper's favorite keyword of fallibilism, the statements of earlier scientists that historiography had declared to be false were particularly attractive objects for such an epistemologically guided revision. the philosopher imre lakatos (1922-1972) was responsible for introducing these new approaches into the history of mathematics. one of the examples he analyzed and published in 1966 received a great deal of attention: cauchy's theorem and the problem of uniform convergence. lakatos refines robinson's approach by claiming that cauchy's theorem had also been correct at the time, because he had been working with infinitesimals.

one might have expected that, having devoted so much space to the philosophical underpinnings of lakatos' interpretation of cauchy's sum theorem, schubring would actually devote a thought or two to that interpretation itself. instead, schubring presents a misguided claim to the effect that robinson acknowledged the incorrectness of the sum theorem (for a summary of the controversy over the sum theorem, see section [ sum ]).
schubring appears to feel that calling lakatos a popperian and a fallibilist is sufficient refutation in its own right. similarly, schubring dismisses laugwitz's reading of cauchy as ``solipsistic''; accuses them of interpreting cauchy's conceptions as

some hermetic closure of a _private_ mathematics [emphasis in the original, the authors];

as well as being ``highly anomalous or isolated''. now common sense would suggest that laugwitz is interpreting cauchy's words according to their plain meaning, and takes his infinitesimals at face value. isn't the burden of proof on schubring to explain why the triumvirate interpretation of cauchy is not ``solipsistic'', ``hermetic'', or ``anomalous''? schubring does nothing of the sort. why are lakatos and laugwitz demonized rather than analyzed by schubring? the issue of whether or not schubring commands a minimum background necessary to understand either robinson's, lakatos', or laugwitz's interpretation was discussed above. more fundamentally, the act of contemplating for a moment the idea that cauchy's infinitesimals can be taken at face value is unthinkable to a triumvirate historian, as it would undermine the nominalistic cauchy-weierstrass tale that the received historiography is erected upon. the failure to appreciate the potency of the robinson-lakatos-laugwitz interpretation is symptomatic of an ostrich effect conditioned by a narrow a-continuum vision (in the educational context, see footnote [ ostrich2 ]).
the robinson-lakatos-laugwitz interpretation of cauchy's sum theorem is considered in more detail in section [ sum ]. chapter 6 in schubring is entitled ``cauchy's compromise concept''. which compromise is the author referring to? the answer becomes apparent on page 439, where the author quotes cauchy's ``reconciliation'' sentence:

my main aim has been to _reconcile_ rigor, which i have made a law in my cours d'analyse, with the simplicity that comes from the direct consideration of infinitely small quantities (cauchy 1823, see [?]) [emphasis added, the authors].

cauchy's choice of the word ``reconcile'' does suggest a resolution of a certain tension. what is the nature of such a tension? the sentence mentions ``rigor'' and ``infinitely small quantities'' in the same breath. this led schubring to attribute to cauchy a perception of a ``disagreement'' between them:
in his next textbook on differential calculus in 1823, cauchy points out expressly that he has adopted a compromise concept and that the ``simplicity of the infinitely small quantities'' [...] disagrees with the ``rigor'' that he wished to achieve in his 1821 textbook.

schubring's conclusion concerning such an alleged ``disagreement'', as well as the ``compromise'' of his title, both hinge essentially on a single word, _concilier_ (reconcile), in cauchy. let us analyze its meaning. if it refers to a disagreement between rigor and infinitesimals, how do we account for cauchy's attribution, in 1821, of a fundamental foundational role to infinitesimals in establishing a rigorous basis for analysis? had cauchy changed his mind sometime between 1821 and 1823? to solve the riddle we must place cauchy's ``reconciliation'' sentence in the context where it occurs. in the sentence immediately preceding it, cauchy speaks of his break with the earlier texts in analysis:

les méthodes que j'ai suivies diffèrent à plusieurs égards de celles qui se trouvent exposées dans les ouvrages du même genre [the methods i have followed differ in several respects from those found in works of the same kind].

could he be referring to his own earlier text? to answer the question, we must read on what cauchy has to say. immediately following the ``reconciliation'' sentence, cauchy unleashes a sustained attack against the flawed method of divergent power series. cauchy does not name the culprit, but clearly identifies the offending treatise. it is the _mécanique analytique_, see (cauchy 1823, p. 11). the second edition of lagrange's treatise came out in 1811, when cauchy was barely out of his teens.
here lagrange writes :

lorsqu'on a bien conçu l'esprit de ce système , et qu'on s'est convaincu de l'exactitude de ses résultats par la méthode géométrique des premières et dernières raisons , ou par la méthode analytique des fonctions dérivées , on peut employer les infiniment petits comme un instrument sûr et commode pour abréger et simplifier les démonstrations . [ translation : once one has properly grasped the spirit of this system , and has convinced oneself of the correctness of its results by the geometric method of first and last ratios , or by the analytic method of derived functions , one may employ the infinitely small as a sure and convenient instrument for shortening and simplifying proofs . ]

lagrange s ringing endorsement of infinitesimals in 1811 is as unambivalent as that of johann bernoulli , l'hôpital , or varignon . in rejecting lagrange s flawed method of power series , as well as his principle of the `` generality of algebra '' , cauchy was surely faced with a dilemma with regard to lagrange s infinitesimals , which had stirred controversy for over a century . we argue that it is the context of a critical re - evaluation of lagrange s mathematics that created a tension for cauchy vis-à-vis lagrange s work of 1811 : can he sift the chaff from the grain ? cauchy s great accomplishment was his recognition that , while lagrange s flawed power series method and his principle of the generality of algebra do not measure up to the standard of rigor cauchy sought to uphold in his own work , the infinitesimals can indeed be reconciled with such a standard of rigor . the resolution of the tension between the rejection of lagrange s conceptual framework , on the one hand , and the acceptance of his infinitesimals , on the other , is what cauchy is referring to in his `` reconciliation '' sentence . cauchy s blending of rigor and infinitesimals in 1823 is consistent with his approach in 1821 . cauchy s sentence compromises schubring s concept of a cauchyan ambivalence with regard to infinitesimals , and pulls the rug from under schubring s nominalistic and solipsistic reading of cauchy . in this section , we summarize the controversy over the sum theorem , recently analyzed by bråting . the issue hinges on two types of convergence .
to clarify the mathematical issues involved , we will first consider the simpler distinction between continuity and uniform continuity . let be in the domain of a function , and consider the following condition , which we will call _ microcontinuity _ at :

`` if is in the domain of and is infinitely close to , then is infinitely close to '' .

then ordinary continuity of is equivalent to being microcontinuous on the archimedean continuum ( a - continuum for short ) , i.e. , at every point of its domain in the a - continuum . meanwhile , uniform continuity of is equivalent to being microcontinuous on the bernoullian continuum ( b - continuum for short ) , i.e. , at every point of its domain in the b - continuum . thus , the function for positive fails to be uniformly continuous because microcontinuity fails at a positive infinitesimal . the function fails to be uniformly continuous because of the failure of microcontinuity at a single infinite member of the b - continuum . a similar distinction exists between pointwise convergence and uniform convergence . the latter condition requires convergence at the points of the b - continuum in addition to the points of the a - continuum , see e.g. goldblatt ( theorem 7.12.2 , p. 87 ) . which condition did cauchy have in mind in 1821 ? this is essentially the subject of the controversy over the sum theorem . abel interpreted it as convergence on the a - continuum , and presented `` exceptions '' ( what we would call today counterexamples ) in 1826 . after the publication of additional such exceptions by seidel and stokes in the 1840s , cauchy clarified / modified his position in 1853 . in his text , he specified a stronger condition of convergence on the b - continuum , including at . the latter entity is explicitly mentioned by cauchy as illustrating the failure of the error term to tend to zero . the stronger condition bars abel s counterexample . see our text for more details .

[ figure [ 31 ] : the standard part map `` st '' from the finite part of the b - continuum onto the a - continuum . ]

a leibnizian definition of the derivative as the infinitesimal quotient , whose logical weakness was criticized by berkeley , was modified by a. robinson by exploiting a map called _ the standard part _ , denoted `` st '' , from the finite part of a b - continuum ( for `` bernoullian '' ) , to the a - continuum ( for `` archimedean '' ) , as illustrated in figure [ 31 ] . [ the map `` st '' sends each finite point of the b - continuum to the real point infinitely close to it ; in other words , `` st '' collapses the cluster ( halo ) of points infinitely close to a real number back to that number . ]
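to fix ideas , the two notions just described can be written out in symbols ; the notation below ( a function $f$ , points $x , x'$ , an infinitesimal $\varepsilon$ , and $\approx$ for `` infinitely close '' ) is chosen here only for illustration and is not reproduced from the original :

\[
\text{microcontinuity of } f \text{ at } x : \quad x' \approx x \;\Longrightarrow\; f(x') \approx f(x) ,
\qquad\qquad
f'(x) \;=\; {\rm st}\!\left( \frac{ f(x+\varepsilon) - f(x) }{ \varepsilon } \right) , \ \varepsilon \neq 0 \ \text{infinitesimal} .
\]

continuity requires microcontinuity at every real point of the domain , uniform continuity at every point of the extended domain in the b - continuum . for instance , for the function $f(x) = 1/x$ on the positive reals ( a standard example of the phenomenon described above ) and a positive infinitesimal $\varepsilon$ , the points $\varepsilon$ and $2\varepsilon$ are infinitely close while $f(2\varepsilon) - f(\varepsilon) = -1/(2\varepsilon)$ is infinite , so microcontinuity , and hence uniform continuity , fails .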
here two points of a b - continuum have the same image under `` st '' if and only if they are equal up to an infinitesimal . this section analyzes the historical seeds of robinson s theory , in the work of fermat , wallis , as well as barrow . [ while barrow s role is also critical , we will mostly concentrate on fermat and wallis . ] the key concept here is that of _ adequality _ ( see below ) . it should be kept in mind that fermat never considered the local slope of a curve . therefore one has to be careful not to attribute to fermat mathematical content that could not be there . on the other hand , barrow did study curves and their slope . furthermore , barrow exploited fermat s adequality in his work , as documented by h. breger . the binary relation of `` equality up to an infinitesimal '' was anticipated in the work of pierre de fermat . fermat used a term usually translated into english as `` adequality . '' andré weil writes as follows :

fermat [ ... ] developed a method which slowly but surely brought him very close to modern infinitesimal concepts . what he did was to write congruences between functions of modulo suitable powers of ; for such congruences , he introduces the technical term _ adaequalitas , adaequare _ , etc . , which he says he has borrowed from diophantus . as diophantus v.11 shows , it means an approximate equality , and this is indeed how fermat explains the word in one of his later writings .

weil ( footnote 5 ) then supplies the following quote from fermat :

_ adaequetur , ut ait diophantus , aut fere aequetur _ ( see weil ) ; in mr . mahoney s translation : `` adequal , or almost equal '' ( p. 246 ) .

here weil is citing mahoney . mahoney similarly mentions the meaning of `` approximate equality '' or `` equality in the limiting case '' in ( end of footnote 46 ) . mahoney also points out that the term `` adequality '' in fermat has additional meanings . the latter are emphasized in a recent text by e. giusti , who is sharply critical of breger . while the review by weil is similarly sharply critical of mahoney , both agree that the meaning of `` approximate equality '' , leading into infinitesimal calculus , is at least _ one of the meanings _ of the term _ adequality _ for fermat . this meaning was aptly summarized by j. stillwell . stillwell s historical presentation is somewhat simplified , and does not sufficiently distinguish between the seeds actually present in fermat , on the one hand , and a modern interpretation thereof , on the other [ see above for a discussion of barrow s role , documented by h. breger ] . but he does a splendid job of explaining the mathematical background for the uninitiated . thus , he notes that is not equal to ( see figure [ jul10 ] ) , and writes :

instead , the two are connected by a looser notion than equality that fermat called adequality . if we denote adequality by , then it is accurate to say that and hence that for the parabola is adequal to . meanwhile , is not a number , so is the only number to which is adequal . this is the true sense in which represents the slope of the curve .
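the computation behind the passage just quoted can be sketched in modern notation ( the symbol $e$ for fermat s small increment is our choice , not stillwell s ) : for the parabola $y = x^2$ ,

\[
\frac{ (x+e)^2 - x^2 }{ e } \;=\; \frac{ 2xe + e^2 }{ e } \;=\; 2x + e \;\sim\; 2x ,
\]

where $\sim$ denotes adequality . the difference quotient $2x + e$ is adequal to $2x$ , and $2x$ is the only number adequal to it , which is the sense in which $2x$ gives the slope . in robinson s framework the last step corresponds to taking the standard part , ${\rm st}(2x+e) = 2x$ for infinitesimal $e$ .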
stillwell points out that

fermat introduced the idea of adequality in 1630s but he was ahead of his time . his successors were unwilling to give up the convenience of ordinary equations , preferring to use equality loosely rather than to use adequality accurately . the idea of adequality was revived only in the twentieth century , in the so - called non - standard analysis .

we will refer to the map from the ( finite part of the ) b - continuum to the a - continuum as the fermat - robinson standard part , see figure [ fermatwallis ] . as far as the logical criticism formulated by rev . george berkeley is concerned , fermat s adequality had pre - emptively provided the seeds of an answer , a century before the bishop ever lifted up his pen to write _ the analyst _ . fermat s contemporary john wallis , in a departure from cavalieri s focus on the geometry of indivisibles , emphasized the arithmetic of infinitesimals , see j.
stedall s introduction in . to cavalieri , a plane figure is made of lines ; to wallis , it is made of parallelograms of infinitesimal altitude . wallis transforms this insight into symbolic algebra over the symbol which he introduced . he exploits formulas like in his calculations of areas . thus , in proposition 182 of _ arithmetica infinitorum _ , wallis partitions a triangle of altitude and base into a precise number of `` parallelograms '' of infinitesimal width , see figure [ wallis ] ( copied from ) . he then computes the combined length of the bases of the parallelograms to be , and finds the area to be . wallis used an actual infinitesimal in calculations as if it were an ordinary number , anticipating leibniz s law of continuity . wallis s area calculation is reproduced by j. scott , who notes that wallis

treats infinity as though the ordinary rules of arithmetic could be applied to it .

such a treatment of infinity strikes scott as something of a blemish , as he writes :

but this is perhaps understandable .
for many years to come the greatest confusion regarding these terms persisted , and even in the next century they continued to be used in what appears to us an amazingly reckless fashion .

what is the source of scott s confidence in dismissing wallis s use of infinity as `` reckless '' ? scott identifies it on the preceding page of his book ; it is , predictably , the triumvirate `` modern conception of infinity '' . scott s tunnel a - continuum vision blinds him to the potential of wallis s vision of infinity . but this is perhaps understandable . many years separate scott from robinson s theory which in particular empowers wallis s calculation . the lesson of scott s condescending steamrolling of wallis s infinitesimal calculation could be taken to heart by historians who until this day cling to a nominalistic belief that robinson s theory has little relevance to the history of mathematics in the 17th century . nominalism in the narrow sense defines its ontological target as the ordinary numbers . burgess in his essay suggests that there is also a nominalism in a broader sense . thus , he quotes at length manin s criticism of constructivism , suggesting that lem - elimination can also fall under the category of a nominalism understood in a broader sense . in a later text , burgess discusses brouwer under a similar angle . infinitesimals were largely eliminated from mathematical discourse starting in the 1870s through the efforts of the great triumvirate . the elimination took place under the banner of striving for greater rigor , but the roots of the triumvirate reconstruction lay in a failure to provide a solid foundation for a b - continuum ( see section [ rival1 ] ) . had actually useful mathematics been sacrificed on the altar of `` mathematical rigor '' during the second half of the 19th century ? today we can give a precise sense to c. s. peirce s description of the real line as a _ pseudo - continuum _ , `` the totality of real values , rational and irrational '' ( see cp 6.176 , 1903 marginal note ; here , and below , cp x.y stands for collected papers of charles sanders peirce , volume x , paragraph y ) . peirce used the word `` pseudo - continua '' to describe real numbers in the syllabus ( cp 1.185 ) of his lectures on topics of logic . thus , peirce s intuition of the continuum corresponded to a type of a b - continuum ( see section [ rival1 ] ) , whereas an a - continuum to him was a pseudo - continuum .
cantor s revolutionary advances in set theory went hand - in - hand with his emotional opposition to infinitesimals as an `` abomination '' and the `` cholera bacillus '' of mathematics . cantor s interest in eliminating infinitesimals is paralleled nearly a century later by bishop s interest in eliminating lem , and by a traditional nominalist s interest in eliminating platonic counting numbers . the automatic infinitesimal - to - limit translation as applied to cauchy by boyer and others is not only reductionist , but also self - contradictory , see . this section summarizes a 20th century implementation of the b - continuum , not to be confused with incipient notions of such a continuum found in earlier centuries . an alternative implementation has been pursued by lawvere , john l. bell , and others . we illustrate the construction by means of an infinite - resolution microscope in figure [ fermatwallis ] . we will denote such a b - continuum by the new symbol ( `` thick - r '' ) . such a continuum is constructed in formula . we will also denote its finite part by , so that we have a disjoint union , where consists of unlimited hyperreals ( i.e. , inverses of nonzero infinitesimals ) . the map `` st '' sends each finite point to the real point st infinitely close to it , as follows : $ { \rm st } \colon { { \mbox{i\!i\!r}}}_{<\infty } \to { { \mathbb r } } $ . robinson s answer to berkeley s _ logical criticism _ ( see d. sherry ) is to define the derivative as instead of . note that both the term `` hyper - real field '' , and an ultrapower construction thereof , are due to e. hewitt in 1948 , see . in 1966 , robinson referred to the

theory of hyperreal fields ( hewitt [ 1948 ] ) which ... can serve as non - standard models of analysis .

the _ transfer principle _ is a precise implementation of leibniz s heuristic _ law of continuity _ : `` what succeeds for the finite numbers succeeds also for the infinite numbers and vice versa '' , see . the transfer principle , allowing an extension of every first - order real statement to the hyperreals , is a consequence of the theorem of j. łoś in 1955 , see , and can therefore be referred to as a leibniz - łoś transfer principle . a hewitt - łoś framework allows one to work in a b - continuum satisfying the transfer principle . to elaborate on the ultrapower construction of the hyperreals , let denote the ring of sequences of rational numbers . let denote the subspace consisting of cauchy sequences . the reals are by definition the quotient field where contains all null sequences . meanwhile , an infinitesimal - enriched field extension of may be obtained by forming the quotient here a sequence is in if and only if the set of indices is a member of a fixed ultrafilter . see figure [ helpful ] .

[ figure [ helpful ] : commutative diagram relating the rationals , the reals , and the ( finite part of the ) hyperreals , with the vertical arrows given by the standard part map `` st '' . ]

to give an example , the sequence represents a nonzero infinitesimal , whose sign depends on whether or not the set is a member of the ultrafilter .
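written out with explicit ( and our own ) symbols , the two quotients just described take the form

\[
{\mathbb r } \;=\; \mathcal{c } / \mathcal{z } , \qquad\qquad
( u_n ) \sim ( v_n ) \ \text{in the enriched field} \iff \{ \, n : u_n = v_n \, \} \in \mathcal{u } ,
\]

where $\mathcal{c}$ is the ring of cauchy sequences of rationals , $\mathcal{z}$ the ideal of null sequences , and $\mathcal{u}$ a fixed ( nonprincipal ) ultrafilter on the index set . as an illustration of the sign phenomenon just mentioned ( the particular sequence intended in the original is not reproduced here ) , the class of $( 1/n )$ is a positive infinitesimal , while the class of $( (-1)^n / n )$ is a nonzero infinitesimal whose sign depends on whether the even or the odd indices belong to $\mathcal{u}$ .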
to obtain a full hyperreal field ,we replace by in the construction , and form a similar quotient we wish to emphasize the analogy with formula defining the a - continuum .note that , while the leftmost vertical arrow in figure [ helpful ] is surjective , we have a more detailed discussion of this construction can be found in the book by m. davis .see also baszczyk for some philosophical implications .more advanced properties of the hyperreals such as saturation were proved later , see keisler for a historical outline . a helpful `` semicolon '' notation for presenting an extended decimal expansion of a hyperreal was described by a. h. lightstone .see also p. roquette for infinitesimal reminiscences .a discussion of infinitesimal optics is in k. stroyan , j. keisler , d. tall , and l. magnani and r. dossena , and bair & henry .applications of the b - continuum range from aid in teaching calculus ( see illustration in figure [ jul10 ] ) to the bolzmann equation ( see l. arkeryd ) ; modeling of timed systems in computer science ( see h. rust ) ; mathematical economics ( see anderson ) ; mathematical physics ( see albeverio _ ) ; etc .we are grateful to martin davis , solomon feferman , reuben hersh , david sherry , and steve shnider for invaluable comments that helped improve the manuscript .hilton kramer s influence is obvious throughout .albeverio , s. ; hegh - krohn , r. ; fenstad , j. ; lindstrm , t. : nonstandard methods in stochastic analysis and mathematical physics . _ pure and applied mathematics _ , * 122*. academic press , inc . ,orlando , fl , 1986 .beeson , m. : foundations of constructive mathematics .metamathematical studies .ergebnisse der mathematik und ihrer grenzgebiete ( 3 ) [ results in mathematics and related areas ( 3 ) ] , 6 .springer - verlag , berlin , 1985 .bishop , e. : the crisis in contemporary mathematics .proceedings of the american academy workshop on the evolution of modern mathematics ( boston , mass . , 1974 ) ._ historia math ._ * 2 * ( 75 ) , no . 4 , 507 - 517 .bishop , e. : schizophrenia in contemporary mathematics [ published posthumously ; originally distributed in 1973 ] . in errett bishop : reflections on him and his research ( san diego , calif . , 1983 ) , 132 , _ contemp ._ * 39 * , amer .soc . , providence , ri , 1985 .boniface , j. ; schappacher , n. : `` sur le concept de nombre en mathmatique '' : cours indit de leopold kronecker berlin ( 1891 ) .[ `` on the concept of number in mathematics '' : leopold kronecker s 1891 berlin lectures ] _ rev .histoire math ._ textbf7 ( 2001 ) , no . 2 , 206275 .brting , k. : a new look at e. g. bjrling and the cauchy sum theorem . _exact sci . _ * 61 * ( 2007 ) , no . 5 , 519535 .breger , h. : the mysteries of adaequare : a vindication of fermat .exact sci . _ * 46 * ( 1994 ) , no . 3 , 193219 .cauchy , a. l. ( 1823 ) : rsum des leons donnes lecole royale polytechnique sur le calcul infinitsimal ( paris : imprimrie royale , 1823 ) . in oeuvres compltes , series 2 , vol .paris : gauthier - villars , 1899 .cauchy , a. l. ( 1853 ) note sur les sries convergentes do nt les divers termes sont des fonctions continues dune variable relle ou imaginaire , entre des limites donnes . in _oeuvres compltes _ , series 1 , vol .12 , pp .paris : gauthier villars , 1900 .corry , l. : axiomatics , empiricism , and anschauung in hilbert s conception of geometry : between arithmetic and general relativity .the architecture of modern mathematics , 133156 , oxford univ . press , oxford , 2006 .dauben , j. 
: conceptual revolutions and the history of mathematics : two studies in the growth of knowledge ( 1984 ) . in revolutions in mathematics , 4971 , oxford sci ., oxford univ .press , new york , 1992 .dauben , j. : arguments , logic and proof : mathematics , logic and the infinite .history of mathematics and education : ideas and experiences ( essen , 1992 ) , 113148 , _ stud ._ , * 11 * , vandenhoeck & ruprecht , gttingen , 1996. davis , m. : applied nonstandard analysis .pure and applied mathematics .wiley - interscience [ john wiley & sons ] , new york - london - sydney , 1977 . reprinted : dover , ny , 2005 , see http://store.doverpublications.com/0486442292.html feferman , s. : conceptions of the continuum [ le continu mathmatique .nouvelles conceptions , nouveaux enjeux ] ._ intellectica _ * 51 * ( 2009 ) 169 - 189 .see also http://math.stanford.edu//papers/conceptcontin.pdf hilbert , d. ( 1919 - 20 ) , natur und mathematisches erkennen : vorlesungen , gehalten 1919 - 1920 in gttingen .nach der ausarbeitung von paul bernays ( edited and with an english introduction by david e. rowe ) , basel , birkhuser ( 1992 ) .hobson , e. w. : on the infinite and the infinitesimal in mathematical analysis . _ proceedings london mathematical society _ ,volume s1 - 35 ( 1902 ) , no . 1 , pp . 117 - 139 ( reprinted in real numbers , generalizations of the reals , and theories of continua , 326 , synthese lib . , 242 , kluwer acad .dordrecht , 1994 ) .katz , m. ; tall , d. : the tension between intuitive infinitesimals and formal mathematical analysis .chapter in crossroads in the history of mathematics and mathematics education .bharath sriraman , editor . soon to be available at http://www.infoagepub.com/products/crossroads-in-the-history-of-mathematics klein , f. : elementary mathematics from an advanced standpoint .i. arithmetic , algebra , analysis .translation by e. r. hedrick and c. a. noble [ macmillan , new york , 1932 ] from the third german edition [ springer , berlin , 1924 ] originally published as elementarmathematik vom hheren standpunkte aus ( leipzig , 1908 ) .kopell , n. ; stolzenberg , g. : commentary on e. bishop s talk ( historia math . 2 ( 1975 ) , 507517 ) .proceedings of the american academy workshop on the evolution of modern mathematics ( boston , mass . , 1974 ) ._ historia math ._ * 2 * ( 1975 ) , no . 4 , 519521 .luzin , n. n. ( 1931 ) two letters by n. n. luzin to m. ya . vygodskii . with an introduction by s. s. demidov .translated from the 1997 russian original by a. shenitzer .monthly _ * 107 * ( 2000 ) , no . 1 , 6482 . lakatos , imre : cauchy and the continuum : the significance of nonstandard analysis for the history and philosophy of mathematics .intelligencer 1 ( 1978 ) , no .3 , 151161 ( originally published in 1966 ) . novikov , s. p. : the second half of the 20th century and its results : the crisis of the society of physicists and mathematicians in russia and in the west .( russian ) istor .- mat7(42 ) ( 2002 ) , 326356 , 369 .novikov , s. p. : the second half of the 20th century and its conclusion : crisis in the physics and mathematics community in russia and in the west .ser . 2 , 212 , geometry , topology , and mathematical physics , 124 , amer .soc . , providence , ri , 2004 .( translated from istor .- mat7(42 ) ( 2002 ) , 326356 , 369 ; by a. sossinsky . )robinson , a. : selected papers of abraham robinson .nonstandard analysis and philosophy . edited and with introductions by w. a. j. luxemburg and s. krner .yale university press , new haven , conn . 
,1979 .schubring , g. : conflicts between generalization , rigor , and intuition .number concepts underlying the development of analysis in 1719th century france and germany . sources and studies in the history of mathematics and physical sciences .springer - verlag , new york , 2005 .stevin , simon : the principal works of simon stevin . vols .iia , iib : mathematics .edited by d. j. struik c. v. swets & zeitlinger , amsterdam 1958 .iia : v+pp .1455 ( 1 plate ) .iib : 1958 iv+pp .459976 .stroyan , k. : uniform continuity and rates of growth of meromorphic functions .contributions to non - standard analysis ( sympos . ,oberwolfach , 1970 ) , pp ._ studies in logic and foundations of math .69 , north - holland , amsterdam , 1972 .tall , d. : the psychology of advanced mathematical thinking , in _ advanced mathematical thinking_. edited by d. o. tall , mathematics education library , 11 .kluwer academic publishers group , dordrecht , 1991 .taylor , r. g. : review of real numbers , generalizations of the reals , and theories of continua , edited by philip ehrlich [ see item above ] ._ modern logic _ * 8 * , number 1/2 ( january 1998april 2000 ) , 195212 .veronese , g. : fondamenti di geometria a pi dimensioni e a pi specie di unit rettilinee esposti in forma elementare , lezioni per la scuola di magistero in matematica .padova , tipografia del seminario , 1891 .wallis , j. : the arithmetic of infinitesimals . translated from the latin and with an introduction by jaequeline a. stedall . _sources and studies in the history of mathematics and physical sciences_. springer - verlag , new york , 2004 . | we analyze the developments in mathematical rigor from the viewpoint of a burgessian critique of nominalistic reconstructions . we apply such a critique to the reconstruction of infinitesimal analysis accomplished through the efforts of cantor , dedekind , and weierstrass ; to the reconstruction of cauchy s foundational work associated with the work of boyer and grabiner ; and to bishop s constructivist reconstruction of classical analysis . we examine the effects of an ontologically limitative disposition on historiography , teaching , and research . |
the last few years have seen the emergence of a standard model of cosmology motivated by and consistent with a wide range of observations , including the cosmic microwave background , distant supernovae , big - bang nucleosynthesis , large - scale structure , the abundance of rich galaxy clusters , and local measurements of the hubble constant ( e.g. ) . the power spectrum of fluctuations ( of temperature , density , flux , shear , etc . )is the primary statistic used to constrain cosmological parameters from observations of the cosmic microwave background ( ) , of galaxies ( ; ; ; ) , of the lyman alpha forest ( ; ; ) , and of weak gravitational lensing ( ; ; ; ) . from a cosmological standpoint ,the most precious data lie at large , linear scales , where fluctuations preserve the imprint of their primordial generation .a generic , albeit not universal , prediction of inflation is that primordial fluctuations should be gaussian . at large , linear scales , observations are consistent with fluctuations being gaussian .however , much of the observational data , especially those involving galaxies , lies in the translinear or nonlinear regime .it remains a matter of ongoing research to elucidate the extent to which nonlinear data can be used to constrain cosmology .we recently began a program to measure quantitatively , from cosmological simulations , the fisher information content of the nonlinear matter power spectrum ( specifically , in the first instance , the information about the initial amplitude of the linear power spectrum ) .for gaussian fluctuations , the power spectrum contains all possible information about cosmological parameters . at nonlinear scales , where fluctuations are non - gaussian, it is natural to start by measuring information in the power spectrum , although it seems likely that additional information resides in the 3-point and higher order correlation functions ( ; ) . measuring the fisher information in the power spectrum involves measuring the covariance matrix of power . for gaussian fluctuations ,the expected covariance of estimates of power is known analytically , but at nonlinear scales the covariance of power must be estimated from simulations . a common way to estimate the covariance matrix of a quantity is to measure its covariance over an ensemble of computer simulations ( ; ; ; ; ) .however , a reliable estimate of covariance can be computationally expensive , requiring many , perhaps hundreds ( ; ) of realizations . on the other handit is physically obvious that the fluctuations in the values of quantities over the different parts of a single simulation must somehow encode the covariance of the quantities .if the covariance could be measured from single simulations , then it would be possible to measure covariance from fewer , and from higher quality , simulations . in any case, the ability to measure covariance from a single simulation can be useful in identifying simulations whose statistical properties are atypical .a fundamental difficulty with estimating covariances from single simulations in cosmology is that the data are correlated over all scales , from small to large . as described by ,such correlations invalidate some of the `` jackknife '' and `` bootstrap '' schemes suggested in the literature . in jackknife , variance is inferred from how much a quantity varies when some segments of the data are kept , and some deleted .bootstrap is like jackknife , except that deleted segments are replaced with other segments . 
as part of the work leading to the present paper , we investigated a form of the bootstrap procedure , in which we filled each octant of a simulation cube with a block of data selected randomly from the cubeunfortunately , the sharp edges of the blocks introduced undesirable small scale power , which seemed to compromise the effort to measure covariance of power reliably . such effects can be mitigated by tapering .however , it seemed to us that bootstrapping , like jackknifing , is a form of re - weighting data , and that surely the best way to re - weight data would be to apply the most slowly possible varying weightings . for a periodic box , such weightingswould be comprised of the largest scale modes , the fundamentals . in the present paper , [ estimate ], we consider applying an arbitrary weighting to the density of a periodic cosmological simulation , and we show how the power spectrum ( and its covariance , and the covariance of its covariance ) of the weighted density are related to the true power spectrum ( and its covariance , and the covariance of its covariance ) . we confirm mathematically the intuitive idea that weighting with fundamentals yields the most reliable estimate of covariance of power . multiplying the density in real space by some weighting is equivalent to convolving the density in fourier space with the fourier transform of the weighting .this causes the power spectrum ( and its covariance , and the covariance of its covariance ) to be convolved with the fourier transform of the square ( and fourth , and eighth powers ) of the weighting .the convolution does least damage when the weighting window is as narrow as possible in fourier space , which means composed of fundamentals . in [ weightings ]we show how to design a best set of weightings , by minimizing the expected variance of the resulting estimate of covariance of power .these considerations lead us to recommend a specific set of weightings , each consisting of a combination of fundamental modes .this paper should have stopped neatly at this point .unfortunately , numerical simulations , described in a companion paper , revealed an unexpected ( one might say insidious ) , substantial discrepancy at nonlinear scales between the variance of power estimated by the weightings method and the variance of power estimated by the ensemble method . 
in [ beatcoupling ] we argue that this discrepancy arises from beat - coupling , a nonlinear gravitational coupling to the large - scale beat mode between closely spaced nonlinear wavenumbers , when the power spectrum is measured from fourier modes at anything other than infinitely sharp sets of wavenumbers .surprisingly , in cosmologically realistic simulations , the covariance of power is dominated at nonlinear scales by this beat - coupling to large scales .we discuss the beat - coupling problem in [ discussion ] .beat - coupling is relevant to observations because real galaxy surveys yield fourier modes in finite bands of wavenumber , of width where is a chararacteristic linear size of the survey .section [ summary ] summarizes the results .the fundamental idea of this paper is to apply an ensemble of weightings to a ( non - gaussian , in general ) density field , and to estimate the covariance of the power spectrum from the scatter in power between different weightings .this section derives the relation between the power spectrum of a weighted density field and the true power spectrum , along with its expected covariance , and the covariance of its covariance .it is shown , equations ( [ piapprox ] ) , ( [ dphatidphatjf ] ) , and ( [ dphati2dphatj2 g ] ) , that the expected ( ( covariance of ) covariance of ) shell - averaged power of weighted density fields is simply proportional to the true ( ( covariance of ) covariance of ) shell - averaged power , provided that two approximations are made .the two approximations are , firstly , that the power spectrum and trispectrum are sufficiently slowly varying functions of their arguments , equations ( [ papprox ] ) and ( [ tapprox ] ) , and , secondly , that power is estimated in sufficiently broad shells in -space , equation ( [ broadshellapprox ] ) .the required approximations are most accurate if the weightings contain only the largest scale fourier modes , such as the weightings containing only fundamental modes proposed in [ weightings ] . as will be discussed in [ beatcoupling ] , the apparently innocent assumption , equation ( [ tapprox ] ) , that the trispectrum is a slowly varying function of its arguments , is incorrect , because it sets to zero some important beat - coupling contributions . 
however , it is convenient to pretend in this section and the next , [ estimate ] and [ weightings ] , that the assumption ( [ tapprox ] ) is true , and then to consider in [ beatcoupling ] how the results are modified when the beat - coupling contributions to the trispectrum are included .ultimately we find , [ largescale ] , that the weightings method remains valid when beat - couplings are included , and , [ notquiteweightings ] , that the minimum variance weightings derived in [ weightings ] , while no longer exactly minimum variance , should be close enough to remain good for practical application .this section is necessarily rather technical , because it is necessary to distinguish carefully between various flavours of power spectrum : estimated versus expected ; unweighted versus weighted ; non - shell - averaged versus shell - averaged .subsections [ p ] to [ ddp ] present expressions for the various power spectra , their covariances , and the covariances of their covariances .subsections [ subtractmeanp ] and [ subtractmeanddp ] show how the expressions are modified when , as is usually the case , deviations in power must be measured relative to an estimated rather than an expected value of power .let denote the density of a statistically homogeneous random field at position in a periodic box .choose the unit of length so that the box has unit side .the density might represent , perhaps , a realization of the nonlinearly evolved distribution of dark matter , or of galaxies. the density could be either continuous or discrete ( particles ) .expanded in fourier modes , the density is is used in both real and fourier space .the justification for this notation is that is the same vector in hilbert space irrespective of the basis with respect to which it is expanded .see for example for a pedagogical exposition . ] thanks to periodicity , the sum is over an integral lattice of wavenumbers , with integer , , .the expectation value of the density defines the true mean density , which without loss of generality we take to equal unity the deviation of the density from the mean is the expectation values of the fourier amplitudes vanish , , except for the zeroth mode , whose expectation value equals the mean density , .the fourier amplitude of the zeroth mode is the actual density of the realization , which could be equal to , or differ slightly from , the true mean density , depending on whether the mean density of the realization was constrained to equal the true density , or not . 
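as a concrete illustration of these conventions , here is a minimal numpy sketch of the overdensity and its fourier amplitudes on a periodic unit box ; the array names and the toy density field are ours , chosen only for illustration :

import numpy as np

n = 64                                             # grid points per side of the unit box
rho = np.random.lognormal(size=(n, n, n))          # stand - in for a simulated density field
rho /= rho.mean()                                  # normalize the mean density of the realization to unity
delta = rho - 1.0                                  # deviation of the density from the mean
delta_k = np.fft.fftn(delta) / n**3                # fourier amplitudes ; delta_k[0,0,0] = 0 up to roundoff

the discrete wavenumbers correspond to the integral lattice of the text , and the power at a single wavevector is estimated as the squared modulus of the corresponding amplitude .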
because the density field is by assumption statistically homogeneous, the expected covariance of fourier amplitudes is a diagonal matrix here denotes the discrete delta - function , and is the power spectrum .note that there would normally be an extra factor of on the left hand side of equation ( [ drhokdrhok ] ) , but it is fine to omit the factor here because the mean density is normalized to unity , equation ( [ rhobar ] ) .the reason for dropping the factor of is to maintain notational consistency with equation ( [ pi ] ) below for the power spectrum of weighted density ( where the deviation in density is necessarily _ not _ divided by the mean ) .the symmetry in equation ( [ drhokdrhok ] ) expresses pair exchange symmetry .below , [ shell ] , we will assume that the density field is statistical isotropic , in which case the power is a function only of the scalar wavenumber , but for now we stick to the more general case where power is a function of vector wavenumber .let denote the member of a set of real - valued weighting functions , and let denote the density weighted by the weighting the fourier amplitudes of the weighted density are convolutions of the fourier amplitudes of the weighting and the density : reality of the weighting functions implies the expected mean of the weighted density is proportional to the weighting , in which a factor of on the right hand side has been omitted because the mean density has been normalized to unity , equation ( [ rhobar ] ) .the deviation of the weighted density from the mean is in fourier space the expected mean of the weighted density is and the deviation of the weighted density from the mean is the deviations in the fourier amplitudes of the weighted density are convolutions of the weighting and the deviation in the density similarly to equation ( [ rhoik ] ) . the expected covariance between two weighted densities and at wavenumbers and is , from equations ( [ drhokdrhok ] ) and ( [ drhoik ] ) , the weighting breaks statistical homogeneity , so the expected covariance matrix of fourier amplitudes , equation ( [ rhoikrhojk ] ) , is not diagonal .nevertheless we _ define _ the power spectrum of the weighted density by the diagonal elements of the covariance matrix , the variance note that this definition ( [ pi ] ) of the power spectrum differs from the usual definition of power in that the deviations on the right are fourier transforms of the deviations _ not _ divided by the mean density ( dividing by the mean density would simply unweight the weighting , defeating the whole point of the procedure ) .the power spectrum defined by equation ( [ pi ] ) is related to the true power spectrum by , equation ( [ rhoikrhojk ] ) , now make the approximation that the power spectrum at the wavenumber displaced by from is approximately equal to the power spectrum at the undisplaced wavenumber this approximation is good provided that the power spectrum is slowly varying as a function of wavenumber , and that the displacement is small compared to . in [ weightings ] we constrain the weightings to contain only fundamental modes , with , , = , so that the displacement is as small as it can be without being zero , and the approximation ( [ papprox ] ) is therefore as good as it can be .the approximation ( [ papprox ] ) becomes exact in the case of a constant , or shot noise , power spectrum , except at . under approximation ( [ papprox ] ) , the power spectrum of the weighted density is which is just proportional to the true power spectrum . 
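continuing the numpy sketch above ( with n , rho and delta as defined there ) , the deviation of a weighted density from its expected mean w_i ( x ) is w_i ( x ) times the unweighted deviation , so the weighted power spectrum of equation ( [ pi ] ) can be estimated directly . the particular weighting below , a constant plus a single fundamental mode , is purely illustrative and is not the specific minimum - variance set recommended later in the paper :

x = np.arange(n) / n                                        # coordinates across the unit box
w = 1.0 + np.cos(2.0 * np.pi * x)[:, None, None]            # slowly varying weighting built from a fundamental mode
w /= np.sqrt((w**2).mean())                                 # set the box integral of w^2 to unity
delta_w_k = np.fft.fftn(w * delta) / n**3                   # fourier amplitudes of the weighted deviation
p_w = np.abs(delta_w_k)**2                                  # weighted power , approximately equal to the true power

the normalization in the third line anticipates the unit normalization of the weightings , equation ( [ wnorm ] ) , introduced just below .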
without loss of generality, let each weighting be normalized so that the factor on the right hand side of equation ( [ piapproxw ] ) is unity then the power spectrum of the weighted density is approximately equal to the true power spectrum thus , in the approximation ( [ papprox ] ) and with the normalization ( [ wnorm ] ) , measurements of the power spectrum of weighted densities provide estimates of the true power spectrum . the plan is to use the scatter in the estimates of power over a set of weightings to estimate the covariance matrix of power .let denote the power spectrum of unweighted density at wavevector measured from a simulation , the hat distinguishing it from the true power spectrum : below , [ shell ] , we will invoke statistical isotropy , and we will average over a shell in -space , but in equation ( [ phat ] ) there is no averaging because there is just one simulation , and just one specific wavenumber . because of statistical fluctuations , the estimate will in general differ from the true power , but by definition the expectation value of the estimate equals the true value , .the deviation in the power is the difference between the measured and expected value : the expected covariance of power involves the covariance of the covariance of unweighted densities } & & \nonumber \\ & \times & \bigl [ \delta\rho({{\bmath k}}_3 ) \delta\rho({{\bmath k}}_4 ) - 1_{{{\bmath k}}_3 + { { \bmath k}}_4 } p({{\bmath k}}_3 ) \bigr ]\bigr\rangle \nonumber \\ & = & \left ( 1_{{{\bmath k}}_1 + { { \bmath k}}_3 } 1_{{{\bmath k}}_2 + { { \bmath k}}_4 } + 1_{{{\bmath k}}_1 + { { \bmath k}}_4 } 1_{{{\bmath k}}_2 + { { \bmath k}}_3 } \right ) p({{\bmath k}}_1 ) p({{\bmath k}}_2 ) \nonumber \\ & & \mbox { } + 1_{{{\bmath k}}_1 + { { \bmath k}}_2 + { { \bmath k}}_3 + { { \bmath k}}_4 } t({{\bmath k}}_1 , { { \bmath k}}_2 , { { \bmath k}}_3 , { { \bmath k}}_4 ) \label{eta}\end{aligned}\ ] ] which is a sum of a reducible , gaussian part , the terms proportional to , and an irreducible , non - gaussian part , the term involving the trispectrum .equation ( [ eta ] ) essentially defines what is meant by the trispectrum .exchange symmetry implies that the trispectrum function is invariant under permutations of its 4 arguments .the momentum - conserving delta - function in front of the trispectrum expresses translation invariance .it follows from equation ( [ drhokdrhokdrhokdrhok ] ) that the expected covariance of estimates of power is similarly to equations ( [ phat ] ) and ( [ dphat ] ) , let denote the power spectrum of the weighted density at wavevector measured from a simulation and let denote the deviation between the measured and expected value the expected covariance between the power spectra of the and weighted densities is , from equations ( [ drhoik ] ) and ( [ drhokdrhokdrhokdrhok ] ) , \ . } & & \end{aligned}\ ] ] assume now that the unweighted density field is statistically isotropic , so that the true power spectrum is a function only of the absolute value of its argument . in estimating the power from a simulation , one would typically average the measured power over a spherical shell of wavenumbers in -space .actually the arguments below generalize immediately to the case where the power is not isotropic , in which case might be chosen to be some localized patch in -space . 
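the shell averaging used from here on is straightforward to implement ; continuing the sketch above , the following ( with shell edges of our own choosing ) bins the three - dimensional power onto broad spherical shells and records the number of modes in each shell :

ki = np.fft.fftfreq(n) * n                                  # integer wavenumbers along one axis
kmag = np.sqrt(ki[:, None, None]**2 + ki[None, :, None]**2 + ki[None, None, :]**2)
edges = np.arange(1.0, n // 2, 4.0)                         # broad shells , as the broad - shell approximation requires

def shell_average(p3d, kmag, edges):
    # average the 3d power over each shell and count the modes it contains
    pbar, nmodes = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (kmag >= lo) & (kmag < hi)
        pbar.append(p3d[sel].mean())
        nmodes.append(sel.sum())
    return np.array(pbar), np.array(nmodes)

p_shell, nk = shell_average(np.abs(delta_k)**2, kmag, edges)    # unweighted shell - averaged power
pw_shell, _ = shell_average(p_w, kmag, edges)                   # weighted shell - averaged power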
however , we shall assume isotropy , and refer to as a shell .let denote the measured power averaged over a shell about scalar wavenumber ( the estimated shell - averaged power is written in lower case to distinguish it from the estimate of power at a single specific wavevector ) : here is the number of modes in the shell .we count and its complex conjugate as contributing two distinct modes , the real and imaginary parts of .the expectation value of the estimates of shell - averaged power equals the true shell - averaged power the deviation between the measured and expected value of shell - averaged power is the expected covariance of shell - averaged estimates of power is , from equations ( [ dphat ] ) and ( [ dphatdphat ] ) , \ .\end{aligned}\ ] ] in the usual case , the shells would be taken to be non - overlapping , in which case the intersection in equation ( [ dphatdphat ] ) is equal either to if and are the same shell , or to the empty set if and are different shells . similarly to equation ( [ phat ] ) , let denote the measured shell - averaged power spectrum of the weighted density at wavenumber the expectation value of the estimates is ( compare eq .( [ p ] ) ) in the approximation ( [ papprox ] ) of a slowly varying power spectrum , and with the normalization ( [ wnorm ] ) , the expected shell - averaged power spectrum of the weighted density is approximately equal to the shell - averaged power spectrum of the unweighted density ( compare eq .( [ piapprox ] ) ) the deviation between the measured and expected values is ( compare eq .( [ dphat ] ) ) the expected covariance of shell - averaged power spectra of weighted densities is , from equations ( [ dphati ] ) and ( [ dphatidphatj ] ) , \ . } & & \end{aligned}\ ] ] assume , analogously to approximation ( [ papprox ] ) for the power spectrum , that the trispectrum function in equation ( [ dphatidphatj ] ) is sufficiently slowly varying , and the displacements , , , sufficiently small , that in [ beatcoupling ] we will revisit the approximation ( [ tapprox ] ) , and show that in fact it is not true , in a way that proves to be interesting and observationally relevant . in this section andthe next , [ weightings ] , however , we will continue to assume that the approximation ( [ tapprox ] ) is valid . in the approximations( [ papprox ] ) and ( [ tapprox ] ) that the power spectrum and trispectrum are both approximately constant for small displacements of their arguments , the covariance of shell - averaged power spectra , equation ( [ dphatidphatj ] ) , becomes \ .} & & \end{aligned}\ ] ] consider the gaussian ( ) part of this expression ( [ dphatidphatjapprox ] ) . in the true covariance of shell - averaged power ,equation ( [ dphatdphat ] ) , the gaussian part of the covariance is a diagonal matrix , with zero covariance between non - overlapping shells . by contrast , the gaussian part of the covariance of power of weighted densities , equation ( [ dphatidphatjapprox ] ) , is not quite diagonal . in effect , the gaussian variance in each shell is smeared by convolution with the weighting function , causing some of the gaussian variance near the boundaries of adjacent shells to leak into covariance between the shells . 
in [ weightings ] , we advocate restricting the weightings to contain only fundamental modes , which keeps smearing to a minimum .whatever the case , if each shell is broad compared the extent of the weightings in -space , then the smearing is relatively small , and can be approximated as zero .mathematically , this broad - shell approximation amounts to approximating in the broad - shell approximation ( [ broadshellapprox ] ) , the expected covariance of shell - averaged power spectra of weighted densities , equation ( [ dphatidphatjapprox ] ) , simplifies to where the factor is in real ( as opposed to fourier ) space , the factor is equation ( [ dphatidphatjf ] ) is the most basic result of the present paper .it states that the expected covariance between estimates of power from various weightings is proportional to the true covariance matrix of power .the nice thing about the result ( [ dphatidphatjf ] ) is that the constant of proportionality depends only on the weightings and , and is independent both of the power spectrum and of the wavenumbers and in the covariance .equation ( [ dphatidphatjf ] ) provides the formal mathematical justification for estimating the covariance of power from the scatter in estimates of power over an ensemble of weightings of density . in [ weightings ] we will craft the weightings so as to minimize the expected variance of the estimated covariance of power .the resulting weightings are `` best possible '' , within the framework of the technique . to determine the minimum variance estimator , it is necessary to have an expression for the ( co)variance of the covariance of power , which we now derive .the expected covariance between estimates of covariance of power is a covariance of covariance of covariance of densities , an 8-point object .this object involves , in addition to the 8-point function , a linear combination of products of lower - order functions adding to 8 points .the types of terms are ( cf . ) in which signifies a product of four 2-point functions , signifies a product of a 2-point function with two 3-point functions , and so on , up to , which signifies the 8-point function .we do not pause to write out all the terms explicitly , because in the same slowly - varying and broad - shell approximations that led to equation ( [ dphatidphatjf ] ) , the covariance of covariance of power spectra of weighted densities simplifies to } & & \nonumber \\ & & \times \bigl [ \delta{\hat{p}}_j(k_3 ) \delta{\hat{p}}_j(k_4 ) - \left\langle \delta{\hat{p}}_j(k_3 ) \delta{\hat{p}}_j(k_4 ) \right\rangle \bigr ] \bigr\rangle \nonumber \\ & \approx & g_{ij } \bigl\langle \bigl [ \delta{\hat{p}}(k_1 ) \delta{\hat{p}}(k_2 ) - \left\langle \delta{\hat{p}}(k_1 ) \delta{\hat{p}}(k_2 ) \right\rangle \bigr ] \nonumber \\ & & \times \bigl [ \delta{\hat{p}}(k_3 ) \delta{\hat{p}}(k_4 ) - \left\langle \delta{\hat{p}}(k_3 ) \delta{\hat{p}}(k_4 ) \right\rangle \bigr ] \bigr\rangle\end{aligned}\ ] ] where is , analogously to equation ( [ fij ] ) , in real ( as opposed to fourier ) space , the factors are equation ( [ dphati2dphatj2 g ] ) states , analogously to equation ( [ dphatidphatjf ] ) , that the expected covariance of covariance of power spectra of weighted densities is proportional to the true covariance of covariance of power . 
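the factors relating weighted to true ( covariances of ) covariances are simple functionals of the weightings and can be evaluated once and for all . in our reading of the real - space expressions referred to above they are the box integrals of $w_i^2 w_j^2$ and of $w_i^4 w_j^4$ respectively ; the sketch below ( assumption : the weightings are supplied as real - space arrays on the unit box , each normalized so that the mean of $w^2$ is unity ) computes them as grid averages :

import numpy as np

def weighting_factors(weights):
    # weights : list of real - space weighting arrays w_i(x) on the unit box
    m = len(weights)
    f = np.empty((m, m))            # factors relating covariance of weighted power to true covariance
    g = np.empty((m, m))            # factors for the covariance of the covariance
    for i, wi in enumerate(weights):
        for j, wj in enumerate(weights):
            f[i, j] = (wi**2 * wj**2).mean()        # grid average = box integral of w_i^2 w_j^2
            g[i, j] = (wi**4 * wj**4).mean()        # grid average = box integral of w_i^4 w_j^4
    return f, g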
as with the factors , equation ( [ fij ] ) , the constants of proportionality , equation ( [ gij ] ) , depend only on the weightings and , and are independent of the power spectrum or of any of the higher order functions , and are also independent of the wavenumbers , ... , in the covariance , a gratifyingly simple result .the deviation of the shell - averaged power spectrum of the weighted density was defined above , equation ( [ dphati ] ) , to be the difference between the measured value and the expected value of shell - averaged power .however , the expected power spectrum ( the true power spectrum ) is probably unknown .even if the true power spectrum is known in the linear regime ( because the simulation was set up with a known linear power spectrum ) , the true power spectrum in the non - linear regime is not known precisely , but must be estimated from the simulation .in practice , therefore , it is necessary to measure the deviation in power not from the true value , but rather from some estimated mean value .two strategies naturally present themselves .the first strategy is to take the mean power spectrum to be the measured power spectrum of the unweighted density of the simulation . in this casethe deviation between the measured shell - averaged power spectra of the weighted and unweighted densities is ( the deviation is primed to distinguish it from the deviation , eq .( [ dphati ] ) ) the second strategy is to take the mean power spectrum to be the average over weightings of the measured power spectra of weighted densities , . in this casethe deviation between the measured shell - averaged power spectra and their average is ( with the same primed notation for the deviation as in eq .( [ dphati1 ] ) ; it is up to the user to decide which strategy to adopt ) the advantage of the first strategy , equation ( [ dphati1 ] ) , is that the power spectrum of the unweighted density is the most accurate ( by symmetry ) estimate of the power spectrum that can be measured from a single simulation .its disadvantage is that measurements of power spectra of weighted densities yield ( slightly ) biassed estimates of the power spectrum of unweighted density , because the approximation ( [ papprox ] ) can lead to a slight bias if , as is typical , the power spectrum is not constant . in other words ,the approximation , equation ( [ piapprox ] ) , is not an exact equality .although the bias is likely to be small , it contributes systematically to estimates of deviations of power , causing the covariance of power to be systematically over - estimated .the second strategy , equation ( [ dphati2 ] ) , is unaffected by this bias , but the statistical uncertainty is slightly larger . probably the sensible thing to dois to apply both strategies , and to check that they yield consistent results . to allow a concise expression for the covariance of power to be written down , it is convenient to introduce , defined to be the fourier transform of the squared real - space weighting , , the normalization condition ( [ wnorm ] ) on the weightings is equivalent to requiring in terms of , the factors , equation ( [ fij ] ) , relating the expected covariance matrix of power spectra of weighted densities to the true covariance matrix of power are an expression is desired for the covariance of power in terms of the deviations , equations ( [ dphati1 ] ) or ( [ dphati2 ] ) , instead of . for this ,a modified version of is required . 
for strategy one ,equation ( [ dphati1 ] ) , whereas for strategy two , equation ( [ dphati2 ] ) , in either case , the expected covariance of estimates of shell - averaged power spectra is related to the true covariance of shell - averaged power by ( compare eq .( [ dphatidphatjf ] ) ) where the factors are ( compare eq .( [ fijv ] ) ) the approximation ( [ dphatidphatjfp ] ) is valid under the same assumptions made in deriving the approximation ( [ dphatidphatjf ] ) , namely the slowly - varying approximations ( [ papprox ] ) and ( [ tapprox ] ) , and the broad - shell approximation ( [ broadshellapprox ] ) . the expression ( [ gij ] ) for the covariance of covariance of power must likewise be modified to allow for the fact that the deviations in power must be measured as deviations not from the true power spectrum but from either ( strategy 1 ) the power spectrum of the unweighted density , or ( strategy 2 ) the averaged power spectrum of the weighted densities . for this purposeit is convenient to define to be the fourier transform of the fourth power of the real - space weighting , , in terms of , the factors , equation ( [ gij ] ) , relating the expected covariance of covariance of power spectra of weighted densities to the true covariance of covariance of power are to write down an expression for the covariance of the covariance of the deviations instead of , define a modified version of by which is the same as equation ( [ ui ] ) but with primed , equations ( [ vpi1 ] ) or ( [ vpi2 ] ) , in place of . then the covariance of the covariance of the deviations is related to the true covariance of covariance of shell - averaged power by ( compare eq .( [ dphati2dphatj2 g ] ) ) } & & \nonumber \\ & & \times \bigl [ \delta{\hat{p}}^\prime_j(k_3 ) \delta{\hat{p}}^\prime_j(k_4 ) - \left\langle \delta{\hat{p}}^\prime_j(k_3 ) \delta{\hat{p}}^\prime_j(k_4 ) \right\rangle \bigr ] \bigr\rangle \nonumber \\ & \approx & g^\prime_{ij } \bigl\langle \bigl [ \delta{\hat{p}}(k_1 ) \delta{\hat{p}}(k_2 ) - \left\langle \delta{\hat{p}}(k_1 ) \delta{\hat{p}}(k_2 ) \right\rangle \bigr ] \nonumber \\ & & \times \bigl [ \delta{\hat{p}}(k_3 ) \delta{\hat{p}}(k_4 ) - \left\langle \delta{\hat{p}}(k_3 ) \delta{\hat{p}}(k_4 ) \right\rangle \bigr ] \bigr\rangle\end{aligned}\ ] ] where the factors are ( compare eq .( [ giju ] ) ) equation ( [ dphati2dphatj2gp ] ) gives the expected covariance of the difference between the estimate of covariance and its expectation value , but this latter expectation value is again an unknown quantity .what can actually be measured is the difference between the estimate and its average over weightings . 
to write down an expression for the covariance of the covariance relative to the weightings - averaged covariance rather than the expected covariance ,define a modified version of , equation ( [ upi ] ) , by then the covariance of the covariance of the deviations is related to the true covariance of covariance of shell - averaged power by ( compare eqs .( [ dphati2dphatj2 g ] ) and ( [ dphati2dphatj2gp ] ) ) } & & \nonumber \\ & & \times \bigl [ \delta{\hat{p}}^\prime_j(k_3 ) \delta{\hat{p}}^\prime_j(k_4 ) - \frac{1}{n } \sum_l \delta{\hat{p}}^\prime_l(k_3 ) \delta{\hat{p}}^\prime_l(k_4 ) \bigr ] \bigr\rangle \nonumber \\ & \approx & g^{\prime\prime}_{ij } \bigl\langle \bigl [ \delta{\hat{p}}(k_1 ) \delta{\hat{p}}(k_2 ) - \left\langle \delta{\hat{p}}(k_1 ) \delta{\hat{p}}(k_2 ) \right\rangle \bigr ] \nonumber \\ & & \times \bigl [ \delta{\hat{p}}(k_3 ) \delta{\hat{p}}(k_4 ) - \left\langle \delta{\hat{p}}(k_3 ) \delta{\hat{p}}(k_4 ) \right\rangle \bigr ] \bigr\rangle\end{aligned}\ ] ] where the factors are ( compare eqs .( [ giju ] ) and ( [ gpiju ] ) ) approximations ( [ dphati2dphatj2gp ] ) and ( [ dphati2dphatj2gpp ] ) are valid under the same approximations as approximations ( [ dphatidphatjf ] ) and ( [ dphati2dphatj2 g ] ) , namely the slowly - varying approximations ( [ papprox ] ) and ( [ tapprox ] ) , and the broad - shell approximation ( [ broadshellapprox ] ) .it was shown in [ estimate ] that the expected covariance between shell - averaged power spectra of weighted densities is proportional to the true covariance of shell - average power , equation ( [ dphatidphatjfp ] ) .it follows that the scatter in estimates of power from different weightings can be used to estimate the true covariance of power . in this sectionwe use minimum variance arguments to derive a set of weightings , equation ( [ wkminvar ] ) , which we recommend , [ recommend ] , for practical application . in this section as in the previous one , [ estimate ] , we continue to ignore the beat - coupling contributions to the ( covariance of ) covariance of power .these beat - couplings are discussed in [ beatcoupling ] , which in [ notquiteweightings ] concludes that the minimum variance weightings derived in the present section , although no longer precisely minimum variance , should be satisfactory for practical use . in the first place, we choose to use weightings that contain only combinations of fundamental modes , that is , with , , running over . by restricting the weightings to fundamental modesonly , we ensure that the two approximations required for equation ( [ dphatidphatjfp ] ) to be valid are as good as can be .the first approximation was the slowly - varying approximation , that both the power spectrum and the trispectrum remain approximately constant , equations ( [ papprox ] ) and ( [ tapprox ] ) , when their arguments are displaced by the extent of the weightings , that is , by amounts for which is non - zero .the second approximation was the broad - shell approximation , that the shells over which the estimated power is averaged are broad compared to the extent of the weightings , which reduces the relative importance of smearing of gaussian variance from the edges of adjacent shells into covariance between the shells .in the second place , we choose to use weightings that are symmetrically related to each other , which seems a natural thing to do given the cubic symmetry of a periodic box . 
choosing a symmetrically related set of weightings not only simplifies practical application of the procedure , but also simplifies the mathematics of determining a best set of fourier coefficients , as will be seen in [ minvar ] below .there are rotational and reflectional transformations of a cube , corresponding to choosing the -axis in any of 6 directions , then the -axis in any of 4 directions perpendicular to the -axis , and finally the -axis in either of the 2 directions perpendicular to the - and -axes . to the rotational and reflectional transformations we adjoin the possibility of translations by a fraction ( half , quarter , eighth ) of a box along any of the 3 axes , for a net total of possible transformations .in practice , however , the minimum variance weightings presented in [ theweightings ] prove to possess a high degree of symmetry , greatly reducing the number of distinct weightings . for brevity ,let denote an estimate of the covariance of shell - averaged power from the weighted density ( the arguments and on are suppressed , since they play no role in the arguments that follow ) the quantity here is any diagonal element of the matrix of factors defined by equation ( [ fpijv ] ) ; the diagonal elements are identically equal for all because the weightings are by assumption symmetrically related .the factor in equation ( [ xhati ] ) ensures that is , in accordance with equation ( [ dphatidphatjfp ] ) , an estimate of the true covariance of shell - averaged power , which we abbreviate , the approximation ( [ xi ] ) is valid under the assumptions made in deriving equation ( [ dphatidphatjfp ] ) , namely the slowly - varying approximations ( [ papprox ] ) and ( [ tapprox ] ) , and the broad - shell approximation ( [ broadshellapprox ] ) .let denote the number of weightings .because the weightings are by assumption symmetrically related , it follows immediately that the best estimate of the true covariance of shell - averaged power will be a straight average over the ensemble of weightings it remains to determine the best fourier coefficients for a representative weighting .the best set is that which minimizes the expected variance of the estimate ( [ xhat ] ) . according to equation ( [ dphati2dphatj2gp ] ) ,this expected variance is approximately proportional to a factor that depends on the weightings \sim \end{array}\ ! } \frac{1}{(f^\prime n)^2 } \sum_{ij } g^\prime_{ij}\ ] ] multiplied by another factor that is independent of weightings , namely the true covariance of covariance of power , the expression to the right of the coefficient in equation ( [ dphati2dphatj2gp ] ) .note that the variance is the expected variance about the true value , so it is , equation ( [ gpiju ] ) , not , equation ( [ gppiju ] ) , that appears in equation ( [ dxhat2 ] ) . equation ( [ dxhat2 ] ) shows that minimizing the variance with respect to the coefficients of the weightings is equivalent to minimizing the quantity on the right hand side of the proportionality ( [ dxhat2 ] ) . from equations ( [ fpijv ] ) , ( [ upi ] ) , and ( [ gpiju ] )it follows that this factor can be written where denotes the average of , equation ( [ upi ] ) , over weightings note that .equation ( [ dxhat2u ] ) shows that minimizing the variance involves computing , equation ( [ up ] ) .we evaluate using an algebraic manipulation program ( mathematica ) as follows . a representative weighting contains non - zero fourier coefficients , since by assumption it contains only combinations of fundamental modes . 
the coefficients and , which are complex conjugates of each other , effectively contribute two coefficients , the real and imaginary parts of .first , evaluate , equation ( [ vi ] ) , in terms of the coefficients of the representative weighting .the are non - zero for 125 values of , those whose components , , run over .each is a quadratic polynomial in the fourier coefficients .next , modify to get , equation ( [ vpi1 ] ) , by setting the coefficient for to zero .again , each is a quadratic polynomial in the fourier coefficients . for definiteness , we adopt strategy one , equation ( [ vpi1 ] ) , rather than strategy two , equation ( [ vpi2 ] ) . that is , we assume that the deviation in the power spectrum of the weighting of density is being measured relative to the power spectrum of the unweighted density , rather than relative to the average of the power spectra of the weighted densities . in the end it turns out , [ theweightings2 ] , that the minimum variance solution is the same for both strategies , so there is no loss in restricting to strategy one .next , evaluate , equation ( [ upi ] ) .the are non - zero for 729 values of , those whose components , , run over .each is a quartic polynomial in the fourier coefficients . next ,evaluate , equation ( [ up ] ) , the average of over weightings .consider first averaging over the 48 different rotational and reflectional transformations of the weighting .the averaged result possesses rotational and reflectional symmetry , so that is equal to its value at with components permuted and reflected in such a way that , of which there are distinct cases .the rotationally and translationally symmetrized function can be computed by averaging the values of at values of into distinct bins .the symmetrized function satisfies , so is necessarily real .thus the absolute value sign around in equation ( [ dxhat2u ] ) can be omitted .now consider averaging the over translations by half a box in each dimension .there are such translations , and each translation is characterized by a triple , , giving the number of half boxes translated in each dimension , either zero or one for each component .the effect of the translation is to multiply each coefficient by , that is , by according to whether is even or odd .the sign change carries through the definitions ( [ vi ] ) of and ( [ vpi1 ] ) of to the definition ( [ upi ] ) of , and thence to the definition ( [ up ] ) of .that is , the effect of a translation by half a box is to multiply by .it follows that , after averaging over translations , vanishes if any component of is odd , leaving only cases where all components of are even .consequently , need be evaluated only at the wavevectors all of whose components are even .the symmetrized function can be computed by averaging the values of at the values of into the distinct bins with even .it is amusing that increasing the number of weightings ( by a factor , if all translations yield distinct weightings ) actually decreases the computational work required to find the best fourier coefficients .adjoining translations by a quarter of a box simplifies the problem of finding the minimum variance solution for the coefficients even further .there are such translations , and each translation is characterized by a triple , , , each component running over to , giving the number of quarter boxes translated in each dimension .the effect of the translation is to multiply each coefficient by .the effect propagates through to the symmetrized function , which is therefore 
non - zero only for the wavevectors all of whose components are multiples of . the symmetrized function can be computed by averaging the values of at the values of into the distinct bins with and each component a multiple of .one more step , adjoining translations by an eighth of a box , reduces the problem of finding the minimum variance solution to a triviality . after adjoining translations by an eighth of a box ,the symmetrized function vanishes except at .the function to be minimized , the right hand side of equation ( [ dxhat2u ] ) , is therefore identically equal to , and any arbitrary weighting therefore yields a minimum variance solution .though amusing , the result is not terribly useful , because it involves a vast number , , of weightings .physically , if there are enough weightings , then together they exhaust the information about the covariance of power , however badly crafted the weightings may be . as will be seen in [ theweightings ] , there are much simpler solutions that achieve the absolute minimum possible variance , for which the right hand side of equation ( [ dxhat2u ] ) equals , with far fewer weightings . the argument above has shown that the problem of finding the minimum variance solution for attains its simplest non - trivial form if the weightings are generated from a representative weighting by rotations , reflections , and translations by quarter of a box , a total of symmetries . in this case , the weighting - dependent factor in the variance of covariance of power , the right hand side of equation ( [ dxhat2u ] ) , becomes a rational function , a ratio of two order polynomials in the fourier coefficients , the numerator being a sum of squares of quartics , and the denominator the square of a quartic .it is this function that we minimize in [ theweightings ] to find a best set of weightings .the minimum variance solution is independent of the overall normalization of the coefficients , since the quantity being minimized , the ratio on the right hand side of equation ( [ dxhat2u ] ) , is independent of the normalization of .once the minimum variance solution for the coefficients has been found , the coefficients can be renormalized to satisfy the normalization condition ( [ wnorm ] ) that ensures that the estimates of the shell - averaged power spectra of weighted densities are estimates of the true shell - averaged power , equations ( [ pi ] ) and ( [ piapprox ] ) . 
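the symmetrization over rotations, reflections, and fractional-box translations invoked above is easy to reproduce numerically. the sketch below enumerates the 48 signed permutations that make up the rotation/reflection group of the cube and gives the phase factor that a fractional translation applies to a fourier coefficient; it is an illustrative fragment with our own naming and sign conventions, not the algebraic computation performed with mathematica in the text.

```python
import itertools
import numpy as np

def cube_symmetries():
    """The 48 rotation/reflection matrices of the cube, as signed permutation matrices."""
    mats = []
    for perm in itertools.permutations(range(3)):
        for signs in itertools.product((1, -1), repeat=3):
            m = np.zeros((3, 3), dtype=int)
            for row, (col, sign) in enumerate(zip(perm, signs)):
                m[row, col] = sign
            mats.append(m)
    return mats                                   # len(mats) == 48

def images_under_cube_group(nvec):
    """Distinct images of an integer wavevector (in fundamental units) under the 48 symmetries."""
    nvec = np.asarray(nvec, dtype=int)
    return sorted({tuple(m @ nvec) for m in cube_symmetries()})

def translation_phase(nvec, frac):
    """Phase picked up by the Fourier coefficient w(n) when the weighting is translated
    by the box fraction frac = (t1, t2, t3); e.g. frac = (0.5, 0, 0) flips the sign of
    coefficients with odd n1, and frac = (0.25, 0, 0) multiplies them by a phase i**n1
    (up to the sign convention of the transform)."""
    return np.exp(2j * np.pi * np.dot(nvec, frac))

# e.g. the generator (1, 1, 0) has 12 distinct images, (1, 1, 1) has 8, (1, 0, 0) has 6
```

if the three generating modes are taken to be (1,0,0), (1,1,0) and (1,1,1) in units of the fundamental (our assumption), then doubling each set by the quarter-box translation gives 12 + 24 + 16 = 52 weightings, matching the total quoted in the summary of this paper.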
the previous subsection, [minvar], described how to obtain the coefficients that minimize the expected variance of the estimate of covariance of shell-averaged power that comes from averaging over an ensemble of weightings that contain only combinations of fundamental modes, and that are symmetrically related to each other by rotations, reflections, and translations by quarter of a box.
[ figure : representative minimum variance weightings, equation ([wrminvar]), for the cases (top) and (bottom). they are just single fourier modes, appropriately scaled and phased. ]
numerically, we find not one but three separate sets of minimum variance weightings (with hindsight, the sets are simple enough that they might perhaps have been found without resort to numerics). each set consists of symmetrical transformations of a weighting generated by a single mode, namely , , and respectively for each of the three sets. because each individual weighting has a rather high degree of symmetry, each set has far fewer than the weightings expected if all symmetrical transformations yielded distinct weightings. each of the three sets is generated by the weighting where is one of the three possibilities . in real space, the weighting corresponding to of equation ([wkminvar]) is . the complete set of ( , ) weightings for each set is obtained as follows. in set one (two, three), a factor of ( , ) comes from the cubic (dodecahedral, octahedral) symmetry of permuting and reflecting the components , , of , or equivalently the components , , of . a further factor of comes from multiplying by , equivalent to translating by quarter of a box, or in equation ([wrminvar]). the three minimum variance solutions are absolute minimum variance, in the sense that each set not only minimizes the expression on the right hand side of equation ([dxhat2u]), but it solves for . this means that it is impossible to find better solutions in which all the weightings are symmetrically related to each other, which is the condition under which equation ([dxhat2u]) was derived. with the minimum variance solutions in hand, it is possible to go back and examine the covariance, equation ([dphatidphatjfp]), between estimates of power from different weightings and , either within the same set, or across two different sets. estimates of power between two different sets are uncorrelated: the covariance is zero if and are drawn from two different sets. if on the other hand the weightings and are drawn from the same set, then it turns out that only half of the weightings, the ( , ) weightings related by the cubic (dodecahedral, octahedral) symmetry of permuting and reflecting , , , yield distinct estimates of deviation in power. the covariance matrix of estimates of power between the ( , ) cubically (dodecahedrally, octahedrally) related weightings is proportional to the unit matrix. however, translating a weighting by quarter of a box, , yields an estimate of deviation of power that is minus that of the original weighting, . actually, this is exactly true only if the slowly-varying and broad-shell approximations are exactly true (of course, the broad-shell approximation is never exactly true).
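as an aside, the single-mode form of these weightings is simple to realize on a simulation grid. the sketch below constructs such a weighting in real space; the overall phase is left as a free parameter because the precise phasing of equation ([wrminvar]) is not reproduced here, and the sqrt(2) amplitude reflects our reading of the normalization condition ([wnorm]) as requiring the box average of the squared weighting to be unity. the names and conventions are ours.

```python
import numpy as np

def single_mode_weighting(ngrid, nvec, phase=0.0):
    """Real-space weighting generated by a single Fourier mode (illustrative sketch).

    nvec  : integer wavevector in units of the box fundamental, e.g. (1, 1, 0)
    phase : overall phase of the mode; the minimum variance solution fixes this,
            but the exact value is not reproduced here
    The sqrt(2) amplitude makes the box average of w(r)**2 equal to one.
    """
    x = np.arange(ngrid) / float(ngrid)           # coordinates in units of the box side
    rx, ry, rz = np.meshgrid(x, x, x, indexing="ij")
    arg = 2.0 * np.pi * (nvec[0] * rx + nvec[1] * ry + nvec[2] * rz) + phase
    return np.sqrt(2.0) * np.cos(arg)

# usage: one member of the ensemble of weighted density fields
# delta_i = single_mode_weighting(ngrid, (1, 1, 0), phase) * delta
```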
thus translating a weighting by quarter of a box should yield an estimate of deviation in power that is highly anti-correlated with the original, which should provide a useful check of the procedure. translating a weighting by half a box simply changes its sign, . this yields an estimate of deviation of power that equals exactly (irrespective of approximations) that of the original weighting, so yields no distinct estimate of deviation in power. these redundant translations by half a box have already been omitted from the set of ( , ) weightings. the value of , the factor that converts, equation ([xhati]), estimates of the covariance of power from a weighted density field to an estimate of the true covariance of power, is the same factor for each of the three sets. the expected covariance matrix of estimates of covariance of power equals times the true covariance of covariance of power, according to equation ([dphati2dphatj2 g]). the factors, equation ([gpiju]), are . equation ([gpijminvar]) is valid for weightings, both within the same set and across different sets. the case in equation ([gpijminvar]) occurs not only when , but also when the weightings and are related by translation by quarter of a box. the case in equation ([gpijminvar]) occurs not only when the weightings and are parity conjugates of each other, but also when they are parity conjugates translated by quarter of a box. the factors, equation ([gppiju]), which relate the covariance of estimates relative to their measured mean, equation ([xhat]), as opposed to their expected mean, equation ([xi]), are . an estimate of the uncertainty in the estimate can be deduced by measuring the variance in the fluctuations about the measured mean. there is of course no point in attempting to estimate the uncertainty from , which is identically zero. the true variance can be estimated from the measured variance by , in which the factor of comes from (but note the caveat at the end of [notquiteweightings]), which corrects for the neglected covariance in the measured variance. the minimum variance weightings derived above assumed, for definiteness, strategy one, in which the deviation in power is taken to be relative to the power spectrum of the unweighted density, equation ([dphati1]). an alternative strategy, strategy two, is to take the deviation in power to be relative to the average of the power spectra of the weighted densities, equation ([dphati2]). strategy two yields an estimate of covariance of power that has potentially less systematic bias, but potentially greater statistical uncertainty. as it happens, the minimum variance solution for strategy one, [theweightings], proves also to solve the minimum variance problem for strategy two. thus the minimum variance solution weightings are the same for both strategies. mathematically, expectation values of covariances for the two methods differ in that is given for strategy one by equation ([vpi1]), and for strategy two by equation ([vpi2]). however, for the minimum variance weightings of strategy one, equation ([wkminvar]) and its symmetrical transformations, it turns out that , the term subtracted from in strategy two, equation ([vpi2]), is equal to if , and zero otherwise. this is exactly the same as the term subtracted from in strategy one, equation ([vpi1]). it follows that is the same for the two strategies. although the minimum variance set of weightings is the same for both
strategies , the two strategies will in general yield different estimates of the covariance of power .the three minimum variance sets of weightings found ( numerically ) in [ theweightings ] all take the same form , equation ( [ wkminvar ] ) , differing only in that they are generated by a different single mode , with wavevectors , , and respectively .one can check that the result generalizes to higher order weightings , in which the wavevector in equation ( [ wkminvar ] ) is any wavevector with integral components ( such as , , and so on ) .that is , for any wavevector with integral components , the weightings generated from the weighting of equation ( [ wkminvar ] ) by rotations , relections , and translations by quarter of a box , form a minimum variance set .all the results of [ theweightings ] ( and [ theweightings2 ] ) carry through essentially unchanged .in particular , all equations ( [ wrminvar])([ggminvar ] ) remain the same .the disadvantage of including higher order weightings is that the estimates of the covariance of power become increasingly inaccurate as the wavenumber of the weighting increases , because the slowly - varying approximations ( [ papprox ] ) and ( [ tapprox ] ) , and the broad - shell approximation ( [ broadshellapprox ] ) , become increasingly poor as increases .the advantage of including higher order weightings is that the more weightings , the better the statistical estimate , at least in principle .however , the gain from more weightings is not as great as one might hope .the cramr - rao inequality ( ; see e.g. for a pedagogical derivation ) states that the inverse variance of the best possible unbiassed estimate of the parameter must be less than or equal to the fisher information ( see ) in the parameter where is the likelihood function .to the extent that the estimates are gaussianly distributed ( that is , the likelihood function is a gaussian in the estimates \sim \end{array}\ ! } \exp \bigl [ - \frac{1}{2 } \sum_{ij } \langle \delta{\widehat{x}}_i \delta{\widehat{x}}_j \rangle^{-1 } ( { \widehat{x}}_i - x ) ( { \widehat{x}}_j - x ) \bigr]\ ] ] with covariance independent of ) , which could be a rather poor approximation , the fisher information in the parameter approximates the sum of the elements of the inverse covariance matrix , in the present case , the covariance matrix is proportional to , so in approximation that are gaussianly distributed , the fisher information is proportional to \sim \end{array}\ ! } \sum_{ij } { g^\prime_{ij}}^{-1 } \ .\ ] ] with the coefficients given by equation ( [ gpijminvar ] ) , the quantity on the right hand side of equation ( [ fisherg ] ) proves to be a constant , independent of the number of estimates this constancy of the fisher information with respect to the number of estimates suggests that there is no gain at all in adjoining more and more estimates . 
however , this conclusion is true only to the extent , firstly , that the slowly - varying and broad - shell approximations are good , and , secondly , that the estimates are gaussianly distributed , neither of which assumptions necessarily holds .all one can really conclude is that the gain in statistical accuracy from including more estimates is likely to be limited .there is however another important consideration besides the accuracy of the estimate of the covariance matrix of power : it is desirable that the estimated covariance matrix be , like the true covariance matrix , strictly positive definite , that is , it should have no zero ( or negative ) eigenvalues . as noted by ,if a matrix is estimated as an average over estimates , then its rank can be no greater than .thus , to obtain a positive definite covariance matrix of power for shells of wavevector , at least distinct estimates are required . in [ recommend ] belowwe recommend estimating the covariance of power from an ensemble of weightings .this will yield a positive definite covariance matrix only if the covariance of power is estimated over no more than shells of wavenumber .since , as noted in [ theweightings ] , weightings related by translation by quarter of a box yield highly anti - correlated estimates of power , hence highly correlated estimates of covariance of power , a more conservative approach would be to consider that the weightings yield only effectively distinct estimates of covariance of power , so that the covariance of power can be estimated over no more than shells of wavenumber .if ( strategy two ) the deviation of power is measured relative to the measured mean over symmetrically related weightings , a ( slightly ) different mean for each of the sets of weightings , then degrees of freedom are lost , and the covariance of power can be estimated over no more than shells of wavenumber , or more conservatively over no more than shells of power . here is a step - by - step recipe for applying the weightings method to estimate the covariance of power from a periodic simulation . 1 .select the weightings .we recommend the minimum variance sets of weightings given by equation ( [ wkminvar ] ) and its symmetrical transformations .if the weightings are restricted to contain only combinations of fundamental modes , then there are three such sets of weightings , equation ( [ kminvar ] ) , and the three sets together provide distinct weightings .2 . for each weighting ,measure the shell - averaged power spectrum of the weighted density field , equations ( [ phati ] ) and ( [ phati ] ) .3 . for each weighting , evaluate the deviation in the shell - averaged power as the difference between and , either ( strategy one ) the shell - averaged power of the unweighted density , or ( strategy two ) the mean over symmetrically related weightings .the advantage of strategy one is that the statistical error is potentially smaller , whereas the advantage of strategy two is that the systematic bias is potentially smaller . in strategy two, it makes sense to subtract the mean separately for each symmetrically related set of weightings , because the systematic bias is ( slightly ) different for each set .we recommend trying both strategies one and two , and checking that they yield consistent results .4 . 
estimate the covariance matrix of shell-averaged power from the average over all ( ) weightings . the factor of in equation ([xest]) is, equation ([fpminvar]), necessary to convert the average over weightings to an estimate of the true covariance of power, equation ([xhati]).
[ figure [vartwofig] : comparison between the normalized variance of power measured from 25 art simulations by (symbols with error bars, indicating median and quartiles) the weightings method, and (plain symbols) the ensemble method. the two methods disagree substantially at nonlinear scales. lines show the normalized variance predicted by perturbation theory both with (solid line) and without (dashed line) the large-scale beat-coupling contribution. the dotted line shows the expected gaussian contribution to the variance. this figure is a condensed version of figure 5 of . ]
this paper should have ended at this point. unfortunately, numerical tests, described in detail in the companion paper, revealed a serious problem. figure [vartwofig] shows the problem. it shows the median and quartiles of variance of power measured by the weightings method in each of 25 art simulations of box size , compared to the variance of power measured over the ensemble of the same 25 simulations. although the two methods agree at linear scales, the weightings method gives a systematically larger variance at nonlinear scales. the discrepancy reaches almost an order of magnitude at the smallest scales measured, . the reader is referred to for details of the simulations and their results. this section diagnoses and addresses the problem. the next section, [discussion], discusses the problem and its relevance to observations. the physical cause of the problem illustrated in figure [vartwofig] traces to a nonlinear coupling of products of fourier modes closely spaced in wavenumber to the large-scale beat mode between them. this beat-coupling, as we refer to it, occurs only when power is measured from fourier modes with a finite spread in wavevector, and therefore appears in the weightings method (and in observations; see [observation] below) but not in the ensemble method. the beat-coupling is surprisingly large, to the point that, as seen in figure [vartwofig], it actually dominates the variance of power at nonlinear scales. more specifically, in the ensemble method, the power spectrum of a periodic simulation is measured from the variance of fourier modes. in the weightings method, on the other hand, the power spectrum receives contributions not only from the variance, but also from the covariance between modes a small wavevector apart. this covariance vanishes in the mean, but it couples to large-scale modes through quadratic nonlinearities. that is, the correlation between the product and the large-scale mode is the bispectrum . the bispectrum is zero for gaussian fluctuations, but is driven away from zero by nonlinear gravitational growth. the place where, prior to this section, we inadvertently discarded the large-scale beat-coupling, is equation ([tapprox]), where we made the seemingly innocent approximation that the trispectrum is a slowly varying function of what appear to be its arguments, to . this assumption is false, as we now show.
for a statistically isotropic field ( as considered in this paper ) , the trispectrum depends on six scalar arguments .this follows from the fact that a spatial configuration of four points is determined by the six lengths of the sides of the tetrahedron whose vertices are the four points . in fourier space ,the configuration is an object four of whose sides are equal to the wavevectors to .the object forms a closed tetrahedron ( because ) , whose shape is determined by the six lengths of the sides of the tetrahedron .four - point configuration of wavevectors for the trispectrum in equation ( [ dphatidphatj ] ) , which describes the covariance of power spectra of weighted densities .the short leg , equation ( [ epsilon ] ) , produces a beat - coupling to large scales ., width=192 ] figure [ tetrahedronfig ] illustrates the configuration of interest in the present paper , that for the trispectrum in equation ( [ dphatidphatj ] ) .rewritten as a function of six scalar arguments , the trispectrum of equation ( [ dphatidphatj ] ) is where the wavevector is defined by which is small but not necessarily zero .the invalid approximation ( [ tapprox ] ) is equivalent to approximating the problem with this approximation is apparent . although primed wavenumbers are small compared to unprimed ones , so that the approximation in the first five arguments is reasonable , in the last argument it is not valid to approximate a finite wavenumber , however small , by zero .a valid approximation is , rather , as an example of the large - scale beat - coupling contributions to the trispectrum that arise from the beat wavevector , consider perturbation theory . in perturbation theory ( pt ) , the trispectrum can be split into snake and star contributions ( ; ) \nonumber \\ & & \!\!\!\!\!\ ! \\mbox { } + \mbox{cyclic ( 4 star terms)}\end{aligned}\ ] ] where , and the second - order pt kernel is given by with .in the case of interest , where the trispectrum is that of equation ( [ tsix ] ) , 4 of the 12 snake terms produce a coupling to large scales , those where the beat wavenumber in equation ( [ tab ] ) is small . in the ( valid ) approximation ( [ tapprox2 ] ) , the pertinent pt trispectrum is in which the term on the last line represents the large - scale beat - coupling contribution incorrectly ignored by the approximation ( [ tapprox1 ] ) . in equation ( [ dphatidphatj ] ) for the covariance of shell - averaged power , this trispectrum , equation ( [ tapproxpt ] ) , is angle - averaged over the directions of and .the angle - averaged second - order pt kernel is and it follows that the last line of equation ( [ tapproxpt ] ) , when angle - averaged , is . 
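for reference, the quadratic coupling kernel entering the snake terms above is the standard second-order eulerian perturbation theory kernel in its einstein-de sitter form; the small function below evaluates it and checks its familiar angle average of 17/21, which is the combination that survives the shell averaging. the kernel is quoted from standard perturbation theory rather than copied from the displayed equations, so the convention should be checked against the text's own definitions.

```python
import numpy as np

def f2_kernel(k1, k2, mu):
    """Standard second-order Eulerian PT kernel F2(k1, k2), with mu the cosine of
    the angle between the two wavevectors (Einstein-de Sitter form, quoted for
    reference rather than taken from the text)."""
    return 5.0 / 7.0 + 0.5 * mu * (k1 / k2 + k2 / k1) + (2.0 / 7.0) * mu ** 2

# angle average over mu uniform in [-1, 1]: the linear mu term drops and <mu^2> = 1/3,
# giving <F2> = 5/7 + 2/21 = 17/21
mu = np.linspace(-1.0, 1.0, 20001)
assert abs(f2_kernel(1.0, 1.0, mu).mean() - 17.0 / 21.0) < 1e-3
```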
following the same arguments that led from equation ( [ dphatidphatj ] ) to equation ( [ dphatidphatjf ] ) , and then to equation ( [ dphatidphatjfp ] ) , but with the beat - coupling term now correctly retained in the trispectrum , one finds that equation ( [ dphatidphatjfp ] ) for the expected covariance of shell - averaged power spectra of weighted densities is modified to where is defined by equations ( [ vi ] ) and ( [ vpi1 ] ) or ( [ vpi2 ] ) , and the constant is the reason for writing equation ( [ dphatidphatjflpt ] ) in this form , with the constant separated out , is that , as will be seen in [ hierarchical ] , the same expression remains valid in the hierarchical model , but with the 4-point hierarchical snake amplitude .figure [ vartwofig ] includes lines showing the predicted pt result for the variance of shell - averaged power of weighted density , equation ( [ dphatidphatj ] ) , both with ( solid lines ) and without ( dashed lines ) beat - coupling . the pt variance with beat - couplingwas obtained by numerically integrating the pt expression ( [ tab ] ) for the trispectrum ( [ tsix ] ) in equation ( [ dphatidphatj ] ) ( that is , without making the approximations ( [ tapprox2 ] ) or ( [ dphatidphatjflpt ] ) ) , with the minimum variance weightings ( [ wkminvar ] ) , and then multiplying by the factor , equation ( [ fpminvar ] ) . from thisthe pt variance without beat - coupling was obtained by setting .the variance without beat - coupling agreed well with a direct pt evaluation of equation ( [ dphatdphat ] ) .figure [ vartwofig ] shows that the beat - coupling contribution predicted by perturbation theory seems to account reasonably well for the extra variance that appears at nonlinear scales in the weightings versus the ensemble method .we will return to equation ( [ dphatidphatjflpt ] ) in [ largescale ] below , but first consider the hierarchical model as a prototype of the trispectrum beyond perturbation theory .perturbation theory is valid only in the translinear regime .the behaviour of the trispectrum in the fully nonlinear regime is less well understood .available observational and -body evidence ( ; ; ; ; ) is consistent with a hierarchical model of higher order correlations . in the hierarchical model ,the trispectrum is a sum of snake and star terms \nonumber \\ & & \!\!\!\!\!\ !\mbox { } + r_b \bigl [ p(k_1 ) p(k_2 ) p(k_3 ) + \mbox{cyclic~(4 star terms ) } \bigr ] \ .\end{aligned}\ ] ] the pt trispectrum , equation ( [ tab ] ) , shows a hierarchical structure with hierarchical amplitudes and that are not constant , but rather depend on the shape of the trispectrum tetrahedron . at highly nonlinear scales ,scoccimarro & frieman ( 1999 ) suggested an ansatz , dubbed hyperextended perturbation theory ( hept ) , that the hierarchical amplitudes go over to the values predicted by perturbation theory for configurations collinear in fourier space .for power law power spectra , hept predicts 4-point amplitudes as pointed out by and , hept is not entirely consistent because it predicts a covariance of power that violates the schwarz inequality when . in the hierarchical model with constant hierarchical amplitudes ,4 of the 12 snake terms produce a coupling to large scales in the trispectrum of interest , equation ( [ tsix ] ) . 
in the (valid) approximation ([tapprox2]), the hierarchical trispectrum is , in which the term on the last line represents the large-scale beat-coupling contribution. the hierarchical trispectrum ([tapproxhier]) looks similar to (slightly simpler than) the pt trispectrum ([tapproxpt]). following the same arguments as before, one recovers the same expression ([dphatidphatjflpt]) for the expected covariance of shell-averaged power spectra of weighted densities. suppose that either perturbation theory, [pt], or the hierarchical model, [hierarchical], offers a reliable guide to the coupling of the nonlinear trispectrum to large scales, so that equation ([dphatidphatjflpt]) is a good approximation to the expected covariance of shell-averaged power spectra of weighted densities. make the further assumption that the power spectrum is approximately constant over the large-scale wavevectors represented in , where is the wavenumber at the box scale. the factor in in equation ([papproxl]) appears as a reminder that the wavevectors in are, equations ([vi]) and ([vpi1]) or ([vpi2]), sums of pairs of wavenumbers represented in the weighting. for example, if the weightings are taken to be the minimum variance weightings given by equation ([wkminvar]), then , where is the wavenumber of the weighting. approximation ([papproxl]) is in the same spirit as, but distinct from, the earlier approximation ([papprox]) that the power spectrum is a slowly varying function. note that equation ([papproxl]) does not require that (which would certainly not be correct, because ), because is zero, which is true a priori in strategy one, equation ([vpi1]), and ends up being true a posteriori in strategy two, equation ([vpi2]), by the argument in [theweightings2]. in the approximation ([papproxl]), the summed expression on the right hand side of equation ([dphatidphatjflpt]) is , and equation ([dphatidphatjflpt]) reduces to , with the term on the last line being the large-scale beat-coupling contribution. equation ([dphatidphatjfl]) provides the fundamental justification for the weightings method when beat-coupling is taken into account. it states that the covariance of shell-averaged power spectra of weighted densities is proportional to the sum of the true covariance of shell-averaged power and a beat-coupling term proportional to power at (twice) the box wavenumber. the crucial feature of equation ([dphatidphatjfl]) is that the constant of proportionality, equation ([fpijv]), depends only on the weightings, and is independent both of the power spectrum and of the wavenumbers and . in the limit of infinite box size, the beat-coupling contribution to the covariance of power spectra of weighted densities in equation ([dphatidphatjfl]) goes to zero, as , and the covariance becomes proportional to the true covariance of power. however, in cosmologically realistic simulations, such as those illustrated in figure [vartwofig] and discussed further in [discussion], the beat-coupling contribution, far from being small, is liable to dominate at nonlinear scales.
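to get a rough feel for why the beat-coupling term can dominate, the following back-of-the-envelope sketch compares power near twice the box wavenumber with power at a nonlinear wavenumber for a purely illustrative toy spectrum with a broad maximum. the toy spectrum, the assumed box size, and the omission of all prefactors in equation ([dphatidphatjfl]) are our own simplifications, so the numbers indicate orders of magnitude only.

```python
import numpy as np

def toy_power(k, k_peak=0.02, n_low=1.0, n_high=-1.5):
    """Purely illustrative toy power spectrum with a broad maximum near k_peak
    (k in h/Mpc); NOT a fitted model, only meant to convey orders of magnitude."""
    return (k / k_peak) ** n_low / (1.0 + (k / k_peak) ** (n_low - n_high))

box = 200.0                          # hypothetical box side in Mpc/h
k_box = 2.0 * np.pi / box            # fundamental wavenumber of the box
for k_nl in (0.2, 1.0, 5.0):         # nonlinear wavenumbers of interest
    ratio = toy_power(2.0 * k_box) / toy_power(k_nl)
    print(f"k = {k_nl:4.1f} h/Mpc:  P(2 k_box) / P(k) ~ {ratio:6.1f}")
```

for this toy spectrum the ratio grows rapidly toward nonlinear wavenumbers, which is the qualitative behaviour the text describes: the beat-coupling term, proportional to power near the box scale, overtakes the intrinsic term, proportional to power at the nonlinear scale.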
beyond perturbation theory or the hierarchical model ,the weightings method remains applicable just so long as the hierarchical amplitude in equation ( [ dphatidphatjfl ] ) is independent of the weightings .in general , could be any arbitrary function of , , and the box wavenumber .section [ weightings ] derived sets of minimum variance weightings valid when the covariance , and the covariance of covariance , of power spectra of weighted densities took the separable forms given by equations ( [ dphatidphatjfp ] ) and ( [ dphati2dphatj2gp ] ) .when beat - scale coupling is included , the covariance of power , equation ( [ dphatidphatjfl ] ) , still takes the desired separable form ( as long as the hierarchical amplitude is independent of the weightings ) , but the covariance of covariance of power ( eq . ( [ dphati2dphatj2gpl ] ) of appendix [ minvarl ] ) does not . in appendix[ minvarl ] , we discuss what happens to the minimum variance derivation of [ weightings ] when beat - coupling is included .we argue that the minimum variance weightings of [ theweightings ] are no longer exactly minimum variance , but probably remain near minimum variance , and therefore fine to use in practice .the factor on the right hand side of equation ( [ dxhat2est ] ) is no longer correct when beat - coupling is included , but may remain a reasonable approximation .as shown in [ beatcoupling ] , the covariance of nonlinear power receives beat - coupling contributions from large scales whenever power is measured from fourier modes that have a finite spread in wavevector , as opposed to being delta - functions at single discrete wavevectors .physically , the large - scale beat - coupling arises because a product of fourier amplitudes of closely spaced wavevectors couples by nonlinear gravitational growth to the beat mode between them .the beat - coupling contribution does not appear when covariance of power is measured from ensembles of periodic box simulations , because in that case power is measured from products of fourier amplitudes at single discrete wavevectors . herethe `` beat '' mode is the mean mode , , whose fluctuation is by definition always zero , .there is on the other hand a beat - coupling contribution when covariance of power is measured by the weightings method , because the fourier modes of weighted density are spread over more than one wavevector . for weightings constructed from combinations of fundamental modes , as recommended in [ weightings ] , the covariance of power spectra of weighted densities receives beat - coupling contributions from power near the box fundamental .the beat - coupling and normal contributions to the variance of nonlinear power are in roughly the ratio of power at the box scale to power at the nonlinear scale , according to equation ( [ dphatidphatjfl ] ) . in cosmologically realistic simulations ,box sizes are typically around the range .this is just the scale at which the power spectrum goes through a broad maximum .for example , in observationally concordant models , power goes through a broad maximum at ( e.g. 
) , corresponding to a box size .power at the maximum is about times greater than power at the onset of the translinear regime , , and the ratio of power increases at more nonlinear wavenumbers .it follows that in cosmologically realistic simulations the beat - coupling contribution to the covariance of power is liable to dominate the normal contribution .this is consistent with the numerical results illustrated in figure [ vartwofig ] and discussed by , which show that the variance of power measured by the weightings method ( which includes beat - coupling contributions ) substantially exceeds , at nonlinear scales , the variance of power measured by the ensemble method ( which does not include beat - coupling contributions ) . in real galaxy surveys ,measured fourier modes inevitably have finite width , where is a characteristic linear size of the survey .the characteristic size varies from to a few ( an upper limit is set by the comoving horizon distance , which is about in the concordant model ) .it follows that the covariance of nonlinear power measured in real galaxy surveys is liable to be dominated not by the `` true '' covariance of power ( the covariance of power in a perfect , infinite survey ) , but rather by the contribution from beat - coupling to power at the scale of the survey .this means that one must take great care in using numerical simulations to estimate or to predict the covariance of nonlinear power expected in a galaxy survey .the scatter in power over an ensemble of periodic box simulations will certainly underestimate the covariance of power by a substantial factor at nonlinear scales , because of the neglect of beat - coupling contributions .a common and in principle reliable procedure is to estimate the covariance of power of a galaxy survey from mock surveys `` observed '' with the same selection rules as the real survey from numerical simulations large enough to encompass the entire survey ( e.g. ; ; ; ; ; ; ; ; ) .it is important that numerical simulations be genuinely large enough to contain a mock survey .one should be wary about estimating covariance of power from mock surveys extracted from small periodic boxes replicated many times ( e.g. ) , since such boxes are liable to be missing power at precisely those wavenumbers , the inverse scale size of the mock survey , where beat - coupling should in reality be strongest .beat - coupling arises from a real gravitational coupling to large scale modes , and the simulation from which a mock survey is extracted must be large enough to contain such modes .further , it would be wrong to take , say , a volume - limited subsample of a galaxy survey , and then to estimate the covariance of power from an ensemble of periodic numerical simulations whose size is that of the volume - limited subsample .a volume - limited subsample of observational data retains beat - coupling contributions to the covariance of power , whereas periodic box simulations do not .this paper falls into two parts . in the first part , [ estimate ] and [ weightings ], we proposed a new method , the weightings method , that yields an estimate of the covariance of the power spectrum of a statistically homogeneous and isotropic density field from a single periodic box simulation .the procedure is to apply a set of weightings to the density field , and to measure the covariance of power from the scatter in power over the ensemble of weightings . 
in [ estimate ] we developed the formal mathematical apparatus that justifies the weightings method , and in [ weightings ] we derived sets of weightings that achieve minimum variances estimates of covariance of power .section [ recommend ] gives a step - by - step recipe for applying the weightings method .we recommend a specific set of 52 minimum variance weightings containing only combinations of fundamental modes . in the second part of this paper , [ beatcoupling ] and [ discussion ], we discuss an unexpected glitch in the procedure , that emerged from the periodic box numerical simulations described in the companion paper .the numerical simulations showed that , at nonlinear scales , the covariance of power measured by the weightings method substantially exceeded that measured over an ensemble of independent simulations . in [ beatcoupling ]we argue from perturbation theory that the discrepancy between the weightings and ensemble methods arises from `` beat - coupling '' , in which products of closely spaced fourier modes couple by nonlinear gravitational growth to the large - scale beat mode between them .beat - coupling is present whenever nonlinear power is measured from fourier modes that have a finite spread in wavevector , as opposed to being delta - functions at single discrete wavevectors .beat - coupling affects the weightings method , because fourier modes of weighted densities have a finite width , but not the ensemble method , because the fourier modes of a periodic box are delta - functions of wavevector . as discussed in [ discussion ] , beat - coupling inevitably affects real galaxy surveys , whose fourier modes necessarily have a finite width of the order of the inverse scale size of the survey . surprisingly , at nonlinear scales , beat - coupling is liable to dominate the covariance of power of a real survey .one would have thought that the covariance of power at nonlinear scales would be dominated by structure at small scales , but this is not true .rather , the covariance of nonlinear power is liable to be dominated by beat - coupling to power at the largest scales of the survey . 
a common and valid procedure for estimating the covariance of power from a real survey is the mock survey method , in which artificial surveys are `` observed '' from large numerical simulations , with the same selection rules as the real survey .it is important that mock surveys be extracted from genuinely large simulations , not from many small periodic simulations stacked together , since stacked simulations miss the large - scale power essential to beat - coupling .finally , it should be remarked that , although this paper has considered only the covariance of the power spectrum , it is likely that , in real galaxy surveys and cosmologically realistic simulations , beat - coupling contributions dominate the nonlinear variance and covariance of most other statistical measures , including higher order -point spectra such as the bispectrum and trispectrum , and -point correlation functions in real space , including the -point correlation function .we thank nick gnedin , matias zaldarriaga and max tegmark for helpful conversations , and anatoly klypin and andrey kravtsov for making the mpi implementation of art available to us and for help with its application .grafic is part of the cosmics package , which was developed by edmund bertschinger under nsf grant ast-9318185 .the simulations used in this work were performed at the san diego supercomputer center using resources provided by the national partnership for advanced computational infrastructure under nsf cooperative agreement aci-9619020 .this work was supported by nsf grant ast-0205981 and by nasa atp award nag5 - 10763 .baugh c. m. et al .( 29 authors ; 2dfgrs team ) , 2004 , mnras , 351 , l44 blaizot j. , wadadekar y. , guiderdoni b. , colombi s. , bertin e. , bouchet f. r. , devriendt j. e. g. , hatton s. , 2005 , mnras , 360 , 159 coil a. l. , davis m. , szapudi i. , 2001 , pasp , 113 , 1312 cole s. et al .( 31 authors ; 2dfgrs team ) , 2005 , mnras , 362 , 505 colombi s. , bouchet f. r. , hernquist l. , 1996 , apj , 465 , 14 cooray a. , 2004 , mnras , 348 , 250 croton d. j. et al .( 29 authors ; 2dfgrs team ) , 2004 , mnras , 352 , 1232 eisenstein d. j. et al .( 48 authors ; sdss collaboration ) , 2005 , apj , 633 , 560 frith w. j. , shanks t. , outram p. j. , 2005 , mnras , 361 , 701 hamilton a. j. s. , 2000 , mnras , 312 , 257 hamilton a. j. s. , 2005 , in data analysis in cosmology " , v. martinez , ed , proceedings of an international summer school , 6 - 10 september , valencia , spain , springer verlag lecture notes in physics , 17 pages , to appear ( astro - ph/050360 ) hoekstra h. , yee y. , gladders m. , 2002 , apj , 577 , 595 hui l. , gaztaaga e. , 1999 , apj , 519 , 622 komatsu e. et al .( 15 authors ; wmap team ) , 2003 , apjs , 148 , 119 knsch h. r. , 1989 , ann .17(3 ) , 1217 lidz a. , heitmann k. , hui l. , habib s. , rauch m. , sargent w. l. w. , 2005 , apj , submitted ( astro - ph/0505138 ) kendall m. g. , stuart a. , 1967 , the advanced theory of statistics ( hafner publishing , new york ) meiksin a. , white m. , 1999 , mnras , 308 , 1179 padilla n. d. , baugh c. m. , 2003 , mnras , 343 , 796 pan j. , szapudi i. , 2005 , mnras , 362 , 1363 park c. , choi y .- y ., vogeley m. , gott j. r. iii , kim j. , hikage c. , matsubara t. , park m .-g . , suto y. , weinberg d. h. ( sdss collaboration ) , 2005 , apj , 633 , 11 peebles p. j. e. , 1980 , the large scale structure of the universe ( princeton university press ) pen u .-l . , lu t. , van waerbeke l. , mellier y. , 2003 , mnras , 346 , 994 rimes c. d. 
, hamilton a. j. s. , 2005 ,mnras , 360 , l82 rimes c. d. , hamilton a. j. s. , 2006 , mnras , submitted ( companion paper ) sanchez a. g. , baugh c. m. , percival w. j. , peacock j. a. , padilla n. d. , cole s. , frenk c. s. , norberg p. , 2005, mnras , submitted ( astro - ph/0507583 ) scoccimarro r. , frieman j. a. , 1999 , apj , 520 , 35 scoccimarro r. , zaldarriaga m. , hui l. , 1999 , apj , 527 , 1 sefusatti e. , scoccimarro r. , 2005 , phys . rev .d , 71 , 063001 seljak u. et al .( 21 authors ; sdss team ) , 2005 , phys . rev .d , 71 , 103515 sheldon e. s. , johnston d. e. , frieman j. a. , scranton r. , mckay t. a. , connolly a. j. , budavari t. , zehavi i. , bahcall n. , brinkmann j. , fukugita m. , 2004 , aj , 127 , 2544 spergel d. n. et al .( 17 authors ; wmap team ) , 2003 , apjs , 148 , 175 takada m. , jain b. , 2004 , mnras , 348 , 897 tegmark m. et al . ( 62 authors ; sdss collaboration ) , 2004a , apj , 606 , 702 tegmark m. et al . ( 64 authors ; sdss collaboration ) , 2004b , phys . rev .d , 69 , 103501 tegmark m. , taylor a. , heavens a. , 1997 , apj , 480 , 22 yan r. , white m. , coil a. l. , 2004 , apj , 607 , 739 verde l. , heavens a. f. , 2001 , apj , 553 , 14 viel m. , haehnelt m. g. , mnras , in press ( astro - ph/0508177 ) yang x. , mo h. j. , jing y. p. , van den bosch f. c. , chu y. , 2004 , mnras , 350 , 1153 white m. , hu w. , 2000 , apj , 537 , 1 zhan h. , eisenstein d. , 2005 , mnras , 357 , 1387 zhan h. , dav r. , eisenstein d. , katz n. , 2005 , mnras , in press ( astro - ph/0504419 )this appendix describes how the beat - coupling contributions to covariance of power discussed in [ beatcoupling ] modify the minimum variance arguments in [ weightings ] .the conclusion is that the minimum variance weightings given in [ theweightings ] are no longer exact minimum variance , but are probably near minimum variance , and therefore fine to use in practice . with no approximations at all , the covariance of covariance of shell - averaged power spectra of weighted densities ( with the deviations in power being taken relative to the measured rather than the expected mean power , eqs .( [ dphati1 ] ) or ( [ dphati2 ] ) ) , takes the generic form } & & \nonumber \\ & & \times \bigl [ \delta{\hat{p}}^\prime_j(k_3 ) \delta{\hat{p}}^\prime_j(k_4 ) - \left\langle \delta{\hat{p}}^\prime_j(k_3 ) \delta{\hat{p}}^\prime_j(k_4 ) \right\rangle \bigr ] \bigr\rangle \nonumber \\ & = & \sum_{\bvarepsilon_1 + \bvarepsilon_2 + \bvarepsilon_3 + \bvarepsilon_4 = { { \bmath 0 } } } v^\prime_i(\bvarepsilon_1 ) v^\prime_i(\bvarepsilon_2 ) v^\prime_j(\bvarepsilon_3 ) v^\prime_j(\bvarepsilon_4 ) \\\nonumber \lefteqn { \ \ \sum_{{{\bmath k}}_1 \in v_{k_1 } , ... , \ { { \bmath k}}_4 \in v_{k_4 } } e({{\bmath k}}_1 { - } { { \bmath k}}_1^\prime , -{{\bmath k}}_1 { - } { { \bmath k}}_1^{\prime\prime } , ... , { { \bmath k}}_4 { - } { { \bmath k}}_4^\prime , -{{\bmath k}}_4 { - } { { \bmath k}}_4^{\prime\prime } ) } & & \ ] ] where is defined by equations ( [ vi ] ) and ( [ vpi1 ] ) or ( [ vpi2 ] ) , the wavevectors are defined by and is an 8-point object , a sum of products of -point functions adding up to 8 points , as enumerated in equation ( [ eightpt ] ). 
eight - point configuration of wavevectors contributing to the covariance of covariance of power spectra of weighted densities , equation ( [ dphati2dphatj2 ] ) .the central tetrahedron of short legs produces beat - couplings to large scales ., width=192 ] figure [ eightpointfig ] illustrates the configuration of the -point function that contributes to the -point object in equation ( [ dphati2dphatj2 ] ) .the short legs , ... , of the configuration constitute a tetrahedron , whose 6 sides generate beat - couplings to large scale . none of the legs is zero , because a zero leg would make zero contribution to equation ( [ dphati2dphatj2 ] ) , since , equation ( [ vpi1 ] ) or , a posteriori , equation ( [ vpi2 ] ) ; but it is possible for the sum of a pair of the short legs to be zero . it is these configurations , where the sum of a pair of short legs is zero , that prevent equation ( [ dphati2dphatj2 ] ) from being separated , as in equation ( [ dphati2dphatj2gp ] ) , into a product of a factor that depends only on the weightings and a factor that is independent of the weightings .note that it is only the 8-point function itself , not lower - order functions , that prevent separability : lower - order functions depend on at most three of the four , and a triangle with three non - zero sides has ( of course ) no zero sides . in either perturbation theory or the hierarchical model , and in the various valid approximations made in this paper ( firstly , that -point spectra are slowly varying functions of their arguments _ except _ that small arguments _ not _ replaced by zero , and secondly , that shells are broad ) , the covariance of covariance of shell - averaged power spectra of weighted densities , equation ( [ dphati2dphatj2 ] ) , reduces to } & & \nonumber \\ & & \times \bigl [ \delta{\hat{p}}^\prime_j(k_3 ) \delta{\hat{p}}^\prime_j(k_4 ) - \left\langle \delta{\hat{p}}^\prime_j(k_3 ) \delta{\hat{p}}^\prime_j(k_4 ) \right\rangle \bigr ] \bigr\rangle \nonumber \\ & \approx & \lambda g^\prime_{ij } - \mu f^\prime_{ii } f^\prime_{jj } - \nu f^{\prime \ , 2}_{ij}\end{aligned}\ ] ] where and are given by equations ( [ fpijv ] ) and ( [ gpiju ] ) .the quantities , , in equation ( [ dphati2dphatj2gpl ] ) are each functions of , and the box wavenumber , but , importantly , are independent of the weightings . the ( respectively ) term in equation ( [ dphati2dphatj2gpl ] ) arises from terms where ( respectively or ) in equation ( [ dphati2dphatj2 ] ) .all three terms on the right hand side of equation ( [ dphati2dphatj2gpl ] ) contain large - scale beat - coupling contributions , proportional to one , two , or three factors of large - scale power .the second ( ) and third terms in equation ( [ dphati2dphatj2 ] ) are written with negative signs because their effect is such as to cancel some of the beat - coupling terms appearing in the first ( ) term ( that is , some of the beat - coupling terms proportional to in should really be proportional to ; the and terms remove these terms ) .it is to be expected that , , and are all positive . at linear scales , where fluctuations are gaussian , the beat - couplings generated by nonlinear evolution are small , so that . at nonlinear scales ,however , and could be an appreciable fraction of . 
the derivation of minimum variance weightings in [ weightings ] involved summing over weighings , equation ( [ dxhat2 ] ) .consider the corresponding double sum over of equation ( [ dphati2dphatj2gpl ] ) .the sum over yields the same result as before , equation ( [ dxhat2u ] ) . adjoining the sum over from equation ( [ dphati2dphatj2gpl ] ) modifies equation ( [ dxhat2u ] ) to } \nonumber \\ & = & \frac{1}{u^\prime({{\bmath 0}})^2 } \left [ \left ( 1 - \frac{\mu}{\lambda } \right ) u^\prime({{\bmath 0}})^2 + \sum_{{{\bmath k}}\neq { { \bmath 0 } } } \left| u^\prime({{\bmath k } } ) \right|^2 \right]\end{aligned}\ ] ] because for all .the minimum variance weightings given in [ theweightings ] were absolute minimum variance in the sense that for .the same minimum variance weightings continue to achieve absolute minimum variance for equation ( [ dxhat2ul ] ) , reducing its right hand side to the irreducible minimum .thus the minimum variance weightings of [ theweightings ] remain minimum variance as long as only the first two terms ( and ) of equation ( [ dphati2dphatj2gpl ] ) are considered .the third ( ) term breaks the minimum variance derivation .however , this third term is likely to be subdominant compared to the first two .the quantity in the third term of equation ( [ dphati2dphatj2gpl ] ) is proportional to the covariance of power between weightings and , equation ( [ dphatidphatjfl ] ) , and the schwarz inequality guarantees that so that there is a natural tendency for the third term of equation ( [ dphati2dphatj2gpl ] ) to be dominated by the second . the only way for the third term to be large is for the power spectra from different weightings to be highly correlated with each other .physically , however , the most accurate estimate of covariance of power should come from averaging over many uncorrelated weightings , in which case for most weightings .thus , as just stated , it is to be expected that the third term should be subdominant compared to the first two . in summary , to the extent that either perturbation theory or the hierarchical model provide a reliable guide to the behaviour of high - order correlations , and to the extent that the third term of equation ( [ dphati2dphatj2gpl ] ) is subdominant , as it should be, the minimum variance weightings of [ theweightings ] should remain near minimum variance , good enough for practical application . | we show how to estimate the covariance of the power spectrum of a statistically homogeneous and isotropic density field from a single periodic simulation , by applying a set of weightings to the density field , and by measuring the scatter in power spectra between different weightings . we recommend a specific set of weightings containing only combinations of fundamental modes , constructed to yield a minimum variance estimate of the covariance of power . numerical tests reveal that at nonlinear scales the variance of power estimated by the weightings method substantially exceeds that estimated from a simple ensemble method . we argue that the discrepancy is caused by beat - coupling , in which products of closely spaced fourier modes couple by nonlinear gravitational growth to the beat mode between them . beat - coupling appears whenever nonlinear power is measured from fourier modes with a finite spread of wavevector , and is therefore present in the weightings method but not the ensemble method . beat - coupling inevitably affects real galaxy surveys , whose fourier modes have finite width . 
surprisingly , the beat - coupling contribution dominates the covariance of power at nonlinear scales , so that , counter - intuitively , it is expected that the covariance of nonlinear power in galaxy surveys is dominated not by small scale structure , but rather by beat - coupling to the largest scales of the survey . keywords : large - scale structure of universe ; methods : data analysis |
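The beat-coupling mechanism summarized in this abstract can be illustrated with a one-dimensional numerical check: the product of two closely spaced Fourier modes, which is exactly the kind of quadratic combination that enters a power-spectrum estimate, contains a slowly varying component at the beat wavenumber. The sketch below is purely illustrative, with arbitrary parameters chosen by us; it demonstrates only the kinematic identity cos(k1 x) cos(k2 x) = (1/2)[cos((k1-k2)x) + cos((k1+k2)x)], not the gravitational growth that couples the beat mode to large-scale power.

    import numpy as np

    L, n = 1000.0, 4096
    x = np.linspace(0.0, L, n, endpoint=False)
    kf = 2*np.pi/L                          # fundamental wavenumber of the box
    k1, k2 = 200*kf, 203*kf                 # two closely spaced small-scale modes

    product = np.cos(k1*x)*np.cos(k2*x)     # quadratic term of the sort entering P(k) estimates
    spec = np.abs(np.fft.rfft(product))**2
    k = 2*np.pi*np.fft.rfftfreq(n, d=L/n)

    peaks = np.argsort(spec)[-2:]
    print("power of the product is concentrated at k/k_f =", np.sort(k[peaks]/kf))
    # expected output: [  3. 403.]  -- the beat mode |k1 - k2| = 3 k_f and the sum mode;
    # it is the slowly varying beat piece that can couple to large-scale survey modes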
at first glance , the study of the very first stars in the universe might appear to be a rather academic and even quixotic endeavor .after all , we have never directly seen such stars . since we believe that heavy elements are synthesized almost exclusively in stars , the very first `` population iii '' stars were , by definition , made of zero metal gas .intensive searches for metal poor stars in the halo galaxy , however , have only turned up stars with metallicities ( beers 2000 ) . furthermore , the formation of a zero metal star is in some sense a rather singular event .once that star goes supernova and pollutes its environment with metals , the stars that subsequently form in its vicinity will no longer be population iii ( with zero metallicity ) .population iii may also be suicidal in the sense that uv radiation from the first stars could destroy the molecular hydrogen that allows primordial gas to cool and form these stars in the first place .there has therefore been considerable speculation , e.g. , cayrel ( 1986 ) and haiman , rees , & loeb ( 1997 ) , that the population iii ( pop .iii ) phase in the evolution of our universe was very brief indeed . nonetheless , while the pop .iii phase may have been a brief one , it seems to have been a pervasive one . even in the least evolved regions of the universe that we can probe today , the high redshift lyman clouds, we still find evidence for a non - zero metallicity ( cowie & songaila 1998 ) .more importantly , the epoch of the pop .iii stars probably represents the first substantial input of energy , photons , and metals into the universe since the big bang and marks the end of the `` dark ages '' ( e.g. , loeb 1999 , rees 1999 ) that started when the cosmic background light redshifted out of the visible range ( at ) .because structure formation depends critically on the ability of baryons to cool , and this in turn depends critically on the metallicity and ionization state of the baryonic gas , the `` feedback '' effects of the first stars on the intergalactic medium ( igm ) in fact play a subtle but key role in determining the subsequent evolution of the universe .the significant interest in primordial star formation at this conference is therefore not an accident .large - scale cosmological simulations ( e.g. , ostriker & gnedin 1996 ; abel et al .1998 ; fuller & couchman 2000 ) are just now becoming good enough to resolve the scales of the first objects in the universe that can cool and collapse ( tegmark et al . 1997 ) . 
understanding the star formation that will occur in these objects and accurately incorporating its effects into cosmological codes therefore represents the next major challenge in our quest to integrate forwards from the big bang .the interest in population iii , however , is not purely a theoretical one .studies of cosmic microwave background ( cmb ) anisotropies are now providing us with information on the physical conditions at when the universe became transparent to cmb radiation , while observations of high - redshift quasars and galaxies tell us about the universe at redshifts below .current observations , however , give us little information on the `` dark ages , '' the crucial epoch in between .reaching into this epoch is thus our next observational challenge and is the primary motivation , for example , of the _ next generation space telescope _ ( ngst ) which will provide unprecedented sensitivity at near - infrared wavelengths ( loeb 1998 ) .the study of the first stars is thus timely , providing a theoretical framework for the interpretation of what ngst might discover , less than a decade from now .even if ngst does not directly image the first stars , it will probe the epoch of the reionization of the igm ( e.g. , barkana , this proceedings ) .uv photons from the first stars , perhaps together with an early population of quasars , may have contributed significantly to this reionization ( e.g. , see ciardi et al .2000 ; miralda - escud , haehnelt , & rees 2000 ) .if the reionization occurred early enough , cmb fluctuations on scales will be damped by electron scattering ( e.g , .haiman & loeb 1997 ) and this effect could be detected by other next generation instruments like map and planck .the energy input from the first stars may also have left a small but measurable imprint on the cmb on very small scales ( e.g. , see the contributions here by sugiyama et al . and bruscoli et al . ) .in sum , the implications of pop.iii star formation might be testable in the not too distant future ! before that day arrives , however , a skeptical observer might still question whether concrete progress can actually be made in understanding primordial star formation . in the case of present - day star formation , we can not predict the initial mass function from first principles despite the wealth of observational data available . how could we hope to do something like this for unseen primordial stars ?a scan of the considerable early literature on primordial star formation ( e.g. , yoneyama 1972 ; hutchins 1976 ; silk 1977 , 1983 ; carlberg 1981 ; kashlinsky & rees 1983 ; palla , salpeter , & stahler 1983 ; carr , bond , & arnett 1984 ; couchman & rees 1986 ; uehara et al .1996 , haiman , thoul , & loeb 1996 ; omukai & nishi 1998 ) would tend to support this conclusion .the range of mass estimates for the first stars spans spans six ( ! ) decades , from to there are reasons for hope , however .first , the physics of the first stars is considerably simpler than that of present - day star formation ( larson 1998 ; loeb 1998 ) .the present - day interstellar medium is an exceedingly complex environment , but primordial gas initially has no metals , no dust grains and no cosmic rays to complicate the gas cooling function .since we think we know the primordial abundances well and the number of relevant species is small , the gas chemistry and cooling function are relatively simple and have been extensively studied ( e.g. , see galli & palla 1998 ) . 
also , because there are no stars yet , the only relevant external radiation is the cosmic background whose behavior we also think is well - understood. additionally , if one believes that galactic strength magnetic fields resulted from dynamo action perhaps enhanced by compression ( e.g. , kulsrud 1997 ) , then magnetic fields at early times are likely to be dynamically insignificant . finally ,before the first supernovae went off , the early igm must also have been a rather quiescent place , with no sources to sustain turbulent motion .the remaining consideration in understanding how primordial gas collapses to form stars is knowledge of the typical initial conditions : the initial gas density and temperature profile , the gas angular momentum distribution , the density and velocity distribution of the dark matter halos containing the gas , and the underlying cosmology . estimating these initial conditions for a specific cosmological scenario is no longer a problem given the current state of simulations . in sum , at least during the initial phases of primordial star formation, we have a well - posed problem where the relevant physics is in hand .secondly , computers can now follow the inherently three dimensional process of primordial gas fragmentation and collapse .this is critical because it is not immediately obvious when gas fragmentation in primordial clouds halts . to appreciate the difficulties , note that since gas can cool and increase its density arbitrarily ( at least until an opacity limit sets in ) , the jeans mass for a collapsing gas cloud , i.e. , the scale below which fragmentation halts , can become extremely small .such behavior is indeed seen in one dimensional simulations of isothermal filament collapse .one might therefore predict primordial stars to have very low masses . however ,if the initial density perturbations in the cloud are not very large or the cloud has a very strong central density concentration , the cloud can collapse into a single object before the perturbations have time to grow and fragment the cloud ( e.g. , tohline 1980 ) .thus , depending on exact initial conditions , one could also predict that the first objects to turn around and cool will collapse directly into very massive stars or black holes ( the so - called `` vmos '' or very massive objects ) .furthermore , one can not straightforwardly apply the intuition on fragmentation developed in the more extensive studies of present - day star formation . in the present - day case ,gas cooling is very efficient and one typically takes the collapsing gas to be isothermal .molecular hydrogen is a very poor coolant , however , and the timescale for zero - metal gas to cool can often be comparable to or longer than the dynamical timescale for the gas to collapse .this has profound consequences , as we show next .the results shown are the thesis work of volker bromm .( see bromm , coppi , & larson 1999 for a more extended discussion of the calculation . )our goal here is to follow the gravitational fragmentation of a cloud to see if there is indeed a characteristic mass scale at which fragmentation stops and gravitational collapse proceeds unhindered .this `` clump '' mass scale and the overall spectrum of runaway clump masses that we find , of course , can not be directly translated into a stellar mass scale or imf , but it is an important first step . 
in the case of present - day star formation ,at least , there is increasing evidence that the two may in fact be closely related .our calculational approach is intermediate to that of the other two primordial gas collapse calculations shown at this meeting .the first of these ( see contribution by nakamura & umemura ) uses a high resolution 2-d mesh code to follow the evolution of an idealized primordial gas cloud for many different initial conditions and perturbations . the second ( see contribution by abel et al . ) is a full 3-d calculation that starts from a large scale cosmological simulation and uses the adaptive mesh refinement ( amr ) technique to zoom in on the evolution of the first object to undergo collapse in their simulation volume. only a few realizations of the cosmological initial conditions have been explored . in an attempt to increase the number of realizationswe can explore , we instead follow the cosmological evolution of a tophat density perturbation with parameters close to those expected for the first objects that can collapse ( e.g. , see tegmark et al . 1997 ) .we use a 3-d particle code based on the treesph code of hernquist & katz ( 1990 ) that incorporates the full primordial chemistry of galli & palla ( 1998 ) , including the effects of hd cooling .the sph technique does not handle shocks as well as mesh techniques , but strong shocks are not important in the regime considered here and sph is simple and flexible .for example , it easy to turn a group of gas particles into a single `` sink '' particle without having to worry about mesh artifact / resampling issues .this is useful for allowing a simulation to continue beyond the runaway collapse of the first clump ( which ordinarily would halt the calculation because of the courant limit ) .it is also easy to increase our spatial resolution in a desired region ( i.e. , perform a `` poor man s '' version of amr ) by tagging the particles that enter that region , and then restarting the calculation with each of the tagged particles replaced by many lower mass particles , e.g. , see fig . 7 .1 - 4 show results one of our typical top - hat collapse/ fragmentation calculations . at endow a spherical , uniform density halo of total mass ( baryonic plus dark matter ) with a hubble expansion such that virialization occurs at the dark matter is perturbed with a power spectrum expected from cdm on small scales .the baryons are uniformly distributed and have a mass fraction both halo components are initially in solid body rotation about the axis , with angular momentum corresponding to a cosmological spin parameter these are typical parameters for the first objects ( density fluctuation ) that turn around and are massive enough to cool in a hubble time ( e.g. , see tegmark et al .the dark matter initially plays a key role as the baryons fall into the potential wells of the growing small - scale dark matter perturbations ( fig 1 ) .eventually , the dark matter undergoes violent relaxation and starts to lose its substructure .the baryons sink into the center of the overall dark matter potential well and start to fragment ( fig .2 ) . in fig .3 , we plot the properties of the gas particles at note the `` pile up '' of particles at density and temperature k , corresponding to a jeans mass the pile up reflects the fact that gas undergoing collapse `` loiters '' at these values ( see the time history in fig . 
4 , and discussion below ) .we have carried out many other runs , varying quantities like the total angular momentum of the cloud , the slope of the dark matter perturbation spectrum , the degree to which the mass is centrally concentrated ( the standard top - hat assumes a uniform density distribution , which is optimal in terms of producing many fragments but may not always be realistic ) , the baryon mass fraction ( ) , and the total mass and turnaround redshift of the cloud .we find two main results .first , in terms of the morphology of the collapsed gas and the overall `` efficiency '' of fragmentation ( the fraction of gas that ends up in clumps ) , we find that varying the initial conditions of the cloud _ does _ make a significant difference , e.g. , compare the gas morphology in fig .2 with that in fig . 6 .similar dependences , e.g. , on the cloud s angular momentum and degree of central mass concentration , are in fact found in gas simulations of present - day star formation ( e.g. , tsuribe & inutsuka 1999 ) .note that this dependence on initial conditions means it is _ not _ possible to make statements about the overall efficiency of primordial star formation without first carrying out a comprehensive survey of the relevant conditions .second , despite the differences in gas morphology , we always find find roughly the _ same _ initial clump masses . here , initial clump mass is defined as the amount of gas that is gravitationally bound and infalling when the center of a clump starts it runaway collapse , i.e. , it does not include any further gas that may eventually accrete onto the clump .the reason for this perhaps surprising second conclusion can be found in fig .if we plot the temperatures and densities of our gas particles when the first clumps start to collapse , we always find an excess of particles with temperatures 200 k and hydrogen densities these two numbers are not accidental and are set molecular hydrogen physics which does _ not _ depend on the initial conditions .specifically , a temperature k is the minimum one attainable via h cooling because of the molecular energy levels . the corresponding critical density ,beyond which the h rotational levels are populated according to lte , is then . at the transition from nlte to lte , the cooling rate changes from being proportional to to merely linear in i.e. , the cooling time required for the gas to lose a significant fraction of its energy now becomes independent of density . due to this inefficient cooling , the gas ` loiters ' and passes through a phase of quasi - hydrostatic , slow contraction before undergoing runaway collapse ( see fig .this loitering appears to be crucial as it allows pressure waves to damp out density anisotropies and inhibits further fragmentation .although our results are still somewhat preliminary , we have carried out higher resolutions runs ( e.g. , fig .7,8 ) to follow the collapse of a clump to much higher densities , and we indeed see no evidence for sub - fragmentation .abel et al .have reached the same conclusion in the even higher resolution runs that they have carried out . although we can not guarantee that some of our clumps will not break up into a few objects , e.g. , a binary system , it seems unlikely they will break up into hundreds or thousands of subclumps . 
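A rough evaluation makes the loitering values quoted above concrete. For temperatures near 200 K and hydrogen densities of 10^3 to 10^4 cm^-3, a standard Jeans-mass estimate gives clump masses of order 10^3 solar masses and free-fall times of order a megayear. The sketch below is our own back-of-envelope calculation, not output from the simulations described here; the Jeans-mass prefactor is convention dependent (different definitions shift the answer by factors of a few), and the mean molecular weight and helium correction are standard assumptions for neutral primordial gas.

    import numpy as np

    G, k_B, m_H, M_sun = 6.674e-8, 1.381e-16, 1.673e-24, 1.989e33   # cgs units
    mu = 1.22            # mean molecular weight of neutral primordial (H + He) gas

    def jeans_mass(T, n_H):
        # M_J = (5 k T / (G mu m_H))^(3/2) (3 / (4 pi rho))^(1/2), in grams
        rho = 1.4*n_H*m_H                   # mass density including helium (approximate)
        return (5*k_B*T/(G*mu*m_H))**1.5 * (3/(4*np.pi*rho))**0.5

    def freefall_time(n_H):
        # t_ff = sqrt(3 pi / (32 G rho)), in seconds
        rho = 1.4*n_H*m_H
        return np.sqrt(3*np.pi/(32*G*rho))

    for n_H in (1e3, 1e4):
        print(f"n_H = {n_H:.0e} cm^-3 :  M_J ~ {jeans_mass(200.0, n_H)/M_sun:,.0f} M_sun ,"
              f"  t_ff ~ {freefall_time(n_H)/3.156e13:.2f} Myr")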
in other words , to astrophysical accuracy , the jeans mass that follows from the typical density and temperature values in fig .3 really is the characteristic clump mass scale for collapsing primordial gas .the fact that three groups at this conference arrived at the same conclusion using rather different codes and initial conditions tells us that a robust explanation , like the physics of molecular hydrogen , must lie behind it .although we are still far from solving the primordial star formation problem , the results presented at this meeting indicate we have made substantial progress .unless our understanding of primordial gas cooling is very wrong ( note that our simulations included hd cooling which some have speculated to be important ) or the typical physical conditions during the early dark ages are very different from current expectations , it appears inescapable that the first typical objects to collapse will fragment into clumps of initial mass it also appears likely that these clumps will not fragment much further as their interiors collapse to form a star or a few stars .therefore , typical primordial protostars are likely to look quite different from those around us today , and in particular , will be surrounded by much more massive envelopes . as noted at this meeting ,it is not at all obvious how much of this mass actually makes it onto the final star .however , given that all the scales are so much larger and resemble those we find around present - day massive stars in the process of being born , it is difficult to see how one can make ordinary solar mass stars from such a gas configuration , i.e. , primordial star formation is probably strongly biased towards massive ( ) and possibly very massive ( ) stars .this would explain why we see no zero - metal stars today and has important consequences that have not been fully explored yet , e.g. , massive primordial stars produce many more ionizing uv photons per unit mass than low mass ones ( see bromm , kudritski , & loeb 2000 for a detailed calculation of the spectrum from a massive zero metal star).also , such massive stars could be good progenitors for hypernovae and gamma - ray bursts , or the seeds for massive black hole formation .there remain important questions that do not require a major leap in computing power to answer .first , at this meeting it became clear we need to deide what are typical , realistic initial conditions , e.g , nakamura & umemura pointed out that it is possible to fragment down to if one can start out with dense enough filaments ( we agree since this would skip the `` loitering '' phase of the collapse , but we do not see how such filaments arise in a realistic scenario . quantifying the relevant initial conditions for primordial gas collapse will let us determine the efficiency of clump formation , which in turn gives us an upper limit on the primordial star formation efficiency and provides a first indication as to the importance of the first stars . secondly , we can consider what happens to gas collapse and fragmentation in the presence of trace metals and uv background radiation from a previous generation of stars .population iii star formation is often considered to be a short - lived event because the pristine conditions required are wiped out once the first stars produce uv light and the first supernovae produce metals . 
however , as metallicity does not build up instantly , there may well be an extended window of time when star formation either proceeds in the massive clump / star mode described here ( if h2 is present ) or not at all ( if h2 is destroyed by uv radation ) .our preliminary calculations indicate that metal cooling does not become important until the gas metallicity reaches which coincidentally is the range of the lowest observed metallicities and also the range where abundance anomalies begin to appear in metal poor stars .( these anomalies are often interpreted as increased scatter due to enrichment by individual supernova events , but they could also reflect atypical progenitor stars , e.g. , that had much hotter interiors than stars today . ) finally , it should be possible to push the spherically symmetric protostar calculation of omukai and nishi ( 1998 ) through to the accretion phase ( along the lines of masunaga , miyama , and inutskuka 1999 in the present - day star formation case ) .this will enable a first cut at understanding the feedback of the primordial protostar s radiation on its envelope .the feedback is likely to be strong , and abel conjectured at this meeting , for example , that all accretion may stop once the protostar reaches and produces enough ionizing radiation to destroy the envelope s molecular hydrogen , thereby removing its primary means of cooling . without a real calculation , however , it is not clear what the outcome will be .if the envelope gas simply becomes adiabatic , accretion can still occur if a sufficient central mass concentration has already been established ( bondi 1952 ) .abel , t. , anninos , p. , norman , m. l. , & zhang , y. 1998 , , 508 , 518 beers , t. c. 2000 , in `` the first stars '' , eso astrophysics symposia , ed .a. weiss , t. abel , & v. hill ( berlin : springer ) , 3 bromm , v. , coppi , p. s. , & larson , r. b. 1999 , , 527 , l5 bromm , v. , kudritski , r.p . , & loeb , a. , astro - ph/0007248 carlberg , r. g. 1981 , mnras , 197 , 1021 carr , b.j . , bond , j.r . , & arnett , w.d ., 1984 , , 280 , 825 cayrel , r. , 1986 , a&a , 168 , 81 ciardi , b. , ferrara , a. , governato , f. , & jenkins , a. 2000 , mnras , 314 , 611 couchman , h. m. p. , & rees , m. j. 1986 , mnras , 221 , 53 cowie , l. l. , & songaila , a. 1998 , nature , 394 , 44 fuller , t. m. , & couchman , h.m.p .2000 , , submitted ( astro - ph/0003079 ) galli , d. , & palla , f. 1998 , a&a , 335 , 403 haiman , z. , thoul , a. a. , & loeb , a. 1996 , , 464 , 523 haiman , z. , rees , m.j . , &loeb , a. , 1997 , apj , 476 , 458 haiman , z. , & loeb , a. 1997 , , 483 , 21 hutchins , j. b. 1976 , , 205 , 103 kashlinsky , a. , & rees , m. j. 1983 , mnras , 205 , 955 kulsrud , r. m. 1997 , in `` critical dialogues in cosmology '' , ed . n. turok ( singapore : world scientific ) , 328 larson , r. b. 1998 , mnras , 301 , 569 loeb , a. 1998 , in asp conf .133 , science with the next generation space telescope , ed .e. smith & a. koratkar ( san francisco : asp ) , 73 loeb , a. 1999 , astro - ph/9907187 miralda - escud , j. , haehnelt , m. , & rees , m. j. 2000 , , 530 , 1 omukai , k. , & nishi , r. 1998 , , 508 , 141 ostriker , j. p. , & gnedin , n. y. 1996 , , 472 , l63 palla , f. , salpeter , e. e. , & stahler , s. w. 1983 , , 271 , 632 rees , m. j. 1999 , aip conf .470 , 13 silk , j. 1977 , , 211 , 638 silk , j. 1983 , mnras , 205 , 705 tegmark , m. , silk , j. , rees , m. j. , blanchard , a. , abel , t. , & palla , f. 1997 , , 474 , 1 tohline , j.e . 
, 1980 , apj , 242 , 209 tsuribe , t. & inutsuka , s. , 1999 , apj , 526 , 307 uehara , h. , susa , h. , nishi , r. , yamada , m. , & nakamura , t. 1996 , , 473 , l95 yoneyama , t. 1972 , pasj , 24 , 87 | we briefly review the motivations for studying the formation of the first `` population iii '' stars and present recent results from our numerical simulations in this area . we discuss the new questions raised as a result of the simulations presented by us and others at this meeting .
within the main paper , we have assumed that the chemoattractant concentration regulates the susceptibility of a cell to contact inhibition of locomotion , with .this models the stabilization of protrusions induced by contact interactions .this is consistent with the results of theveneau et al . , who find that protrusion stabilization is stronger in clusters than in single cells .however , very similar results can be found if we assume that is constant and the signal regulates the time required for the cell s polarity to relax , i.e. . in this case , the mean polarity of a cell is and we find where the mobility matrix is the same as in the main paper , .however , because varies over space , the fluctuations will also vary : , where is the mean signal across the cluster .for this reason , the chemotactic index in the -regulation model will depend on , and will not be constant over a linear gradient .in addition , a single cell with a persistence time that depends on the chemoattractant level will undergo biased motion .this is shown in fig .[ fig : singletau ] below .this drift can be made smaller than the cil - driven cluster drift , as it is independent of , while the cluster drift is proportional to .develop a mean drift*. the mean and velocities for a cell with spatially varying are shown : , with , , .result is average over iterations , each started at the origin ; error bars indicate ^ 2 \rangle^{1/2}/\sqrt{n} ] , where is the number of cells in the cluster . .for each of the configurations shown , nearest - neighbor cells have unit separation .a -layer oligomer has cells . is given for the orientation shown in the left column ; other orientations may be found by transforming the mobility tensor ; ( see section [ sec : mobrotation ] below ) . [ cols="^ " , ] | many eukaryotic cells chemotax , sensing and following chemical gradients . however , experiments have shown that even under conditions when single cells can not chemotax , small clusters may still follow a gradient . this behavior has been observed in neural crest cells , in lymphocytes , and during border cell migration in drosophila , but its origin remains puzzling . here , we propose a new mechanism underlying this collective guidance " , and study a model based on this mechanism both analytically and computationally . our approach posits that the contact inhibition of locomotion ( cil ) , where cells polarize away from cell - cell contact , is regulated by the chemoattractant . individual cells must measure the mean attractant value , but need not measure its gradient , to give rise to directional motility for a cell cluster . we present analytic formulas for how cluster velocity and chemotactic index depend on the number and organization of cells in the cluster . the presence of strong orientation effects provides a simple test for our theory of collective guidance . cells often perform chemotaxis , detecting and moving toward increasing concentrations of a chemoattractant , to find nutrients or reach a targeted location . this is a fundamental aspect of biological processes from immune response to development . many single eukaryotic cells sense gradients by measuring how a chemoattractant varies over their length ; this is distinct from bacteria that measure chemoattractant over time . in both , single cells are capable of net motion toward higher chemoattractant . 
recent measurements of how neural crest cells respond to the chemoattractant sdf1 suggest that single neural crest cells can not chemotax effectively , but small clusters can . a more recent report shows that at low gradients , clusters of lymphocytes also chemotax without corresponding single cell directional behavior ; at higher gradients clusters actually move in the opposite direction to single cells . in addition , late border cell migration in the _ drosophila _ egg chamber may occur by a similar mechanism . these experiments strongly suggest that gradient sensing in a cluster of cells may be an _ emergent _ property of cell - cell interactions , rather than arising from amplifying a single cell s biased motion ; interestingly , some fish schools also display emergent gradient sensing . in fact , these experiments led to a collective guidance " hypothesis , in which a cluster of cells where each individual cell has no information about the gradient may nevertheless move directionally . in a sense that will become clear , cell - cell interactions allow for a measurement of the gradient across the entire cluster , as opposed to across a single cell . in this paper , we develop a quantitative model that embodies the collective guidance hypothesis . our model is based on modulation of the well - known contact inhibition of locomotion ( cil ) interaction , in which cells move away from neighboring cells . we propose that individual cells measure the local signal concentration and adjust their cil strength accordingly ; the cluster moves directionally due to the spatial bias in the cell - cell interaction . we discuss the suitability of this approach for explaining current experiments , and provide experimental criteria to distinguish between chemotaxis via collective guidance and other mechanisms where clusters could gain improvement over single - cell migration . these results may have relevance to collective cancer motility , as recent data suggest that tumor cell clusters are particularly effective metastatic agents . by contact inhibition of locomotion ( cil ) ; the strength of this bias is proportional to the local chemoattractant value , leading to cells being more polarized at higher . see text for details . * b , * one hundred trajectories of a single cell and * c , * cluster of seven cells . trajectories are six persistence times in length ( 120 min ) . scalebar is one cell diameter . the gradient strength is in these simulations , with the gradient in the direction.,width=321 ] we consider a cluster of cells exposed to a chemical gradient . we use a two - dimensional stochastic particle model to describe cells , giving each cell a position and a polarity . the cell polarity indicates its direction and propulsion strength : an isolated cell with polarity has velocity . the cell s motion is overdamped , so the velocity of the cell is plus the total physical force other cells exert on it , . biochemical interaction between cells alter a cell s polarity . our model is then : where are the intercellular forces of cell - cell adhesion and volume exclusion , and are gaussian langevin noises with , where the greek indices run over the dimensions . the first two terms on the right of eq . [ eq : polarity ] are a standard ornstein - uhlenbeck model : relaxes to zero with a timescale , but is driven away from zero by the noise . this corresponds with a cell that is orientationally persistent over a time of . we have introduced the last term on the right of eq . 
[ eq : polarity ] to describe contact inhibition of locomotion ( cil ) . cil is a well - known property of many cell types in which cells polarize away from cell - cell contact . we model cil by biasing away from nearby cells , toward , where is the unit vector pointing from cell to cell and the sum over indicates the sum over the neighbors of ( those cells within a distance cell diameters ) . while this is motivated by cil in neural crest , it is also a natural minimal model under the assumption that cells know nothing about their neighbors other than their direction . for cells along the cluster edge , the cil bias points outward from the cluster , but for interior cells is smaller or zero ( fig . [ fig : schematic]a ) . this is consistent with experimental observations that edge cells have a strong outward polarity , while interior cells have weaker protrusions . chemotaxis arises in our model if the chemoattractant changes a cell s susceptibility to cil , , . this models the result of that the chemoattractant sdf1 stabilizes protrusions induced by cil . we also assume that the cell s chemotactic receptors are not close to saturation - i.e. the response is perfectly linear . if cil is present even in the absence of chemoattractant ( ) , as in neural crest , i.e. , this will not significantly change our analysis . similar results can also be obtained if all protrusions are stabilized by sdf1 ( regulated by ) , though with some complications ( _ appendix _ , fig . a1 ) . _ analytic predictions for cluster velocity._our model predicts that while single cells do not chemotax , clusters as small as two cells will , consistent with . we can analytically predict the mean drift of a cluster of cells obeying eqs . [ eq : position]-[eq : polarity ] : where the approximation is true for shallow gradients , . indicates an average over the fluctuating but with a fixed configuration of cells . the matrix only depends on the cells configuration , where , as above , . eq . [ eq : shallow ] resembles the equation of motion for an arbitrarily shaped object in a low reynolds number fluid under a constant force : by analogy , we call the mobility matrix . " there is , however , no fluctuation - dissipation relationship as there would be in equilibrium . to derive eq . [ eq : shallow ] , we note that eq . [ eq : position ] states that , in our units , the velocity of a single cell is equal to the force on it ( i.e. the mobility is one ) . for a cluster of cells , the mean velocity of the cluster is times the total force on the cluster . as , the cluster velocity is . when the cluster configuration changes slowly over the timescale , eq . [ eq : polarity ] can be treated as an ornstein - uhlenbeck equation with an effectively time - independent bias from cil . the mean polarity is then , with gaussian fluctuations away from the mean , . the mean cell cluster velocity is in a constant chemoattractant field , , no net motion is observed , as . for linear or slowly - varying gradients , and we get eq . [ eq : shallow ] . _ cluster motion and chemotactic efficiency depend on cluster size , shape , and orientation. _ within our model , a cluster s motion can be highly anisotropic . consider a pair of cells separated by unit distance along . then by eq . [ eq : matrix ] , , , . if the gradient is in the direction , then and , where . 
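To make the model concrete, here is a minimal Euler-Maruyama integration of the position and polarity equations for a two-cell cluster, with each cell's CIL susceptibility proportional to a linear chemoattractant profile. This is an illustrative sketch under stated assumptions rather than the authors' implementation: the parameter values, the noise amplitude, and the stiff spring used in place of the adhesion and volume-exclusion forces are our own choices.

    import numpy as np

    def simulate_pair(T=120.0, dt=1e-3, tau=1.0, sigma=0.3, chi=4.0, grad=0.05,
                      k_spring=100.0, rest=1.0, seed=0):
        # S(x) = 1 + grad*x sets each cell's CIL strength beta = chi*S(x); a stiff spring
        # stands in for the adhesion / volume-exclusion forces holding the pair together
        rng = np.random.default_rng(seed)
        r = np.array([[0.0, 0.0], [1.0, 0.0]])        # positions
        p = np.zeros((2, 2))                          # polarities
        for _ in range(int(T/dt)):
            d = r[0] - r[1]
            dist = np.linalg.norm(d)
            qhat = d/dist                             # unit vector from cell 1 toward cell 0
            spring = -k_spring*(dist - rest)*qhat     # force on cell 0; cell 1 feels the opposite
            S = 1.0 + grad*r[:, 0]                    # local chemoattractant level at each cell
            cil = chi*S[:, None]*np.array([qhat, -qhat])   # polarity bias away from the neighbour
            p += dt*(-p/tau + cil) + sigma*np.sqrt(dt)*rng.standard_normal((2, 2))
            r += dt*(p + np.array([spring, -spring]))
        return r

    r_final = simulate_pair()
    print("pair centre of mass after 120 tau:", r_final.mean(axis=0))
    # the centre of mass drifts toward +x (up the gradient) even though neither cell senses
    # the gradient across its own body, and the instantaneous motion is along the pair axis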
cell pairs move toward higher chemoattractant , but their motion is along the pair axis , leading to a transient bias in the direction before the cell pair reorients due to fluctuations in ( fig . [ fig : pairs ] ) . we compare our theory for the motility of rigid cell clusters ( eq . [ eq : shallow ] ) with a simulation of eq . [ eq : position]-[eq : polarity ] with strongly adherent cell pairs with excellent agreement ( fig . [ fig : pairs ] ) . for the simulations in fig . [ fig : pairs ] and throughout the paper , we solve the model equations eqs . [ eq : position]-[eq : polarity ] numerically using a standard euler - maruyama scheme . we choose units such that the equilibrium cell - cell separation ( roughly 20 m for neural crest ) is unity , and the relaxation time ( we estimate minutes in neural crest ) . within these units , neural crest cell velocities are on the order of , so we choose this corresponds to a root mean square speed of an isolated cell being microns / minute . the typical cluster velocity scale is , which is 0.5 ( 0.5 microns / minute in physical units ) if and , corresponding to changing by 2.5% across a single cell at the origin . cell - cell forces are chosen to be stiff springs so that clusters are effectively rigid ( see _ appendix _ for details ) . depends strongly on the angle between the cell - cell axis and the chemotactic gradient . cell pairs also drift perpendicular to the gradient , . is the velocity scale ; . simulations are of eqs . [ eq : position]-[eq : polarity ] . we compute by tracking the instantaneous angle , then averaging over all velocities within the appropriate angle bin . error bars here and throughout are one standard deviation of the mean , calculated from a bootstrap . over trajectories of 6 ( 120 minutes ) are simulated.,width=340 ] we can also compute and hence for larger clusters ( table s1 , _ appendix _ , fig . a2 ) . for a cluster of layers of cells surrounding a center cell , , with . a cluster with layers has cells ; thus the mean velocity of a -layer cluster is given by , where is the angular average of . we predict that first increases with , then slowly saturates to . this is confirmed by simulations of the full model ( fig . [ fig : clustersize]a ) . we note that is an average over time , and hence orientation ( see below , _ appendix _ ) . we can see why saturates as by considering a large circular cluster of radius . here , we expect on the outside edge , where is a geometric prefactor and is the outward normal , with elsewhere . then , , independent of cluster radius . a related result has been found for circular clusters by malet - engra et al . ; we note that they do not consider the behavior of single cells or cluster geometry . the efficiency of cluster chemotaxis may be measured by chemotactic index ( ) , commonly defined as the ratio of distance traveled along the gradient ( the displacement ) to total distance traveled ; from -1 to 1 . we define , where the average is over both time and trajectories ( and hence over orientation ) . the chemotactic index also be computed analytically , and it depends on the variance of , which is . in our model , only depends on the ratio of mean chemotactic velocity to its standard deviation , where is a generalized laguerre polynomial . when mean cluster velocity is much larger than its fluctuations , and , but when fluctuations are large , and ( _ appendix _ , fig . a3 ) . together , eq . [ eq : shallow ] , eq . 
[ eq : ci ] and table s1 provide an analytic prediction for cluster velocity and , with excellent agreement with simulations ( fig . [ fig : clustersize ] ) . we note that only depends on cluster configuration , where , so collapses onto a single curve as the gradient strength is changed ( fig . [ fig : clustersize]a ) . by contrast , how with depends on and ( eq . [ eq : ci ] , fig . [ fig : clustersize]b ) . in a cluster increases , the mean velocity increases with but then saturates ; the mean velocity can be collapsed onto a single curve by rescaling by . * b , * the chemotactic index saturates to its maximum value . black squares and lines are the orientationally - averaged drift velocity computed for rigid clusters by eq . [ eq : shallow ] and eq . [ eq : ci ] . colored symbols are full model simulations with strong adhesion . cell cluster shape may influence ( _ appendix _ fig . a4 ) ; our calculations are for the shapes in table s1 . error bars here are symbol size or smaller ; trajectories of are used for each point.,width=321 ] in our model , clusters can in principle develop a spontaneous rotation , but in practice this effect is small , and absent for symmetric clusters ( see _ appendix _ ) . _ motion in non - rigid clusters. _ while we studied near - rigid clusters above , our results hold qualitatively for clusters that are loosely adherent and may rearrange . cell rearrangements are common in many collective cell motions , but we note that in clusters are more rigid . we choose cell - cell forces to allow clusters to rearrange ( see _ appendix _ , ) , and simulate eqs . [ eq : position]-[eq : polarity ] . as in rigid clusters , increases and saturates , while toward unity , though more slowly than a rigid cluster ( fig . [ fig : fluid]ab ) . clusters may fragment ; with increasing , increases and the cluster breaks up ( fig . [ fig : fluid]c ) . cluster breakup can limit guidance if is too large , clusters are not stable , and will not chemotax . in fig . [ fig : fluid]ab , we compute velocity by averaging over all cells , not merely those that are connected . if we track cells ejected from the cluster , they have an apparent , as they are preferentially ejected from the high- cluster edge ( _ appendix _ ) . experimental analysis of dissociating clusters may therefore not be straightforward . anisotropic chemotaxis is present in non - rigid pairs , though lessened because our non - rigid pairs rotate quickly with respect to ( _ appendix _ ) . in a cluster increases , the mean velocity increases with but then saturates . * b , * chemotactic index also approaches unity , but slower than in a rigid cluster . rigid cluster theory assumes the same cluster geometries as in fig . [ fig : clustersize ] . averages in * a - b * are over trajectories ( ranging from for to for ) , over the time to . * c , * breakdown of a cluster as it moves up the chemoattractant gradient . x marks the initial cluster center of mass , o the current center . , in this simulation.,width=340 ] _ distinguishing between potential collective chemotaxis models._our model explains how chemotaxis can emerge from interactions of non - chemotaxing cells . however , other possibilities exist for enhancement of chemotaxis in clusters . coburn et al . showed that in contact - based models , a few chemotactic cells can direct many non - chemotactic ones . if single cells are weakly chemotactic , cell - cell interactions could amplify this response or average out fluctuations . how can we distinguish these options ? 
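Both diagnostics used below to address this question, the chemotactic index defined earlier (net displacement along the gradient divided by total path length) and the orientation-resolved drift of cell pairs, are straightforward to compute from simulated or tracked trajectories. The helpers below are a sketch with our own function names and binning choices, not the authors' analysis code; collective guidance predicts a strong dependence of pair drift on the pair-axis angle, while amplified single-cell chemotaxis predicts essentially none.

    import numpy as np

    def chemotactic_index(traj, gradient_dir=np.array([1.0, 0.0])):
        # CI = (net displacement along the gradient) / (total path length), between -1 and 1;
        # traj has shape (n_steps, 2)
        steps = np.diff(traj, axis=0)
        path = np.sum(np.linalg.norm(steps, axis=1))
        return (traj[-1] - traj[0]) @ gradient_dir / path

    def drift_vs_orientation(pair_axis_angles, cluster_velocities, nbins=12):
        # bin instantaneous cluster velocities by the angle between the pair axis and the
        # gradient (angles in [0, pi]); returns the mean velocity vector in each angle bin
        edges = np.linspace(0.0, np.pi, nbins + 1)
        idx = np.digitize(pair_axis_angles, edges) - 1
        return np.array([cluster_velocities[idx == i].mean(axis=0) if np.any(idx == i)
                         else np.full(cluster_velocities.shape[1], np.nan)
                         for i in range(nbins)])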
in lymphocytes , the motion of single cells oppositely to the cluster immediately rules out simple averaging or amplification of single cell bias . more generally , the scaling of collective chemotaxis with cluster size does not allow easy discrimination . in fig . [ fig : clustersize ] , at large , and . as an alternate theory , suppose each cell chemotaxes noisily , e.g. , where are independent zero - mean noises . in this case , independent of , and , as in our large- asymptotic results and the related circular - cluster theory of . instead , we propose that orientation effects in small clusters are a good test of emergent chemotaxis . in particular , studying cell pairs as in fig . [ fig : pairs ] is critical : anisotropic chemotaxis is a generic sign of cluster - level gradient sensing . even beyond our model , chemotactic drift is anisotropic for almost all mechanisms where single cells do not chemotax , because two cells separated perpendicular to the gradient sense the same concentration . this leads to anisotropic chemotaxis unless cells integrate information over times much larger than the pair s reorientation time . by contrast , the simple model with single cell chemotaxis above leads to isotropic chemotaxis of pairs . how well does our model fit current experiments ? we find increasing cluster size increases cluster velocity and chemotactic index . this is consistent with , who see a large increase in taxis from small clusters ( cells ) to large , but not , who find that similar between small and large clusters , and note no large variations in velocity . this suggests that the minimal version of collective guidance as developed here can create chemotaxis , but does not fully explain the experiments of . there are a number of directions for improvement . more quantitative comparisons could be made by detailed measurement of single - cell statistics , leading to nonlinear or anisotropic terms in eq . [ eq : polarity ] . our description of cil has also assumed , for simplicity , that both cell front and back are inhibitory ; other possibilities may alter collective cell motion . we could also add adaptation as in the legi model to enable clusters to adapt their response to a value independent of the mean chemoattractant concentration . we will treat extensions of this model elsewhere ; our focus here is on the simplest possible results . in summary , we provide a simple , quantitative model that embodies a minimal version of the collective guidance hypothesis and provides a plausible initial model for collective chemotaxis when single cells do not chemotax . our work allows us to make an unambiguous and testable prediction for emergent collective guidance : pairs of cells will develop anisotropic chemotaxis . although there has been considerable effort devoted to models of collective motility , ours is the first model of how collective chemotaxis can emerge from single non - gradient - sensing cells via collective guidance and regulation of cil . bac appreciates helpful discussions with albert bae and monica skoge . this work was supported by nih grant no . p01 gm078586 , nsf grant no . dms 1309542 , and by the center for theoretical biological physics . bac was supported by nih grant no . f32gm110983 . |